At what point does a MySQL database start to lose performance?
Does physical database size matter? Does the number of records matter? Is performance degradation linear or exponential?
I have what I believe to be a large database, with roughly 15 million records taking up almost 2 GB. Based on these numbers, is there any incentive for me to clean the data out, or can I safely let it keep growing for a few more years?
Current answer
Query performance mainly depends on the number of records it needs to scan, and indexes play a major role in that. The size of the index data is proportional to the number of rows and the number of indexes.
Queries with conditions on indexed fields and full (exact) values usually return within 1 ms, whereas starts_with, IN, BETWEEN and especially "contains" conditions can take longer because they have more records to scan.
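As a rough illustration of that difference (the users table, email column and index name below are invented), an exact-match condition on an indexed column is a direct index lookup, a prefix match can still use the index as a range scan, and a "contains" match cannot use it at all:

    -- hypothetical 15M-row table with an index on email
    CREATE INDEX idx_users_email ON users (email);

    -- exact value on the indexed column: point lookup, typically ~1 ms
    EXPLAIN SELECT * FROM users WHERE email = 'foo@example.com';

    -- starts_with: still an index range scan, but touches more rows
    EXPLAIN SELECT * FROM users WHERE email LIKE 'foo%';

    -- contains: the leading wildcard defeats the index, so every row is scanned
    EXPLAIN SELECT * FROM users WHERE email LIKE '%foo%';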
In addition, you will face many maintenance problems with DDL: ALTER and DROP become slow and hard to run against live traffic, even for something as simple as adding an index or a new column.
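On MySQL 5.6+ with InnoDB you can at least ask for an in-place, non-blocking change and have the statement fail instead of silently copying the table; a sketch with an invented table name (whether INPLACE actually works depends on the operation and version):

    -- request an online change; MySQL errors out if it would need a blocking table copy
    ALTER TABLE orders
      ADD COLUMN shipped_at DATETIME NULL,
      ALGORITHM=INPLACE, LOCK=NONE;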
In general, the recommendation is to split the data into as many clusters as you need (500 GB is a common rule of thumb; as others have said, it depends on many factors and can vary by use case). This gives you better isolation and the freedom to scale a specific cluster independently (a better fit for B2B setups).
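One way to make that split concrete in a B2B setup is a routing table that maps each tenant to the cluster holding its data; this is purely a sketch with invented names (tenant_cluster, cluster_dsn), not something the answer prescribes:

    -- purely illustrative: each tenant's data lives on exactly one cluster
    CREATE TABLE tenant_cluster (
      tenant_id   BIGINT       NOT NULL PRIMARY KEY,
      cluster_dsn VARCHAR(255) NOT NULL  -- host/port of the cluster holding this tenant
    );

    -- the application looks up the DSN first, then connects to that cluster
    SELECT cluster_dsn FROM tenant_cluster WHERE tenant_id = 42;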
Other answers
I was once called upon to look at a MySQL instance that had "stopped working". I discovered that the DB files were residing on a Network Appliance filer mounted over NFS2, which has a maximum file size of 2 GB. And sure enough, the table that had stopped accepting transactions was exactly 2 GB on disk. But with regard to the performance curve, I'm told it was working like a champ right up until it didn't work at all! This experience always serves as a nice reminder for me that there are always dimensions above and below the one you naturally suspect.
Database size does matter, both in bytes and in the number of rows per table. You will notice a huge performance difference between a light database and one stuffed with BLOBs. Once my application got stuck because I put binary images in a column instead of keeping the images as files on disk and storing only the file names in the database. On the other hand, iterating over a large number of rows is not free either.
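A sketch of the two layouts this answer contrasts (the schema names are made up): keeping the image bytes out of the row keeps the table light, because only a short path has to travel through the buffer pool:

    -- heavy: every row carries the full image through the buffer pool
    CREATE TABLE photos_blob (
      id    BIGINT PRIMARY KEY,
      image LONGBLOB NOT NULL
    );

    -- light: the row stores only a path; the bytes stay on disk or in object storage
    CREATE TABLE photos_path (
      id   BIGINT PRIMARY KEY,
      path VARCHAR(500) NOT NULL  -- e.g. '/var/images/2024/abc.jpg'
    );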
Database size does matter. If you have more than one table with more than a million records, performance does indeed start to degrade. The number of records of course affects performance: MySQL can be slow with large tables. If you hit one million records you will get performance problems if the indexes are not set right (for example, no indexes on fields used in WHERE clauses or in the ON conditions of joins). If you hit 10 million records, you will start to get performance problems even if all your indexes are right. Hardware upgrades - adding more memory and more processor power, especially memory - often help to reduce the most severe problems by increasing performance again, at least to a certain degree. For example, 37signals went from 32 GB to 128 GB of RAM for the Basecamp database server.
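Concretely (table and column names here are invented), the indexes that matter are the ones backing your WHERE clauses and join conditions:

    -- index the column you filter on ...
    CREATE INDEX idx_orders_customer ON orders (customer_id);
    -- ... and the column you join on
    CREATE INDEX idx_items_order ON order_items (order_id);

    -- without these, the join scans both tables; with them, EXPLAIN should show
    -- 'ref' index lookups instead of full table scans
    EXPLAIN
    SELECT o.id, i.sku
    FROM orders o
    JOIN order_items i ON i.order_id = o.id
    WHERE o.customer_id = 12345;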
I'm currently managing a MySQL database on Amazon's cloud infrastructure that has grown to 160 GB. Query performance is fine. What has become a nightmare is backups, restores, adding slaves, or anything else that deals with the whole dataset, or even DDL on large tables. Getting a clean import of a dump file has become problematic. In order to make the process stable enough to automate, various choices needed to be made to prioritize stability over performance. If we ever had to recover from a disaster using a SQL backup, we'd be down for days.
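For context, a typical logical backup of a large InnoDB database looks roughly like the lines below (mydb is a placeholder; the flags are standard mysqldump options). The point above still stands: replaying a multi-hundred-GB SQL dump statement by statement is what turns a restore into days of downtime:

    # consistent snapshot without locking InnoDB tables, streamed and compressed
    mysqldump --single-transaction --quick --routines --triggers mydb | gzip > mydb.sql.gz

    # restoring replays every INSERT one statement at a time
    gunzip < mydb.sql.gz | mysql mydb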
Horizontally scaling SQL is also pretty painful, and in most cases leads to using it in ways you probably did not intend when you chose to put your data in SQL in the first place. Shards, read slaves, multi-master, et al, they are all really shitty solutions that add complexity to everything you ever do with the DB, and not one of them solves the problem; only mitigates it in some ways. I would strongly suggest looking at moving some of your data out of MySQL (or really any SQL) when you start approaching a dataset of a size where these types of things become an issue.
Update: A few years later, our dataset has grown to about 800 GiB. On top of that, we now have one 200+ GiB table and several others in the 50-100 GiB range. Everything I said before still holds. Performance is still fine, but the problems with operations that touch the whole dataset have gotten worse.