When does a MySQL database start to lose performance?

Does the physical size of the database matter? Does the number of records matter? Is the performance degradation linear or exponential?

I have what I believe is a large database: roughly 15 million records taking up almost 2 GB. Based on these numbers, is there any incentive for me to clean the data out, or am I safe letting it keep growing for a few more years?


Current answer

The database size does matter. If you have more than one table with more than a million records, performance does indeed start to degrade. The number of records of course affects performance: MySQL can be slow with large tables. Around one million records you will run into performance problems if the indexes are not set up right (for example, no indexes on fields used in WHERE clauses or in the ON conditions of joins). Around 10 million records you will start to see performance problems even if all your indexes are right. Hardware upgrades - adding more memory and more processor power, especially memory - often help to reduce the most severe problems by increasing performance again, at least to a degree. For example, 37signals went from 32 GB to 128 GB of RAM for the Basecamp database server.
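As a minimal sketch of what "indexes set up right" means here (the orders/customers schema and all names are made up for illustration):

```sql
-- Hypothetical schema: an orders table that is filtered by status in
-- WHERE clauses and joined to customers on customer_id in ON conditions.
CREATE INDEX idx_orders_status      ON orders (status);
CREATE INDEX idx_orders_customer_id ON orders (customer_id);

-- With the indexes in place, a query like this can resolve the WHERE
-- filter and the join via index lookups instead of full table scans:
SELECT c.name, o.total
FROM   customers c
JOIN   orders o ON o.customer_id = c.id
WHERE  o.status = 'open';
```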

Other answers

Talking about "database performance" is a bit meaningless; "query performance" is a better term here. And the answer is: it depends on the query, the data it operates on, the indexes, the hardware, and so on. You can get an idea of how many rows will be scanned and which indexes will be used with the EXPLAIN statement.
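For instance (table and column names here are hypothetical):

```sql
-- Hypothetical table: orders(id, customer_id, created_at, total)
EXPLAIN
SELECT id, total
FROM   orders
WHERE  customer_id = 42
  AND  created_at >= '2024-01-01';

-- In the output, the `key` column shows which index (if any) is used,
-- and `rows` estimates how many rows MySQL expects to examine.
```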

2 GB isn't really a "large" database; it's more of a medium-sized one.


I'm currently managing a MySQL database on Amazon's cloud infrastructure that has grown to 160 GB. Query performance is fine. What has become a nightmare is backups, restores, adding slaves, or anything else that deals with the whole dataset, or even DDL on large tables. Getting a clean import of a dump file has become problematic. In order to make the process stable enough to automate, various choices needed to be made to prioritize stability over performance. If we ever had to recover from a disaster using a SQL backup, we'd be down for days.

Horizontally scaling SQL is also pretty painful, and in most cases it leads to using it in ways you probably did not intend when you chose to put your data in SQL in the first place. Shards, read slaves, multi-master, et al. are all really shitty solutions that add complexity to everything you ever do with the DB, and not one of them solves the problem; they only mitigate it in some ways. I would strongly suggest looking at moving some of your data out of MySQL (or really any SQL) when you start approaching a dataset of a size where these types of things become an issue.

Update: a few years later, our dataset has grown to about 800 GiB. In addition, we have a single 200+ GiB table and several others in the 50-100 GiB range. Everything I said before still holds. Performance is still fine, but the problems with running operations over the full dataset have gotten worse.

I would focus first on your indexes, then have a server admin look at your OS, and if all of that doesn't help, it might be time for a master/slave configuration.

That's true. Another thing that usually works is to reduce the amount of data that's repeatedly worked with. If you have "old data" and "new data" and 99% of your queries work with the new data, just move all the old data to another table, and don't look at it ;)

-> Have a look at partitioning.
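A minimal sketch of that suggestion, with a made-up events table, using MySQL range partitioning so queries on recent data only touch the newest partition:

```sql
-- Hypothetical example: partition by year so queries on recent rows
-- only scan the newest partition (partition pruning).
CREATE TABLE events (
    id         BIGINT NOT NULL,
    created_at DATE   NOT NULL,
    payload    TEXT,
    -- In MySQL, the partitioning column must be part of every unique key:
    PRIMARY KEY (id, created_at)
)
PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2022 VALUES LESS THAN (2023),
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```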

The physical size of the database doesn't matter. The number of records doesn't matter.

In my experience the biggest problem that you are going to run into is not size, but the number of queries you can handle at a time. Most likely you will have to move to a master/slave configuration so that read queries run against the slaves and write queries run against the master. However, if you are not ready for this yet, you can always tweak your indexes for the queries you are running to speed up response times. There is also a lot of tweaking you can do to the network stack and the kernel in Linux that will help.

I'm currently at about 10 GB with only a moderate number of connections, and it handles requests just fine.

I would focus first on your indexes, then have a server admin look at your OS, and if all of that doesn't help, it might be time to implement a master/slave configuration.
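If it does come to that, here is a minimal sketch of attaching a replica to a primary. Host and credentials are placeholders; on MySQL 8.0.23+ the statement is CHANGE REPLICATION SOURCE TO, while older versions use CHANGE MASTER TO:

```sql
-- On the replica: point it at the primary (placeholder host/credentials).
CHANGE REPLICATION SOURCE TO
    SOURCE_HOST = 'primary.example.com',
    SOURCE_USER = 'repl',
    SOURCE_PASSWORD = '...',
    SOURCE_AUTO_POSITION = 1;   -- requires GTID-based replication

START REPLICA;

-- Check replication health:
SHOW REPLICA STATUS\G
```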