I'm working on a project which involves a lot of database writes (70% inserts and 30% reads). This ratio would also include updates which I consider to be one read and one write. The reads can be dirty (e.g. I don't need 100% accurate information at the time of read). The task in question will be doing over 1 million database transactions an hour.
I've read a bunch of stuff on the web about the differences between MyISAM and InnoDB, and MyISAM seems like the obvious choice for the particular database/tables that I'll be using for this task. From what I seem to be reading, InnoDB is good if transactions are needed since row-level locking is supported.
Does anybody have any experience with this type of load (or higher)? Is MyISAM the way to go?
Current answer
For a load with that mix of writes and reads you will benefit from InnoDB. Because InnoDB provides row locking rather than table locking, your SELECTs can run concurrently, not only with each other but also with many INSERTs. However, unless you intend to use SQL transactions, set the InnoDB commit flush to 2 (innodb_flush_log_at_trx_commit). This gives back a lot of raw performance that you would otherwise lose when moving tables from MyISAM over to InnoDB.
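For reference, here is a minimal sketch of how that setting could be applied at runtime (it can also be set in my.cnf; SET GLOBAL requires the appropriate privilege, and the connection parameters are placeholders):

import MySQLdb

# Placeholder credentials; adjust for your environment.
db = MySQLdb.connect(host="...", user="...", passwd="...", db="...")
cur = db.cursor()

# Flush the InnoDB log to disk roughly once per second instead of on every
# commit, trading up to ~1 second of durability for write throughput.
cur.execute("SET GLOBAL innodb_flush_log_at_trx_commit = 2")

# Confirm the change took effect.
cur.execute("SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit'")
print(cur.fetchone())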
Also, consider adding replication. This gives you some read scaling and since you stated your reads don't have to be up-to-date, you can let the replication fall behind a little. Just be sure that it can catch up under anything but the heaviest traffic or it will always be behind and will never catch up. If you go this way, however, I strongly recommend you isolate reading from the slaves and replication lag management to your database handler. It is so much simpler if the application code does not know about this.
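If you do route reads to replicas, one simple way to manage lag inside the database handler is to poll Seconds_Behind_Master on each replica and fall back when it grows too large; a minimal sketch, with placeholder credentials and an arbitrary threshold (newer MySQL versions rename the statement to SHOW REPLICA STATUS):

import MySQLdb
import MySQLdb.cursors

# Placeholder credentials for a replica; adjust for your environment.
replica = MySQLdb.connect(host="replica-host", user="...", passwd="...", db="...",
                          cursorclass=MySQLdb.cursors.DictCursor)
cur = replica.cursor()

# SHOW SLAVE STATUS reports Seconds_Behind_Master, i.e. the current replication lag.
cur.execute("SHOW SLAVE STATUS")
status = cur.fetchone()
lag = status["Seconds_Behind_Master"] if status else None

MAX_ACCEPTABLE_LAG = 30  # seconds; pick whatever your dirty reads can tolerate
if lag is None or lag > MAX_ACCEPTABLE_LAG:
    # Route this read to the primary (or another replica) instead.
    pass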
Finally, pay attention to the workload of each table; you will not have the same read/write ratio on all of them. Some smaller tables that are close to 100% reads can afford to stay on MyISAM. Likewise, if you have some tables that are close to 100% writes, you may benefit from INSERT DELAYED, but that is only supported by MyISAM (the DELAYED clause is ignored for InnoDB tables).
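For completeness, INSERT DELAYED simply takes the place of a regular INSERT and returns as soon as the row is queued; note that it was deprecated in MySQL 5.6 and removed in 5.7, so it only applies to older servers. The table and columns below are hypothetical:

import MySQLdb

# Placeholder credentials; 'event_log' and its columns are illustrative only.
db = MySQLdb.connect(host="...", user="...", passwd="...", db="...")
cur = db.cursor()

# The row is queued server-side and written when the table is idle
# (MyISAM, MEMORY and ARCHIVE tables only).
cur.execute("INSERT DELAYED INTO event_log (value1, value2) VALUES (0.42, 0.17)")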
But benchmark it to be sure.
Other answers
To add to the broad selection of answers here covering the mechanical differences between the two engines, I present an empirical speed comparison study.
In terms of pure speed, it is not always the case that MyISAM is faster than InnoDB, but in my experience it tends to be faster by a factor of around 2.0-2.5x for pure READ workloads. Clearly this does not apply to every environment - as others have written, MyISAM lacks such things as transactions and foreign keys.
I've done a bit of benchmarking below - I used Python for the looping and the timeit library for the timing comparisons. Out of interest I also included the MEMORY engine, which gives the best performance across the board, although it is only suitable for smaller tables (you repeatedly run into The table 'tbl' is full once you exceed the MySQL memory limit). The four types of select I looked at are:
1) vanilla SELECTs
2) counts
3) conditional SELECTs
4) indexed and non-indexed sub-selects
Firstly, I created three tables using the following SQL:
CREATE TABLE
    data_interrogation.test_table_myisam
    (
        index_col BIGINT NOT NULL AUTO_INCREMENT,
        value1 DOUBLE,
        value2 DOUBLE,
        value3 DOUBLE,
        value4 DOUBLE,
        PRIMARY KEY (index_col)
    )
    ENGINE=MyISAM DEFAULT CHARSET=utf8
with 'MyISAM' substituted by 'InnoDB' and 'memory' for the second and third tables.
1) Vanilla SELECTs
Query: SELECT * FROM tbl WHERE index_col = xx
Result: draw
These are all roughly the same speed and, as expected, scale linearly with the number of records to be selected. InnoDB appears slightly faster than MyISAM, but this is really marginal.
Code:
import timeit
import MySQLdb
import MySQLdb.cursors
import random
from random import randint

db = MySQLdb.connect(host="...", user="...", passwd="...", db="...", cursorclass=MySQLdb.cursors.DictCursor)
cur = db.cursor()

lengthOfTable = 100000

# Fill up the tables with random data
for x in xrange(lengthOfTable):
    rand1 = random.random()
    rand2 = random.random()
    rand3 = random.random()
    rand4 = random.random()

    insertString = "INSERT INTO test_table_innodb (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"
    insertString2 = "INSERT INTO test_table_myisam (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"
    insertString3 = "INSERT INTO test_table_memory (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"

    cur.execute(insertString)
    cur.execute(insertString2)
    cur.execute(insertString3)

db.commit()

# Define a function to pull a certain number of records from these tables
def selectRandomRecords(testTable,numberOfRecords):
    for x in xrange(numberOfRecords):
        rand1 = randint(0,lengthOfTable)
        selectString = "SELECT * FROM " + testTable + " WHERE index_col = " + str(rand1)
        cur.execute(selectString)

setupString = "from __main__ import selectRandomRecords"

# Test time taken using timeit
myisam_times = []
innodb_times = []
memory_times = []

for theLength in [3,10,30,100,300,1000,3000,10000]:
    innodb_times.append( timeit.timeit('selectRandomRecords("test_table_innodb",' + str(theLength) + ')', number=100, setup=setupString) )
    myisam_times.append( timeit.timeit('selectRandomRecords("test_table_myisam",' + str(theLength) + ')', number=100, setup=setupString) )
    memory_times.append( timeit.timeit('selectRandomRecords("test_table_memory",' + str(theLength) + ')', number=100, setup=setupString) )
2) Counts
Query: SELECT count(*) FROM tbl
Result: MyISAM wins
This one illustrates a big difference between MyISAM and InnoDB - MyISAM (and MEMORY) keeps track of the number of records in the table, so this transaction is fast and O(1). The amount of time required for InnoDB to count increased super-linearly with table size over the range I investigated. I suspect that many of the speed-ups from MyISAM queries observed in practice are due to similar effects. (A short aside after the code below shows how to fetch an approximate count on InnoDB without a full scan.)
Code:
myisam_times = []
innodb_times = []
memory_times = []

# Define a function to count the records
def countRecords(testTable):
    selectString = "SELECT count(*) FROM " + testTable
    cur.execute(selectString)

setupString = "from __main__ import countRecords"

# Truncate the tables and re-fill with a set amount of data
for theLength in [3,10,30,100,300,1000,3000,10000,30000,100000]:
    truncateString = "TRUNCATE test_table_innodb"
    truncateString2 = "TRUNCATE test_table_myisam"
    truncateString3 = "TRUNCATE test_table_memory"

    cur.execute(truncateString)
    cur.execute(truncateString2)
    cur.execute(truncateString3)

    for x in xrange(theLength):
        rand1 = random.random()
        rand2 = random.random()
        rand3 = random.random()
        rand4 = random.random()

        insertString = "INSERT INTO test_table_innodb (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"
        insertString2 = "INSERT INTO test_table_myisam (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"
        insertString3 = "INSERT INTO test_table_memory (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"

        cur.execute(insertString)
        cur.execute(insertString2)
        cur.execute(insertString3)

    db.commit()

    # Count and time the query
    innodb_times.append( timeit.timeit('countRecords("test_table_innodb")', number=100, setup=setupString) )
    myisam_times.append( timeit.timeit('countRecords("test_table_myisam")', number=100, setup=setupString) )
    memory_times.append( timeit.timeit('countRecords("test_table_memory")', number=100, setup=setupString) )
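As an aside, if an approximate count is acceptable on InnoDB (and the question explicitly allows dirty reads), the data dictionary can be queried instead of scanning the table; a minimal sketch reusing the cursor from the benchmark above - note that TABLE_ROWS is exact for MyISAM but only an estimate for InnoDB:

# Approximate row count from the data dictionary; no table scan required.
cur.execute(
    "SELECT TABLE_ROWS FROM information_schema.TABLES "
    "WHERE TABLE_SCHEMA = 'data_interrogation' AND TABLE_NAME = 'test_table_innodb'"
)
print(cur.fetchone())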
3) Conditional SELECTs
Query: SELECT * FROM tbl WHERE value1 < 0.5 AND value2 < 0.5 AND value3 < 0.5 AND value4 < 0.5
Result: MyISAM wins
Here, MyISAM and MEMORY perform about the same, and beat InnoDB by about 50% for larger tables. This is the sort of query for which the benefits of MyISAM appear to be maximised.
Code:
myisam_times = []
innodb_times = []
memory_times = []

# Define a function to perform conditional selects
def conditionalSelect(testTable):
    selectString = "SELECT * FROM " + testTable + " WHERE value1 < 0.5 AND value2 < 0.5 AND value3 < 0.5 AND value4 < 0.5"
    cur.execute(selectString)

setupString = "from __main__ import conditionalSelect"

# Truncate the tables and re-fill with a set amount of data
for theLength in [3,10,30,100,300,1000,3000,10000,30000,100000]:
    truncateString = "TRUNCATE test_table_innodb"
    truncateString2 = "TRUNCATE test_table_myisam"
    truncateString3 = "TRUNCATE test_table_memory"

    cur.execute(truncateString)
    cur.execute(truncateString2)
    cur.execute(truncateString3)

    for x in xrange(theLength):
        rand1 = random.random()
        rand2 = random.random()
        rand3 = random.random()
        rand4 = random.random()

        insertString = "INSERT INTO test_table_innodb (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"
        insertString2 = "INSERT INTO test_table_myisam (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"
        insertString3 = "INSERT INTO test_table_memory (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"

        cur.execute(insertString)
        cur.execute(insertString2)
        cur.execute(insertString3)

    db.commit()

    # Count and time the query
    innodb_times.append( timeit.timeit('conditionalSelect("test_table_innodb")', number=100, setup=setupString) )
    myisam_times.append( timeit.timeit('conditionalSelect("test_table_myisam")', number=100, setup=setupString) )
    memory_times.append( timeit.timeit('conditionalSelect("test_table_memory")', number=100, setup=setupString) )
4) Sub-selects
Result: InnoDB wins
For this query, I created an additional set of tables for the sub-select. Each is simply two columns of BIGINTs, one with a primary key index and one without any index. Due to the large table size, I didn't test the MEMORY engine. The SQL table creation command was:
CREATE TABLE
    subselect_myisam
    (
        index_col bigint NOT NULL,
        non_index_col bigint,
        PRIMARY KEY (index_col)
    )
    ENGINE=MyISAM DEFAULT CHARSET=utf8;
with 'MyISAM' again substituted by 'InnoDB' for the second table.
In this query, I leave the size of the selection table at 1000000 and instead vary the size of the sub-selected columns.
Here, InnoDB wins easily. After we get to a reasonably sized table, both engines scale linearly with the size of the sub-select. The index speeds up the MyISAM command but interestingly has little effect on the InnoDB speed. (Plot: subSelect.png)
Code:
myisam_times = []
innodb_times = []
myisam_times_2 = []
innodb_times_2 = []

def subSelectRecordsIndexed(testTable,testSubSelect):
    selectString = "SELECT * FROM " + testTable + " WHERE index_col in ( SELECT index_col FROM " + testSubSelect + " )"
    cur.execute(selectString)

setupString = "from __main__ import subSelectRecordsIndexed"

def subSelectRecordsNotIndexed(testTable,testSubSelect):
    selectString = "SELECT * FROM " + testTable + " WHERE index_col in ( SELECT non_index_col FROM " + testSubSelect + " )"
    cur.execute(selectString)

setupString2 = "from __main__ import subSelectRecordsNotIndexed"

# Truncate the old tables, and re-fill with 1000000 records
truncateString = "TRUNCATE test_table_innodb"
truncateString2 = "TRUNCATE test_table_myisam"

cur.execute(truncateString)
cur.execute(truncateString2)

lengthOfTable = 1000000

# Fill up the tables with random data
for x in xrange(lengthOfTable):
    rand1 = random.random()
    rand2 = random.random()
    rand3 = random.random()
    rand4 = random.random()

    insertString = "INSERT INTO test_table_innodb (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"
    insertString2 = "INSERT INTO test_table_myisam (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"

    cur.execute(insertString)
    cur.execute(insertString2)

for theLength in [3,10,30,100,300,1000,3000,10000,30000,100000]:
    truncateString = "TRUNCATE subselect_innodb"
    truncateString2 = "TRUNCATE subselect_myisam"

    cur.execute(truncateString)
    cur.execute(truncateString2)

    # For each length, empty the table and re-fill it with random data
    rand_sample = sorted(random.sample(xrange(lengthOfTable), theLength))
    rand_sample_2 = random.sample(xrange(lengthOfTable), theLength)

    for (the_value_1,the_value_2) in zip(rand_sample,rand_sample_2):
        insertString = "INSERT INTO subselect_innodb (index_col,non_index_col) VALUES (" + str(the_value_1) + "," + str(the_value_2) + ")"
        insertString2 = "INSERT INTO subselect_myisam (index_col,non_index_col) VALUES (" + str(the_value_1) + "," + str(the_value_2) + ")"

        cur.execute(insertString)
        cur.execute(insertString2)

    db.commit()

    # Finally, time the queries
    innodb_times.append( timeit.timeit('subSelectRecordsIndexed("test_table_innodb","subselect_innodb")', number=100, setup=setupString) )
    myisam_times.append( timeit.timeit('subSelectRecordsIndexed("test_table_myisam","subselect_myisam")', number=100, setup=setupString) )
    innodb_times_2.append( timeit.timeit('subSelectRecordsNotIndexed("test_table_innodb","subselect_innodb")', number=100, setup=setupString2) )
    myisam_times_2.append( timeit.timeit('subSelectRecordsNotIndexed("test_table_myisam","subselect_myisam")', number=100, setup=setupString2) )
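As an aside, older MySQL versions (before the optimizer improvements in 5.6) often executed IN (SELECT ...) inefficiently; rewriting it as a JOIN is a common workaround. A minimal sketch reusing the tables and cursor above, equivalent to the indexed sub-select because index_col is unique in the sub-select table:

# Same result set as subSelectRecordsIndexed for the InnoDB pair,
# expressed as a JOIN instead of an IN sub-query.
joinString = ("SELECT t.* FROM test_table_innodb t "
              "JOIN subselect_innodb s ON t.index_col = s.index_col")
cur.execute(joinString)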
I think the take-home message from all of this is that, if you really care about speed, you need to benchmark the queries you are actually running rather than make assumptions about which engine will be more suitable.
Bottom line: if you work offline with selects on large chunks of data, MyISAM will probably give you better (much better) speeds.
There are some situations in which MyISAM is far more efficient than InnoDB: when manipulating large data dumps offline (because of the table lock).
Example: I was converting a csv file (15M records) from NOAA which uses VARCHAR fields as keys. InnoDB was taking forever, even with large amounts of memory available.
This is an example of the csv (the first and third fields are the keys):
USC00178998,20130101,TMAX,-22,,,7,0700
USC00178998,20130101,TMIN,-117,,,7,0700
USC00178998,20130101,TOBS,-28,,,7,0700
USC00178998,20130101,PRCP,0,T,,7,0700
USC00178998,20130101,SNOW,0,T,,7,
Because what I need to do is run a batched offline update of observed weather phenomena, I use a MyISAM table to receive the incoming data and then run joins on the keys, so that I can clean the incoming file and replace the VARCHAR fields with INT keys (related to external tables where the original VARCHAR values are stored).
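A rough sketch of that kind of staging workflow, using the same MySQLdb pattern as the benchmarks above; every table, column and file path here is illustrative rather than the author's actual schema, and LOAD DATA LOCAL INFILE requires local_infile to be enabled:

import MySQLdb

# Placeholder credentials; all object names below are hypothetical.
db = MySQLdb.connect(host="...", user="...", passwd="...", db="...")
cur = db.cursor()

# 1. Bulk-load the raw CSV into a MyISAM staging table.
cur.execute(
    "LOAD DATA LOCAL INFILE '/path/to/noaa.csv' INTO TABLE staging_myisam "
    "FIELDS TERMINATED BY ',' "
    "(station_code, obs_date, element_code, value, mflag, qflag, sflag, obs_time)"
)

# 2. Replace the VARCHAR keys with INT surrogate keys by joining lookup tables.
cur.execute(
    "INSERT INTO observations_clean (station_id, obs_date, element_id, value) "
    "SELECT st.station_id, s.obs_date, el.element_id, s.value "
    "FROM staging_myisam s "
    "JOIN stations st ON st.station_code = s.station_code "
    "JOIN elements el ON el.element_code = s.element_code"
)
db.commit()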
This question and most of the answers here are out of date.
Yes, "MyISAM is faster than InnoDB" is an old wives' tale. Note the date on the question: 2008; it is now almost a decade later, and InnoDB has made significant performance strides since then.
The dramatic graph was for the one case where MyISAM wins: COUNT(*) without a WHERE clause. But is that really what you spend your time doing?
If you run concurrency tests, InnoDB is likely to win, even against MEMORY.
If you perform any writes while benchmarking the SELECTs, MyISAM and MEMORY are likely to lose because of their table-level locking.
In fact, Oracle is so sure that InnoDB is better that they have all but removed MyISAM from 8.0.
This question was written in the early days of 5.1. Since then, these major versions were marked "General Availability":
2010: 5.5 (5.5.8 in December)
2013: 5.6 (5.6.10 in February)
2015: 5.7 (5.7.9 in October)
2018: 8.0 (8.0.11 in April)
Bottom line: don't use MyISAM.
Every application has its own performance profile for how it uses its database, and chances are that profile will change over time.
The best thing you can do is test your options. Switching between MyISAM and InnoDB is trivial, so load up some test data, fire jmeter against your site, and see what happens.
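For reference, switching an existing table's engine is a single statement that rebuilds the table (so expect it to take a while on large tables); 'my_table' below is a placeholder, and the cursor follows the same MySQLdb pattern used elsewhere on this page:

import MySQLdb

# Placeholder credentials and table name.
db = MySQLdb.connect(host="...", user="...", passwd="...", db="...")
cur = db.cursor()

# Rebuilds the table under the new engine, copying every row.
cur.execute("ALTER TABLE my_table ENGINE = InnoDB")

# ...and back again if the test says otherwise.
cur.execute("ALTER TABLE my_table ENGINE = MyISAM")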
If it's 70% inserts and 30% reads, then it leans even more towards the InnoDB side.