I have an InnoDB table that tracks online users. It gets updated on every page refresh by a user to keep track of which pages they are on and their last access date to the site. I then have a cron that runs every 15 minutes to delete old records.
I got a "Deadlock found when trying to get lock; try restarting transaction" for about 5 minutes last night, and it appears to happen when running INSERTs into this table. Can someone suggest how to avoid this error?
=== edit ===
Here are the queries that are running:
First visit to the site:
INSERT INTO onlineusers SET
ip = '123.456.789.123',
datetime = now(),
userid = 321,
page = '/thispage',
area = 'thisarea',
type = 3
On each page refresh:
UPDATE onlineusers SET
ip = '123.456.789.123',
datetime = now(),
userid = 321,
page = '/thispage',
area = 'thisarea',
type = 3
WHERE id = 888
Cron every 15 minutes:
DELETE FROM onlineusers WHERE datetime <= now() - INTERVAL 900 SECOND
It then does some counts to log some stats (e.g. members online, visitors online).
A deadlock happens when two transactions are each waiting on a lock the other one holds. Example:
Tx 1: lock A, then B
Tx 2: lock B, then A
There are numerous questions and answers about deadlocks. Each time you insert/update/delete a row, a lock is acquired. To avoid deadlocks, you must make sure that concurrent transactions don't update rows in an order that could result in a deadlock. Generally speaking, try to acquire locks in the same order even in different transactions (e.g. always table A first, then table B).
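As a sketch of that fixed-order idea, both transactions below lock the rows they need with SELECT ... FOR UPDATE sorted by primary key, so neither can hold a higher id while waiting on a lower one (the ids are placeholders, and an `id` primary key on onlineusers is assumed):

START TRANSACTION;
-- Lock rows in ascending id order before modifying them (sketch).
SELECT id FROM onlineusers WHERE id IN (321, 888) ORDER BY id FOR UPDATE;
UPDATE onlineusers SET datetime = now() WHERE id IN (321, 888);
COMMIT;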
Another reason for deadlocks in a database can be missing indexes. When a row is inserted/updated/deleted, the database needs to check the relational constraints, that is, make sure the relations are consistent. To do so, the database needs to check the foreign keys in the related tables. It might result in other locks being acquired than on the row that is modified. Be sure, then, to always have indexes on the foreign keys (and of course the primary keys), otherwise it could result in a table lock instead of a row lock. If table locks happen, lock contention is higher and the likelihood of deadlock increases.
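For example, if onlineusers.userid referenced a users table, indexing the foreign-key column keeps the constraint check on row locks instead of a table scan. This is a sketch; the users table and the constraint/index names are assumptions, not something from the question:

-- Hypothetical schema: index the FK column so constraint checks
-- take row locks rather than locking the whole table.
ALTER TABLE onlineusers
    ADD INDEX idx_onlineusers_userid (userid),
    ADD CONSTRAINT fk_onlineusers_userid
        FOREIGN KEY (userid) REFERENCES users (id);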
One simple trick that can help with most deadlocks is sorting the operations in a specific order.
You get a deadlock when two transactions try to lock two keys in opposite order, e.g.:
connection 1: locks key(1), then key(2);
connection 2: locks key(2), then key(1);
If both run at the same time, connection 1 will lock key(1), connection 2 will lock key(2), and each connection will wait for the other to release its key -> deadlock.
Now, if you change your queries so that the connections lock the keys in the same order, i.e.:
connection 1: locks key(1), then key(2);
connection 2: locks key(1), then key(2);
a deadlock becomes impossible.
So this is what I suggest:
Make sure you have no other queries that lock access to more than one key at a time, except for the delete statement. If you do (and I suspect you do), order their WHERE in (k1,k2,..kn) in ascending order.
Fix your delete statement to work in ascending order:
Change
DELETE FROM onlineusers
WHERE datetime <= now() - INTERVAL 900 SECOND
to
DELETE FROM onlineusers
WHERE id IN (
    SELECT id FROM (
        SELECT id FROM onlineusers
        WHERE datetime <= now() - INTERVAL 900 SECOND
        ORDER BY id
    ) u
)
Another thing to keep in mind is that the MySQL documentation suggests that in case of a deadlock the client should retry automatically. You can add this logic to your client code (say, retry 3 times on this particular error before giving up).
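A minimal sketch of that client-side retry, in Python. The `DeadlockError` class here is a stand-in for whatever exception your driver raises carrying MySQL errno 1213; `run_with_retry` and the delay values are illustrative names, not a real API:

```python
import time

MYSQL_ER_LOCK_DEADLOCK = 1213  # "Deadlock found when trying to get lock"

class DeadlockError(Exception):
    """Stand-in for a driver exception carrying MySQL errno 1213."""
    errno = MYSQL_ER_LOCK_DEADLOCK

def run_with_retry(txn, max_attempts=3, base_delay=0.05):
    """Run txn(); if it raises a deadlock error, retry after a short,
    growing pause. Re-raise once max_attempts have all failed."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except DeadlockError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * attempt)  # back off before retrying
```

Each retry re-runs the whole transaction, which is safe here because InnoDB rolls back the deadlocked transaction before reporting error 1213.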
You can try having that delete job first insert the key of each row to be deleted into a temp table, like this pseudocode:
create temporary table deletetemp (userid int);
insert into deletetemp (userid)
select userid from onlineusers where datetime <= now() - interval 900 second;
delete from onlineusers where userid in (select userid from deletetemp);
Breaking it up like this is less efficient, but it avoids the need to hold a key-range lock during the delete.
Also, modify your SELECT queries to add a WHERE clause excluding rows older than 900 seconds. This removes the dependency on the cron job and lets you reschedule it to run less often.
Theory about the deadlocks: I don't have a lot of background in MySQL, but here goes... The delete is going to hold a key-range lock for datetime, to prevent rows matching its WHERE clause from being added in the middle of the transaction, and as it finds rows to delete it will attempt to acquire a lock on each page it is modifying. The insert is going to acquire a lock on the page it is inserting into, and then attempt to acquire the key lock. Normally the insert will wait patiently for that key lock to open up, but this will deadlock if the delete tries to lock the same page the insert is using, because the delete needs that page lock and the insert needs that key lock. This doesn't seem right for inserts, though; the delete and insert are using datetime ranges that don't overlap, so maybe something else is going on.
http://dev.mysql.com/doc/refman/5.1/en/innodb-next-key-locking.html
Cron is dangerous. If one instance of the cron job fails to finish before the next one is due, they are likely to fight each other.
It would be better to have a continuously running job that deletes some rows, sleeps for a while, then repeats.
Also, INDEX(datetime) is very important for avoiding deadlocks.
But, if the datetime test matches more than about 20% of the table, the DELETE will do a table scan. Deleting smaller chunks more often is a workaround.
Another reason for going with smaller chunks is to lock fewer rows.
Bottom line:
INDEX(datetime)
A continuously running task -- delete, sleep a minute, repeat.
To make sure the above task has not died, have a cron job whose sole purpose is to restart it upon failure.
Other deletion techniques: http://mysql.rjweb.org/doc.php/deletebig
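The delete-sleep-repeat loop could look like the sketch below. It uses SQLite in place of MySQL purely so the example is self-contained (against MySQL the statement would be `DELETE FROM onlineusers WHERE datetime <= NOW() - INTERVAL 900 SECOND LIMIT n`); the function name, chunk size, and the unix-timestamp `datetime` column are assumptions for illustration:

```python
import sqlite3
import time

def purge_loop(conn, chunk=1000, pause=60, run_once=False):
    """Delete expired rows in small chunks; sleep between full sweeps.
    In this sketch, datetime holds a unix timestamp."""
    while True:
        cur = conn.execute(
            "DELETE FROM onlineusers WHERE rowid IN ("
            "  SELECT rowid FROM onlineusers"
            "  WHERE datetime <= strftime('%s','now') - 900"
            "  ORDER BY rowid LIMIT ?)",
            (chunk,),
        )
        conn.commit()
        if cur.rowcount < chunk:   # table is (nearly) clean
            if run_once:
                return
            time.sleep(pause)      # rest before the next sweep
```

Small chunks keep each DELETE short, so it locks fewer rows and holds them for less time, which is exactly what reduces the deadlock window against concurrent INSERTs and UPDATEs.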
The answer from @Omry Yadan (https://stackoverflow.com/a/2423921/1810962) can be simplified by using ORDER BY.
Change
DELETE FROM onlineusers
WHERE datetime <= now() - INTERVAL 900 SECOND
to
DELETE FROM onlineusers
WHERE datetime <= now() - INTERVAL 900 SECOND
ORDER BY id
to keep the order in which you delete items consistent. Also, if you are doing multiple inserts in a single transaction, make sure they are also always ordered by id.
According to the MySQL delete documentation:
If the ORDER BY clause is specified, the rows are deleted in the order that is specified.
You can find a reference here: https://dev.mysql.com/doc/refman/8.0/en/delete.html
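As a sketch of that batch-insert advice (the ids and values are placeholders, not from the question), the rows of a multi-row INSERT can simply be listed in ascending id order:

-- Hypothetical batch: explicit ids listed in ascending order, so
-- concurrent transactions acquire their row locks in the same sequence.
INSERT INTO onlineusers (id, ip, datetime, userid, page, area, type) VALUES
    (101, '10.0.0.1', now(), 321, '/page-a', 'thisarea', 3),
    (102, '10.0.0.2', now(), 322, '/page-b', 'thisarea', 3);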