I have an InnoDB table that records online users. It gets updated on every page refresh by a user to keep track of which pages they are on and the date they last accessed the site. I then have a cron that runs every 15 minutes to delete old records.
I got a 'Deadlock found when trying to get lock; try restarting transaction' for about 5 minutes last night, and it appears to happen when running inserts into this table. Can someone suggest how to avoid this error?
=== edit ===
Here are the queries that are being run:
On first visit to a page:
INSERT INTO onlineusers SET
    ip = '123.456.789.123',
    datetime = now(),
    userid = 321,
    page = '/thispage',
    area = 'thisarea',
    type = 3
On each page refresh:
UPDATE onlineusers SET
    ip = '123.456.789.123',
    datetime = now(),
    userid = 321,
    page = '/thispage',
    area = 'thisarea',
    type = 3
WHERE id = 888
Cron every 15 minutes:
DELETE FROM onlineusers WHERE datetime <= now() - INTERVAL 900 SECOND
It then does some counts to log some stats (e.g. members online, visitors online).
A deadlock happens when two transactions are each waiting for a lock that the other one holds. Example:
Tx 1: locks A, then B
Tx 2: locks B, then A
There are plenty of questions and answers about deadlocks. Every time you insert/update/delete a row, a lock is acquired. To avoid deadlocks, you must make sure that concurrent transactions don't update rows in an order that could result in a deadlock. Generally speaking, try to acquire locks in the same order even in different transactions (e.g. always table A first, then table B).
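As a sketch of what "same order in every transaction" means in practice (the table and column names here are illustrative, not from the question):

-- Transaction 1: lock the accounts row first, then the balances row.
START TRANSACTION;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;          -- lock A
SELECT * FROM balances WHERE account_id = 1 FOR UPDATE;  -- lock B
COMMIT;

-- Transaction 2: acquires the same locks in the same order, so it can
-- only ever wait behind transaction 1, never deadlock with it.
START TRANSACTION;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;          -- lock A
SELECT * FROM balances WHERE account_id = 1 FOR UPDATE;  -- lock B
COMMIT;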
Another reason for deadlocks in a database can be missing indexes. When a row is inserted/updated/deleted, the database needs to check the relational constraints, that is, make sure the relations are consistent. To do so, it has to check the foreign keys in the related tables, which can result in locks being acquired on rows other than the one being modified. Be sure to always have an index on the foreign keys (and of course the primary keys), otherwise it could result in a table lock instead of a row lock. If table locks happen, lock contention is higher and the likelihood of deadlock increases.
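For example, if onlineusers.userid referenced a users table (a hypothetical relation, the question doesn't show the schema), the referencing column should be indexed, and an index on datetime also helps the cron delete:

-- Hypothetical: index the foreign-key column so constraint checks take
-- row locks on a few index entries instead of scanning the table.
ALTER TABLE onlineusers ADD INDEX idx_onlineusers_userid (userid);

-- Also useful for the cron job: an index on datetime lets the delete
-- lock an index range rather than far more rows than it needs.
ALTER TABLE onlineusers ADD INDEX idx_onlineusers_datetime (datetime);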
One thing you can try is changing the delete job so that it first inserts the key of each row to be deleted into a temp table, like this pseudocode:
create temporary table deletetemp (userid int);

insert into deletetemp (userid)
  select userid from onlineusers where datetime <= now() - interval 900 second; -- uses the index on datetime, if present

delete from onlineusers where userid in (select userid from deletetemp);
Breaking it up like this is less efficient, but it avoids the need to hold a key-range lock during the delete.
Also, modify your select queries to add a where clause excluding rows older than 900 seconds. This removes the dependency on the cron job and allows you to reschedule it to run less often.
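For example, the "members online" count mentioned in the question could ignore stale rows directly (a sketch; the userid = 0 convention for guests is an assumption, not something the question states):

-- Count distinct members seen in the last 15 minutes; stale rows are
-- simply filtered out, so the cron delete no longer has to be punctual.
SELECT COUNT(DISTINCT userid) AS members_online
FROM onlineusers
WHERE datetime > now() - INTERVAL 900 SECOND
  AND userid <> 0;  -- assumption: guests are stored with userid = 0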
Theory about the deadlocks: I don't have a lot of background in MySQL, but here goes... The delete is going to hold a key-range lock for datetime, to prevent rows matching its where clause from being added in the middle of the transaction, and as it finds rows to delete it will attempt to acquire a lock on each page it is modifying. The insert is going to acquire a lock on the page it is inserting into, and then attempt to acquire the key lock. Normally the insert will wait patiently for that key lock to open up, but this will deadlock if the delete tries to lock the same page the insert is using, because the delete needs that page lock and the insert needs that key lock. This doesn't seem right for inserts though; the delete and insert are using datetime ranges that don't overlap, so maybe something else is going on.
http://dev.mysql.com/doc/refman/5.1/en/innodb-next-key-locking.html
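If you want to confirm which locks are actually involved, InnoDB keeps a record of the most recent deadlock that you can inspect (a standard MySQL command, shown here purely as a diagnostic aid):

-- The "LATEST DETECTED DEADLOCK" section of the output lists the two
-- transactions, the statements they were executing, and the locks each
-- one held and was waiting for.
SHOW ENGINE INNODB STATUS;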
The answer from @Omry Yadan (https://stackoverflow.com/a/2423921/1810962) can be simplified with an ORDER BY.
Change
DELETE FROM onlineusers
WHERE datetime <= now() - INTERVAL 900 SECOND
to
DELETE FROM onlineusers
WHERE datetime <= now() - INTERVAL 900 SECOND
ORDER BY ID
to keep the order in which you delete items consistent. Also, if you are doing multiple inserts in a single transaction, make sure they are also always ordered by id (see the sketch below).
According to the mysql delete documentation:
If the ORDER BY clause is specified, the rows are deleted in the order that is specified.
You can find the reference here: https://dev.mysql.com/doc/refman/8.0/en/delete.html
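A minimal sketch of what touching several rows "ordered by id" inside one transaction looks like (updates are used for illustration and the id values are placeholders):

START TRANSACTION;
-- Touch rows in ascending id order so that every concurrent transaction
-- acquires its row locks in the same sequence.
UPDATE onlineusers SET datetime = now() WHERE id = 101;
UPDATE onlineusers SET datetime = now() WHERE id = 205;
UPDATE onlineusers SET datetime = now() WHERE id = 888;
COMMIT;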