What is a quick and easy way to ensure that only one instance of a shell script is running at a given time?
Current answer
You need an atomic operation like flock, otherwise this will eventually fail.
But what if flock is not available? There is mkdir. It is also an atomic operation: only one process will succeed at the mkdir, all the others will fail.
So the code is:
if mkdir /var/lock/.myscript.exclusivelock
then
    # do stuff
    :
    rmdir /var/lock/.myscript.exclusivelock
fi
You need to handle stale locks, otherwise after a crash your script will never run again.
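One possible way to handle that, sketched here purely as an illustration (the pid file inside the lock directory and the exact paths are assumptions, not part of the original answer): record the owning PID inside the lock directory and treat the lock as stale when that PID is no longer alive. Note that the stale-lock removal itself is not atomic, so this is best-effort only.

#!/bin/bash
LOCKDIR=/var/lock/.myscript.exclusivelock

if ! mkdir "$LOCKDIR" 2>/dev/null; then
    # Lock already exists: see whether its owner is still alive.
    otherpid=$(cat "$LOCKDIR/pid" 2>/dev/null)
    if [ -n "$otherpid" ] && kill -0 "$otherpid" 2>/dev/null; then
        echo "Already running as PID $otherpid" >&2
        exit 1
    fi
    # Stale lock left over from a crash: remove it and retry once.
    rm -rf "$LOCKDIR"
    mkdir "$LOCKDIR" || exit 1
fi
echo "$$" > "$LOCKDIR/pid"
trap 'rm -rf "$LOCKDIR"' EXIT

# do stuff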
Other answers
A PID plus a lock file is definitely the most reliable approach. When you try to run the program, it can check for the lock file and, if it exists, use ps to see whether that process is still running. If it is not, the script can start and update the PID stored in the lock file to its own.
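A minimal sketch of that idea, assuming the lock file lives at /var/run/myscript.pid (the path is just an example). As the later answers point out, the check-then-write sequence is not atomic, so a small race window remains:

#!/bin/sh
PIDFILE=/var/run/myscript.pid

if [ -f "$PIDFILE" ] && ps -p "$(cat "$PIDFILE")" > /dev/null 2>&1; then
    echo "Already running as PID $(cat "$PIDFILE")"
    exit 1
fi

# Either there is no lock file, or its process is gone: take over the lock.
echo "$$" > "$PIDFILE"

# ... do work here ...

rm -f "$PIDFILE"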
Quick and dirty?
#!/bin/sh
if [ -f sometempfile ]; then
    echo "Already running... will now terminate."
    exit
else
    touch sometempfile
fi

# ..do what you want here..

rm sometempfile
I have the following issues with the existing answers:

- Some answers try to clean up lock files and then have to deal with stale lock files caused by e.g. a sudden crash or reboot. IMO that is unnecessarily complicated. Let lock files stay.
- Some answers use the script file itself, $0 or $BASH_SOURCE, for locking, often referring to examples from man flock (see the snippet after this list). This fails when the script is replaced due to an update or edit, causing the next run to open and obtain a lock on the new script file even though another instance holding a lock on the removed file is still running.
- A few answers use a fixed file descriptor. This is not ideal. I do not want to rely on how this will behave, e.g. opening the lock file fails but gets mishandled and the script attempts to lock an unrelated file descriptor inherited from the parent process. Another failure case is injecting a locking wrapper around a 3rd-party binary that does not handle locking itself, where fixed file descriptors can interfere with file descriptor passing to child processes.
- I reject answers that look up processes by the running script's name. There are several reasons for that, such as but not limited to reliability/atomicity, having to parse output, and having a script that does several related functions, some of which do not require locking.
This answer:

- Relies on flock, because it gets the kernel to provide the locking ... provided the lock file is created atomically and not replaced.
- Assumes and relies on the lock file being stored on a local filesystem, as opposed to NFS.
- Changes the lock file's presence to NOT mean anything about a running instance. Its role is purely to prevent two concurrent instances from creating a file with the same name and replacing the other's copy. The lock file does not get deleted; it gets left behind and can survive across reboots. The locking is indicated via flock, not via lock file presence.
- Assumes the bash shell, as tagged by the question.
It is not a one-liner, but without the comments and error messages it is small enough:
#!/bin/bash
LOCKFILE=/var/lock/TODO
set -o noclobber
exec {lockfd}<> "${LOCKFILE}" || exit 1
set +o noclobber # depends on what you need
flock --exclusive --nonblock ${lockfd} || exit 1
But I prefer the version with comments and error messages:
#!/bin/bash
# TODO Set a lock file name
LOCKFILE=/var/lock/myprogram.lock
# Set noclobber option to ensure lock file is not REPLACED.
set -o noclobber
# Open lock file for R+W on a new file descriptor
# and assign the new file descriptor to "lockfd" variable.
# This does NOT obtain a lock but ensures the file exists and opens it.
exec {lockfd}<> "${LOCKFILE}" || {
    echo "pid=$$ failed to open LOCKFILE='${LOCKFILE}'" 1>&2
    exit 1
}
# TODO!!!! undo/set the desired noclobber value for the remainder of the script
set +o noclobber
# Lock on the allocated file descriptor or fail
# Adjust flock options e.g. --nonblock as needed
flock --exclusive --nonblock ${lockfd} || {
    echo "pid=$$ failed to obtain lock fd='${lockfd}' LOCKFILE='${LOCKFILE}'" 1>&2
    exit 1
}
# DO work here
echo "pid=$$ obtained exclusive lock fd='${lockfd}' LOCKFILE='${LOCKFILE}'"
# Can unlock after critical section and do more work after unlocking
#flock -u ${lockfd};
# if unlocking then might as well close lockfd too
#exec {lockfd}<&-
Actually, although bmdhacks's answer is almost good, there is a slight chance for the second script to run after the first one has checked the lock file but before it has written it. Both would then write the lock file and both would run. Here is how to make sure it works:
lockfile=/var/lock/myscript.lock

if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; then
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT

    # do stuff here
else
    # or you can decide to skip the "else" part if you want
    echo "Another instance is already running!"
fi
The noclobber option ensures that the redirection will fail if the file already exists. So the redirection is effectively atomic: you write and check the file with a single command. You do not need to remove the lock file at the end of the script; it will be removed by the trap. I hope this helps people who read it later.
Also, I did not notice that Mikel had already answered the question correctly, although he did not include the trap command to reduce the chance of leaving the lock file behind after stopping the script with Ctrl-C, for example. So the code above is the complete solution.
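To see the noclobber behaviour in isolation, here is a hypothetical interactive session (the file name is arbitrary and the exact error text may vary slightly between shells):

$ set -o noclobber
$ echo "$$" > /tmp/demo.lock    # succeeds: the file did not exist yet
$ echo "$$" > /tmp/demo.lock    # fails: noclobber refuses to overwrite
bash: /tmp/demo.lock: cannot overwrite existing file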