What is a quick and simple way to ensure that only one instance of a given shell script is running at a time?
Current answer
You need an atomic operation, such as flock, otherwise this will eventually fail.
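For reference, a minimal sketch of the flock approach might look like this (the lock file path and the file descriptor number are arbitrary illustrative choices):
#!/bin/bash
# Open the lock file on fd 9, then try to take an exclusive, non-blocking lock.
exec 9>/var/lock/myscript.lock || exit 1
if ! flock --exclusive --nonblock 9; then
    echo "Another instance is already running" >&2
    exit 1
fi
# ... do the work; the lock is released automatically when the script exits ...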
But what if flock is not available? Well, there is mkdir. It is an atomic operation too: only one process will succeed with the mkdir, all the others will fail.
So the code is:
if mkdir /var/lock/.myscript.exclusivelock
then
    # do stuff
    :
    rmdir /var/lock/.myscript.exclusivelock
fi
You need to take care of stale locks, otherwise after a crash your script will never run again.
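One way to reduce (though not eliminate) that problem, as a sketch built on the mkdir example above, is to remove the lock directory on exit and on common signals:
#!/bin/bash
# Sketch: clean up the lock directory on normal exit and on INT/TERM.
# This does NOT help after a hard crash or power loss; for that you would also
# need to record the owner PID inside the lock and check whether it is still alive.
LOCKDIR=/var/lock/.myscript.exclusivelock

if mkdir "$LOCKDIR" 2>/dev/null; then
    cleanup() { rmdir "$LOCKDIR" 2>/dev/null; }
    trap cleanup EXIT        # runs on normal exit
    trap 'exit 1' INT TERM   # turn signals into a normal exit so cleanup runs
    # do stuff
    :
else
    echo "Another instance appears to be running ($LOCKDIR exists)" >&2
    exit 1
fi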
Other answers
I have a simple solution based on the file name:
#!/bin/bash

MY_FILENAME=$(basename "$BASH_SOURCE")

# Count other processes whose command line mentions this script's name,
# excluding the grep itself and the current process.
MY_PROCESS_COUNT=$(ps a -o pid,cmd | grep "$MY_FILENAME" | grep -v grep | grep -v $$ | wc -l)

if [ "$MY_PROCESS_COUNT" -ne 0 ]; then
    echo "found another process"
    exit 0
fi

# Follows the code to get the job done.
To make locking reliable you need an atomic operation. Many of the suggestions above are not atomic. The suggested lockfile(1) utility looks promising, since the man page mentions that it is "NFS-resistant". If your OS does not support lockfile(1) and your solution has to work on NFS, you do not have many options...
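For completeness, before the NFS case: where lockfile(1) from the procmail package is available, a small sketch of its use might look like this (the file name and the retry count are illustrative):
#!/bin/bash
# Sketch using lockfile(1); -r 0 means give up immediately if the lock exists.
LOCK=/var/lock/myscript.lock

if ! lockfile -r 0 "$LOCK"; then
    echo "Another instance is running" >&2
    exit 1
fi
trap 'rm -f "$LOCK"' EXIT   # lockfile creates a read-only file, hence rm -f

# do stuff
: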
NFSv2 has two atomic operations:
- symlink
- rename
In NFSv3 the create call is also atomic.
Directory operations are NOT atomic under NFSv2 and NFSv3 (please refer to Brent Callaghan's book "NFS Illustrated", ISBN 0-201-32570-5; Brent is an NFS veteran at Sun).
Knowing this, you can implement spin locks for files and directories (in shell, not PHP):
Lock the current directory:
while ! ln -s . lock; do :; done
Lock a file:
while ! ln -s ${f} ${f}.lock; do :; done
Unlock the current directory (assuming the running process really acquired the lock):
mv lock deleteme && rm deleteme
Unlock a file (assuming the running process really acquired the lock):
mv ${f}.lock ${f}.deleteme && rm ${f}.deleteme
Remove is not atomic either, therefore first the rename (which is atomic) and then the remove.
For the symlink and rename calls, both filenames have to reside on the same filesystem. My suggestion: use only simple filenames (no paths) and put the file and the lock in the same directory.
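Putting the pieces together, a sketch of the file-locking variant could look like this (the file name data.txt and the sleep interval are just examples):
#!/bin/bash
# Sketch: symlink-based spin lock, usable over NFSv2/NFSv3 because symlink()
# and rename() are atomic there. The lock lives next to the file (no paths).
f=data.txt

# Acquire: spin until our symlink creation succeeds.
while ! ln -s "$f" "$f.lock" 2>/dev/null; do
    sleep 1   # avoid a pure busy loop
done

# ... critical section: work on "$f" here ...

# Release: rename first (atomic), then remove.
mv "$f.lock" "$f.deleteme" && rm "$f.deleteme"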
I have the following problems with the existing answers:
- Some answers try to clean up lock files and then have to deal with stale lock files caused by e.g. a sudden crash/reboot. IMO that is unnecessarily complicated. Let lock files stay.
- Some answers use the script file itself, $0 or $BASH_SOURCE, for locking, often referring to examples from man flock. This fails when the script is replaced due to an update or edit, causing the next run to open and obtain a lock on the new script file even though another instance holding a lock on the removed file is still running.
- A few answers use a fixed file descriptor. This is not ideal. I do not want to rely on how this will behave, e.g. opening the lock file fails but gets mishandled and attempts to lock an unrelated file descriptor inherited from the parent process. Another failure case is injecting a locking wrapper for a 3rd-party binary that does not handle locking itself, where fixed file descriptors can interfere with file descriptor passing to child processes.
- I reject answers that use process lookup for an already running script name. There are several reasons for it, such as but not limited to reliability/atomicity, parsing output, and having a script that does several related functions, some of which do not require locking.
This answer does the following:
- Rely on flock, because it gets the kernel to provide locking ... provided the lock file is created atomically and not replaced.
- Assume and rely on the lock file being stored on a local filesystem as opposed to NFS.
- Change lock file presence to NOT mean anything about a running instance. Its role is purely to prevent two concurrent instances creating a file with the same name and replacing the other's copy. The lock file does not get deleted; it gets left behind and can survive across reboots. Locking is indicated via flock, not via lock file presence.
- Assume bash shell, as tagged by the question.
It is not a one-liner, but without the comments and error messages it is small enough:
#!/bin/bash
LOCKFILE=/var/lock/TODO
set -o noclobber
exec {lockfd}<> "${LOCKFILE}" || exit 1
set +o noclobber # depends on what you need
flock --exclusive --nonblock ${lockfd} || exit 1
But I prefer the version with comments and error messages:
#!/bin/bash
# TODO Set a lock file name
LOCKFILE=/var/lock/myprogram.lock
# Set noclobber option to ensure lock file is not REPLACED.
set -o noclobber
# Open lock file for R+W on a new file descriptor
# and assign the new file descriptor to "lockfd" variable.
# This does NOT obtain a lock but ensures the file exists and opens it.
exec {lockfd}<> "${LOCKFILE}" || {
    echo "pid=$$ failed to open LOCKFILE='${LOCKFILE}'" 1>&2
    exit 1
}
# TODO!!!! undo/set the desired noclobber value for the remainder of the script
set +o noclobber
# Lock on the allocated file descriptor or fail
# Adjust flock options e.g. --nonblock as needed
flock --exclusive --nonblock ${lockfd} || {
    echo "pid=$$ failed to obtain lock fd='${lockfd}' LOCKFILE='${LOCKFILE}'" 1>&2
    exit 1
}
# DO work here
echo "pid=$$ obtained exclusive lock fd='${lockfd}' LOCKFILE='${LOCKFILE}'"
# Can unlock after critical section and do more work after unlocking
#flock -u ${lockfd};
# if unlocking then might as well close lockfd too
#exec {lockfd}<&-
Create a lock file in a known location and check for its existence on script startup? Putting the PID in the file can be helpful if someone is trying to track down an errant instance that is preventing the script from running.
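A sketch of that idea (the PID file path is illustrative; note that this check-then-create sequence is not atomic, unlike the flock/mkdir approaches above):
#!/bin/bash
# Sketch: PID file in a known location. Racy between the check and the write,
# but the stored PID makes it easy to find the instance that holds the lock.
PIDFILE=/var/run/myscript.pid

if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "Already running as PID $(cat "$PIDFILE")" >&2
    exit 1
fi
echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT

# do stuff
: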
I use a one-liner at the beginning of the script:
#!/bin/bash
if [[ $(pgrep -afc "$(basename "$0")") -gt "1" ]]; then echo "Another instance of $0 has already been started!" && exit; fi
.
the_beginning_of_actual_script
It just checks for the existence of the process in memory (no matter what the status of that process is), but it does the job for me.