What is a quick and easy way to ensure that only one instance of a given shell script is running at a time?


Current answer

Quick and dirty? This one-liner at the top of your script will do it:

[[ $(pgrep -c "$(basename "$0")") -gt 1 ]] && exit

Of course, just make sure your script name is unique. :)
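For context, the line is meant to sit right at the top of the script, before any real work starts. A minimal sketch, where the script name myjob.sh and the placeholder work are made up:

#!/bin/bash
# Hypothetical script: myjob.sh
# pgrep -c counts processes whose name matches this script's basename;
# this running instance counts as 1, so anything greater means a second copy.
[[ $(pgrep -c "$(basename "$0")") -gt 1 ]] && exit

echo "running as the only instance"
sleep 60   # stand-in for the real work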

Other answers

Quick and dirty?

#!/bin/sh

if [ -f sometempfile ]; then
  echo "Already running... will now terminate."
  exit
else
  touch sometempfile
fi

# ..do what you want here..

rm sometempfile

For shell scripts, I tend to use mkdir rather than flock, as it makes the locks more portable.
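For comparison, the flock variant being referred to usually looks something like this (a minimal sketch; flock(1) comes from util-linux, and the lock path and file-descriptor number are arbitrary):

#!/bin/bash

# Open file descriptor 9 on the lock file, then try to take a
# non-blocking exclusive lock on it. The kernel releases the lock
# automatically when the script exits and the descriptor is closed.
exec 9>/tmp/myscript.lock
flock -n 9 || { echo "Already running." >&2; exit 1; }

# ... do the real work while holding the lock ...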

Either way, using set -e is not enough. That only exits the script when a command fails; your lock will still be left behind.

For proper lock cleanup, you really should set your traps to something like this pseudocode (lifted, simplified, and untested, but taken from actively used scripts):

#=======================================================================
# Predefined Global Variables
#=======================================================================

TMP_DIR=/tmp/myapp
[[ ! -d $TMP_DIR ]] \
    && mkdir -p $TMP_DIR \
    && chmod 700 $TMP_DIR

LOCK_DIR=$TMP_DIR/lock

#=======================================================================
# Functions
#=======================================================================

function mklock {
    __lockdir="$LOCK_DIR/$(date +%s.%N).$$" # Private Global. Use Epoch.Nano.PID

    # If it can create $LOCK_DIR then no other instance is running
    if mkdir "$LOCK_DIR" 2>/dev/null
    then
        mkdir $__lockdir  # create this instance's specific lock in queue
        LOCK_EXISTS=true  # Global
    else
        echo "FATAL: Lock already exists. Another copy is running or manually lock clean up required."
        exit 1001  # Or work out some sleep_while_execution_lock elsewhere
    fi
}

function rmlock {
    [[ ! -d $__lockdir ]] \
        && echo "WARNING: Lock is missing. $__lockdir does not exist" \
        || rmdir $__lockdir
}

#-----------------------------------------------------------------------
# Private Signal Traps Functions {{{2
#
# DANGER: SIGKILL cannot be trapped. So, try not to `kill -9 PID` or 
#         there will be *NO CLEAN UP*. You'll have to manually remove 
#         any locks in place.
#-----------------------------------------------------------------------
function __sig_exit {

    # Place your clean up logic here 

    # Remove the LOCK
    [[ -n $LOCK_EXISTS ]] && rmlock
}

function __sig_int {
    echo "WARNING: SIGINT caught"    
    exit 1002
}

function __sig_quit {
    echo "SIGQUIT caught"
    exit 1003
}

function __sig_term {
    echo "WARNING: SIGTERM caught"    
    exit 1015
}

#=======================================================================
# Main
#=======================================================================

# Set TRAPs
trap __sig_exit EXIT    # EXIT pseudo-signal: runs on any script exit
trap __sig_int INT      # SIGINT
trap __sig_quit QUIT    # SIGQUIT
trap __sig_term TERM    # SIGTERM

mklock

# CODE

exit # No need for cleanup code here being in the __sig_exit trap function

Here is what happens next. All the traps produce an exit, so the __sig_exit function always runs (short of a SIGKILL), and it cleans up your lock.

Note: my exit values are not low values. Why? Various batch systems make or expect the numbers 0 through 31. By setting them to something else, I can have my scripts and batch streams react accordingly to the previous batch job or script.

Here is an approach that combines atomic directory locking with a check for a stale lock via its PID, restarting if the lock is stale. Also, it does not rely on any bashisms.

#!/bin/dash

SCRIPTNAME=$(basename $0)
LOCKDIR="/var/lock/${SCRIPTNAME}"
PIDFILE="${LOCKDIR}/pid"

if ! mkdir $LOCKDIR 2>/dev/null
then
    # lock failed, but check for stale one by checking if the PID is really existing
    PID=$(cat $PIDFILE)
    if ! kill -0 $PID 2>/dev/null
    then
       echo "Removing stale lock of nonexistent PID ${PID}" >&2
       rm -rf $LOCKDIR
       echo "Restarting myself (${SCRIPTNAME})" >&2
       exec "$0" "$@"
    fi
    echo "$SCRIPTNAME is already running, bailing out" >&2
    exit 1
else
    # lock successfully acquired, save PID
    echo $$ > $PIDFILE
fi

trap "rm -rf ${LOCKDIR}" QUIT INT TERM EXIT


echo hello

sleep 30s

echo bye

When targeting a Debian machine I find the lockfile-progs package to be a good solution. procmail also ships with a lockfile tool. However, sometimes I am stuck with neither of these.
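For reference, the basic shape of both tools is roughly this (a sketch only; the lock path /tmp/myjob is illustrative and error handling is minimal):

#!/bin/sh

# Debian's lockfile-progs: lockfile-create retries a few times and exits
# non-zero if it cannot take the lock; lockfile-remove releases it.
lockfile-create /tmp/myjob || exit 1
trap 'lockfile-remove /tmp/myjob' EXIT

# Roughly equivalent with procmail's lockfile tool (-r 0 means no retries):
#   lockfile -r 0 /tmp/myjob.lock || exit 1
#   trap 'rm -f /tmp/myjob.lock' EXIT

# ... critical section ...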

Here is my solution, which uses mkdir for atomicity and a PID file to detect stale locks. This code is currently in production on a Cygwin setup and works well.

To use it, just call exclusive_lock_require when you need exclusive access to something. An optional lock-name parameter lets you share locks between different scripts. There are also two lower-level functions (exclusive_lock_try and exclusive_lock_retry) should you need something more complex.

function exclusive_lock_try() # [lockname]
{

    local LOCK_NAME="${1:-`basename $0`}"

    LOCK_DIR="/tmp/.${LOCK_NAME}.lock"
    local LOCK_PID_FILE="${LOCK_DIR}/${LOCK_NAME}.pid"

    if [ -e "$LOCK_DIR" ]
    then
        local LOCK_PID="`cat "$LOCK_PID_FILE" 2> /dev/null`"
        if [ ! -z "$LOCK_PID" ] && kill -0 "$LOCK_PID" 2> /dev/null
        then
            # locked by non-dead process
            echo "\"$LOCK_NAME\" lock currently held by PID $LOCK_PID"
            return 1
        else
            # orphaned lock, take it over
            ( echo $$ > "$LOCK_PID_FILE" ) 2> /dev/null && local LOCK_PID="$$"
        fi
    fi
    if [ "`trap -p EXIT`" != "" ]
    then
        # already have an EXIT trap
        echo "Cannot get lock, already have an EXIT trap"
        return 1
    fi
    if [ "$LOCK_PID" != "$$" ] &&
        ! ( umask 077 && mkdir "$LOCK_DIR" && umask 177 && echo $$ > "$LOCK_PID_FILE" ) 2> /dev/null
    then
        local LOCK_PID="`cat "$LOCK_PID_FILE" 2> /dev/null`"
        # unable to acquire lock, new process got in first
        echo "\"$LOCK_NAME\" lock currently held by PID $LOCK_PID"
        return 1
    fi
    trap "/bin/rm -rf \"$LOCK_DIR\"; exit;" EXIT

    return 0 # got lock

}

function exclusive_lock_retry() # [lockname] [retries] [delay]
{

    local LOCK_NAME="$1"
    local MAX_TRIES="${2:-5}"
    local DELAY="${3:-2}"

    local TRIES=0
    local LOCK_RETVAL

    while [ "$TRIES" -lt "$MAX_TRIES" ]
    do

        if [ "$TRIES" -gt 0 ]
        then
            sleep "$DELAY"
        fi
        local TRIES=$(( $TRIES + 1 ))

        if [ "$TRIES" -lt "$MAX_TRIES" ]
        then
            exclusive_lock_try "$LOCK_NAME" > /dev/null
        else
            exclusive_lock_try "$LOCK_NAME"
        fi
        LOCK_RETVAL="${PIPESTATUS[0]}"

        if [ "$LOCK_RETVAL" -eq 0 ]
        then
            return 0
        fi

    done

    return "$LOCK_RETVAL"

}

function exclusive_lock_require() # [lockname] [retries] [delay]
{
    if ! exclusive_lock_retry "$@"
    then
        exit 1
    fi
}
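For example, a caller might look like this (assuming the functions above live in a file called locking.sh, which is just an illustrative name):

#!/bin/bash

. ./locking.sh   # pull in exclusive_lock_require and the helpers above

# Take the default lock (named after this script); exit if we cannot.
exclusive_lock_require
# Or share a named lock between scripts, retrying 10 times, 1 second apart:
#   exclusive_lock_require "my-shared-lock" 10 1

# ... exclusive work here; the EXIT trap installed by exclusive_lock_try
# removes the lock when the script finishes ...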

Take a look at FLOM (Free LOck Manager), http://sourceforge.net/projects/flom/: you can synchronize commands and/or scripts using abstract resources that do not need lock files in a filesystem. You can even synchronize commands running on different systems without a NAS (Network Attached Storage) such as an NFS (Network File System) server.

Using the simplest use case, serializing "command1" and "command2" may be as easy as executing:

flom -- command1

and

flom -- command2

from two different shell scripts.