How do I wait in a bash script for several subprocesses spawned from that script to finish, and then return exit code !=0 when any of the subprocesses ends with code !=0?

A simple script:

#!/bin/bash
for i in `seq 0 9`; do
  doCalculations $i &
done
wait

The script above waits for all 10 spawned subprocesses, but it always returns exit status 0 (see help wait). How can I modify the script so that it discovers the exit statuses of the spawned subprocesses and returns exit code 1 when any of them ends with code !=0?

Is there any better solution than collecting the PIDs of the subprocesses, waiting for them in order, and summing the exit statuses?
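
For reference, a minimal sketch of that baseline approach (using the doCalculations command from the script above) might look like this:

#!/bin/bash

# collect the PIDs, wait for each one in turn, and sum the exit statuses
pids=()
sum=0
for i in `seq 0 9`; do
  doCalculations $i &
  pids+=($!)
done
for pid in "${pids[@]}"; do
  wait $pid
  sum=$((sum + $?))
done
if [ $sum -gt 0 ]; then
  exit 1
fi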


Current answer

Simply put:

#!/bin/bash

pids=""

for i in `seq 0 9`; do
   doCalculations $i &
   pids="$pids $!"
done

wait $pids

...code continued here ...

Update:

As many commenters have pointed out, the above waits for all processes to complete before continuing, but it does not exit and fail if one of them fails. That can be achieved with the following modification suggested by @Bryan, @SamBrightman, and others:

#!/bin/bash

pids=""
RESULT=0


for i in `seq 0 9`; do
   doCalculations $i &
   pids="$pids $!"
done

for pid in $pids; do
    wait $pid || let "RESULT=1"
done

if [ "$RESULT" == "1" ];
    then
       exit 1
fi

...code continued here ...

Other answers

I've just been modifying a script to background and parallelise a process.

I did some experimentation (on Solaris with both bash and ksh) and discovered that 'wait' outputs the exit status if it is not zero, or a list of jobs that return non-zero exit when no PID argument is provided. E.g.

Bash:

$ sleep 20 && exit 2 &
$ sleep 10 && exit 1 &
$ wait
[1]-  Exit 2                  sleep 20 && exit 2
[2]+  Exit 1                  sleep 10 && exit 1

Ksh:

$ sleep 20 && exit 2 &
$ sleep 10 && exit 1 &
$ wait
[1]+  Done(2)                  sleep 20 && exit 2
[2]+  Done(1)                  sleep 10 && exit 1

This output is written to stderr, so a simple solution to the OP's example could be:

#!/bin/bash

trap "rm -f /tmp/x.$$" EXIT

for i in `seq 0 9`; do
  doCalculations $i &
done

wait 2> /tmp/x.$$
if [ `wc -l < /tmp/x.$$` -gt 0 ] ; then
  exit 1
fi

Although this:

wait 2> >(wc -l)

will also return a count, but without the tmp file. This might also be used like this, for example:

wait 2> >(if [ `wc -l` -gt 0 ] ; then echo "ERROR"; fi)

But this isn't much more useful than the tmp file. I couldn't find a useful way to avoid the tmp file while also avoiding running 'wait' in a subshell, which won't work at all.
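
To illustrate why the subshell doesn't work (a sketch, not from the original answer): a subshell has its own process context, so a wait run inside it has no children of its own to wait for and returns 0 immediately:

#!/bin/bash

sleep 2 && exit 3 &    # background child of the main shell
( wait )               # subshell: no children here, returns immediately with 0
wait                   # the main shell's wait actually blocks on the sleep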

Use 'wait -n' to wait for multiple subprocesses, and exit when any of them exits with a non-zero status code.

#!/bin/bash
wait_for_pids()
{
    for (( i = 1; i <= $#; i++ )); do
        wait -n "$@"
        status=$?
        echo "received status: $status"
        if [ $status -ne 0 ] && [ $status -ne 127 ]; then
            exit 1
        fi
    done
}

sleep_for_10()
{
    sleep 10
    exit 10
}

sleep_for_20()
{
    sleep 20
}

sleep_for_10 &
pid1=$!

sleep_for_20 &
pid2=$!

wait_for_pids $pid2 $pid1

Status code '127' means a non-existent process, i.e. the child may have already exited.
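
If the individual PIDs aren't needed, a shorter fail-fast variant of the question's loop is possible (a sketch; 'wait -n' without arguments requires bash 4.3 or later and waits for the next job to finish):

#!/bin/bash

for i in `seq 0 9`; do
  doCalculations $i &
done

for i in `seq 0 9`; do
  wait -n || exit 1    # exit as soon as any child fails
done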

To parallelize this...

for i in $(whatever_list) ; do
   do_something $i
done

translate it to this...

for i in $(whatever_list) ; do echo $i ; done | ## execute in parallel...
   (
   export -f do_something ## export functions (if needed)
   export PATH ## export any variables that are required
   xargs -I{} --max-procs 0 bash -c ' ## process in batches...
      {
      echo "processing {}" ## optional
      do_something {}
      }' 
   )

If an error occurs in one process, it won't interrupt the other processes, but it will result in a non-zero exit code from the sequence as a whole.

Some notes:

- Exporting functions and variables may or may not be necessary, in any particular case.
- You can set --max-procs based on how much parallelism you want (0 means "all at once").
- GNU Parallel offers some additional features when used in place of xargs, but it isn't always installed by default.
- The for loop isn't strictly necessary in this example, since echo $i is basically just regenerating the output of $(whatever_list). I just think the use of the for keyword makes it a little easier to see what is going on.
- Bash string handling can be confusing; I have found that using single quotes works best for wrapping non-trivial scripts.
- You can easily interrupt the entire operation (using ^C or similar), unlike the more direct approaches to Bash parallelism.

Here's a simplified working example...

for i in {0..5} ; do echo $i ; done |xargs -I{} --max-procs 2 bash -c '
   {
   echo sleep {}
   sleep 2s
   }'
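
As a usage note: GNU xargs exits with status 123 if any invocation of the command exits with status 1-125, so the overall failure can be detected afterwards. A minimal sketch:

for i in {0..5} ; do echo $i ; done | xargs -I{} --max-procs 2 bash -c 'exit {}'
if [ $? -ne 0 ] ; then
  echo "at least one job failed"
fi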

I had a similar situation, but various problems with looping subshells meant the other solutions here couldn't work, so I had my loop write out a script, which I then run and wait on to finish. It works:

#!/bin/bash
echo > tmpscript.sh
for i in `seq 0 9`; do
    echo "doCalculations $i &" >> tmpscript.sh
done
echo "wait" >> tmpscript.sh
chmod u+x tmpscript.sh
./tmpscript.sh

Dumb, but simple, and it helped with debugging things after the fact.

If I have time, I'll look more deeply into GNU Parallel, but it was tricky for my own 'doCalculations' process.

wait also (optionally) takes the PID of the process to wait for, and with $! you get the PID of the last command launched in the background. Modify the loop to store the PID of each spawned subprocess into an array, and then loop over them again, waiting for each PID.

# run processes and store pids in array
for i in $n_procs; do
    ./procs[${i}] &
    pids[${i}]=$!
done

# wait for all pids
for pid in ${pids[*]}; do
    wait $pid
done
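
Since wait returns the exit status of the PID it waits for, the second loop can also record failures, tying this back to the question; a minimal sketch:

# wait for all pids, remembering whether any of them failed
fail=0
for pid in ${pids[*]}; do
    wait $pid || fail=1
done
exit $fail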