When writing scripts in bash, or any other shell on *NIX, it is useful to have a progress bar while running a command that takes more than a few seconds.

For example, copying a large file or extracting a big tar file.

What ways would you suggest to add a progress bar to a shell script?


Current answer

I made a pure-shell version for an embedded system, taking advantage of:

- /usr/bin/dd's SIGUSR1 signal-handling facility. Basically, if you send 'kill SIGUSR1 $(pid_of_running_dd_process)', it prints a summary of throughput speed and the amount transferred so far (a standalone sketch of this trick appears after the examples below).
- Backgrounding dd and then querying it regularly for updates, generating hash marks like an old-school FTP client.
- Using /dev/stdout as the destination for non-stdout-friendly programs such as scp.

The end result lets you perform any kind of file-transfer operation and get progress updates that look like the old-school FTP "hash" output, where you simply get a hash mark for every X bytes.

This is hardly production-quality code, but you get the idea. I think it's cute.

For what it's worth, the actual byte count may not be reflected accurately in the number of hashes: depending on rounding, you may get one more or one less. Don't use this as part of a test script; it's just eye candy. And yes, I know it is terribly inefficient; it's a shell script and I make no apologies for that.

Examples using wget, scp, and tftp are provided at the end. It should work with anything that emits data. Make sure to use /dev/stdout for programs that aren't stdout-friendly.

#!/bin/sh
#
# Copyright (C) Nathan Ramella (nar+progress-script@remix.net) 2010 
# LGPLv2 license
# If you use this, send me an email to say thanks and let me know what your product
# is so I can tell all my friends I'm a big man on the internet!

progress_filter() {

        local START=$(date +"%s")
        local SIZE=1
        local DURATION=1
        local BLKSZ=51200
        local TMPFILE=/tmp/tmpfile
        local PROGRESS=/tmp/tftp.progress
        local BYTES_LAST_CYCLE=0
        local BYTES_THIS_CYCLE=0

        rm -f ${PROGRESS}

        dd bs=$BLKSZ of=${TMPFILE} 2>&1 \
                | grep --line-buffered -E '[[:digit:]]* bytes' \
                | awk '{ print $1 }' >> ${PROGRESS} &

        # Loop while the 'dd' exists. It would be 'more better' if we
        # actually looked for the specific child ID of the running 
        # process by identifying which child process it was. If someone
        # else is running dd, it will mess things up.

        # My PID handling is dumb, it assumes you only have one running dd on
        # the system, this should be fixed to just get the PID of the child
        # process from the shell.

        while [ -n "$(pidof dd)" ]; do

                # PROTIP: You can sleep partial seconds (at least on linux)
                sleep .5    

                # Force dd to update us on its progress (which gets
                # redirected to the $PROGRESS file).
                # 
                # dumb pid handling again
                pkill -USR1 dd

                local BYTES_THIS_CYCLE=$(tail -1 $PROGRESS)
                local XFER_BLKS=$(((BYTES_THIS_CYCLE-BYTES_LAST_CYCLE)/BLKSZ))

                # Don't print anything unless we've got 1 block or more.
                # This allows for stdin/stderr interactions to occur
                # without printing a hash erroneously.

                # Also makes it possible for you to background 'scp',
                # but still use the /dev/stdout trick _even_ if scp
                # (inevitably) asks for a password. 
                #
                # Fancy!

                if [ $XFER_BLKS -gt 0 ]; then
                        printf "#%0.s" $(seq 0 $XFER_BLKS)
                        BYTES_LAST_CYCLE=$BYTES_THIS_CYCLE
                fi
        done

        local SIZE=$(stat -c"%s" $TMPFILE)
        local NOW=$(date +"%s")

        local DURATION=$(($NOW-$START))

        # Avoid dividing by zero on very fast transfers.
        if [ $DURATION -eq 0 ]; then
                DURATION=1
        fi
        local BYTES_PER_SECOND=$(( SIZE / DURATION ))
        local KBPS=$((SIZE/DURATION/1024))
        local MD5=$(md5sum $TMPFILE | awk '{ print $1 }')

        # This function prints out ugly stuff suitable for eval() 
        # rather than a pretty string. This makes it a bit more 
        # flexible if you have a custom format (or dare I say, locale?)

        printf "\nDURATION=%d\nBYTES=%d\nKBPS=%f\nMD5=%s\n" \
            $DURATION \
            $SIZE \
            $KBPS \
            $MD5
}

Examples:

echo "wget"
wget -q -O /dev/stdout http://www.blah.com/somefile.zip | progress_filter

echo "tftp"
tftp -l /dev/stdout -g -r something/firmware.bin 192.168.1.1 | progress_filter

echo "scp"
scp user@192.168.1.1:~/myfile.tar /dev/stdout | progress_filter
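
If you just want to see the raw SIGUSR1 behaviour that progress_filter builds on, here is a minimal standalone sketch, assuming GNU dd on Linux (BSD/macOS dd uses SIGINFO instead; the output file and size below are only placeholders):

# Standalone sketch of dd's SIGUSR1 status report (GNU dd on Linux).
# /tmp/dd-demo.img and the 512 MB size are placeholders.
dd if=/dev/zero of=/tmp/dd-demo.img bs=1M count=512 &
DD_PID=$!

sleep 1
kill -USR1 "$DD_PID"    # dd prints bytes copied and throughput to stderr

wait "$DD_PID"
rm -f /tmp/dd-demo.img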

Other answers

First of all, bar isn't the only pipe progress meter. The other (probably even better known) one is pv (pipe viewer).

Secondly, bar and pv can be used, for example, like this:

$ bar file1 | wc -l 
$ pv file1 | wc -l

or even:

$ tail -n 100 file1 | bar | wc -l
$ tail -n 100 file1 | pv | wc -l

If you want to use bar or pv with a command that takes its files from arguments, such as copy file1 file2, a useful trick is process substitution:

$ copy <(bar file1) file2
$ copy <(pv file1) file2

Process substitution is a magical bash thing that creates a temporary fifo pipe file under /dev/fd/ and connects the stdout of the process running inside the parentheses through that pipe, so copy sees it just like an ordinary file (with one exception: it can only be read forwards).
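
You can see the temporary file that process substitution creates by echoing the substitution itself; the exact descriptor number varies between shells and runs:

$ echo <(true)
/dev/fd/63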

Update:

The bar command itself also allows copying. After man bar:

bar --in-file /dev/rmt/1cbn --out-file \
     tape-restore.tar --size 2.4g --buffer-size 64k

But process substitution is, in my opinion, the more universal way to do it, since it uses the cp program itself.

I built upon the ProgressBar answer provided by Fearside.

It connects to an Oracle database to retrieve the progress of an RMAN restore.

#!/bin/bash

 # 1. Create ProgressBar function
 # 1.1 Input is currentState($1) and totalState($2)
 function ProgressBar {
 # Process data
let _progress=(${1}*100/${2}*100)/100
let _done=(${_progress}*4)/10
let _left=40-$_done
# Build progressbar string lengths
_fill=$(printf "%${_done}s")
_empty=$(printf "%${_left}s")

# 1.2 Build progressbar strings and print the ProgressBar line
# 1.2.1 Output example:
# 1.2.1.1 Progress : [########################################] 100%
printf "\rProgress : [${_fill// /#}${_empty// /-}] ${_progress}%%"

}

function rman_check {
sqlplus -s / as sysdba <<EOF
set heading off
set feedback off
select
round((sofar/totalwork) * 100,0) pct_done
from
v\$session_longops
where
totalwork > sofar
AND
opname NOT LIKE '%aggregate%'
AND
opname like 'RMAN%';
exit
EOF
}

# Variables
_start=1

# This accounts as the "totalState" variable for the ProgressBar function
_end=100

_rman_progress=$(rman_check)
#echo ${_rman_progress}

# Proof of concept
#for number in $(seq ${_start} ${_end})

while [ ${_rman_progress} -lt 100 ]
do

for number in ${_rman_progress}
do
sleep 10
ProgressBar ${number} ${_end}
done

_rman_progress=$(rman_check)

done
printf '\nFinished!\n'
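
Outside of the RMAN use case, the same ProgressBar function can be driven by any counter. A minimal sketch based on the commented-out proof of concept above (the sleep is only there to slow the demo down):

_start=1
_end=100
for number in $(seq ${_start} ${_end}); do
    sleep 0.05                       # simulate some work
    ProgressBar ${number} ${_end}    # currentState, totalState
done
printf '\nFinished!\n'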

For me, the easiest to use and best looking so far are the commands pv and bar, as someone already wrote.

For example: you need to make a backup of an entire drive with dd.

Normally you would use dd if="$input_drive_path" of="$output_file_path"

With pv you can do it like this:

    dd if="$input_drive_path" | pv | dd of="$output_file_path"

The progress goes directly to STDOUT, like this:

    7.46GB 0:33:40 [3.78MB/s] [  <=>                                            ]

And after it's done, the summary shows up:

    15654912+0 records in
    15654912+0 records out
    8015314944 bytes (8.0 GB) copied, 2020.49 s, 4.0 MB/s
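
If you tell pv the expected total with -s, it also shows a percentage and an ETA rather than just throughput. A sketch assuming the source drive is roughly 8 GB (adjust the size to your device):

$ dd if="$input_drive_path" | pv -s 8G | dd of="$output_file_path"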

My solution displays the percentage of the tarball that is currently being uncompressed and written. I use this when writing out 2GB root filesystem images. You really need a progress bar for these things. What I do is use gzip --list to get the total uncompressed size of the tarball. From that I calculate the blocking-factor needed to divide the file into 100 parts. Finally, I print a checkpoint message for each block. For a 2GB file this gives about 10MB a block. If that is too big then you can divide the BLOCKING_FACTOR by 10 or 100, but then it's harder to print pretty output in terms of a percentage.

Assuming you are using Bash, you can use the following shell function:

untar_progress () 
{ 
  TARBALL=$1
  BLOCKING_FACTOR=$(gzip --list ${TARBALL} |
    perl -MPOSIX -ane '$.==2 && print ceil $F[1]/50688')
  tar --blocking-factor=${BLOCKING_FACTOR} --checkpoint=1 \
    --checkpoint-action='ttyout=Wrote %u%  \r' -zxf ${TARBALL}
}
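
Usage is then just a call with the tarball path (the filename below is a placeholder; the percentage rewrites itself in place because of the trailing \r in the checkpoint action):

$ untar_progress rootfs.tar.gz
Wrote 37%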

GNU tar has a useful option that provides simple progress-bar functionality.

(…) Another available checkpoint action is 'dot' (or '.'). It instructs tar to print a single dot on the standard listing stream, e.g.:

$ tar -c --checkpoint=1000 --checkpoint-action=dot /var
...

The same effect can be obtained with:

$ tar -c --checkpoint=.1000 /var
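
The same checkpoint options work when extracting, which is closer to the untar case from the question; for example (the archive name is a placeholder):

$ tar -xzf big-archive.tar.gz --checkpoint=.1000
....................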