I have an online file (such as http://www.example.com/information.asp) that I need to grab and save to a directory. I know there are several ways to grab and read an online file (URL) line by line, but is there a way to just download and save the file using Java?


Current answer

Here is a concise, readable, JDK-only solution with properly closed resources:

static long download(String url, String fileName) throws IOException {
    try (InputStream in = URI.create(url).toURL().openStream()) {
        return Files.copy(in, Paths.get(fileName));
    }
}

Two lines of code and no dependencies.

Here is a complete file-downloader example program, with output, error checking, and command-line argument checking:

package so.downloader;

import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Application {
    public static void main(String[] args) throws IOException {
        if (2 != args.length) {
            System.out.println("USAGE: java -jar so-downloader.jar <source-URL> <target-filename>");
            System.exit(1);
        }

        String sourceUrl = args[0];
        String targetFilename = args[1];

        long bytesDownloaded = download(sourceUrl, targetFilename);

        System.out.println(String.format("Downloaded %d bytes from %s to %s.", bytesDownloaded, sourceUrl, targetFilename));
    }

    static long download(String url, String fileName) throws IOException {
        try (InputStream in = URI.create(url).toURL().openStream()) {
            return Files.copy(in, Paths.get(fileName));
        }
    }    
}

As noted in the so-downloader repository README:

To run the file download program:

java -jar so-downloader.jar <source-URL> <target-filename>

For example:

java -jar so-downloader.jar https://github.com/JanStureNielsen/so-downloader/archive/main.zip so-downloader-source.zip

Other answers

Summarizing (and somewhat polishing and updating) the previous answers: the three following methods are practically equivalent. (I added explicit timeouts because I think they are a must. Nobody wants a download to freeze forever when the connection is lost.)

public static void saveUrl1(final Path file, final URL url,
    int secsConnectTimeout, int secsReadTimeout)
    throws MalformedURLException, IOException {

    // Files.createDirectories(file.getParent()); // Optional, make sure parent directory exists
    try (BufferedInputStream in = new BufferedInputStream(
         streamFromUrl(url, secsConnectTimeout,secsReadTimeout));
         OutputStream fout = Files.newOutputStream(file)) {

            final byte data[] = new byte[8192];
            int count;
            while((count = in.read(data)) > 0)
                fout.write(data, 0, count);
        }
}

public static void saveUrl2(final Path file, final URL url,
    int secsConnectTimeout, int secsReadTimeout)
    throws MalformedURLException, IOException {

    // Files.createDirectories(file.getParent()); // Optional, make sure parent directory exists
    try (ReadableByteChannel rbc = Channels.newChannel(
             streamFromUrl(url, secsConnectTimeout, secsReadTimeout)
        );
        FileChannel channel = FileChannel.open(file,
             StandardOpenOption.CREATE,
             StandardOpenOption.TRUNCATE_EXISTING,
             StandardOpenOption.WRITE)
        ) {

        channel.transferFrom(rbc, 0, Long.MAX_VALUE);
    }
}

public static void saveUrl3(final Path file, final URL url,
    int secsConnectTimeout, int secsReadTimeout)
    throws MalformedURLException, IOException {

    // Files.createDirectories(file.getParent()); // Optional, make sure parent directory exists
    try (InputStream in = streamFromUrl(url, secsConnectTimeout,secsReadTimeout) ) {
        Files.copy(in, file, StandardCopyOption.REPLACE_EXISTING);
    }
}

public static InputStream streamFromUrl(URL url,int secsConnectTimeout,int secsReadTimeout) throws IOException {
    URLConnection conn = url.openConnection();
    if(secsConnectTimeout>0)
        conn.setConnectTimeout(secsConnectTimeout*1000);
    if(secsReadTimeout>0)
        conn.setReadTimeout(secsReadTimeout*1000);
    return conn.getInputStream();
}

I find no significant differences, and all of them look right to me. They are safe and efficient. (The speed differences seem hardly relevant: writing 180 MB from a local server to an SSD disk took times fluctuating around 1.2 to 1.5 seconds for all of them.) They require no external libraries. All of them work with files of arbitrary size and (in my experience) with HTTP redirections.

Additionally, all of them throw FileNotFoundException if the resource is not found (typically a 404 error) and java.net.UnknownHostException if the DNS resolution fails; other IOExceptions correspond to errors during the transfer.
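For illustration, here is a minimal usage sketch (assuming the saveUrl3() and streamFromUrl() methods above live in the same class; the URL, file name, and timeout values are placeholders) that distinguishes the exception cases described above:

// Minimal usage sketch -- assumes saveUrl3(...) and streamFromUrl(...) from above
// are defined in the same class. Extra imports needed:
// java.io.FileNotFoundException and java.net.UnknownHostException.
public static void main(String[] args) {
    try {
        saveUrl3(Paths.get("information.html"),                     // placeholder target file
                 new URL("http://www.example.com/information.asp"), // placeholder source URL
                 10,   // connect timeout, seconds
                 30);  // read timeout, seconds
    } catch (FileNotFoundException e) {
        System.err.println("Resource not found (typically HTTP 404): " + e);
    } catch (UnknownHostException e) {
        System.err.println("DNS resolution failed: " + e);
    } catch (IOException e) {
        System.err.println("Error during transfer: " + e);
    }
}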

There is a problem with the simple use of:

org.apache.commons.io.FileUtils.copyURLToFile(URL, File)

if you need to download and save very large files, or more generally if you need automatic retries in case the connection is dropped. (A minimal sketch of this simple usage follows below.)
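For reference, that simple usage is just a one-liner; here is a minimal sketch (the URL and file name are placeholders; commons-io also provides an overload that takes connect and read timeouts, in milliseconds):

import java.io.File;
import java.io.IOException;
import java.net.URL;

import org.apache.commons.io.FileUtils;

public class CopyUrlToFileExample {
    public static void main(String[] args) throws IOException {
        URL source = new URL("http://www.example.com/information.asp"); // placeholder URL
        File destination = new File("information.html");                // placeholder file

        // Copies the URL contents to the file, buffering internally and
        // closing the connection's stream when done. This 4-argument overload
        // adds a connect timeout and a read timeout, both in milliseconds.
        FileUtils.copyURLToFile(source, destination, 10_000, 30_000);
    }
}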

In such cases, I suggest using Apache HttpClient together with org.apache.commons.io.FileUtils. For example:

GetMethod method = new GetMethod(resource_url);
try {
    int statusCode = client.executeMethod(method);
    if (statusCode != HttpStatus.SC_OK) {
        logger.error("Get method failed: " + method.getStatusLine());
    }
    org.apache.commons.io.FileUtils.copyInputStreamToFile(
        method.getResponseBodyAsStream(), new File(resource_file));
} catch (HttpException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
} finally {
    method.releaseConnection();
}

Try Java NIO:

URL website = new URL("http://www.website.com/information.asp");
ReadableByteChannel rbc = Channels.newChannel(website.openStream());
FileOutputStream fos = new FileOutputStream("information.html");
fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);

Using transferFrom() is potentially much more efficient than a simple loop that reads from the source channel and writes to this channel. Many operating systems can transfer bytes directly from the source channel into the filesystem cache without actually copying them.

See the FileChannel.transferFrom() documentation for more information.

Note: The third parameter of transferFrom() is the maximum number of bytes to transfer. Integer.MAX_VALUE will transfer at most 2^31 bytes; Long.MAX_VALUE will allow at most 2^63 bytes (larger than any file in existence).

Here is another Java 7 variant based on Brian Risk's answer, using a try-with-resources statement:

public static void downloadFileFromURL(String urlString, File destination) throws Throwable {

    URL website = new URL(urlString);
    try(
        ReadableByteChannel rbc = Channels.newChannel(website.openStream());
        FileOutputStream fos = new FileOutputStream(destination);
       ) {

        fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
    }
}

This answer is almost exactly the same as the selected answer, but with two enhancements: it is wrapped in a method and it closes the FileOutputStream object:

    public static void downloadFileFromURL(String urlString, File destination) {
        try {
            URL website = new URL(urlString);
            ReadableByteChannel rbc;
            rbc = Channels.newChannel(website.openStream());
            FileOutputStream fos = new FileOutputStream(destination);
            fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
            fos.close();
            rbc.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }