There is an online file (such as http://www.example.com/information.asp) that I need to grab and save to a directory. I know there are several methods for grabbing and reading online files (URLs) line by line, but is there a way to just download and save the file using Java?


Current answer

Personally, I've found Apache's HttpClient to be more than capable of everything I've needed to do in this regard. There is a great tutorial on using HttpClient.
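
As a minimal sketch of what that can look like (assuming the fluent facade from HttpClient 4.x's fluent-hc module; the URL and target file name are placeholders):

import java.io.File;

import org.apache.http.client.fluent.Request;

public class FluentDownload {
    public static void main(String[] args) throws Exception {
        // Execute a GET and stream the response body straight to a file
        Request.Get("http://www.example.com/information.asp")
                .execute()
                .saveContent(new File("information.asp"));
    }
}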

Other answers

public void saveUrl(final String filename, final String urlString)
        throws MalformedURLException, IOException {
    BufferedInputStream in = null;
    FileOutputStream fout = null;
    try {
        in = new BufferedInputStream(new URL(urlString).openStream());
        fout = new FileOutputStream(filename);

        // Copy the stream to the file in 1 KiB chunks
        final byte[] data = new byte[1024];
        int count;
        while ((count = in.read(data, 0, 1024)) != -1) {
            fout.write(data, 0, count);
        }
    } finally {
        // Close both streams, even if closing the first one throws
        try {
            if (in != null) {
                in.close();
            }
        } finally {
            if (fout != null) {
                fout.close();
            }
        }
    }
}

You will need to handle exceptions, probably external to this method.
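
A call, using the question's URL as a stand-in, would look like this:

saveUrl("information.asp", "http://www.example.com/information.asp");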

You can use netloader for Java in one line:

new NetFile(new File("my/zips/1.zip"), "https://example.com/example.zip", -1).load(); // Returns true if it succeeded, false otherwise.

There is one problem with the simple use of:

org.apache.commons.io.FileUtils.copyURLToFile(URL, File)

if you need to download and save very large files, or, in general, if you need automatic retries in case the connection is dropped.

In such cases, I would suggest Apache HttpClient along with org.apache.commons.io.FileUtils. For example (this uses the legacy Commons HttpClient 3.x API):

// client, logger, resource_url, and resource_file are assumed to be defined elsewhere
HttpClient client = new HttpClient();
GetMethod method = new GetMethod(resource_url);
try {
    int statusCode = client.executeMethod(method);
    if (statusCode != HttpStatus.SC_OK) {
        logger.error("Get method failed: " + method.getStatusLine());
    }
    org.apache.commons.io.FileUtils.copyInputStreamToFile(
        method.getResponseBodyAsStream(), new File(resource_file));
} catch (HttpException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
} finally {
    method.releaseConnection();
}
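
The automatic retries mentioned above are not shown in that snippet; in the HttpClient 3.x API they can be configured on the method before executing it. A sketch, assuming the same method object (the retry count of 3 is arbitrary):

// Retry recoverable I/O failures up to 3 times; false = do not retry
// requests that were already fully sent (HttpClient 3.x API)
method.getParams().setParameter(HttpMethodParams.RETRY_HANDLER,
        new DefaultHttpMethodRetryHandler(3, false));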

Here is a concise, readable, JDK-only solution with properly closed resources:

static long download(String url, String fileName) throws IOException {
    try (InputStream in = URI.create(url).toURL().openStream()) {
        return Files.copy(in, Paths.get(fileName));
    }
}

Two lines of code and no dependencies.
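
One caveat: Files.copy(InputStream, Path) throws FileAlreadyExistsException when the target file already exists. If overwriting is acceptable, pass StandardCopyOption.REPLACE_EXISTING, as in this small variation on the method above:

import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Same JDK-only download, but replaces the target file if it exists
static long download(String url, String fileName) throws IOException {
    try (InputStream in = URI.create(url).toURL().openStream()) {
        return Files.copy(in, Paths.get(fileName), StandardCopyOption.REPLACE_EXISTING);
    }
}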

Below is a complete example program for downloading a file, with output, error checking, and command-line argument checking:

package so.downloader;

import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Application {
    public static void main(String[] args) throws IOException {
        if (2 != args.length) {
            System.out.println("USAGE: java -jar so-downloader.jar <source-URL> <target-filename>");
            System.exit(1);
        }

        String sourceUrl = args[0];
        String targetFilename = args[1];

        long bytesDownloaded = download(sourceUrl, targetFilename);

        System.out.println(String.format("Downloaded %d bytes from %s to %s.", bytesDownloaded, sourceUrl, targetFilename));
    }

    static long download(String url, String fileName) throws IOException {
        try (InputStream in = URI.create(url).toURL().openStream()) {
            return Files.copy(in, Paths.get(fileName));
        }
    }    
}

As noted in the so-downloader repository README:

To run the file download program:

java -jar so-downloader.jar <source-URL> <target-filename>

For example:

java -jar so-downloader.jar https://github.com/JanStureNielsen/so-downloader/archive/main.zip so-downloader-source.zip

You can also download the file with Apache's HttpComponents instead of Commons IO. This code allows you to download a file in Java according to its URL and save it at a specific destination.

public static boolean saveFile(URL fileURL, String fileSavePath) {

    boolean isSucceed = true;

    HttpGet httpGet = new HttpGet(fileURL.toString());
    httpGet.addHeader("User-Agent", "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0");
    httpGet.addHeader("Referer", "https://www.google.com");

    // Try-with-resources closes both the client and the response,
    // even if the download fails partway through
    try (CloseableHttpClient httpClient = HttpClients.createDefault();
         CloseableHttpResponse httpResponse = httpClient.execute(httpGet)) {

        HttpEntity fileEntity = httpResponse.getEntity();

        if (fileEntity != null) {
            FileUtils.copyInputStreamToFile(fileEntity.getContent(), new File(fileSavePath));
        }

    } catch (IOException e) {
        isSucceed = false;
    }

    return isSucceed;
}

Compared to the single line of code:

FileUtils.copyURLToFile(fileURL, new File(fileSavePath),
                        URLS_FETCH_TIMEOUT, URLS_FETCH_TIMEOUT);

This code gives you more control over the process and lets you specify not only the timeouts, but also the User-Agent and Referer values, which are critical for many websites.
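
For instance, per-request timeouts can be attached with RequestConfig (a sketch against the HttpClient 4.x API; the helper name and the five-second values are illustrative):

import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.HttpGet;

// Sketch: attach connect/socket timeouts to the HttpGet built in saveFile()
static void applyTimeouts(HttpGet httpGet) {
    RequestConfig config = RequestConfig.custom()
            .setConnectTimeout(5000)  // time allowed to establish the connection, in ms
            .setSocketTimeout(5000)   // max idle time between data packets, in ms
            .build();
    httpGet.setConfig(config);
}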