There is an online file (e.g. http://www.example.com/information.asp) that I need to grab and save to a directory. I know of several ways to fetch and read an online file (URL) line by line, but is there a way to download and save the file using Java?


Current answer

A simpler option using the java.nio.file API (note that Files.copy is still a blocking call, despite the NIO package name):

URL website = new URL("http://www.website.com/information.asp");
Path target = Paths.get("information.asp"); // local destination file
try (InputStream in = website.openStream()) {
    // Copy the response body straight to the target path, overwriting any existing file
    Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
}
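
For reference, here is a self-contained sketch of the same approach with the imports it needs (the URL and local file name are placeholders):

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class NioDownload {
    public static void main(String[] args) throws Exception {
        URL website = new URL("http://www.website.com/information.asp");
        Path target = Paths.get("information.asp");
        try (InputStream in = website.openStream()) {
            // Copies every byte from the stream to the file, replacing it if it exists
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
        System.out.println("Saved " + Files.size(target) + " bytes to " + target);
    }
}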

Other answers

import java.io.*;
import java.net.*;

public class FileDown {
    // Streams the contents of the given URL to the named local file and prints the byte count
    public static void download(String address, String localFileName) {
        OutputStream out = null;
        URLConnection conn = null;
        InputStream in = null;

        try {
            URL url = new URL(address);
            out = new BufferedOutputStream(new FileOutputStream(localFileName));
            conn = url.openConnection();
            in = conn.getInputStream();
            byte[] buffer = new byte[1024];

            int numRead;
            long numWritten = 0;

            while ((numRead = in.read(buffer)) != -1) {
                out.write(buffer, 0, numRead);
                numWritten += numRead;
            }

            System.out.println(localFileName + "\t" + numWritten);
        } 
        catch (Exception exception) { 
            exception.printStackTrace();
        } 
        finally {
            try {
                if (in != null) {
                    in.close();
                }
                if (out != null) {
                    out.close();
                }
            } 
            catch (IOException ioe) {
                // Ignore failures while closing the streams
            }
        }
    }

    // Derives the local file name from the last path segment of the URL
    public static void download(String address) {
        int lastSlashIndex = address.lastIndexOf('/');
        if (lastSlashIndex >= 0 &&
                lastSlashIndex < address.length() - 1) {
            download(address, address.substring(lastSlashIndex + 1));
        } 
        else {
            System.err.println("Could not figure out local file name for " + address);
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < args.length; i++) {
            download(args[i]);
        }
    }
}

Use Apache Commons IO. It is just one line of code:

FileUtils.copyURLToFile(URL, File)
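
For example, a minimal sketch (the URL, target file name, and timeout values are placeholders; the four-argument overload with timeouts requires commons-io 2.0 or later):

import java.io.File;
import java.net.URL;
import org.apache.commons.io.FileUtils;

public class CommonsIoDownload {
    public static void main(String[] args) throws Exception {
        // Copies the bytes behind the URL to the file, creating parent directories if needed.
        // The last two arguments are the connect and read timeouts in milliseconds.
        FileUtils.copyURLToFile(
                new URL("http://www.example.com/information.asp"),
                new File("information.asp"),
                10000,
                10000);
    }
}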

The underscore-java library has a method U.fetch(url).

File pom.xml:

<dependency>
  <groupId>com.github.javadev</groupId>
  <artifactId>underscore</artifactId>
  <version>1.84</version>
</dependency>

Code example:

import com.github.underscore.U;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Download {
    public static void main(String[] args) throws IOException {
        Files.write(Paths.get("data.bin"),
            U.fetch("https://stackoverflow.com/questions"
                + "/921262/how-to-download-and-save-a-file-from-internet-using-java").blob());
    }
}

There is a problem with simply using:

org.apache.commons.io.FileUtils.copyURLToFile(URL, File)

if you need to download and save very large files, or, more generally, if you need automatic retries in case the connection drops.

In such cases I suggest Apache HttpClient together with org.apache.commons.io.FileUtils. For example:

GetMethod method = new GetMethod(resource_url);
try {
    int statusCode = client.executeMethod(method);
    if (statusCode != HttpStatus.SC_OK) {
        logger.error("Get method failed: " + method.getStatusLine());
    }
    org.apache.commons.io.FileUtils.copyInputStreamToFile(
        method.getResponseBodyAsStream(), new File(resource_file));
} catch (HttpException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
} finally {
    method.releaseConnection();
}
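
The snippet above uses the old HttpClient 3.x API (GetMethod/executeMethod). As a rough sketch of the same idea with HttpClient 4.x, where automatic retries can be configured on the client builder (the URL, file name, and retry count are illustrative):

import java.io.File;
import org.apache.commons.io.FileUtils;
import org.apache.http.HttpStatus;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.DefaultHttpRequestRetryHandler;
import org.apache.http.impl.client.HttpClients;

public class HttpClientDownload {
    public static void main(String[] args) throws Exception {
        // Retry each request up to 3 times on transient I/O failures
        try (CloseableHttpClient client = HttpClients.custom()
                .setRetryHandler(new DefaultHttpRequestRetryHandler(3, true))
                .build();
             CloseableHttpResponse response = client.execute(
                 new HttpGet("http://www.example.com/information.asp"))) {
            if (response.getStatusLine().getStatusCode() == HttpStatus.SC_OK) {
                // Stream the response body straight to disk instead of buffering it in memory
                FileUtils.copyInputStreamToFile(
                    response.getEntity().getContent(), new File("information.asp"));
            }
        }
    }
}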

You can use netloader for Java in one line:

new NetFile(new File("my/zips/1.zip"), "https://example.com/example.zip", -1).load(); // Returns true if it succeeds, otherwise false.