There is an online file (such as http://www.example.com/information.asp) that I need to grab and save to a directory. I know of several ways to grab and read an online file (URL) line by line, but is there a way to download and save the file using Java?


Current answer

This reads a file from the Internet and writes it to a local file.

import java.net.URL;
import java.io.FileOutputStream;
import java.io.File;
import java.io.InputStream;

public class Download {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_272x92dp.png");  // Input URL
        // try-with-resources closes both streams; readAllBytes() requires Java 9+
        try (InputStream in = url.openStream();
             FileOutputStream out = new FileOutputStream(new File("out.png"))) {  // Output file
            out.write(in.readAllBytes());
        }
    }
}

Other answers

To summarize (and somewhat polish and update) the previous answers: the three methods below are practically equivalent. (I added explicit timeouts because I consider them a must; nobody wants a download to freeze forever when the connection is lost.)

public static void saveUrl1(final Path file, final URL url,
    int secsConnectTimeout, int secsReadTimeout)
    throws MalformedURLException, IOException {

    // Files.createDirectories(file.getParent()); // Optional, make sure parent directory exists
    try (BufferedInputStream in = new BufferedInputStream(
         streamFromUrl(url, secsConnectTimeout,secsReadTimeout));
         OutputStream fout = Files.newOutputStream(file)) {

            final byte data[] = new byte[8192];
            int count;
            while((count = in.read(data)) > 0)
                fout.write(data, 0, count);
        }
}

public static void saveUrl2(final Path file, final URL url,
    int secsConnectTimeout, int secsReadTimeout)
    throws MalformedURLException, IOException {

    // Files.createDirectories(file.getParent()); // Optional, make sure parent directory exists
    try (ReadableByteChannel rbc = Channels.newChannel(
             streamFromUrl(url, secsConnectTimeout, secsReadTimeout)
        );
        FileChannel channel = FileChannel.open(file,
             StandardOpenOption.CREATE,
             StandardOpenOption.TRUNCATE_EXISTING,
             StandardOpenOption.WRITE)
        ) {

        channel.transferFrom(rbc, 0, Long.MAX_VALUE);
    }
}

public static void saveUrl3(final Path file, final URL url,
    int secsConnectTimeout, int secsReadTimeout)
    throws MalformedURLException, IOException {

    // Files.createDirectories(file.getParent()); // Optional, make sure parent directory exists
    try (InputStream in = streamFromUrl(url, secsConnectTimeout,secsReadTimeout) ) {
        Files.copy(in, file, StandardCopyOption.REPLACE_EXISTING);
    }
}

public static InputStream streamFromUrl(URL url, int secsConnectTimeout, int secsReadTimeout) throws IOException {
    URLConnection conn = url.openConnection();
    if (secsConnectTimeout > 0)
        conn.setConnectTimeout(secsConnectTimeout * 1000);
    if (secsReadTimeout > 0)
        conn.setReadTimeout(secsReadTimeout * 1000);
    return conn.getInputStream();
}

I found no significant differences, and all of them look correct to me. They are safe and efficient. (Speed differences seem irrelevant: writing 180 MB from a local server to an SSD disk fluctuated between roughly 1.2 and 1.5 seconds.) They need no external libraries. All of them work with files of arbitrary size and (in my experience) follow HTTP redirects.

Additionally, all of them throw FileNotFoundException if the resource is not found (typically an HTTP 404) and java.net.UnknownHostException if DNS resolution fails; any other IOException corresponds to an error during the transfer.
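
For illustration, here is a minimal sketch of distinguishing those cases around the Files.copy variant (the URL and output file name are placeholders):

import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.UnknownHostException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class DownloadErrorDemo {
    public static void main(String[] args) {
        // Placeholder URL and target file name
        try (InputStream in = new URL("http://www.example.com/information.asp").openStream()) {
            Files.copy(in, Paths.get("information.asp"), StandardCopyOption.REPLACE_EXISTING);
        } catch (FileNotFoundException e) {
            System.err.println("Resource not found (typically HTTP 404): " + e);
        } catch (UnknownHostException e) {
            System.err.println("DNS resolution failed: " + e);
        } catch (IOException e) {
            System.err.println("I/O error during transfer: " + e);
        }
    }
}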

Downloading a file requires you to read it; either way, you have to go through the file somehow. Instead of reading it line by line, you can read it from the stream byte by byte:

BufferedInputStream in = new BufferedInputStream(new URL("http://www.website.com/information.asp").openStream());
FileOutputStream out = new FileOutputStream("information.asp");  // destination file
byte[] data = new byte[1024];
int count;
while ((count = in.read(data, 0, 1024)) != -1) {
    out.write(data, 0, count);
}
out.close();
in.close();

Here is a concise, readable, JDK-only solution with properly closed resources:

static long download(String url, String fileName) throws IOException {
    try (InputStream in = URI.create(url).toURL().openStream()) {
        return Files.copy(in, Paths.get(fileName));
    }
}

Two lines of code and no dependencies.

Below is a complete example file-download program with output, error checking, and command-line argument checking:

package so.downloader;

import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Application {
    public static void main(String[] args) throws IOException {
        if (2 != args.length) {
            System.out.println("USAGE: java -jar so-downloader.jar <source-URL> <target-filename>");
            System.exit(1);
        }

        String sourceUrl = args[0];
        String targetFilename = args[1];

        long bytesDownloaded = download(sourceUrl, targetFilename);

        System.out.println(String.format("Downloaded %d bytes from %s to %s.", bytesDownloaded, sourceUrl, targetFilename));
    }

    static long download(String url, String fileName) throws IOException {
        try (InputStream in = URI.create(url).toURL().openStream()) {
            return Files.copy(in, Paths.get(fileName));
        }
    }    
}

As noted in the so-downloader repository README:

To run the file download program:

java -jar so-downloader.jar <source-URL> <target-filename>

For example:

java -jar so-downloader.jar https://github.com/JanStureNielsen/so-downloader/archive/main.zip so-downloader-source.zip

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class DownloadManager {

    static String urls = "[WEBSITE NAME]";

    public static void main(String[] args) throws IOException {
        URL url = verify(urls);
        if (url == null) {  // verify() returns null for URLs that do not start with http://
            System.out.println("[SYSTEM/ERROR]: Invalid URL: " + urls);
            return;
        }
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        InputStream in = null;
        String filename = url.getFile();
        filename = filename.substring(filename.lastIndexOf('/') + 1);
        FileOutputStream out = new FileOutputStream("C:\\Java2_programiranje/Network/DownloadTest1/Project/Output" + File.separator + filename);
        in = connection.getInputStream();
        int read = -1;
        byte[] buffer = new byte[4096];
        while((read = in.read(buffer)) != -1){
            out.write(buffer, 0, read);
            System.out.println("[SYSTEM/INFO]: Downloading file...");
        }
        in.close();
        out.close();
        System.out.println("[SYSTEM/INFO]: File Downloaded!");
    }
    private static URL verify(String url){
        if(!url.toLowerCase().startsWith("http://")) {
            return null;
        }
        URL verifyUrl = null;

        try{
            verifyUrl = new URL(url);
        }catch(Exception e){
            e.printStackTrace();
        }
        return verifyUrl;
    }
}

There is one problem with simply using:

org.apache.commons.io.FileUtils.copyURLToFile(URL, File)

if you need to download and save very large files, or, more generally, if you need automatic retries in case the connection drops.

In that case, I would suggest Apache HttpClient together with org.apache.commons.io.FileUtils. For example:

GetMethod method = new GetMethod(resource_url);
try {
    int statusCode = client.executeMethod(method);
    if (statusCode != HttpStatus.SC_OK) {
        logger.error("Get method failed: " + method.getStatusLine());
    }
    org.apache.commons.io.FileUtils.copyInputStreamToFile(
        method.getResponseBodyAsStream(), new File(resource_file));
} catch (HttpException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
} finally {
    method.releaseConnection();
}
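
Note that GetMethod and executeMethod come from the old Commons HttpClient 3.x API. With the newer Apache HttpClient (4.x), the same idea might look roughly like the sketch below (assuming the httpclient and commons-io dependencies are on the classpath; the URL and file name are placeholders):

import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileUtils;
import org.apache.http.HttpStatus;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class HttpClientDownload {
    public static void main(String[] args) throws IOException {
        String resourceUrl = "http://www.example.com/information.asp";  // placeholder URL
        File resourceFile = new File("information.asp");                // placeholder output file

        // try-with-resources closes both the client and the response
        try (CloseableHttpClient client = HttpClients.createDefault();
             CloseableHttpResponse response = client.execute(new HttpGet(resourceUrl))) {
            if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) {
                System.err.println("GET failed: " + response.getStatusLine());
            } else {
                // Stream the response body straight into the target file
                FileUtils.copyInputStreamToFile(response.getEntity().getContent(), resourceFile);
            }
        }
    }
}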