There is an online file (such as http://www.example.com/information.asp) that I need to grab and save to a directory. I know there are several ways to fetch and read an online file (URL) line by line, but is there a way to simply download and save the file using Java?
Current answer
public void saveUrl(final String filename, final String urlString)
        throws MalformedURLException, IOException {
    BufferedInputStream in = null;
    FileOutputStream fout = null;
    try {
        in = new BufferedInputStream(new URL(urlString).openStream());
        fout = new FileOutputStream(filename);

        final byte[] data = new byte[1024];
        int count;
        while ((count = in.read(data, 0, 1024)) != -1) {
            fout.write(data, 0, count);
        }
    } finally {
        if (in != null) {
            in.close();
        }
        if (fout != null) {
            fout.close();
        }
    }
}
You will need to handle exceptions, probably outside of this method.
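A minimal calling sketch (not part of the original answer) showing one way to handle those exceptions; the file name and URL are placeholders, and it assumes saveUrl() is reachable from the calling code, for example by making it static:

try {
    saveUrl("information.html", "http://www.example.com/information.asp");
} catch (MalformedURLException e) {
    // the URL string was syntactically invalid
    System.err.println("Bad URL: " + e.getMessage());
} catch (IOException e) {
    // network or file I/O failure; the caller decides whether to retry
    System.err.println("Download failed: " + e.getMessage());
}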
Other answers
A solution with authorization, using java.net.http.HttpClient:
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
        .GET()
        .header("Accept", "application/json")
        // .header("Authorization", "Basic ci5raG9kemhhZXY6NDdiYdfjlmNUM=") if you need
        .uri(URI.create("https://jira.google.ru/secure/attachment/234096/screenshot-1.png"))
        .build();

HttpResponse<InputStream> response = client.send(request, HttpResponse.BodyHandlers.ofInputStream());

try (InputStream in = response.body()) {
    Files.copy(in, Paths.get(target + "screenshot-1.png"), StandardCopyOption.REPLACE_EXISTING);
}
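If you do need the Authorization header, the Basic credential is just a Base64 encoding of "user:password". A small sketch using java.util.Base64 (the user name and password here are placeholders):

String user = "someUser";         // placeholder
String password = "somePassword"; // placeholder
String basicAuth = "Basic " + java.util.Base64.getEncoder()
        .encodeToString((user + ":" + password).getBytes(java.nio.charset.StandardCharsets.UTF_8));
// then add it on the request builder: .header("Authorization", basicAuth)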
Simpler non-blocking I/O usage:
URL website = new URL("http://www.website.com/information.asp");
try (InputStream in = website.openStream()) {
    Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
}
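In the snippet above, target is assumed to be a java.nio.file.Path naming the destination file, for example:

Path target = Paths.get("information.html");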
Here is a concise, readable, JDK-only solution with properly closed resources:
static long download(String url, String fileName) throws IOException {
    try (InputStream in = URI.create(url).toURL().openStream()) {
        return Files.copy(in, Paths.get(fileName));
    }
}
Two lines of code and no dependencies.
Below is a complete example file-downloader program with output, error checking, and command-line argument checking:
package so.downloader;

import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Application {
    public static void main(String[] args) throws IOException {
        if (2 != args.length) {
            System.out.println("USAGE: java -jar so-downloader.jar <source-URL> <target-filename>");
            System.exit(1);
        }

        String sourceUrl = args[0];
        String targetFilename = args[1];

        long bytesDownloaded = download(sourceUrl, targetFilename);

        System.out.println(String.format("Downloaded %d bytes from %s to %s.", bytesDownloaded, sourceUrl, targetFilename));
    }

    static long download(String url, String fileName) throws IOException {
        try (InputStream in = URI.create(url).toURL().openStream()) {
            return Files.copy(in, Paths.get(fileName));
        }
    }
}
As noted in the so-downloader repository README:
To run the file download program:
java -jar so-downloader.jar <source-URL> <target-filename>
For example:
java -jar so-downloader.jar https://github.com/JanStureNielsen/so-downloader/archive/main.zip so-downloader-source.zip
Try Java NIO:
URL website = new URL("http://www.website.com/information.asp");
ReadableByteChannel rbc = Channels.newChannel(website.openStream());
FileOutputStream fos = new FileOutputStream("information.html");
fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
Using transferFrom() is potentially much more efficient than a simple loop that reads from the source channel and writes to this channel. Many operating systems can transfer bytes directly from the source channel into the filesystem cache without actually copying them.
See the FileChannel.transferFrom() documentation for more details.
Note: The third parameter of transferFrom() is the maximum number of bytes to transfer. Integer.MAX_VALUE will transfer at most 2^31 bytes; Long.MAX_VALUE allows at most 2^63 bytes (larger than any file in existence).
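A single transferFrom() call is not guaranteed to move the whole stream, so a common variant (a sketch, not part of the original answer) loops until nothing more is transferred and closes the resources with try-with-resources:

URL website = new URL("http://www.website.com/information.asp");
try (ReadableByteChannel rbc = Channels.newChannel(website.openStream());
     FileOutputStream fos = new FileOutputStream("information.html")) {
    long position = 0;
    long transferred;
    // keep transferring from the current position until the channel is exhausted
    while ((transferred = fos.getChannel().transferFrom(rbc, position, Long.MAX_VALUE)) > 0) {
        position += transferred;
    }
}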
There are many elegant and efficient answers here, but their conciseness loses some useful information. In particular, you often do not want to treat a connection error as an exception, and you may want to handle certain network-related errors differently, for example to decide whether the download should be retried.
Here is a method that does not throw an exception for network errors, only for genuinely exceptional problems such as a malformed URL or a failure to write the file:
/**
 * Downloads from an HTTP/HTTPS URL and saves to a file.
 * Does not consider a connection error an Exception. Instead it returns:
 *
 * 0 = ok
 * 1 = connection interrupted, timeout (but something was read)
 * 2 = not found (FileNotFoundException) (404)
 * 3 = server error (500...)
 * 4 = could not connect: connection timeout (no internet?) java.net.SocketTimeoutException
 * 5 = could not connect (server down?) java.net.ConnectException
 * 6 = could not resolve host (bad host, or no internet - no DNS)
 *
 * @param file File to write. The parent directory will be created if necessary
 * @param url HTTP/HTTPS URL to connect to
 * @param secsConnectTimeout Seconds to wait for connection establishment
 * @param secsReadTimeout Read timeout in seconds - the transfer is aborted if it freezes for longer than this
 * @return See above
 * @throws IOException Only if the URL is malformed or the file could not be created
 */
public static int saveUrl(final Path file, final URL url,
        int secsConnectTimeout, int secsReadTimeout) throws IOException {
    Files.createDirectories(file.getParent()); // make sure the parent directory exists; this can throw an exception
    URLConnection conn = url.openConnection(); // can throw an exception if the URL is bad
    if (secsConnectTimeout > 0) conn.setConnectTimeout(secsConnectTimeout * 1000);
    if (secsReadTimeout > 0) conn.setReadTimeout(secsReadTimeout * 1000);
    int ret = 0;
    boolean somethingRead = false;
    try (InputStream is = conn.getInputStream()) {
        try (BufferedInputStream in = new BufferedInputStream(is);
             OutputStream fout = Files.newOutputStream(file)) {
            final byte[] data = new byte[8192];
            int count;
            while ((count = in.read(data)) > 0) {
                somethingRead = true;
                fout.write(data, 0, count);
            }
        }
    } catch (java.io.IOException e) {
        int httpcode = 999;
        try {
            httpcode = ((HttpURLConnection) conn).getResponseCode();
        } catch (Exception ee) {
            // ignore; keep the sentinel value if the response code cannot be read
        }
        if (somethingRead && e instanceof java.net.SocketTimeoutException) ret = 1;
        else if (e instanceof FileNotFoundException && httpcode >= 400 && httpcode < 500) ret = 2;
        else if (httpcode >= 400 && httpcode < 600) ret = 3;
        else if (e instanceof java.net.SocketTimeoutException) ret = 4;
        else if (e instanceof java.net.ConnectException) ret = 5;
        else if (e instanceof java.net.UnknownHostException) ret = 6;
        else throw e;
    }
    return ret;
}
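A hedged usage sketch (not from the answer) showing how the return codes could drive a simple retry policy, retrying only the timeout and connection cases:

Path file = Paths.get("information.html");
URL url = new URL("http://www.example.com/information.asp");
int attempts = 0;
int result;
do {
    result = saveUrl(file, url, 10, 30); // 10 s connect timeout, 30 s read timeout
    attempts++;
    // retry codes 1, 4 and 5 (timeouts / connection refused); give up on 404, 5xx, unknown host
} while ((result == 1 || result == 4 || result == 5) && attempts < 3);
if (result != 0) {
    System.err.println("Download failed with code " + result);
}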