Requests is a really nice library. I'd like to use it to download big files (>1GB). The problem is that it's not possible to keep the whole file in memory; I need to read it in chunks. And that is an issue with the following code:
import requests

def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    f = open(local_filename, 'wb')
    for chunk in r.iter_content(chunk_size=512 * 1024):
        if chunk:  # filter out keep-alive new chunks
            f.write(chunk)
    f.close()
    return
For some reason it doesn't work that way: it still loads the whole response into memory before saving it to a file.
Your chunk size could be too large; have you tried dropping that, maybe 1024 bytes at a time? (Also, you could use with to tidy up the syntax.)
def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
    return
Incidentally, how are you deducing that the response has been loaded into memory?
It sounds as if Python is not flushing the data to the file. Based on other SO questions, you could try f.flush() and os.fsync() to force the write to disk and free the memory:
with open(local_filename, 'wb') as f:
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:  # filter out keep-alive new chunks
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
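As for the question above about how to tell whether the response ends up in memory: one rough check (a sketch, not from the original thread; the URL is a placeholder) is to compare the process's peak RSS before and after a plain requests.get(), e.g. with the standard resource module on Linux/macOS:

import resource
import requests

def peak_rss():
    # ru_maxrss is reported in kilobytes on Linux and in bytes on macOS
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

print("peak RSS before:", peak_rss())
r = requests.get("https://example.com/big-file.iso")  # hypothetical URL
print("peak RSS after:", peak_rss())  # without stream=True this grows by roughly the file size

If the second number jumps by about the size of the download, the whole body was buffered in memory.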
Requests is good, but how about a socket solution?
def stream_(host):
    import socket
    import ssl
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
        with context.wrap_socket(sock, server_hostname=host) as wrapped_socket:
            wrapped_socket.connect((socket.gethostbyname(host), 443))
            wrapped_socket.send(
                "GET / HTTP/1.1\r\nHost:thiscatdoesnotexist.com\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9\r\n\r\n".encode())
            # read the headers one byte at a time until the blank line that ends them
            resp = b""
            while resp[-4:] != b"\r\n\r\n":
                resp += wrapped_socket.recv(1)
            resp = resp.decode()
            content_length = int("".join([tag.split(" ")[1] for tag in resp.split("\r\n") if "content-length" in tag.lower()]))
            # read the body in 2048-byte chunks until Content-Length bytes have arrived
            image = b""
            while content_length > 0:
                data = wrapped_socket.recv(2048)
                if not data:
                    print("EOF")
                    break
                image += data
                content_length -= len(data)
            with open("image.jpeg", "wb") as file:
                file.write(image)
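A usage sketch for the function above (the hostname matches the one hard-coded into the Host header; how it would be called is an assumption, not part of the answer):

stream_("thiscatdoesnotexist.com")  # fetches over raw TLS and writes image.jpeg to the current directory

Note that this version still accumulates the whole body in the image variable before writing it out, so memory use is not actually bounded; to bound it you would write each recv() chunk straight to the already-open file instead.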
With the following streaming code, Python memory usage is restricted regardless of the size of the downloaded file:
def download_file(url):
    local_filename = url.split('/')[-1]
    # NOTE the stream=True parameter below
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                # If you have chunk encoded response uncomment if
                # and set chunk_size parameter to None.
                # if chunk:
                f.write(chunk)
    return local_filename
Note that the number of bytes returned by iter_content is not exactly the chunk_size; it is often a much larger number and can differ on every iteration.
See body-content-workflow and Response.iter_content in the Requests documentation for further reference.
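A minimal sketch to observe that (the URL is a placeholder; substitute any reasonably large downloadable file):

import requests

url = "https://example.com/large-file.bin"  # hypothetical URL
with requests.get(url, stream=True) as r:
    r.raise_for_status()
    for i, chunk in enumerate(r.iter_content(chunk_size=8192)):
        print(i, len(chunk))  # lengths need not equal chunk_size, especially for compressed responses
        if i == 9:
            break  # ten chunks are enough to see the variation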
It's much easier if you use Response.raw and shutil.copyfileobj():
import requests
import shutil

def download_file(url):
    local_filename = url.split('/')[-1]
    with requests.get(url, stream=True) as r:
        with open(local_filename, 'wb') as f:
            shutil.copyfileobj(r.raw, f)
    return local_filename
This streams the file to disk without using excessive memory, and the code is simple.
Note: according to the documentation, Response.raw will not decode the gzip and deflate transfer-encodings, so you will need to handle that yourself.
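One common way to handle that (a sketch; decode_content is a keyword argument of the underlying urllib3 read(), and wrapping it with functools.partial is an assumption layered on top of the answer above, not something its author wrote):

import functools
import requests
import shutil

def download_file(url):
    local_filename = url.split('/')[-1]
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        # ask the raw urllib3 stream to decode gzip/deflate as it is read
        r.raw.read = functools.partial(r.raw.read, decode_content=True)
        with open(local_filename, 'wb') as f:
            shutil.copyfileobj(r.raw, f)
    return local_filename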