I'm trying to download and save an image from the web using Python's requests module.

Here is the (working) code I was using:

img = urllib2.urlopen(settings.STATICMAP_URL.format(**data))
with open(path, 'w') as f:
    f.write(img.read())

Here is the new (non-working) code using requests:

r = requests.get(settings.STATICMAP_URL.format(**data))
if r.status_code == 200:
    img = r.raw.read()
    with open(path, 'w') as f:
        f.write(img)

Can you help me figure out what attribute from the response to use from requests?


Current answer

There are mainly two ways:

Using .content (simplest/official) (see Zhenyi Zhang's answer):

import io  # Note: io.BytesIO is StringIO.StringIO on Python 2.
import requests
from PIL import Image

r = requests.get('http://lorempixel.com/400/200')
r.raise_for_status()
with io.BytesIO(r.content) as f:
    with Image.open(f) as img:
        img.show()

Using .raw (see Martijn Pieters's answer):

import requests
import PIL.Image

r = requests.get('http://lorempixel.com/400/200', stream=True)
r.raise_for_status()
r.raw.decode_content = True  # Required to decompress gzip/deflate compressed responses.
with PIL.Image.open(r.raw) as img:
    img.show()
r.close()  # Safety when stream=True, to ensure the connection is released.

Timing both shows no noticeable difference.
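If you want to check that claim yourself, a minimal timing sketch along these lines should do (it reuses the placeholder URL above; each call makes a real network request, so the results mostly reflect network latency):

import io
import timeit

import requests
from PIL import Image

URL = 'http://lorempixel.com/400/200'  # Same placeholder URL as above.

def via_content():
    r = requests.get(URL)
    r.raise_for_status()
    with io.BytesIO(r.content) as f:
        Image.open(f).load()  # .load() forces the whole image to be decoded.

def via_raw():
    r = requests.get(URL, stream=True)
    r.raise_for_status()
    r.raw.decode_content = True
    Image.open(r.raw).load()
    r.close()

# Keep the repeat count small, since every call hits the network.
print('content:', timeit.timeit(via_content, number=10))
print('raw:    ', timeit.timeit(via_raw, number=10))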

Other answers

I'll post an answer since I don't have enough reputation to make a comment, but with wget as posted by Blairg23, you can also provide an out parameter for the path.

 wget.download(url, out=path)
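For a self-contained sketch, assuming the third-party wget package is installed (the URL and output path below are just placeholders):

import wget

url = 'https://i.imgur.com/ExdKOOz.png'  # Placeholder URL.
path = 'sample_image.png'                # Placeholder output path.
wget.download(url, out=path)             # Saves the download to the given path.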

You can either use the response.raw file object, or iterate over the response.

Using the response.raw file-like object will not, by default, decode compressed responses (with GZIP or deflate). You can force it to decompress for you anyway by setting the decode_content attribute to True (requests sets it to False to control the decoding itself). You can then use shutil.copyfileobj() to have Python stream the data to a file object:

import requests
import shutil

r = requests.get(settings.STATICMAP_URL.format(**data), stream=True)
if r.status_code == 200:
    with open(path, 'wb') as f:
        r.raw.decode_content = True
        shutil.copyfileobj(r.raw, f)

To iterate over the response, use a loop; iterating like this ensures that the data is decompressed by this stage:

r = requests.get(settings.STATICMAP_URL.format(**data), stream=True)
if r.status_code == 200:
    with open(path, 'wb') as f:
        for chunk in r:
            f.write(chunk)

This will read the data in 128-byte chunks; if you feel another chunk size works better, use the Response.iter_content() method with a custom chunk size:

r = requests.get(settings.STATICMAP_URL.format(**data), stream=True)
if r.status_code == 200:
    with open(path, 'wb') as f:
        for chunk in r.iter_content(1024):
            f.write(chunk)

Note that you need to open the destination file in binary mode to ensure Python doesn't try to translate newlines for you. We also set stream=True so that requests doesn't download the whole image into memory first.
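As a small variation, and assuming a requests version recent enough for responses to act as context managers, the same streaming download can be written so that the connection is released even if something goes wrong along the way:

import shutil

import requests

# settings, data and path as in the question above.
with requests.get(settings.STATICMAP_URL.format(**data), stream=True) as r:
    if r.status_code == 200:
        r.raw.decode_content = True
        with open(path, 'wb') as f:
            shutil.copyfileobj(r.raw, f)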

Get a file-like object from the request and copy it to a file. This will also avoid reading the whole thing into memory at once.

import shutil

import requests

url = 'http://example.com/img.png'
response = requests.get(url, stream=True)
with open('img.png', 'wb') as out_file:
    shutil.copyfileobj(response.raw, out_file)
del response

Here is a very simple piece of code:

import requests

response = requests.get("https://i.imgur.com/ExdKOOz.png")  ## Fetch the image.

file = open("sample_image.png", "wb")  ## Create the destination file in binary write mode.
file.write(response.content)  ## Write the response body (the image bytes).
file.close()
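A slightly safer variant of the same idea uses a with statement so the file is closed even if the write fails, plus raise_for_status() to fail early on HTTP errors:

import requests

response = requests.get("https://i.imgur.com/ExdKOOz.png")
response.raise_for_status()  # Stop early on HTTP errors.

with open("sample_image.png", "wb") as f:  # Binary mode; closed automatically.
    f.write(response.content)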