If you want to grab the contents of a web page into a variable, just read the response of urllib.request.urlopen:
import urllib.request
...
url = 'http://example.com/'
response = urllib.request.urlopen(url)
data = response.read() # a `bytes` object
text = data.decode('utf-8') # a `str`; this step can't be used if data is binary
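If you don't want to hard-code UTF-8, you can ask the response for the charset the server declared in its Content-Type header. A small sketch; the UTF-8 fallback below is an assumption for servers that declare no charset:
import urllib.request
...
url = 'http://example.com/'
with urllib.request.urlopen(url) as response:
    data = response.read()
    # charset declared in the Content-Type header, if any;
    # falling back to UTF-8 is an assumption, not a guarantee
    charset = response.headers.get_content_charset() or 'utf-8'
text = data.decode(charset)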
The easiest way to download and save a file is to use the urllib.request.urlretrieve function:
import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
urllib.request.urlretrieve(url, file_name)
If no target file name is given, urlretrieve stores the download in a temporary file and returns the path to it:
import urllib.request
...
# Download the file from `url`, save it in a temporary directory and get the
# path to it (e.g. '/tmp/tmpb48zma.txt') in the `file_name` variable:
file_name, headers = urllib.request.urlretrieve(url)
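urlretrieve also accepts a reporthook callback, which is called once per block transferred and can serve as a rough progress indicator. A minimal sketch (the output format is arbitrary):
import urllib.request
...
def report_progress(block_num, block_size, total_size):
    # total_size is -1 when the server sends no Content-Length header
    if total_size > 0:
        percent = min(100, block_num * block_size * 100 // total_size)
        print(f'\rDownloading: {percent}%', end='')

urllib.request.urlretrieve(url, file_name, reporthook=report_progress)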
But keep in mind that urlretrieve is considered legacy and might become deprecated (not sure why, though).
So the most correct way to do this would be to use the urllib.request.urlopen function to return a file-like object that represents an HTTP response, and then copy it to a real file using shutil.copyfileobj:
import urllib.request
import shutil
...
# Download the file from `url` and save it locally under `file_name`:
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    shutil.copyfileobj(response, out_file)
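urlopen also accepts a urllib.request.Request object instead of a plain URL, which lets you set request headers; some servers reject urllib's default User-Agent, so a sketch like this can help (the header value here is just a made-up example):
import shutil
import urllib.request
...
request = urllib.request.Request(url, headers={'User-Agent': 'my-downloader/1.0'})
with urllib.request.urlopen(request) as response, open(file_name, 'wb') as out_file:
    shutil.copyfileobj(response, out_file)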
If the copyfileobj approach seems too complicated, you may want to go simpler and store the whole download in a bytes object, then write it to a file. Note that this works well only for small files:
import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    data = response.read()  # a `bytes` object
    out_file.write(data)
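For larger files you can do the copy yourself in fixed-size chunks, which is essentially what shutil.copyfileobj does internally (a sketch; the 64 KiB chunk size is an arbitrary choice):
import urllib.request
...
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    while True:
        chunk = response.read(64 * 1024)  # empty bytes object signals end of stream
        if not chunk:
            break
        out_file.write(chunk)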
It is possible to extract .gz (and maybe other formats) compressed data on the fly, but such an operation probably requires the HTTP server to support random access to the file:
import urllib.request
import gzip
...
# Read the first 64 bytes of the file inside the .gz archive located at `url`
url = 'http://example.com/something.gz'
with urllib.request.urlopen(url) as response:
    with gzip.GzipFile(fileobj=response) as uncompressed:
        file_header = uncompressed.read(64)  # a `bytes` object
        # Or do anything shown above using `uncompressed` instead of `response`.
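For example, you could wrap the decompressed stream in io.TextIOWrapper and iterate over it line by line (a sketch that assumes the archive contains UTF-8 text):
import gzip
import io
import urllib.request
...
url = 'http://example.com/something.gz'
with urllib.request.urlopen(url) as response:
    with gzip.GzipFile(fileobj=response) as uncompressed:
        # assumes the decompressed payload is UTF-8-encoded text
        for line in io.TextIOWrapper(uncompressed, encoding='utf-8'):
            print(line, end='')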