I have a small utility that I use to download an MP3 file from a website and then build/update a podcast XML file, which I have added to iTunes.

The text processing that creates/updates the XML file is written in Python. However, I use wget inside a Windows .bat file to download the actual MP3 file. I would prefer to have the entire utility written in Python.

I struggled to find a way to actually download the file in Python, which is why I resorted to wget.

So, how do I download a file using Python?


Current answer

Using the requests library, in five lines of Python:

import requests as req

remote_url = 'http://www.example.com/sound.mp3'
local_file_name = 'sound.mp3'

data = req.get(remote_url)

# Save file data to local copy
with open(local_file_name, 'wb') as file:
    file.write(data.content)

Now do something with the local copy of the remote file.
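
If the file is large, bear in mind that data.content holds the entire download in memory before it is written out. A minimal sketch of the alternative, assuming the same example URL and that requests' streaming mode suits your needs:

import requests as req

remote_url = 'http://www.example.com/sound.mp3'
local_file_name = 'sound.mp3'

# Stream the response so the whole MP3 never sits in memory at once
with req.get(remote_url, stream=True) as data:
    data.raise_for_status()  # fail early on HTTP errors
    with open(local_file_name, 'wb') as file:
        for chunk in data.iter_content(chunk_size=8192):
            file.write(chunk)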

Other answers

I wanted to download all the files from a web page. I tried wget, but it was failing, so I decided to go the Python route, and I found this thread.

After reading it, I made a little command-line application, soupget, expanding on the excellent answers of PabloG and Stan and adding some useful options.

It uses BeautifulSoup to collect all the URLs of the page and then download the ones with the desired extension(s). Finally, it can download multiple files in parallel.

Here it is:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import (division, absolute_import, print_function, unicode_literals)
import sys, os, argparse
from bs4 import BeautifulSoup

# --- insert Stan's script here ---
# if sys.version_info >= (3,): 
#...
#...
# def download_file(url, dest=None): 
#...
#...

# --- new stuff ---
def collect_all_url(page_url, extensions):
    """
    Recovers all links in page_url checking for all the desired extensions
    """
    conn = urllib2.urlopen(page_url)
    html = conn.read()
    soup = BeautifulSoup(html, 'lxml')
    links = soup.find_all('a')

    results = []    
    for tag in links:
        link = tag.get('href', None)
        if link is not None: 
            for e in extensions:
                if e in link:
                    # Fallback for badly defined links
                    # checks for missing scheme or netloc
                    if bool(urlparse.urlparse(link).scheme) and bool(urlparse.urlparse(link).netloc):
                        results.append(link)
                    else:
                        new_url=urlparse.urljoin(page_url,link)                        
                        results.append(new_url)
    return results

if __name__ == "__main__":  # Only run if this file is called directly
    # Command line arguments
    parser = argparse.ArgumentParser(
        description='Download all files from a webpage.')
    parser.add_argument(
        '-u', '--url', 
        help='Page url to request')
    parser.add_argument(
        '-e', '--ext', 
        nargs='+',
        help='Extension(s) to find')    
    parser.add_argument(
        '-d', '--dest', 
        default=None,
        help='Destination where to save the files')
    parser.add_argument(
        '-p', '--par', 
        action='store_true', default=False, 
        help="Turns on parallel download")
    args = parser.parse_args()

    # Recover files to download
    all_links = collect_all_url(args.url, args.ext)

    # Download
    if not args.par:
        for l in all_links:
            try:
                filename = download_file(l, args.dest)
                print(l)
            except Exception as e:
                print("Error while downloading: {}".format(e))
    else:
        from multiprocessing.pool import ThreadPool
        results = ThreadPool(10).imap_unordered(
            lambda x: download_file(x, args.dest), all_links)
        for p in results:
            print(p)
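
Note that collect_all_url uses the urllib2 and urlparse names, which come from the compatibility shim in Stan's script (the placeholder above). If you don't have that answer handy, a minimal stand-in is sketched below; this is my own assumption about what the placeholder provides, not Stan's actual code:

import os
import sys

# Compatibility shim hinted at by the placeholder: alias the Python 2
# module names so collect_all_url runs unchanged on Python 3
if sys.version_info >= (3,):
    import urllib.request as urllib2
    import urllib.parse as urlparse
else:
    import urllib2
    import urlparse

def download_file(url, dest=None):
    # Hypothetical stand-in for Stan's helper: take the last path
    # segment as the local file name and stream the body to disk
    filename = os.path.basename(urlparse.urlparse(url).path) or 'index.html'
    if dest:
        filename = os.path.join(dest, filename)
    u = urllib2.urlopen(url)
    with open(filename, 'wb') as f:
        while True:
            chunk = u.read(8192)
            if not chunk:
                break
            f.write(chunk)
    return filename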

An example of its usage is:

python3 soupget.py -p -e <list of extensions> -d <destination_folder> -u <target_webpage>

And an actual example, if you want to see it in action:

python3 soupget.py -p -e .xlsx .pdf .csv -u https://healthdata.gov/dataset/chemicals-cosmetics

You can also get progress feedback with urlretrieve:

import sys
import urllib.request

def report(blocknr, blocksize, size):
    # Report hook: receives the block count so far, the block size, and
    # the total file size (-1 if the server did not send one)
    current = blocknr * blocksize
    sys.stdout.write("\r{0:.2f}%".format(100.0 * current / size))

def downloadFile(url):
    print("\n", url)
    fname = url.split('/')[-1]
    print(fname)
    urllib.request.urlretrieve(url, fname, report)
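
For example, downloadFile('http://www.example.com/sound.mp3') prints the file name and then a percentage that rewrites itself in place as blocks arrive. Note that the report hook's size argument is -1 when the server sends no Content-Length header, so the percentage is only meaningful for sized responses.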

To fetch a page's HTML source, the code can be:

import urllib.request
sock = urllib.request.urlopen("http://diveintopython.org/")
htmlSource = sock.read()
sock.close()
print(htmlSource)

One more, using urlretrieve:

import urllib.request
urllib.request.urlretrieve("http://www.example.com/songs/mp3.mp3", "mp3.mp3")

(For Python 2, use import urllib and urllib.urlretrieve.)
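
Spelled out, the Python 2 equivalent of the line above is:

import urllib
urllib.urlretrieve("http://www.example.com/songs/mp3.mp3", "mp3.mp3")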