So I'm trying to make a Python script that downloads web comics and puts them in a folder on my desktop. I've found a few programs on here that do something similar, but nothing quite like what I need. The one I found that was most similar is here (http://bytes.com/topic/python/answers/850927-problem-using-urllib-download-images). I tried using this code:

>>> import urllib
>>> image = urllib.URLopener()
>>> image.retrieve("http://www.gunnerkrigg.com//comics/00000001.jpg","00000001.jpg")
('00000001.jpg', <httplib.HTTPMessage instance at 0x1457a80>)

I then searched my computer for a file "00000001.jpg", but all I found was the cached picture of it. I'm not even sure it saved the file to my computer. Once I understand how to get the file downloaded, I think I know how to handle the rest. Essentially just use a for loop, split the string at '00000000'.'jpg', and increment the '00000000' up to the largest number, which I would have to somehow determine. Any recommendations on the best way to do this or how to download the file correctly?

Thanks!

Edit 6/15/10

Here is the completed script; it saves the files to any directory you choose. For some odd reason, the files weren't downloading, and then they just did. Any suggestions on how to clean it up would be greatly appreciated. I'm currently working out how to find out how many comics exist on the site, so I can get just the latest one rather than having the program quit after a certain number of exceptions are raised.

import urllib
import os

comicCounter=len(os.listdir('/file'))+1  # reads the number of files in the folder to start downloading at the next comic
errorCount=0

def download_comic(url,comicName):
    """
    download a comic in the form of

    url = http://www.example.com
    comicName = '00000000.jpg'
    """
    image=urllib.URLopener()
    image.retrieve(url,comicName)  # download comicName at URL

while comicCounter <= 1000:  # not the most elegant solution
    os.chdir('/file')  # set where files download to
    try:
        if comicCounter < 10:  # needed to break into 10^n segments because comic names are a set of zeros followed by a number
            comicNumber=str('0000000'+str(comicCounter))  # string containing the eight digit comic number
            comicName=str(comicNumber+".jpg")  # string containing the file name
            url=str("http://www.gunnerkrigg.com//comics/"+comicName)  # creates the URL for the comic
            comicCounter+=1  # increments the comic counter to go to the next comic, must be before the download in case the download raises an exception
            download_comic(url,comicName)  # uses the function defined above to download the comic
            print url
        elif 10 <= comicCounter < 100:
            comicNumber=str('000000'+str(comicCounter))
            comicName=str(comicNumber+".jpg")
            url=str("http://www.gunnerkrigg.com//comics/"+comicName)
            comicCounter+=1
            download_comic(url,comicName)
            print url
        elif 100 <= comicCounter < 1000:
            comicNumber=str('00000'+str(comicCounter))
            comicName=str(comicNumber+".jpg")
            url=str("http://www.gunnerkrigg.com//comics/"+comicName)
            comicCounter+=1
            download_comic(url,comicName)
            print url
        else:  # quit the program if any number outside this range shows up
            break
    except IOError:  # urllib raises an IOError for a 404 error, when the comic doesn't exist
        errorCount+=1  # add one to the error count
        if errorCount>3:  # if more than three errors occur during downloading, quit the program
            break
        else:
            print str("comic"+ ' ' + str(comicCounter) + ' ' + "does not exist")  # otherwise say that the certain comic number doesn't exist
print "all comics are up to date"  # prints if all comics are downloaded

The easiest way is just to use .read() to read the partial or whole response, then write it into a file you've opened in a known good location.
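
A minimal sketch of that (Python 2, reading the response in chunks so a large file never has to fit in memory; the URL is the one from the question):

import urllib

response = urllib.urlopen('http://www.gunnerkrigg.com//comics/00000001.jpg')
with open('00000001.jpg', 'wb') as f:  # a known good location on disk
    chunk = response.read(8192)  # read a partial piece of the response
    while chunk:
        f.write(chunk)
        chunk = response.read(8192)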


Besides suggesting you read the documentation for retrieve() carefully (http://docs.python.org/library/urllib.html#urllib.URLopener.retrieve), I would suggest actually calling read() on the content of the response and then saving it into a file of your choosing, rather than leaving it in the temporary file that retrieve creates.


Python 2:

import urllib
f = open('00000001.jpg','wb')
f.write(urllib.urlopen('http://www.gunnerkrigg.com//comics/00000001.jpg').read())
f.close()

Python 3:

import urllib.request
f = open('00000001.jpg','wb')
f.write(urllib.request.urlopen('http://www.gunnerkrigg.com//comics/00000001.jpg').read())
f.close()

Python 2

Use urllib.urlretrieve:

import urllib
urllib.urlretrieve("http://www.gunnerkrigg.com//comics/00000001.jpg", "00000001.jpg")

Python 3

Use urllib.request.urlretrieve (part of Python 3's legacy interface; it works exactly the same):

import urllib.request
urllib.request.urlretrieve("http://www.gunnerkrigg.com//comics/00000001.jpg", "00000001.jpg")

Just for the record, using the requests library:

import requests
f = open('00000001.jpg','wb')
f.write(requests.get('http://www.gunnerkrigg.com//comics/00000001.jpg').content)
f.close()

Though you should check for errors from requests.get().
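
A minimal sketch of that check (raise_for_status() is the usual idiom; it turns 4xx/5xx responses into exceptions):

import requests

response = requests.get('http://www.gunnerkrigg.com//comics/00000001.jpg')
response.raise_for_status()  # raises requests.exceptions.HTTPError on 4xx/5xx
with open('00000001.jpg', 'wb') as f:
    f.write(response.content)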


I found this answer and edited it to be more reliable:

import os
import urllib

def download_photo(self, img_url, filename):
    try:
        image_on_web = urllib.urlopen(img_url)
        if image_on_web.headers.maintype == 'image':  # only keep responses that are actually images
            buf = image_on_web.read()
            path = os.getcwd() + DOWNLOADED_IMAGE_PATH
            file_path = "%s%s" % (path, filename)
            downloaded_image = open(file_path, "wb")
            downloaded_image.write(buf)
            downloaded_image.close()
            image_on_web.close()
        else:
            return False
    except:
        return False
    return True

This way, you never get any other resource or exception while downloading.
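
A hypothetical call, for illustration: the function takes self but never uses it, so None can stand in for an instance here, and DOWNLOADED_IMAGE_PATH must be defined before the call (the value below is made up):

DOWNLOADED_IMAGE_PATH = '/images/'  # hypothetical; appended to os.getcwd() inside the function

if download_photo(None, 'http://www.gunnerkrigg.com//comics/00000001.jpg', '00000001.jpg'):
    print 'saved'
else:
    print 'skipped: not an image, or the request failed'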


A Python 3 version of @DiGMi's answer:

from urllib import request
f = open('00000001.jpg', 'wb')
f.write(request.urlopen("http://www.gunnerkrigg.com/comics/00000001.jpg").read())
f.close()

Maybe you need a 'User-Agent':

import urllib2
opener = urllib2.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.137 Safari/537.36')]
response = opener.open('http://google.com')
htmlData = response.read()
f = open('file.txt','w')
f.write(htmlData)
f.close()

None of the above code lets you keep the original image name, which is sometimes required. This will help in saving the images to your local drive while preserving the original image name:

    import urllib

    IMAGE = URL.rsplit('/', 1)[1]  # everything after the last '/' is the original file name
    urllib.urlretrieve(URL, IMAGE)

Try it out for more details.


How about this:

import urllib.request
import urllib.error
import os

def from_url( url, filename = None ):
    '''Store the url content to filename'''
    if not filename:
        filename = os.path.basename( os.path.realpath(url) )

    req = urllib.request.Request( url )
    try:
        response = urllib.request.urlopen( req )
    except urllib.error.URLError as e:
        if hasattr( e, 'reason' ):
            print( 'Failed to reach the server -> ', e.reason )
            return False
        elif hasattr( e, 'code' ):
            print( 'The server couldn\'t fulfill the request -> ', e.code )
            return False
    else:
        with open( filename, 'wb' ) as fo:
            fo.write( response.read() )
            print( 'Url saved as %s' % filename )
        return True

##

def main():
    test_url = 'http://cdn.sstatic.net/stackoverflow/img/favicon.ico'

    from_url( test_url )

if __name__ == '__main__':
    main()

If you know that the files are located in the same directory dir of the website site and have the following format: filename_01.jpg, ..., filename_10.jpg, then download all of them:

import requests

for x in range(1, 11):  # 1 through 10 inclusive
    str1 = 'filename_%2.2d.jpg' % (x)
    str2 = 'http://site/dir/filename_%2.2d.jpg' % (x)

    f = open(str1, 'wb')
    f.write(requests.get(str2).content)
    f.close()

A simple solution may be (Python 3):

import urllib.request
import os
os.chdir("D:\\comic") #your path
i=1;
s="00000000"
while i<1000:
    try:
        urllib.request.urlretrieve("http://www.gunnerkrigg.com//comics/"+ s[:8-len(str(i))]+ str(i)+".jpg",str(i)+".jpg")
    except:
        print("not possible" + str(i))
    i+=1;

For Python 3, you will need to import urllib.request:

import urllib.request 

urllib.request.urlretrieve(url, filename)

For more info, check the link.


Using Python 3, the following worked for me.

It retrieves a list of URLs from a CSV file and starts downloading them into a folder. In case the content or the image doesn't exist, it catches the exception and continues working its magic.

import urllib.request
import csv
import os

errorCount=0

file_list = "/Users/$USER/Desktop/YOUR-FILE-TO-DOWNLOAD-IMAGES/image_{0}.jpg"

# CSV file must separate by commas
# urls.csv is set to your current working directory make sure your cd into or add the corresponding path
with open('urls.csv') as images:
    images = csv.reader(images)
    img_count = 1
    print("Please Wait.. it will take some time")
    for image in images:
        try:
            urllib.request.urlretrieve(image[0],
                                       file_list.format(img_count))
            img_count += 1
        except IOError:
            errorCount+=1
            # Stop in case you reach 100 errors downloading images
            if errorCount>100:
                break
            else:
                print ("File does not exist")

print ("Done!")

If you need proxy support, you can do this:

if needProxy == False:
    returnCode, urlReturnResponse = urllib.urlretrieve( myUrl, fullJpegPathAndName )
else:
    proxy_support = urllib2.ProxyHandler({"https":myHttpProxyAddress})
    opener = urllib2.build_opener(proxy_support)
    urllib2.install_opener(opener)
    urlReader = urllib2.urlopen( myUrl ).read()
    with open( fullJpegPathAndName, "wb" ) as f:  # binary mode, since this is image data
        f.write( urlReader )

Another way to do it is via the fastai library. This worked like a charm for me. I was facing an SSL: CERTIFICATE_VERIFY_FAILED error using urlretrieve, so I tried this instead.

import fastai.core  # fastai v1

url = 'https://www.linkdoesntexist.com/lennon.jpg'
fastai.core.download_url(url, 'image1.jpg', show_progress=False)

Using requests:

import requests
import shutil,os

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'
}
currentDir = os.getcwd()
path = os.path.join(currentDir,'Images')  # saving images to an Images folder
os.makedirs(path, exist_ok=True)  # create the folder if it doesn't exist yet

def ImageDl(url):
    attempts = 0
    while attempts < 5:#retry 5 times
        try:
            filename = url.split('/')[-1]
            r = requests.get(url,headers=headers,stream=True,timeout=5)
            if r.status_code == 200:
                with open(os.path.join(path,filename),'wb') as f:
                    r.raw.decode_content = True
                    shutil.copyfileobj(r.raw,f)
            print(filename)
            break
        except Exception as e:
            attempts+=1
            print(e)

if __name__ == '__main__':
    url = 'http://www.gunnerkrigg.com//comics/00000001.jpg'  # example; use any image URL
    ImageDl(url)

Using urllib, you can get this done instantly.

import urllib.request

opener=urllib.request.build_opener()
opener.addheaders=[('User-Agent','Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1941.0 Safari/537.36')]
urllib.request.install_opener(opener)

urllib.request.urlretrieve(URL, "images/0.jpg")  # URL is your image address; the images/ folder must already exist

According to urllib.request.urlretrieve in the Python 3.9.2 documentation, the function was ported from the Python 2 module urllib (as opposed to urllib2). It may become deprecated at some point in the future.

Because of this, it may be better to use requests.get(url, params=None, **kwargs). Here is an MWE:

import requests
 
url = 'http://example.com/example.jpg'
filename = url.split('/')[-1]  # take 'example.jpg' from the URL

response = requests.get(url)

with open(filename, "wb") as f:
    f.write(response.content)

Reference: Download Google's WebP images via screenshots with Selenium WebDriver.


If you want to download the images following the website's directory structure, you can do this:

    import os
    from urllib.request import urlretrieve
    from bs4 import BeautifulSoup

    result_path = './result/'
    soup = BeautifulSoup(self.file, 'html.parser')
    for image in soup.findAll("img"):
        image["name"] = image["src"].split("/")[-1]
        image['path'] = image["src"].replace(image["name"], '')
        os.makedirs(result_path + image['path'], exist_ok=True)
        if image["src"].lower().startswith("http"):
            urlretrieve(image["src"], result_path + image["src"][1:])
        else:
            urlretrieve(url + image["src"], result_path + image["src"][1:])  # url is the site's base address