So I'm trying to make a Python script that downloads web comics and puts them in a folder on my desktop. I've found a few similar programs on here that do something similar, but nothing quite like what I need. The one I found most similar is here (http://bytes.com/topic/python/answers/850927-problem-using-urllib-download-images). I tried using this code:

>>> import urllib
>>> image = urllib.URLopener()
>>> image.retrieve("http://www.gunnerkrigg.com//comics/00000001.jpg","00000001.jpg")
('00000001.jpg', <httplib.HTTPMessage instance at 0x1457a80>)

I then searched my computer for a file "00000001.jpg", but all I found was the cached picture of it. I'm not even sure it saved the file to my computer. Once I understand how to get the file downloaded, I think I know how to handle the rest. Essentially just use a for loop, split the string at '00000000'.'jpg', and increment the '00000000' up to the largest number, which I would have to determine somehow. Any recommendations on the best way to do this or how to download the file correctly?
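For what it's worth, a minimal sketch of that loop (assuming Python 3's urllib.request and an arbitrary upper bound, since the real comic count still has to be determined) could look like this:

import urllib.request

for comic_number in range(1, 11):  # hypothetical upper bound of 10 comics
    comic_name = str(comic_number).zfill(8) + ".jpg"  # zero-pad to eight digits, e.g. 00000001.jpg
    url = "http://www.gunnerkrigg.com//comics/" + comic_name
    urllib.request.urlretrieve(url, comic_name)  # saves the file in the current working directory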

Thanks!

Edit 6/15/10

Here is the completed script; it saves the files to any directory you choose. For some odd reason the files weren't downloading, and then they just did. Any suggestions on how to clean it up would be much appreciated. I'm currently working out how to find how many comics exist on the site, so I can get just the newest one rather than having the program quit after a certain number of exceptions are raised.

import urllib
import os

comicCounter=len(os.listdir('/file'))+1  # reads the number of files in the folder to start downloading at the next comic
errorCount=0

def download_comic(url,comicName):
    """
    download a comic in the form of

    url = http://www.example.com
    comicName = '00000000.jpg'
    """
    image=urllib.URLopener()
    image.retrieve(url,comicName)  # download comicName at URL

while comicCounter <= 1000:  # not the most elegant solution
    os.chdir('/file')  # set where files download to
    try:
        if comicCounter < 10:  # needed to break into 10^n segments because comic names are a set of zeros followed by a number
            comicNumber=str('0000000'+str(comicCounter))  # string containing the eight digit comic number
            comicName=str(comicNumber+".jpg")  # string containing the file name
            url=str("http://www.gunnerkrigg.com//comics/"+comicName)  # creates the URL for the comic
            comicCounter+=1  # increments the comic counter to go to the next comic, must be before the download in case the download raises an exception
            download_comic(url,comicName)  # uses the function defined above to download the comic
            print url
        elif 10 <= comicCounter < 100:
            comicNumber=str('000000'+str(comicCounter))
            comicName=str(comicNumber+".jpg")
            url=str("http://www.gunnerkrigg.com//comics/"+comicName)
            comicCounter+=1
            download_comic(url,comicName)
            print url
        elif 100 <= comicCounter < 1000:
            comicNumber=str('00000'+str(comicCounter))
            comicName=str(comicNumber+".jpg")
            url=str("http://www.gunnerkrigg.com//comics/"+comicName)
            comicCounter+=1
            download_comic(url,comicName)
            print url
        else:  # stop if the counter falls outside the handled range
            break  # the bare name `quit` did nothing; break actually ends the loop
    except IOError:  # urllib raises an IOError for a 404 error, when the comic doesn't exist
        errorCount+=1  # add one to the error count
        if errorCount>3:  # if more than three errors occur during downloading, quit the program
            break
        else:
            print str("comic"+ ' ' + str(comicCounter) + ' ' + "does not exist")  # otherwise say that the certain comic number doesn't exist
print "all comics are up to date"  # prints if all comics are downloaded

Current answer

If you need proxy support, you can do it like this:

import urllib
import urllib2

# needProxy, myUrl, myHttpProxyAddress and fullJpegPathAndName are assumed to be defined elsewhere
if needProxy == False:
    # urlretrieve returns the local filename and the response headers
    returnCode, urlReturnResponse = urllib.urlretrieve( myUrl, fullJpegPathAndName )
else:
    proxy_support = urllib2.ProxyHandler({"https":myHttpProxyAddress})
    opener = urllib2.build_opener(proxy_support)
    urllib2.install_opener(opener)
    urlReader = urllib2.urlopen( myUrl ).read()
    with open( fullJpegPathAndName, "wb" ) as f:  # binary mode so the image bytes aren't corrupted
        f.write( urlReader )
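
In Python 3 the same idea lives in urllib.request; a rough equivalent, keeping the placeholder names from the snippet above, would be:

import urllib.request

if not needProxy:
    urllib.request.urlretrieve(myUrl, fullJpegPathAndName)
else:
    proxy_support = urllib.request.ProxyHandler({"https": myHttpProxyAddress})
    opener = urllib.request.build_opener(proxy_support)
    urllib.request.install_opener(opener)
    data = urllib.request.urlopen(myUrl).read()
    with open(fullJpegPathAndName, "wb") as f:
        f.write(data)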

Other answers

Using Python 3, the following worked for me.

It reads a list of URLs from a CSV file and downloads them into a folder. If the content or image doesn't exist, it catches the exception and keeps working its magic.
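For reference, a hypothetical urls.csv would simply hold one image URL per row, with the URL in the first column, for example:

http://www.example.com/comics/00000001.jpg
http://www.example.com/comics/00000002.jpg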

import urllib.request
import csv
import os

errorCount=0

file_list = "/Users/$USER/Desktop/YOUR-FILE-TO-DOWNLOAD-IMAGES/image_{0}.jpg"  # filename template for the downloaded images

# The CSV file must be comma-separated
# urls.csv is read from the current working directory; make sure you cd into it or pass the corresponding path
with open('urls.csv') as images:
    images = csv.reader(images)
    img_count = 1
    print("Please Wait.. it will take some time")
    for image in images:
        try:
            urllib.request.urlretrieve(image[0],
            file_list.format(img_count))
            img_count += 1
        except IOError:
            errorCount+=1
            # Stop in case you reach 100 errors downloading images
            if errorCount>100:
                break
            else:
                print ("File does not exist")

print ("Done!")


If you want to download images while mirroring the website's directory structure, you can do something like this:

import os
from urllib.request import urlretrieve
from bs4 import BeautifulSoup

result_path = './result/'
# self.file is the page's HTML and url is the site's base address in the original answer's class
soup = BeautifulSoup(self.file, 'html.parser')  # 'css.parser' is not a valid parser name
for image in soup.findAll("img"):
    image["name"] = image["src"].split("/")[-1]
    image['path'] = image["src"].replace(image["name"], '')
    os.makedirs(result_path + image['path'], exist_ok=True)
    if image["src"].lower().startswith("http"):
        urlretrieve(image["src"], result_path + image["src"][1:])
    else:
        urlretrieve(url + image["src"], result_path + image["src"][1:])
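
Since the snippet references self.file and url from its surrounding class, here is one hypothetical way to provide equivalent values outside that class (the site address is a placeholder):

import urllib.request

url = 'http://www.example.com/'  # hypothetical base address of the site being mirrored
page_html = urllib.request.urlopen(url).read()  # the HTML that the snippet reads as self.file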

How about this:

import os
import urllib.request
import urllib.error

def from_url( url, filename = None ):
    '''Store the url content to filename'''
    if not filename:
        filename = os.path.basename( os.path.realpath(url) )

    req = urllib.request.Request( url )
    try:
        response = urllib.request.urlopen( req )
    except urllib.error.URLError as e:
        if hasattr( e, 'reason' ):
            print( 'Fail in reaching the server -> ', e.reason )
            return False
        elif hasattr( e, 'code' ):
            print( 'The server couldn\'t fulfill the request -> ', e.code )
            return False
    else:
        with open( filename, 'wb' ) as fo:
            fo.write( response.read() )
            print( 'Url saved as %s' % filename )
        return True

##

def main():
    test_url = 'http://cdn.sstatic.net/stackoverflow/img/favicon.ico'

    from_url( test_url )

if __name__ == '__main__':
    main()

Another approach is via the fastai library. It worked like magic for me. I was getting an SSL: CERTIFICATE_VERIFY_FAILED error using urlretrieve, so I tried this instead.

import fastai.core  # fastai v1 exposes download_url in fastai.core

url = 'https://www.linkdoesntexist.com/lennon.jpg'
fastai.core.download_url(url, 'image1.jpg', show_progress=False)
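
If the goal is only to get past the SSL: CERTIFICATE_VERIFY_FAILED error with plain urllib, one possible workaround (an assumption on my part, and it skips certificate verification, so use it with care) is to pass an unverified SSL context, reusing the url from the snippet above:

import ssl
import urllib.request

context = ssl._create_unverified_context()  # disables certificate verification
with urllib.request.urlopen(url, context=context) as response, open('image1.jpg', 'wb') as f:
    f.write(response.read())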