I am trying to get the contents of App Store > Business:

import requests
from lxml import html

page = requests.get("https://itunes.apple.com/in/genre/ios-business/id6000?mt=8")
tree = html.fromstring(page.text)

flist = []
plist = []
for i in range(0, 100):
    # re-run the XPath query and fetch the page behind the first matching app link
    app = tree.xpath("//div[@class='column first']/ul/li/a/@href")
    ap = app[0]
    page1 = requests.get(ap)

When I try a range of (0, 2) it works, but when I set the range to 100, it shows this error:

Traceback (most recent call last):
  File "/home/preetham/Desktop/eg.py", line 17, in <module>
    page1 = requests.get(ap)
  File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 55, in get
    return request('get', url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 383, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 486, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 378, in send
    raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='itunes.apple.com', port=443): Max retries exceeded with url: /in/app/adobe-reader/id469337564?mt=8 (Caused by <class 'socket.gaierror'>: [Errno -2] Name or service not known)

Current Answer

I got the same error when I hit the route from the browser, but in Postman it worked fine. My problem turned out to be a stray / after the route, just before the query string.

127.0.0.1:5000/api/v1/search/?location=Madina raised the error, and removing the / after search worked for me.
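
In requests terms, the difference looked roughly like this (a sketch; the host and route come from the example above):

import requests

# raised ConnectionError / "Max retries exceeded" for me:
# r = requests.get("http://127.0.0.1:5000/api/v1/search/?location=Madina")

# worked once the / after "search" was removed:
r = requests.get("http://127.0.0.1:5000/api/v1/search?location=Madina")
print(r.status_code)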

Other Answers

To add my own experience:

r = requests.get(download_url)

when I was trying to download a file specified in the url.

The error was

HTTPSConnectionPool(host, port=443): Max retries exceeded with url (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))

I fixed it by adding verify=False to the call, as follows:

r = requests.get(download_url + filename, verify=False)
open(filename, 'wb').write(r.content)
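
Note that verify=False skips certificate validation entirely, and requests will then emit an InsecureRequestWarning on every call; if you accept that trade-off, the warning can be silenced explicitly (a sketch using urllib3's warning helpers):

import urllib3
import requests

# suppress the warning that verify=False triggers on each request
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

r = requests.get(download_url + filename, verify=False)
open(filename, 'wb').write(r.content)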

Just import time and add:

time.sleep(6)

somewhere inside the for loop, to avoid sending too many requests to the server in a short time. The number 6 means 6 seconds. Keep testing values starting from 1 until you find the minimum number of seconds that avoids the problem.
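
Applied to the loop from the question, that would look something like this (a sketch; the loop is restructured to iterate over the scraped links directly):

import time
import requests
from lxml import html

page = requests.get("https://itunes.apple.com/in/genre/ios-business/id6000?mt=8")
tree = html.fromstring(page.text)

links = tree.xpath("//div[@class='column first']/ul/li/a/@href")
for ap in links:
    page1 = requests.get(ap)
    time.sleep(6)  # pause so the server is not hit with rapid-fire requests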

Just do this:

Paste the code below where you have page = requests.get(url):

import time

page = ''
while page == '':
    try:
        page = requests.get(url)
        break
    except requests.exceptions.ConnectionError:
        # back off before retrying when the connection is refused
        print("Connection refused by the server..")
        print("Let me sleep for 5 seconds")
        print("ZZzzzz...")
        time.sleep(5)
        print("Was a nice sleep, now let me continue...")
        continue
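
A variation on the same idea is to let requests retry for you, with a capped number of attempts and exponential backoff (a sketch using urllib3's Retry, which requests exposes through HTTPAdapter):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=5, backoff_factor=1,
                status_forcelist=[429, 500, 502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

page = session.get(url)  # retried up to 5 times before raising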

You're welcome :)

My case was rather special. I tried the answers above and none of them worked. Then it suddenly occurred to me: could it be related to my network proxy? You see, I'm in mainland China and can't access sites like Google without a proxy. I turned off the network proxy and the problem was solved.
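
If switching the proxy off system-wide is not an option, requests can also be told to ignore the environment's proxy settings (a sketch; trust_env=False makes the session skip HTTP_PROXY/HTTPS_PROXY and system proxy lookup):

import requests

session = requests.Session()
session.trust_env = False  # ignore environment and system proxy settings
page = session.get("https://itunes.apple.com/in/genre/ios-business/id6000?mt=8")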

Even after installing pyopenssl and trying various Python versions, I couldn't get this to work on Windows (while it worked fine on Mac), so I switched to urllib, which works with Python 3.6 (from python.org) and 3.7 (Anaconda):

from urllib.request import urlopen

# fetch the page with the standard library instead of requests
html = urlopen("http://pythonscraping.com/pages/page1.html")
contents = html.read()
print(contents)