Here is my code:
import urllib2.request
response = urllib2.urlopen("http://www.google.com")
html = response.read()
print(html)
Any help?
Current answer
For a script that works under both Python 2 (tested with 2.7.3 and 2.6.8) and Python 3 (3.2.3 and 3.3.2+), try:
#! /usr/bin/env python
try:
    # For Python 3.0 and later
    from urllib.request import urlopen
except ImportError:
    # Fall back to Python 2's urllib2
    from urllib2 import urlopen
html = urlopen("http://www.google.com/")
print(html.read())
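Note that in Python 3 read() returns bytes, so the printed output shows a b'...' literal. If you want plain text on both versions, here is a minimal sketch, assuming the page is served as UTF-8:

#! /usr/bin/env python
try:
    # For Python 3.0 and later
    from urllib.request import urlopen
except ImportError:
    # Fall back to Python 2's urllib2
    from urllib2 import urlopen

raw = urlopen("http://www.google.com/").read()
# Python 3 gives bytes, Python 2 gives str; decode only when needed
text = raw.decode("utf-8") if isinstance(raw, bytes) else raw
print(text)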
Other answers
Python 3:
import urllib.request
wp = urllib.request.urlopen("http://google.com")
pw = wp.read()
print(pw)
Python 2:
import urllib
import sys
wp = urllib.urlopen("http://google.com")
for line in wp:
    sys.stdout.write(line)
I have tested both snippets separately, each on its own Python version.
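In the Python 3 snippet above, pw is a bytes object. If you want readable text, you can decode it using the charset the server declares; falling back to utf-8 when no charset is reported is an assumption in this sketch:

import urllib.request

wp = urllib.request.urlopen("http://google.com")
# get_content_charset() reads the charset from the Content-Type response header
charset = wp.headers.get_content_charset() or "utf-8"
pw = wp.read().decode(charset)
print(pw)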
Instead of using:
import urllib2
in Python 3, use the code below:
import urllib.request as urllib2
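With that alias in place, Python-2-style calls such as urllib2.urlopen keep working unchanged in Python 3; a minimal sketch:

import urllib.request as urllib2  # Python 3, but the old module name still works below

response = urllib2.urlopen("http://www.google.com")
html = response.read()
print(html)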
Here is some tab completion showing the contents of the packages in Python 2 and Python 3.
In Python 2:
In [1]: import urllib
In [2]: urllib.
urllib.ContentTooShortError urllib.ftpwrapper urllib.socket urllib.test1
urllib.FancyURLopener urllib.getproxies urllib.splitattr urllib.thishost
urllib.MAXFTPCACHE urllib.getproxies_environment urllib.splithost urllib.time
urllib.URLopener urllib.i urllib.splitnport urllib.toBytes
urllib.addbase urllib.localhost urllib.splitpasswd urllib.unquote
urllib.addclosehook urllib.noheaders urllib.splitport urllib.unquote_plus
urllib.addinfo urllib.os urllib.splitquery urllib.unwrap
urllib.addinfourl urllib.pathname2url urllib.splittag urllib.url2pathname
urllib.always_safe urllib.proxy_bypass urllib.splittype urllib.urlcleanup
urllib.base64 urllib.proxy_bypass_environment urllib.splituser urllib.urlencode
urllib.basejoin urllib.quote urllib.splitvalue urllib.urlopen
urllib.c urllib.quote_plus urllib.ssl urllib.urlretrieve
urllib.ftpcache urllib.re urllib.string
urllib.ftperrors urllib.reporthook urllib.sys
In Python 3:
In [2]: import urllib.
urllib.error urllib.parse urllib.request urllib.response urllib.robotparser
In [2]: import urllib.error.
urllib.error.ContentTooShortError urllib.error.HTTPError urllib.error.URLError
In [2]: import urllib.parse.
urllib.parse.parse_qs urllib.parse.quote_plus urllib.parse.urldefrag urllib.parse.urlsplit
urllib.parse.parse_qsl urllib.parse.unquote urllib.parse.urlencode urllib.parse.urlunparse
urllib.parse.quote urllib.parse.unquote_plus urllib.parse.urljoin urllib.parse.urlunsplit
urllib.parse.quote_from_bytes urllib.parse.unquote_to_bytes urllib.parse.urlparse
In [2]: import urllib.request.
urllib.request.AbstractBasicAuthHandler urllib.request.HTTPSHandler
urllib.request.AbstractDigestAuthHandler urllib.request.OpenerDirector
urllib.request.BaseHandler urllib.request.ProxyBasicAuthHandler
urllib.request.CacheFTPHandler urllib.request.ProxyDigestAuthHandler
urllib.request.DataHandler urllib.request.ProxyHandler
urllib.request.FTPHandler urllib.request.Request
urllib.request.FancyURLopener urllib.request.URLopener
urllib.request.FileHandler urllib.request.UnknownHandler
urllib.request.HTTPBasicAuthHandler urllib.request.build_opener
urllib.request.HTTPCookieProcessor urllib.request.getproxies
urllib.request.HTTPDefaultErrorHandler urllib.request.install_opener
urllib.request.HTTPDigestAuthHandler urllib.request.pathname2url
urllib.request.HTTPErrorProcessor urllib.request.url2pathname
urllib.request.HTTPHandler urllib.request.urlcleanup
urllib.request.HTTPPasswordMgr urllib.request.urlopen
urllib.request.HTTPPasswordMgrWithDefaultRealm urllib.request.urlretrieve
urllib.request.HTTPRedirectHandler
In [2]: import urllib.response.
urllib.response.addbase urllib.response.addclosehook urllib.response.addinfo urllib.response.addinfourl
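The listings above show where the old names ended up: opening URLs lives in urllib.request, the encoding/quoting helpers in urllib.parse, and the exceptions in urllib.error. A small Python 3 sketch tying the three together (the example URL and query parameters are placeholders, not from the original answer):

import urllib.error
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({"q": "python"})    # was urllib.urlencode in Python 2
url = "http://www.example.com/search?" + params

try:
    with urllib.request.urlopen(url) as response:   # was urllib2.urlopen in Python 2
        print(response.read()[:200])
except urllib.error.HTTPError as err:               # was urllib2.HTTPError in Python 2
    print("Server returned", err.code)
except urllib.error.URLError as err:
    print("Failed to reach the server:", err.reason)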
The simplest solution:
In Python 3.x:
import urllib.request
url = "https://api.github.com/users?since=100"
request = urllib.request.Request(url)
response = urllib.request.urlopen(request)
data_content = response.read()
print(data_content)
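If you want the JSON parsed, or the API rejects the default Python user agent, a slight extension of the same approach (the header value and the "login" field are assumptions based on the GitHub users API, not part of the original answer):

import json
import urllib.request

url = "https://api.github.com/users?since=100"
# Some servers reject the default urllib user agent, so set one explicitly
request = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(request) as response:
    users = json.loads(response.read().decode("utf-8"))
print(users[0]["login"])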