I'm trying to develop a simple web scraper. I want to extract the text without the HTML code. It works fine on plain HTML, but not on pages where JavaScript code adds text.
For example, if some JavaScript code adds some text, I can't see it, because when I call:
response = urllib2.urlopen(request)
I get the original text without the added one (because JavaScript is executed on the client side).
So, I'm looking for some ideas to solve this problem.
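For illustration, a minimal sketch of the situation (the URL is just a placeholder): urllib2 returns only the server-rendered markup, so nothing that client-side JavaScript adds afterwards ever appears in the response.
import urllib2

request = urllib2.Request('http://www.example.com/')  # placeholder URL
response = urllib2.urlopen(request)
html = response.read()

# `html` holds only the markup the server sent; any text that a <script>
# on the page injects after loading is simply not in it.
print html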
Current answer
If you've ever used the Requests module for Python before, I recently found out that the developer created a new module called Requests-HTML which now also has the ability to render JavaScript.
You can visit https://html.python-requests.org/ to learn more about this module, or, if you're only interested in rendering JavaScript, you can go to https://html.python-requests.org/?#javascript-support to learn directly how to use the module to render JavaScript with Python.
Essentially, once you've correctly installed the Requests-HTML module, the following example, shown at the link above, demonstrates how you can use it to scrape a website and render the JavaScript contained in it:
from requests_html import HTMLSession
session = HTMLSession()
r = session.get('http://python-requests.org/')
r.html.render()
r.html.search('Python 2 will retire in only {months} months!')['months']
# '<time>25</time>'  <- this is the result
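If what you're after is just the visible text rather than a template search, the elements returned by requests-html also expose a .text attribute; a small sketch along the same lines (the 'title' selector is only an example):
from requests_html import HTMLSession

session = HTMLSession()
r = session.get('http://python-requests.org/')
r.html.render()  # executes the page's JavaScript in a headless Chromium

# pick whatever CSS selector you care about; 'title' is only an example
element = r.html.find('title', first=True)
print(element.text)  # the text content after the JavaScript has run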
I recently learned about this from a video on YouTube. Click here to watch the YouTube video that demonstrates how the module works.
Other answers
Personally, I prefer using Scrapy and Selenium and dockerizing both in separate containers. That way you can install them with minimal hassle and crawl almost any modern website that contains JavaScript in one form or another. Here's an example:
Use scrapy startproject to create your scraper and write your spider; the skeleton can be as simple as this:
import scrapy


class MySpider(scrapy.Spider):
    name = 'my_spider'
    start_urls = ['https://somewhere.com']

    def start_requests(self):
        yield scrapy.Request(url=self.start_urls[0])

    def parse(self, response):
        # do stuff with results, scrape items etc.
        # now we're just checking everything worked
        print(response.body)
The real magic happens in middlewares.py. Override two methods of the downloader middleware, __init__ and process_request, like this:
# import some additional modules that we need
import os
from copy import deepcopy
from time import sleep

from scrapy import signals
from scrapy.http import HtmlResponse
from selenium import webdriver


class SampleProjectDownloaderMiddleware(object):

    def __init__(self):
        SELENIUM_LOCATION = os.environ.get('SELENIUM_LOCATION', 'NOT_HERE')
        SELENIUM_URL = f'http://{SELENIUM_LOCATION}:4444/wd/hub'
        chrome_options = webdriver.ChromeOptions()
        # chrome_options.add_experimental_option("mobileEmulation", mobile_emulation)
        self.driver = webdriver.Remote(command_executor=SELENIUM_URL,
                                       desired_capabilities=chrome_options.to_capabilities())

    def process_request(self, request, spider):
        self.driver.get(request.url)
        # sleep a bit so the page has time to load
        # or monitor items on page to continue as soon as page ready
        sleep(4)

        # if you need to manipulate the page content like clicking and scrolling, you do it here
        # self.driver.find_element_by_css_selector('.my-class').click()

        # you only need the now properly and completely rendered html from your page to get results
        body = deepcopy(self.driver.page_source)

        # copy the current url in case of redirects
        url = deepcopy(self.driver.current_url)

        return HtmlResponse(url, body=body, encoding='utf-8', request=request)
Don't forget to enable this middleware by uncommenting the following lines in the settings.py file:
DOWNLOADER_MIDDLEWARES = {
    'sample_project.middlewares.SampleProjectDownloaderMiddleware': 543,
}
Next up is dockerization. Create your Dockerfile from a lightweight image (I'm using python:alpine here), copy your project directory into it, and install the requirements:
# Use an official Python runtime as a parent image
FROM python:3.6-alpine

# install some packages necessary to scrapy and then curl because it's handy for debugging
RUN apk --update add linux-headers libffi-dev openssl-dev build-base libxslt-dev libxml2-dev curl python-dev

WORKDIR /my_scraper

ADD requirements.txt /my_scraper/
RUN pip install -r requirements.txt

ADD . /my_scraper/
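The requirements.txt itself isn't shown here; at a minimum it would need to list the two libraries the project uses, something like:
scrapy
selenium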
Finally, bring it all together in docker-compose.yaml:
version: '2'

services:
  selenium:
    image: selenium/standalone-chrome
    ports:
      - "4444:4444"
    shm_size: 1G

  my_scraper:
    build: .
    depends_on:
      - "selenium"
    environment:
      - SELENIUM_LOCATION=samplecrawler_selenium_1
    volumes:
      - .:/my_scraper
    # use this command to keep the container running
    command: tail -f /dev/null
Run docker-compose up -d. If you're doing this for the first time, it will take a while to pull the latest selenium/standalone-chrome image and to build your scraper image as well.
Once it's done, you can check that your containers are running with docker ps, and also check that the name of the selenium container matches the environment variable passed to the scraper container (here, it was SELENIUM_LOCATION=samplecrawler_selenium_1).
Enter your scraper container with docker exec -ti YOUR_CONTAINER_NAME sh (the command for me was docker exec -ti samplecrawler_my_scraper_1 sh), cd into the right directory, and run your scraper with scrapy crawl my_spider.
Everything is on my GitHub page and you can get it here.
Selenium is the best tool for scraping JS and Ajax content.
Check out this article to learn how to extract data from the web using Python.
$ pip install selenium
Then download the Chrome webdriver.
from selenium import webdriver
browser = webdriver.Chrome()
browser.get("https://www.python.org/")
nav = browser.find_element_by_id("mainnav")
print(nav.text)
Easy, right?
Mixing BeautifulSoup and Selenium together works very well for me.
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup as bs

driver = webdriver.Firefox()
driver.get("http://somedomain/url_that_delays_loading")

try:
    # waits up to 10 seconds until the element is located; other wait conditions
    # such as visibility_of_element_located or text_to_be_present_in_element also work
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "myDynamicElement")))
    html = driver.page_source
    soup = bs(html, "lxml")
    dynamic_text = soup.find_all("p", {"class": "class_name"})  # or other attributes, optional
except TimeoutException:
    print("Couldn't locate element")
P.S. You can find more wait conditions here.
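For example, two of the alternative conditions mentioned in the comment above could be used like this (same driver and element ID assumed):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait until the element is actually visible, not just present in the DOM
element = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "myDynamicElement")))

# or wait until a specific piece of text shows up inside the element
WebDriverWait(driver, 10).until(
    EC.text_to_be_present_in_element((By.ID, "myDynamicElement"), "expected text"))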
This seems to be a pretty good solution, taken from a great blog post:
import sys
from PyQt4.QtGui import *
from PyQt4.QtCore import *
from PyQt4.QtWebKit import *
from lxml import html


# Take this class for granted. Just use the result of the rendering.
class Render(QWebPage):
    def __init__(self, url):
        self.app = QApplication(sys.argv)
        QWebPage.__init__(self)
        self.loadFinished.connect(self._loadFinished)
        self.mainFrame().load(QUrl(url))
        self.app.exec_()

    def _loadFinished(self, result):
        self.frame = self.mainFrame()
        self.app.quit()


url = 'http://pycoders.com/archive/'
r = Render(url)
result = r.frame.toHtml()

# This step is important. Converting QString to ASCII for lxml to process.
# The following returns an lxml element tree
archive_links = html.fromstring(str(result.toAscii()))
print archive_links

# The following returns an array containing the URLs
raw_links = archive_links.xpath('//div[@class="campaign"]/a/@href')
print raw_links
Pyppeteer
You might consider Pyppeteer, a Python port of the Chrome/Chromium driver front-end Puppeteer.
Here's a simple example showing how to use Pyppeteer to access data that was injected into a page dynamically:
import asyncio
from pyppeteer import launch


async def main():
    browser = await launch({"headless": True})
    [page] = await browser.pages()

    # normally, you go to a live site...
    # await page.goto("http://www.example.com")
    # but for this example, just set the HTML directly:
    await page.setContent("""
    <body>
    <script>
    // inject content dynamically with JS, not part of the static HTML!
    document.body.innerHTML = `<p>hello world</p>`;
    </script>
    </body>
    """)
    print(await page.content())  # shows that the `<p>` was inserted

    # evaluate a JS expression in browser context and scrape the data
    expr = "document.querySelector('p').textContent"
    print(await page.evaluate(expr, force_expr=True))  # => hello world

    await browser.close()

asyncio.get_event_loop().run_until_complete(main())
See Pyppeteer's reference documentation.
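For a live page you would typically combine page.goto with page.waitForSelector before scraping; a minimal sketch along the same lines (URL and selector are placeholders):
import asyncio
from pyppeteer import launch


async def scrape():
    browser = await launch({"headless": True})
    page = await browser.newPage()
    await page.goto("http://www.example.com")   # placeholder URL
    await page.waitForSelector("h1")            # wait for the JS-rendered element
    text = await page.evaluate(
        "document.querySelector('h1').textContent", force_expr=True)
    print(text)
    await browser.close()

asyncio.get_event_loop().run_until_complete(scrape())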