I'm trying to develop a simple web scraper. I want to extract the text without the HTML markup. It works on plain HTML, but not on some pages where JavaScript code adds the text.

For example, if some JavaScript code adds text, I can't see it, because when I call:

response = urllib2.urlopen(request)

I get the original text without the added text (because the JavaScript is executed on the client side).

So I'm looking for ideas to solve this problem.
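For context, here is a minimal sketch of the failing approach (Python 2, to match urllib2; the URL is a placeholder): any text the page's JavaScript would insert is simply absent from the downloaded HTML.

import urllib2

# Fetch the raw HTML. urllib2 never executes the page's JavaScript,
# so text inserted by scripts will not appear in `html`.
request = urllib2.Request("http://example.com")
response = urllib2.urlopen(request)
html = response.read()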


Current answer

A quick and easy solution:

I ran into the same problem. I wanted to scrape some data that was built with JavaScript. If I scraped the text from the site with BeautifulSoup alone, I ended up with script tags in the text. I wanted those tags rendered so I could scrape information from them. Also, I didn't want to use heavyweight frameworks like Scrapy and Selenium.

I found that the requests module's get method takes the url and actually renders the script tags.

Example:

import requests
custom_User_agent = "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0"
url = "https://www.abc.xyz/your/url"
response = requests.get(url, headers={"User-Agent": custom_User_agent})
html_text = response.text

This will load the site with the script tags rendered.

Hopefully this helps as a quick and easy solution for rendering a site that loads script tags.
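To then get plain text out of html_text, which is the question's original goal, BeautifulSoup can strip the markup; a minimal sketch, assuming bs4 is installed:

from bs4 import BeautifulSoup

# Parse the downloaded HTML and drop the tags, keeping only the text.
soup = BeautifulSoup(html_text, "html.parser")
print(soup.get_text(separator=" ", strip=True))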

Other answers

Maybe Selenium can do it.

from selenium import webdriver
import time

driver = webdriver.Firefox()
driver.get(url)   # url: the page whose JavaScript-generated content you want
time.sleep(5)     # crude wait to give the JavaScript time to run
htmlSource = driver.page_source
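The fixed sleep is fragile; if you'd rather continue as soon as a known element appears, Selenium's explicit waits can replace it (a sketch; "#content" is a hypothetical selector for your target element):

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

# Wait up to 10 seconds for an element the page's JavaScript creates,
# then grab the rendered source as before.
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "#content"))
)
htmlSource = driver.page_source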

This seems like a good solution, taken from a great blog post:

import sys  
from PyQt4.QtGui import *  
from PyQt4.QtCore import *  
from PyQt4.QtWebKit import *  
from lxml import html 

# Take this class for granted. Just use the result of rendering.
class Render(QWebPage):  
  def __init__(self, url):  
    self.app = QApplication(sys.argv)  
    QWebPage.__init__(self)  
    self.loadFinished.connect(self._loadFinished)  
    self.mainFrame().load(QUrl(url))  
    self.app.exec_()  

  def _loadFinished(self, result):  
    self.frame = self.mainFrame()  
    self.app.quit()  

url = 'http://pycoders.com/archive/'  
r = Render(url)  
result = r.frame.toHtml()
# This step is important. Convert the QString to ASCII for lxml to process.

# The following returns an lxml element tree
archive_links = html.fromstring(str(result.toAscii()))
print archive_links

# The following returns an array containing the URLs
raw_links = archive_links.xpath('//div[@class="campaign"]/a/@href')
print raw_links
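Note that PyQt4's QtWebKit is gone from current Qt; with PyQt5 the rough equivalent uses QtWebEngine (the separate PyQtWebEngine package), where toHtml is asynchronous. A sketch under those assumptions, not a drop-in replacement:

import sys
from PyQt5.QtCore import QUrl
from PyQt5.QtWidgets import QApplication
from PyQt5.QtWebEngineWidgets import QWebEnginePage

class Render(QWebEnginePage):
    def __init__(self, url):
        self.html = None
        self.app = QApplication(sys.argv)
        QWebEnginePage.__init__(self)
        self.loadFinished.connect(self._load_finished)
        self.load(QUrl(url))
        self.app.exec_()

    def _load_finished(self, ok):
        # toHtml is asynchronous in QtWebEngine: it delivers the
        # rendered HTML to a callback instead of returning it.
        self.toHtml(self._on_html)

    def _on_html(self, html):
        self.html = html
        self.app.quit()

r = Render('http://pycoders.com/archive/')
print(r.html)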

Playwright-Python

Yet another option is playwright-python, a Python port of Microsoft's Playwright (itself a Puppeteer-influenced browser automation library).

Here is a minimal example of selecting an element and grabbing its text:

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://whatsmyuseragent.org/")
    ua = page.query_selector(".user-agent")
    print(ua.text_content())
    browser.close()
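If the element is created by JavaScript some time after load, Playwright can block until it exists; a small variation of the same example using wait_for_selector:

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://whatsmyuseragent.org/")
    # wait_for_selector blocks until the element exists, which helps
    # when the text is injected by JavaScript after the initial load.
    ua = page.wait_for_selector(".user-agent")
    print(ua.text_content())
    browser.close()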

Personally, I prefer using Scrapy and Selenium, dockerized in separate containers. That way you get an easy install and can scrape almost any modern website that contains JavaScript in one form or another. Here's an example:

Use scrapy startproject to create your scraper and write your spider; the skeleton can be as simple as this:

import scrapy


class MySpider(scrapy.Spider):
    name = 'my_spider'
    start_urls = ['https://somewhere.com']

    def start_requests(self):
        yield scrapy.Request(url=self.start_urls[0])

    def parse(self, response):
        # do stuff with results, scrape items etc.
        # for now we're just checking that everything worked
        print(response.body)
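Once the middleware below hands back fully rendered HTML, the usual Scrapy selectors work inside parse; for instance, a parse like this could replace the print (the CSS selector is hypothetical):

    def parse(self, response):
        # response.body is now the rendered page, so selectors see
        # JavaScript-generated content too; '.campaign a' is illustrative.
        for href in response.css('.campaign a::attr(href)').getall():
            yield {'link': href}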

The real magic happens in middlewares.py. Override two methods of the downloader middleware, __init__ and process_request, as follows:

# import some additional modules that we need
import os
from copy import deepcopy
from time import sleep

from scrapy import signals
from scrapy.http import HtmlResponse
from selenium import webdriver

class SampleProjectDownloaderMiddleware(object):

    def __init__(self):
        SELENIUM_LOCATION = os.environ.get('SELENIUM_LOCATION', 'NOT_HERE')
        SELENIUM_URL = f'http://{SELENIUM_LOCATION}:4444/wd/hub'
        chrome_options = webdriver.ChromeOptions()

        # chrome_options.add_experimental_option("mobileEmulation", mobile_emulation)
        self.driver = webdriver.Remote(command_executor=SELENIUM_URL,
                                       desired_capabilities=chrome_options.to_capabilities())

    def process_request(self, request, spider):
        self.driver.get(request.url)

        # sleep a bit so the page has time to load,
        # or monitor items on the page to continue as soon as it is ready
        sleep(4)

        # if you need to manipulate the page content, e.g. clicking and scrolling, do it here
        # self.driver.find_element_by_css_selector('.my-class').click()

        # grab the now fully rendered html from the page
        body = deepcopy(self.driver.page_source)

        # copy the current url in case of redirects
        url = deepcopy(self.driver.current_url)

        return HtmlResponse(url, body=body, encoding='utf-8', request=request)

Don't forget to enable this middleware by uncommenting the following lines in your settings.py file:

DOWNLOADER_MIDDLEWARES = {
    'sample_project.middlewares.SampleProjectDownloaderMiddleware': 543,
}

Next comes dockerization. Create your Dockerfile from a lightweight image (I'm using Python Alpine here), copy your project directory into it, and install the requirements:

# Use an official Python runtime as a parent image
FROM python:3.6-alpine

# install some packages needed by scrapy, plus curl because it's handy for debugging
RUN apk --update add linux-headers libffi-dev openssl-dev build-base libxslt-dev libxml2-dev curl python-dev

WORKDIR /my_scraper

ADD requirements.txt /my_scraper/

RUN pip install -r requirements.txt

ADD . /my_scraper/
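The requirements.txt copied in above isn't shown in the answer; at minimum it would need the two libraries this setup uses (a sketch, with versions left unpinned as an assumption):

# minimal requirements.txt for this setup
scrapy
selenium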

Finally, bring it all together in docker-compose.yaml:

version: '2'
services:
  selenium:
    image: selenium/standalone-chrome
    ports:
      - "4444:4444"
    shm_size: 1G

  my_scraper:
    build: .
    depends_on:
      - "selenium"
    environment:
      - SELENIUM_LOCATION=samplecrawler_selenium_1
    volumes:
      - .:/my_scraper
    # use this command to keep the container running
    command: tail -f /dev/null

Run docker-compose up -d. If you're doing this for the first time, it will take a while to fetch the latest selenium/standalone-chrome image and build your scraper image as well.

Once it's done, you can check that your containers are running with docker ps, and also check that the name of the selenium container matches the environment variable passed to the scraper container (here, it was SELENIUM_LOCATION=samplecrawler_selenium_1).

Enter your scraper container with docker exec -ti YOUR_CONTAINER_NAME sh; for me the command was docker exec -ti samplecrawler_my_scraper_1 sh. Then cd into the right directory and run your scraper with scrapy crawl my_spider.

Everything is on my GitHub page, and you can get it from there.

Selenium is the best tool for scraping JS and Ajax content.

Check out this article on extracting data from the web with Python.

$ pip install selenium

Then download the Chrome webdriver.

from selenium import webdriver

browser = webdriver.Chrome()
browser.get("https://www.python.org/")
nav = browser.find_element_by_id("mainnav")
print(nav.text)

Easy, right?
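One caveat: the find_element_by_id helper was removed in Selenium 4; the modern equivalent uses a By locator (a small sketch):

from selenium import webdriver
from selenium.webdriver.common.by import By

browser = webdriver.Chrome()
browser.get("https://www.python.org/")
nav = browser.find_element(By.ID, "mainnav")  # Selenium 4 style locator
print(nav.text)
browser.quit()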