Scrapy CrawlSpider for AJAX content

Posted on 2021-01-29 15:06:20

I am trying to scrape a website for news articles. My start_url contains:

(1) links to each article: http://example.com/symbol/TSLA

(2) a "More" button that makes an AJAX call to dynamically load more articles into the same start_url: http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0&slugs=tsla&is_symbol_page=true

A parameter of the AJAX call is "page", which is incremented each time the "More" button is clicked. For example, clicking "More" once loads an additional n articles and updates the page parameter in the "More" button's onClick event, so that the next click of "More" loads "page" two of articles (assuming "page" 0 was loaded initially and "page" 1 was loaded on the first click).

For each "page" I would like to scrape the content of every article with Rules, but I do not know how many "pages" there are, and I do not want to pick some arbitrary m (e.g., 10k). I cannot seem to figure out how to set this up.

Following this question, Scrapy Crawl URLs in Order, I have tried to create a list of candidate URLs, but I cannot work out how and where, after parsing the previous URL, to send a new URL from the pool and make sure it contains news links for the CrawlSpider. My Rules send the responses to a parse_items callback, where the article content is parsed.

Is there a way to observe the content of the linked pages (similar to the BaseSpider example) before the Rules are applied and parse_items is called, so that I can know when to stop crawling?

Simplified code (I removed several of the fields being parsed, for clarity):

from scrapy import log
from scrapy.http import Request
from scrapy.selector import Selector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

from myproject.items import NewsItem  # project-specific item class; import path assumed


class ExampleSite(CrawlSpider):

    name = "so"
    download_delay = 2

    more_pages = True
    current_page = 0

    allowed_domains = ['example.com']

    start_urls = ['http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0'+
                      '&slugs=tsla&is_symbol_page=true']

    ##could also use
    ##start_urls = ['http://example.com/symbol/tsla']

    ajax_urls = []
    for i in range(1,1000):
        ajax_urls.append('http://example.com/account/ajax_headlines_content?type=in_focus_articles&page='+str(i)+
                      '&slugs=tsla&is_symbol_page=true')

    rules = (
             Rule(SgmlLinkExtractor(allow=('/symbol/tsla', ))),
             Rule(SgmlLinkExtractor(allow=('/news-article.*tesla.*', '/article.*tesla.*', )), callback='parse_item')
            )

    ## need something like this??
    ## override parse? (a possible version is sketched after this code block)
    ## if response.body == 'no results':
    ##     self.more_pages = False
    ##     ## stop crawler??
    ## else:
    ##     self.current_page = self.current_page + 1
    ##     yield Request(self.ajax_urls[self.current_page], callback=self.parse_start_url)


    def parse_item(self, response):

        self.log("Scraping: %s" % response.url, level=log.INFO)

        hxs = Selector(response)

        item = NewsItem()

        item['url'] = response.url
        item['source'] = 'example'
        item['title'] = hxs.xpath('//title/text()')
        item['date'] = hxs.xpath('//div[@class="article_info_pos"]/span/text()')

        yield item
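
For reference, here is a minimal sketch of what the commented-out idea above could look like while staying with CrawlSpider. CrawlSpider runs parse_start_url() on every response that comes back through its default parse() callback, so yielding the next AJAX page without an explicit callback lets the Rules extract article links from it and lets this hook decide when to stop. The 'no results' marker text is an assumption about what an empty AJAX response contains, and the URL template simply mirrors the one used above.

    # added inside the ExampleSite class above (uses the Request import from the top)
    def parse_start_url(self, response):
        """Check each AJAX page and schedule the next one until the site reports no more articles."""
        if 'no results' in response.body:   # assumed marker text in an empty AJAX response
            return []                       # nothing more to page through
        self.current_page += 1
        next_url = ('http://example.com/account/ajax_headlines_content?type=in_focus_articles'
                    '&page=%d&slugs=tsla&is_symbol_page=true' % self.current_page)
        # No callback: the response goes back through CrawlSpider's parse(),
        # so the article Rules are applied and parse_start_url() runs again.
        return [Request(next_url)]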
1 Answer
  • 面试哥 2021-01-29

    A CrawlSpider may be too limiting here. If you need a lot of custom logic, you are usually better off inheriting from Spider.

    Scrapy provides the CloseSpider exception, which you can raise when you need to stop parsing under certain conditions. The page you are crawling returns the message "There are no Focus articles on your stocks" once you go past the maximum page, so you can check for that message and stop iterating when it appears.

    In your case you can do something like this:

    from urlparse import urljoin

    from scrapy.spider import Spider
    from scrapy.selector import Selector
    from scrapy.http import Request
    from scrapy.exceptions import CloseSpider
    from scrapy import log

    from myproject.items import NewsItem  # project-specific item class; import path assumed
    
    class ExampleSite(Spider):
        name = "so"
        download_delay = 0.1
    
        more_pages = True
        next_page = 1
    
        start_urls = ['http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0'+
                          '&slugs=tsla&is_symbol_page=true']
    
        allowed_domains = ['example.com']
    
        def create_ajax_request(self, page_number):
            """
            Helper function to create ajax request for next page.
            """
            ajax_template = 'http://example.com/account/ajax_headlines_content?type=in_focus_articles&page={pagenum}&slugs=tsla&is_symbol_page=true'
    
            url = ajax_template.format(pagenum=page_number)
            return Request(url, callback=self.parse)
    
        def parse(self, response):
            """
            Parsing of each page.
            """
            if "There are no Focus articles on your stocks." in response.body:
                self.log("About to close spider", log.WARNING)
                raise CloseSpider(reason="no more pages to parse")
    
    
            # there is some content extract links to articles
            sel = Selector(response)
            links_xpath = "//div[@class='symbol_article']/a/@href"
            links = sel.xpath(links_xpath).extract()
            for link in links:
                url = urljoin(response.url, link)
                # follow link to article
                # commented out to see how pagination works
                #yield Request(url, callback=self.parse_item)
    
            # generate the request for the next page
            # (start_urls already covers page 0, so build the request before incrementing)
            next_request = self.create_ajax_request(self.next_page)
            self.next_page += 1
            yield next_request
    
        def parse_item(self, response):
            """
            Parsing of each article page.
            """
            self.log("Scraping: %s" % response.url, level=log.INFO)
    
            hxs = Selector(response)
    
            item = NewsItem()
    
            item['url'] = response.url
            item['source'] = 'example'
            item['title'] = hxs.xpath('//title/text()')
            item['date'] = hxs.xpath('//div[@class="article_info_pos"]/span/text()')
    
            yield item
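
    As an optional safety net on top of raising CloseSpider, Scrapy's built-in CloseSpider extension can also stop the crawl from the settings, in case the "no more articles" message never appears. The values below are only illustrative:

    # settings.py -- illustrative limits, not taken from the question
    CLOSESPIDER_PAGECOUNT = 1000   # stop after this many responses have been downloaded
    CLOSESPIDER_TIMEOUT = 3600     # or stop after an hour, whichever comes first

    Either way, the spider is run as usual with scrapy crawl so.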
    

