Scrapy: crawling pages from a second set of links

Posted on 2021-01-29 15:04:54

I have been working through the Scrapy documentation today, trying to get a working version of https://docs.scrapy.org/en/latest/intro/tutorial.html#our-first-spider on a real-world example. My example is slightly different in that it has 2 next pages, i.e.

start_url > city page > unit page

It is the unit page I want to grab data from.

My code:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://www.unitestudents.com/',
    ]

    def parse(self, response):
        for quote in response.css('div.property-body'):
            yield {
                'name': quote.xpath('//span/a/text()').extract(),
                'type': quote.xpath('//div/h4/text()').extract(),
                'price_amens': quote.xpath('//div/p/text()').extract(),
                'distance_beds': quote.xpath('//li/p/text()').extract()
            }

            # Purpose is to crawl links of cities
            next_page = response.css('a.listing-item__link::attr(href)').extract_first()
            if next_page is not None:
                next_page = response.urljoin(next_page)
                yield scrapy.Request(next_page, callback=self.parse)

            # Purpose is to crawl links of units
            next_unit_page = response.css('a.text-highlight__inner::attr(href)').extract_first()
            if next_unit_page is not None:
                next_unit_page = response.urljoin(next_unit_page)
                yield scrapy.Request(next_unit_page, callback=self.parse)

But when I run it, I get:

INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

So I think my code is not set up to retrieve the links in the flow above, but am unsure how best to do it?
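For context on the symptom: every yield and follow-up Request in my parse() sits inside the for loop, so a top-level selector that matches nothing on the start page would explain the zero counts. A minimal sketch of that behaviour (plain Python, no Scrapy needed):

```python
def parse(selection):
    # Mimics the structure of the spider's parse(): every yield,
    # including the link-following requests, lives inside this loop.
    for quote in selection:
        yield {"name": quote}

# If the CSS selector matches nothing, the loop body never runs:
print(list(parse([])))  # []  -> matches "scraped 0 items"

# With matches, items are produced as expected:
print(list(parse(["unit A", "unit B"])))
```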

Updated flow:

Homepage > city page > building page > unit page

It is still the unit page I want to grab data from.

Updated code:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://www.unitestudents.com/',
    ]

    def parse(self, response):
        for quote in response.css('div.site-wrapper'):
            yield {
                'area_name': quote.xpath('//div/ul/li/a/span/text()').extract(),
                'type': quote.xpath('//div/div/div/h1/span/text()').extract(),
                'period': quote.xpath('/html/body/div/div/section/div/form/h4/span/text()').extract(),
                'duration_weekly': quote.xpath('//html/body/div/div/section/div/form/div/div/em/text()').extract(),
                'guide_total': quote.xpath('//html/body/div/div/section/div/form/div/div/p/text()').extract(),              
                'amenities': quote.xpath('//div/div/div/ul/li/p/text()').extract(),              
            }

            # Purpose is to crawl links of cities
            next_page = response.xpath('//html/body/div/footer/div/div/div/ul/li/a[@class="listing-item__link"]/@href').extract_first()
            if next_page is not None:
                next_page = response.urljoin(next_page)
                yield scrapy.Request(next_page, callback=self.parse)

            # Purpose is to crawl links of units
            next_unit_page = response.xpath('//li/div/h3/span/a/@href').extract_first()
            if next_unit_page is not None:
                next_unit_page = response.urljoin(next_unit_page)
                yield scrapy.Request(next_unit_page, callback=self.parse)

            # Purpose is to crawl pages with full unit info

            last_unit_page = response.xpath('//div/div/div[@class="content__btn"]/a/@href').extract_first()
            if last_unit_page is not None:
                last_unit_page = response.urljoin(last_unit_page)
                yield scrapy.Request(last_unit_page, callback=self.parse)
1 Answer
  • 面试哥, answered 2021-01-29

    Let's start with the logic:

    1. Scrape the home page - get all the cities
    2. Scrape the city pages - get all the unit URLs
    3. Scrape the unit pages - get all the desired data

    I have illustrated below how you could implement this. I could not find all the information you mention in your example code, but hopefully the code is clear enough for you to understand what it does and how to add the information you need.

    import scrapy
    
    
    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = [
            'http://www.unitestudents.com/',
        ]
    
        # Step 1
        def parse(self, response):
            for city in response.xpath('//select[@id="frm_homeSelect_city"]/option[not(contains(text(),"Select your city"))]/text()').extract(): # Select all cities listed in the select (exclude the "Select your city" option)
                yield scrapy.Request(response.urljoin("/"+city), callback=self.parse_citypage)
    
        # Step 2
        def parse_citypage(self, response):
            for url in response.xpath('//div[@class="property-header"]/h3/span/a/@href').extract(): #Select for each property the url
                yield scrapy.Request(response.urljoin(url), callback=self.parse_unitpage)
    
            # I could not find any pagination. Otherwise it would go here.
    
        # Step 3
        def parse_unitpage(self, response):
            unitTypes = response.xpath('//div[@class="room-type-block"]/h5/text()').extract() + response.xpath('//h4[@class="content__header"]/text()').extract()
            for unitType in unitTypes: # There can be multiple unit types so we yield an item for each unit type we can find.
                yield {
                    'name': response.xpath('//h1/span/text()').extract_first(),
                    'type': unitType,
                    # 'price': response.xpath('XPATH GOES HERE'), # Could not find a price on the page
                    # 'distance_beds': response.xpath('XPATH GOES HERE') # Could not find such info
                }
    

    I think the code is fairly clean and simple. The comments should clarify why I chose the for loops. If anything is unclear, let me know and I will try to explain.
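    One more pitfall worth noting from the original code: `.extract()` always returns a list, and an empty list is not `None`, so an `if ... is not None` guard never skips a missing link (and passing that list to `response.urljoin()` then fails). `.extract_first()` is the variant that returns `None` when nothing matches. The difference can be sketched without Scrapy; `first_or_none` below is a hypothetical stand-in for `.extract_first()`:

```python
# .extract() returns a list of all matches; with zero matches it is [],
# which is still "not None", so the original guard never skips.
no_matches = []                  # what .extract() gives for zero matches
assert no_matches is not None    # guard passes even though there is no URL

# first_or_none mimics .extract_first(): first match, or None if empty.
def first_or_none(results):
    return results[0] if results else None

assert first_or_none([]) is None                # guard would now skip correctly
assert first_or_none(["/london"]) == "/london"  # normal single-match case
print("ok")
```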
