As shown in Figure 1, when the page is a city-wide food section, e.g. the Xi'an food listing at "http://www.dianping.com/xian/ch10", the spider scrapes data normally.
But since the listing is capped at 50 pages, I tried crawling by district and food category instead. Once I switched to a district-plus-category URL, e.g. "http://www.dianping.com/xian/...", I could no longer get any data, even though the same page opens fine in a browser.
Could someone please take a look at where this is going wrong? Many thanks.
I tried running the site through scrapy shell, but view(response) opened an empty txt document instead of a web page.
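For reference, the shell session looked roughly like this (the response.status and len(response.text) checks are extra diagnostics worth running; I don't have their output):

scrapy shell "http://www.dianping.com/xian/ch10/r8915/"
>>> response.status      # anything other than 200 (e.g. 403) would point to anti-scraping
>>> len(response.text)   # a verification placeholder page is far shorter than a real listing
>>> view(response)       # here this opens an empty txt document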
Here is my code:
dianping.py
import scrapy
from ..items import DianpingItem


class DianpingSpider(scrapy.Spider):
    name = "dianping"
    start_urls = ['http://www.dianping.com/xian/ch10/r8915/']

    def parse(self, response):
        title_list = response.xpath("//div[@class='tit']/a[1]/h4/text()").extract()
        level_list = response.xpath("//div[@class='comment']/span[1]/@title").extract()
        comment_list = response.xpath("//div[@class='comment']/a[1]/b/text()").extract()
        price_list = response.xpath("//div[@class='comment']/a[2]/b/text()").extract()
        kouwei_list = response.xpath("//span[@class='comment-list']/span[1]/b/text()").extract()
        huanjing_list = response.xpath("//span[@class='comment-list']/span[2]/b/text()").extract()
        # span[3], not span[2]: the original duplicated huanjing into fuwu
        fuwu_list = response.xpath("//span[@class='comment-list']/span[3]/b/text()").extract()
        caixi_list = response.xpath("//div[@class='tag-addr']/a[1]/span/text()").extract()
        area_list = response.xpath("//div[@class='tag-addr']/a[2]/span/text()").extract()
        address_list = response.xpath("//div[@class='tag-addr']//span[@class='addr']/text()").extract()
        recommend_list = response.xpath("//div[@class='tit']/a[1]/h4/text()").extract()
        # recommend_list2 = recommend_list1[0].xpath('string(.)').extract()
        # recommend_list = [item.replace(' ', '').replace('\n', '|') for item in recommend_list2]
        for i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11 in zip(
                title_list, level_list, comment_list, price_list, kouwei_list,
                huanjing_list, fuwu_list, caixi_list, area_list, address_list,
                recommend_list):
            # create a fresh item per shop; reusing a single instance would
            # make every yielded result point at the same object
            dianping = DianpingItem()
            dianping['title'] = i1
            dianping['level'] = i2
            dianping['comment'] = i3
            dianping['price'] = i4
            dianping['kouwei'] = i5
            dianping['huanjing'] = i6
            dianping['fuwu'] = i7
            dianping['caixi'] = i8
            dianping['area'] = i9
            dianping['address'] = i10
            dianping['recommend'] = i11
            yield dianping
        next_pages = response.xpath("//div[@class='page']/a[@class='next']/@href").extract()
        if next_pages:
            # urljoin handles both absolute and relative next-page hrefs
            yield scrapy.Request(response.urljoin(next_pages[0]), callback=self.parse)
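One check that might narrow the problem down (a standalone sketch independent of Scrapy; it assumes the district URL above is representative): compare what a plain HTTP request returns for the city page versus the district page. If the district page comes back with an error code or a much smaller body, the server is likely treating the crawler differently from a browser.

import urllib.request
import urllib.error

# Same User-Agent as in settings.py, to match what the spider sends.
HEADERS = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/60.0.3112.90 Safari/537.36'}

for url in ['http://www.dianping.com/xian/ch10',
            'http://www.dianping.com/xian/ch10/r8915/']:
    req = urllib.request.Request(url, headers=HEADERS)
    try:
        with urllib.request.urlopen(req) as resp:
            # A tiny body relative to the city page suggests an anti-bot page.
            print(url, resp.status, len(resp.read()))
    except urllib.error.HTTPError as err:
        print(url, 'HTTP error:', err.code)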
settings.py
# -*- coding: utf-8 -*-
# Scrapy settings for dianping project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'dianping'
SPIDER_MODULES = ['dianping.spiders']
NEWSPIDER_MODULE = 'dianping.spiders'
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'dianping (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 1
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 5
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
COOKIES_ENABLED = True
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Accept-Language': 'en-US,en;q=0.9',
'Accept-Encoding': 'gzip, deflate'
}
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'dianping.middlewares.DianpingSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
# 'dianping.middlewares.DianpingDownloaderMiddleware': 543,
# }
# DOWNLOADER_MIDDLEWARES = {
# 'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 543,
# 'dianping.middlewares.ProxyMiddleWare': 125,
# 'dianping.middlewares.DianpingDownloaderMiddleware': 543,
# }
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'dianping.pipelines.DianpingPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
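One more thing worth ruling out, since ROBOTSTXT_OBEY is True above: if dianping's robots.txt disallows the district paths (an assumption, not something I have confirmed), Scrapy drops those requests before they are ever sent and only notes "Forbidden by robots.txt" at DEBUG log level, which would look exactly like "no data". A quick way to inspect it:

import urllib.request

# Print the site's robots.txt; if the /xian/... paths are disallowed for
# generic user agents, ROBOTSTXT_OBEY = True makes Scrapy skip them silently.
with urllib.request.urlopen('http://www.dianping.com/robots.txt') as resp:
    print(resp.read().decode('utf-8'))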