If you are working on a UNIX platform, it is best to install IPython. If IPython is not available, you can also use bpython.
To make Scrapy use bpython, add the following to your project's scrapy.cfg:

```
[settings]
shell = bpython
```
scrapy shell <url>
| S.N. | Shortcut & Description |
| --- | --- |
| 1 | `shelp()`: Prints a help listing of the available objects and shortcuts |
| 2 | `fetch(request_or_url)`: Fetches a new response from the given request or URL and updates the associated objects accordingly |
| 3 | `view(response)`: Opens the given response in your local browser for inspection. It appends a base tag to the response body so that external links display correctly |
| S.N. | Object & Description |
| --- | --- |
| 1 | `crawler`: The current Crawler object |
| 2 | `spider`: The Spider that handles the current URL; if there is no spider for the URL, a new default Spider object is used |
| 3 | `request`: The Request object of the last fetched page |
| 4 | `response`: The Response object of the last fetched page |
| 5 | `settings`: The current Scrapy settings |
scrapy shell 'http://scrapy.org' --nolog
```
[s] Available Scrapy objects:
[s]   crawler
[s]   item       {}
[s]   request
[s]   response   <200 http://scrapy.org>
[s]   settings
[s]   spider
[s] Useful shortcuts:
[s]   shelp()           Provides available objects and shortcuts with help option
[s]   fetch(req_or_url) Collects the response from the request or URL and associated objects will get updated
[s]   view(response)    View the response for the given request
```
```
>>> response.xpath('//title/text()').extract_first()
u'Scrapy | A Fast and Powerful Scraping and Web Crawling Framework'
>>> fetch("http://reddit.com")
[s] Available Scrapy objects:
[s]   crawler
[s]   item       {}
[s]   request
[s]   response   <200 https://www.reddit.com/>
[s]   settings
[s]   spider
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser
>>> response.xpath('//title/text()').extract()
[u'reddit: the front page of the internet']
>>> request = request.replace(method="POST")
>>> fetch(request)
[s] Available Scrapy objects:
[s]   crawler
...
```
```python
import scrapy


class SpiderDemo(scrapy.Spider):
    name = "spiderdemo"
    start_urls = [
        "http://yiibai.com",
        "http://yiibai.org",
        "http://yiibai.net",
    ]

    def parse(self, response):
        # You can inspect one specific response
        if ".net" in response.url:
            from scrapy.shell import inspect_response
            inspect_response(response, self)
```
Here the `scrapy.shell.inspect_response` function opens the shell for the response whose URL contains `.net`. Running the spider produces output similar to:
```
2016-02-08 18:15:20-0400 [scrapy] DEBUG: Crawled (200) (referer: None)
2016-02-08 18:15:20-0400 [scrapy] DEBUG: Crawled (200) (referer: None)
2016-02-08 18:15:20-0400 [scrapy] DEBUG: Crawled (200) (referer: None)
[s] Available Scrapy objects:
[s]   crawler
...
>>> response.url
'http://yiibai.net'
```
You can check whether the extraction code is working; here the XPath query matches nothing, so the output is an empty list:

```
>>> response.xpath('//div[@class="val"]')
[]
```

Opening the response in a browser returns True:

```
>>> view(response)
True
```