
Paginated scraping problem: crawling the KuGou Top 500 with Python

Problem description

I'm following a book and trying to scrape the KuGou Music Top 500.
My approach is to first build the list of all page URLs, then request each page, parse the responses with BeautifulSoup, and finally print the results. But I get an error. I've gone over the logic and can't see anything wrong??? Could someone help?
The error:
No connection adapters were found for '['http://www.kugou.com/yy/rank/...']'
My code is below:

Relevant code

import requests
from bs4 import BeautifulSoup
import time

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:61.0) Gecko/20100101 Firefox/61.0'
}  # request headers

def get_info(url):  # fetch and parse one page
    res = requests.get(url, headers=headers)  # request the page
    soup = BeautifulSoup(res.text, 'lxml')    # parse the response
    # rank:
    nums = soup.select('.pc_temp_songlist > ul:nth-of-type(1) > li > span:nth-of-type(3) > strong:nth-of-type(1)')
    # singer - song name:
    titles = soup.select('.pc_temp_songlist > ul:nth-of-type(1) > li > a:nth-of-type(4)')
    # duration:
    times = soup.select('.pc_temp_songlist > ul:nth-of-type(1) > li > span:nth-of-type(5) > span:nth-of-type(4)')
    for num, title, time in zip(nums, titles, times):
        data = {
            '名次': num.get_text().strip(),
            '歌手': title.get("title").get_text().split('-')[0],
            '名字': prices.get("title").get_text().split('-')[1],
            '时间': address.get_text().strip(),
        }
        print(data)
        time.sleep(2)

    

Main program


# main program
urls = ['http://www.kugou.com/yy/rank/home/{}-8888.html?from=rank'.format(number) for number in range(1, 24)]  # pages 1-23
for single_url in urls:
    get_info(single_url)
    time.sleep(5)

Error message

The main program just sat there without printing anything, so I tried fetching only the first page with ['http://www.kugou.com/yy/rank/home/1-8888.html?from=rank'], and it threw an error. Strangely, it looks like a connection failure, yet the page opens fine when I visit it directly in a browser.
Code:

url = ['http://www.kugou.com/yy/rank/home/1-8888.html?from=rank']
get_info(url)

The error:

No connection adapters were found for '['http://www.kugou.com/yy/rank/home/1-8888.html?from=rank']'

I searched Baidu for this error and tried what I found, with no luck; there isn't much about it online. Thanks in advance, everyone!
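For the record, the error itself has nothing to do with connectivity: `get_info(url)` was handed a one-element list instead of a URL string. requests coerces a non-bytes `url` argument with `str()`, so the list becomes the literal text `"['http://…']"`, which does not start with `http` and therefore matches no registered connection adapter. A minimal sketch of what happens (no network needed):

```python
# what was passed to requests.get(): a one-element list, not a string
url_list = ['http://www.kugou.com/yy/rank/home/1-8888.html?from=rank']

# requests coerces non-bytes url arguments with str(), so it sees this:
coerced = str(url_list)
print(coerced[:10])               # "['http://w" -- starts with "['", not "http"

# no connection adapter is registered for a "URL" like that, hence the error;
# the fix is simply to pass the string itself:
url = url_list[0]
print(url.startswith('http://'))  # True -- this one requests can handle
```

So `get_info(url[0])`, or defining `url` as a plain string in the first place, makes the error go away.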

Answer
凝雅

nums = soup.select('.pc_temp_songlist > ul:nth-of-type(1) > li > span:nth-of-type(3) > strong:nth-of-type(1)')
titles = soup.select('.pc_temp_songlist > ul:nth-of-type(1) > li > a:nth-of-type(4)')
times = soup.select('.pc_temp_songlist > ul:nth-of-type(1) > li > span:nth-of-type(5) > span:nth-of-type(4)')

The data parsing here is broken, so of course nothing gets printed.
The "hang" you're seeing is most likely an illusion: each loop iteration sleeps for 7 seconds, and with empty output it just looks stuck.
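There is one more latent bug worth flagging: in `for num,title,time in zip(...)` the loop variable `time` shadows the imported `time` module inside `get_info`, so even if the selectors did match, `time.sleep(2)` would raise an `AttributeError`. A minimal reproduction of the shadowing (hypothetical demo function, not from the original code):

```python
import time

def get_info_demo():
    # the loop variable named `time` makes `time` a local string here,
    # hiding the imported module for the whole function body
    for time in ['3:45', '4:02']:
        try:
            time.sleep(2)  # `time` is now the string '3:45', not the module
        except AttributeError as e:
            return str(e)

print(get_info_demo())  # 'str' object has no attribute 'sleep'
```

Renaming the loop variable (e.g. to `duration`) avoids the clash.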
Reference code for comparison:
import requests
from bs4 import BeautifulSoup

url = 'http://www.kugou.com/yy/rank/...{}-8888.html?from=rank'

def get_info(url):
    res = requests.get(url)
    soup = BeautifulSoup(res.text, 'lxml')
    infoes = soup.select('div.pc_temp_songlist ul li')
    for info in infoes:
        nums = info.select('span.pc_temp_num')[0].text.strip()
        singer, name = info['title'].split('-', 1)
        times = info.select('span.pc_temp_tips_r span.pc_temp_time')[0].text.strip()
        print({'名次': nums, '歌手': singer, '歌名': name, '时长': times})

if __name__ == '__main__':
    urls = [url.format(i) for i in range(1, 24)]
    for url in urls:
        get_info(url)
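One small caveat about the `split('-', 1)` step above: KuGou `title` attributes are usually formatted like "歌手 - 歌名" with spaces around the dash, so stripping both halves keeps the output tidy. A hypothetical helper (not part of the original answer):

```python
def split_title(title):
    # split on the first dash only, so song names containing '-' stay intact
    singer, name = title.split('-', 1)
    return singer.strip(), name.strip()

print(split_title('周杰伦 - 告白气球'))  # ('周杰伦', '告白气球')
```

Splitting with `maxsplit=1` also means a title like "A - B - C" keeps everything after the first dash as the song name.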

June 11, 2017, 18:45