How do I parse an XML feed with Python?

Posted 2021-01-29 16:03:41

I'm trying to parse this XML feed (http://www.reddit.com/r/videos/top/.rss) but I'm running into trouble. I want to save the YouTube link from each item, but the "channel" child node is tripping me up. How do I get down to that level so I can then iterate over all the items?

#reddit parse
reddit_file = urllib2.urlopen('http://www.reddit.com/r/videos/top/.rss')
#convert to string:
reddit_data = reddit_file.read()
#close file because we dont need it anymore:
reddit_file.close()

#entire feed
reddit_root = etree.fromstring(reddit_data)
channel = reddit_root.findall('{http://purl.org/dc/elements/1.1/}channel')
print channel

reddit_feed=[]
for entry in channel:   
    #get description, url, and thumbnail
    desc = #not sure how to get this

    reddit_feed.append([desc])
Followers
0
Views
138
1 Answer
  • 面试哥
    面试哥 2021-01-29

    You could try findall('channel/item')

    import urllib2
    from xml.etree import ElementTree as etree
    #reddit parse
    reddit_file = urllib2.urlopen('http://www.reddit.com/r/videos/top/.rss')
    #convert to string:
    reddit_data = reddit_file.read()
    print reddit_data
    #close file because we dont need it anymore:
    reddit_file.close()
    
    #entire feed
    reddit_root = etree.fromstring(reddit_data)
    item = reddit_root.findall('channel/item')
    print item
    
    reddit_feed=[]
    for entry in item:   
        #get description, url, and thumbnail
        desc = entry.findtext('description')  
        reddit_feed.append([desc])
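
    The answer above targets Python 2 (urllib2 and the print statement are gone in Python 3). A minimal Python 3 sketch of the same channel/item lookup, using a made-up inline sample feed so the snippet runs without network access:

    ```python
    from xml.etree import ElementTree as etree

    def parse_feed(xml_data):
        """Return the description text of every channel/item in an RSS 2.0 feed."""
        root = etree.fromstring(xml_data)
        # 'channel/item' is a path relative to the root <rss> element
        return [item.findtext('description') for item in root.findall('channel/item')]

    # Small inline sample standing in for the downloaded feed
    sample = b"""<rss version="2.0">
      <channel>
        <title>demo</title>
        <item><description>first</description></item>
        <item><description>second</description></item>
      </channel>
    </rss>"""

    print(parse_feed(sample))  # ['first', 'second']
    ```

    To fetch the real feed, replace `sample` with the bytes returned by `urllib.request.urlopen(url).read()`.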
    

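
    One caveat: the question's `{http://purl.org/dc/elements/1.1/}channel` lookup suggests the feed may use XML namespaces. If it does (Atom feeds, for example, put every element in the `http://www.w3.org/2005/Atom` namespace), a bare `findall('channel/item')` finds nothing: every tag in the path must be namespace-qualified. A sketch against a made-up two-entry Atom document:

    ```python
    from xml.etree import ElementTree as etree

    atom_sample = b"""<feed xmlns="http://www.w3.org/2005/Atom">
      <entry><title>clip one</title></entry>
      <entry><title>clip two</title></entry>
    </feed>"""

    root = etree.fromstring(atom_sample)
    # Map a short prefix to the namespace URI, then use it in every path
    ns = {'a': 'http://www.w3.org/2005/Atom'}
    titles = [e.findtext('a:title', namespaces=ns) for e in root.findall('a:entry', ns)]
    print(titles)  # ['clip one', 'clip two']
    ```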
