pipelines.py source code

python

Project: my-scrapy    Author: azraelkuan
import pymysql


# Method from the project's MySQL pipeline class in pipelines.py.
def process_item(self, item, spider):
    # Only handle items for spiders that list this pipeline's class name.
    if self.__class__.__name__ in spider.pipelines:
        try:
            conn = pymysql.connect(host='localhost', user='root', passwd='067116',
                                   db='gaode', charset='utf8')
            cur = conn.cursor()
            sql = 'insert into test(uid,`name`,address,tag,sub_tag,center,tel,pro_name,pro_center,city_name,' \
                  'city_center,ad_name,ad_center,distance,photo_urls,photo_exists,distributor) ' \
                  'values(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)'

            data = (item['uid'], item['name'], item['address'], item['tag'], item['sub_tag'], item['center'],
                    item['tel'], item['pro_name'], item['pro_center'], item['city_name'], item['city_center'],
                    item['ad_name'], item['ad_center'], item['distance'], item['photo_urls'],
                    item['photo_exists'], item['distributor'])
            cur.execute(sql, data)
            conn.commit()
            cur.close()
            conn.close()
        except pymysql.err.IntegrityError:
            # Row with this uid already stored; skip the duplicate insert.
            print("**********exists**********")
    # Always return the item so any later pipelines can still process it.
    return item
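
The `self.__class__.__name__ in spider.pipelines` guard assumes each spider carries a `pipelines` attribute naming the pipeline classes that should handle its items, and that the pipeline is registered in ITEM_PIPELINES. A minimal sketch of that wiring, assuming hypothetical names (`GaodeMySQLPipeline`, `my_scrapy`, `GaodeSpider`) that may differ in the real project:

# settings.py -- register the pipeline globally (module/class names are illustrative).
ITEM_PIPELINES = {
    'my_scrapy.pipelines.GaodeMySQLPipeline': 300,
}

# spiders/gaode.py -- each spider opts in by listing pipeline class names.
import scrapy

class GaodeSpider(scrapy.Spider):
    name = 'gaode'
    # process_item() checks self.__class__.__name__ against this set.
    pipelines = {'GaodeMySQLPipeline'}

    def parse(self, response):
        ...

Note that opening and closing a new MySQL connection for every item is costly; a common alternative is to open one connection in the pipeline's open_spider() hook and close it in close_spider(), reusing it across items.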