Scrapy project (Dongguan Sunshine Net) --- crawling post content with CrawlSpider, text only (no images)
1. Create the Scrapy project
scrapy startproject dongguan
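This creates the standard Scrapy project skeleton, roughly as shown below (the exact template contents depend on your Scrapy version); the spider file itself is only added in the next step:

dongguan/
    scrapy.cfg            # deploy configuration
    dongguan/
        __init__.py
        items.py          # item definitions (step 3)
        pipelines.py      # item pipeline (step 5)
        settings.py       # project settings (step 6)
        spiders/
            __init__.py   # sunwz.py is generated here in step 2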
2. Enter the project directory and create the spider with the genspider command
scrapy genspider -t crawl sunwz "wz.sun0769.com"
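The crawl template pre-fills a CrawlSpider skeleton roughly like the sketch below (placeholder rules and XPaths vary by Scrapy version); step 4 replaces it with the real rules and parsing logic:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class SunwzSpider(CrawlSpider):
    name = 'sunwz'
    allowed_domains = ['wz.sun0769.com']
    start_urls = ['http://wz.sun0769.com/']

    # placeholder rule generated by the template
    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        return item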
3. Define the data to scrape (edit the items.py file)
# -*- coding: utf-8 -*-
import scrapy


class DongguanItem(scrapy.Item):
    # post number
    number = scrapy.Field()
    # post title
    title = scrapy.Field()
    # post content
    content = scrapy.Field()
    # post url
    url = scrapy.Field()
4. Write the spider that extracts the item data (in the spiders folder: sunwz.py)
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
# If the import below is underlined with a red squiggle in PyCharm, see this setup guide: https://blog.csdn.net/z564359805/article/details/80650843
from dongguan.items import DongguanItem


class SunwzSpider(CrawlSpider):
    name = 'sunwz'
    allowed_domains = ['wz.sun0769.com']
    start_urls = ['http://wz.sun0769.com/index.php/question/report?page=0']

    # LinkExtractor() matching rules for post links, written as regular expressions
    rules = (
        # Without a callback, follow defaults to True; with a callback it defaults to False
        Rule(LinkExtractor(allow=r"report\?page=\d+"), follow=True),
        Rule(LinkExtractor(allow=r"html/question/\d+/\d+.shtml"), callback='parse_item', follow=False),
        # Around page 60510 the post URLs change and no longer match the rule above,
        # e.g. http://d.wz.sun0769.com/index.php/question/show?id=267700, so add one more rule
        Rule(LinkExtractor(allow=r"/question/show\?id=\d+"), callback='parse_item', follow=False),
    )

    print("Processing data ...")

    def parse_item(self, response):
        item = DongguanItem()
        # Title of the post; strip() removes leading/trailing whitespace
        item['title'] = response.xpath('//div[@class="pagecenter p3"]//strong/text()').extract()[0].strip()
        # Extract the post number from the title
        item['number'] = item['title'].split(":")[-1].strip()
        # First try the content node used when the post contains images
        content = response.xpath('//div[@class="contentext"]/text()').extract()
        if len(content) == 0:
            # No images in this post
            content = response.xpath('//div[@class="c1 text14_2"]/text()').extract()
            # content is a list; join it into a string, strip leading/trailing
            # whitespace and drop non-breaking spaces
            item['content'] = "".join(content).strip().replace("\xa0", "")
        else:
            print("This post contains images ...")
            item['content'] = "".join(content).strip().replace("\xa0", "")
        # URL of the post
        item['url'] = response.url
        yield item
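To verify the XPath expressions before a full run, a single post can be loaded in scrapy shell; the URL below is only a hypothetical example matching the second rule's pattern:

scrapy shell "http://wz.sun0769.com/html/question/201806/123456.shtml"
>>> response.xpath('//div[@class="pagecenter p3"]//strong/text()').extract()[0].strip()
>>> response.xpath('//div[@class="c1 text14_2"]/text()').extract()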
5. Write the item pipeline to save the data; the results can be written to a file (pipelines.py)
# -*- coding: utf-8 -*-
import json


# Encoding helper: a subclass of json.JSONEncoder (see encoder.py in the json package)
class MyEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, bytes):
            return str(o, encoding='utf-8')
        return json.JSONEncoder.default(self, o)


class DongguanPipeline(object):
    def __init__(self):
        self.file = open("dongguan.json", 'w', encoding='utf-8')

    def process_item(self, item, spider):
        text = json.dumps(dict(item), ensure_ascii=False, cls=MyEncoder) + '\n'
        self.file.write(text)
        return item

    def close_spider(self, spider):
        print("Done processing, thank you!")
        self.file.close()
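As a quick standalone sanity check of the encoder (not part of the project files; the sample values are made up), bytes values are decoded to UTF-8 strings before json.dumps serializes them:

import json
from dongguan.pipelines import MyEncoder

# A made-up item-like dict containing a bytes value
sample = {'number': '191692', 'title': b'\xe6\x8a\x95\xe8\xaf\x89'}
print(json.dumps(sample, ensure_ascii=False, cls=MyEncoder))
# prints: {"number": "191692", "title": "投诉"}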
6. Configure the settings file (settings.py)
# Obey robots.txt rules; for what this means, see: https://blog.csdn.net/z564359805/article/details/80691677
ROBOTSTXT_OBEY = False

# Override the default request headers: add User-Agent info
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0);',
    # 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    # 'Accept-Language': 'en',
}

# Configure item pipelines: uncomment this block to enable the pipeline
ITEM_PIPELINES = {
    'dongguan.pipelines.DongguanPipeline': 300,
}

# Optionally, the log can also be written to a local file
LOG_FILE = "dongguanlog.log"
LOG_LEVEL = "DEBUG"
7. With everything above in place, start crawling: run the project command crawl to launch the spider:
scrapy crawl sunwz
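The pipeline writes one JSON object per line to dongguan.json, so the results can be loaded back afterwards with a short standalone script, for example:

import json

with open("dongguan.json", encoding="utf-8") as f:
    items = [json.loads(line) for line in f if line.strip()]

print(len(items))           # number of scraped posts
print(items[0]["title"])    # title of the first scraped post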