
If you want to save the scraped data to an Excel file instead, see my other crawler:

Crawling Beike (贝壳) listing data with Scrapy and multiple processes, saving to Excel (pagination + detail-page data) — for learning purposes only — CSDN blog

 

Final data preview

QuotesSpider — the spider

import scrapy
import re

from weibo_top.items import WeiboTopItem


class QuotesSpider(scrapy.Spider):
    name = "weibo_top"
    allowed_domains = ['s.weibo.com']

    def start_requests(self):
        yield scrapy.Request(url="https://s.weibo.com/top/summary?cate=realtimehot")

    def parse(self, response, **kwargs):
        trs = response.css('#pl_top_realtimehot > table > tbody > tr')
        count = 0
        for tr in trs:
            if count >= 30:  # keep only the first 30 topics
                break
            item = WeiboTopItem()
            item['title'] = tr.css('.td-02 a::text').get()
            href = tr.css('.td-02 a::attr(href)').get()
            if href:  # only follow the detail page when a link exists
                item['link'] = 'https://s.weibo.com/' + href
                count += 1
                yield scrapy.Request(url=item['link'], callback=self.parse_detail,
                                     meta={'item': item})
            else:
                item['link'] = ''
                yield item

    def parse_detail(self, response, **kwargs):
        item = response.meta['item']
        list_items = response.css('div.card-wrap[action-type="feed_list_item"]')
        limit = 0
        for li in list_items:
            if limit >= 1:  # keep only the first post per topic
                break
            content = li.xpath('.//p[@class="txt"]/text()').getall()
            # keep only CJK characters, ASCII letters/digits and 【】, markers
            processed_content = [re.sub(r'[^\u4e00-\u9fa5a-zA-Z0-9【】,]', '', text)
                                 for text in content]
            processed_content = [text.strip() for text in processed_content if text.strip()]
            processed_content = ','.join(processed_content).replace('【,', '【')
            item['desc'] = processed_content
            print(processed_content)
            yield item
            limit += 1

item — defining the data structure

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class WeiboTopItem(scrapy.Item):
    title = scrapy.Field()  # topic title
    link = scrapy.Field()   # detail page URL
    desc = scrapy.Field()   # description text

Middleware — setting the Cookie, User-Agent, and Host headers

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals
from fake_useragent import UserAgent

# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter


class WeiboTopSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Request or item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info("Spider opened: %s" % spider.name)


class WeiboTopDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    def __init__(self):
        self.cookie_string = "SUB=_2AkMS10-nf8NxqwFRmfoXyG3jaoxxygHEieKki758JRMxHRl-yT9vqhIrtRB6OVdhSYUGwRsrtuQyFPy_aLfaay7wguyu; SUBP=0033WrSXqPxfM72-Ws9jqgMF55529P9D9WhBJpfihr9Mo_TDhk.fIHFo; _s_tentry=www.baidu.com; UOR=www.baidu.com,s.weibo.com,www.baidu.com; Apache=5259811159487.941.1709629772294; SINAGLOBAL=5259811159487.941.1709629772294; ULV=1709629772313:1:1:1:5259811159487.941.1709629772294:"
        # self.referer = "https://sh.ke.com/chengjiao/"

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        cookie_dict = self.get_cookie()
        request.cookies = cookie_dict
        request.headers['User-Agent'] = UserAgent().random
        request.headers['Host'] = 's.weibo.com'
        # request.headers["referer"] = self.referer
        return None

    def get_cookie(self):
        # split the raw Cookie header into a dict; strip() drops the space
        # after each ';', and partition() splits on the first '=' only so
        # values that themselves contain '=' survive intact
        cookie_dict = {}
        for kv in self.cookie_string.split(";"):
            k, _, v = kv.strip().partition('=')
            cookie_dict[k] = v
        return cookie_dict

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info("Spider opened: %s" % spider.name)
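The cookie-parsing helper is easy to test in isolation. A standalone sketch of the same logic (the cookie string here is made up; partition keeps values that contain '=' intact, which a plain split('=') would truncate):

```python
def parse_cookie_string(cookie_string):
    """Turn a raw 'k1=v1; k2=v2' Cookie header into a dict."""
    cookie_dict = {}
    for kv in cookie_string.split(';'):
        # strip() drops the space after ';'; partition() splits on the
        # first '=' only, so '=' inside token values is preserved
        k, sep, v = kv.strip().partition('=')
        if sep:  # skip empty or malformed fragments
            cookie_dict[k] = v
    return cookie_dict

# hypothetical cookie string in the same shape as the one hard-coded above
cookies = parse_cookie_string("SUB=abc=def; _s_tentry=www.baidu.com; ULV=1:2:3")
print(cookies)  # → {'SUB': 'abc=def', '_s_tentry': 'www.baidu.com', 'ULV': '1:2:3'}
```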

Pipeline — saving the data to a text file

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class WeiboTopPipeline:
    def __init__(self):
        self.items = []

    def process_item(self, item, spider):
        # collect items; they are written out in one go when the spider closes
        self.items.append(item)
        print('\n\nitem', item)
        return item

    def close_spider(self, spider):
        # open the output file and write all collected items
        with open('weibo_top_data.txt', 'w', encoding='utf-8') as file:
            for item in self.items:
                title = item.get('title', '')
                desc = item.get('desc', '')
                output_string = f'{title}\n{desc}\n\n'
                file.write(output_string)
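The close_spider write-out can be sketched without Scrapy by substituting plain dicts for the Items (the sample data is illustrative; field names match WeiboTopItem):

```python
# stand-ins for collected WeiboTopItem objects
items = [
    {'title': '示例热搜1', 'desc': '示例描述1'},
    {'title': '示例热搜2', 'desc': '示例描述2'},
]

# same output format as the pipeline: title, desc, blank line per item
with open('weibo_top_data.txt', 'w', encoding='utf-8') as file:
    for item in items:
        title = item.get('title', '')
        desc = item.get('desc', '')
        file.write(f'{title}\n{desc}\n\n')

with open('weibo_top_data.txt', encoding='utf-8') as file:
    print(file.read())
```

Using item.get with a default means an item that never reached parse_detail (and so has no desc) still writes a well-formed record instead of raising KeyError.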

settings — configuring concurrency and delays

# Scrapy settings for weibo_top project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = "weibo_top"

SPIDER_MODULES = ["weibo_top.spiders"]
NEWSPIDER_MODULE = "weibo_top.spiders"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = "weibo_top (+http://www.yourdomain.com)"

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 8

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
#    "Accept-Language": "en",
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    "weibo_top.middlewares.WeiboTopSpiderMiddleware": 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    "weibo_top.middlewares.WeiboTopDownloaderMiddleware": 543,
}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    "scrapy.extensions.telnet.TelnetConsole": None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    "weibo_top.pipelines.WeiboTopPipeline": 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
AUTOTHROTTLE_START_DELAY = 80
# The maximum download delay to be set in case of high latencies
AUTOTHROTTLE_MAX_DELAY = 160
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = "httpcache"
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"

# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"
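Note that the two AUTOTHROTTLE_* delays above only take effect if AUTOTHROTTLE_ENABLED is uncommented. Per the Scrapy docs, AutoThrottle aims each slot's delay at latency / AUTOTHROTTLE_TARGET_CONCURRENCY, averages that target with the previous delay, and clamps the result to the configured bounds. A simplified sketch of that update rule (not Scrapy's actual code, just the documented idea):

```python
def next_delay(prev_delay, latency, target_concurrency=1.0,
               min_delay=0.0, max_delay=160.0):
    """Simplified AutoThrottle-style update: aim at latency divided by
    target concurrency, move halfway there from the previous delay, clamp."""
    target = latency / target_concurrency
    new = (prev_delay + target) / 2.0
    return min(max(new, min_delay), max_delay)

delay = 80.0  # AUTOTHROTTLE_START_DELAY from the settings above
for latency in (2.0, 2.0, 2.0):  # hypothetical response latencies, in seconds
    delay = next_delay(delay, latency)
print(round(delay, 2))  # → 11.75
```

With fast responses the delay halves toward the target on each request, so the large 80-second start delay decays quickly once the site responds promptly.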


