Frequently Asked Questions — Scrapy 0.15.1 documentation

Frequently Asked Questions¶

How does Scrapy compare to BeautifulSoup or lxml?¶

BeautifulSoup and lxml are libraries for parsing HTML and XML. Scrapy is
an application framework for writing web spiders that crawl web sites and
extract data from them.

Scrapy provides a built-in mechanism for extracting data (called
selectors) but you can easily use BeautifulSoup
(or lxml) instead, if you feel more comfortable working with them. After
all, they’re just parsing libraries which can be imported and used from any
Python code.
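
For example, here is a minimal sketch (the spider name and URL are
hypothetical) of a spider whose callback parses the response with
BeautifulSoup 3 instead of Scrapy selectors:

from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3 (Python 2)
from scrapy.spider import BaseSpider

class ExampleSpider(BaseSpider):
    # hypothetical spider name and start URL
    name = 'examplespider'
    start_urls = ['http://www.example.com/']

    def parse(self, response):
        # parse the raw body with BeautifulSoup instead of Scrapy selectors
        soup = BeautifulSoup(response.body)
        self.log('Page title: %s' % soup.title.string)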

In other words, comparing BeautifulSoup (or lxml) to Scrapy is like
comparing jinja2 to Django.

What Python versions does Scrapy support?¶

Scrapy runs on Python 2.6 and 2.7.

Does Scrapy work with Python 3.0?¶

No, and there are no plans to port Scrapy to Python 3.0 yet. At the moment,
Scrapy works with Python 2.6 and 2.7.

See also

What Python versions does Scrapy support?

Did Scrapy “steal” X from Django?¶

Probably, but we don’t like that word. We think Django is a great open source
project and an example to follow, so we’ve used it as an inspiration for
Scrapy.

We believe that, if something is already done well, there’s no need to reinvent
it. This concept, besides being one of the foundations for open source and free
software, not only applies to software but also to documentation, procedures,
policies, etc. So, instead of going through each problem ourselves, we choose
to copy ideas from those projects that have already solved them properly, and
focus on the real problems we need to solve.

We’d be proud if Scrapy serves as an inspiration for other projects. Feel free
to steal from us!

Does Scrapy work with HTTP proxies?¶

Yes. Support for HTTP proxies is provided (since Scrapy 0.8) through the HTTP
Proxy downloader middleware. See
HttpProxyMiddleware.
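
The middleware honours the standard proxy environment variables, so one way
to use it is to export them before starting the crawl (the proxy address and
spider name here are hypothetical):

export http_proxy="http://localhost:8080"
scrapy crawl myspider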

Scrapy crashes with: ImportError: No module named win32api¶

You need to install pywin32 because of this Twisted bug.

How can I simulate a user login in my spider?¶

See Using FormRequest.from_response() to simulate a user login.
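
In short, the idea is to request the login page and reply to it with a
FormRequest built from the form found in the response. A minimal sketch (the
URL, spider name and form field names are hypothetical):

from scrapy.spider import BaseSpider
from scrapy.http import FormRequest

class LoginSpider(BaseSpider):
    name = 'loginspider'
    start_urls = ['http://www.example.com/users/login.php']

    def parse(self, response):
        # fill and submit the login form found in the page
        return FormRequest.from_response(response,
            formdata={'username': 'john', 'password': 'secret'},
            callback=self.after_login)

    def after_login(self, response):
        # verify the login succeeded before continuing the crawl
        if 'authentication failed' in response.body:
            self.log('Login failed')
            return
        # [ ... continue scraping as an authenticated user ... ]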

Does Scrapy crawl in breadth-first or depth-first order?¶

By default, Scrapy uses a LIFO queue for storing pending requests, which
basically means that it crawls in depth-first order (DFO). This order is more
convenient in most cases. If you do want to crawl in true breadth-first order
(BFO), you can do it by setting the following settings:

DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

My Scrapy crawler has memory leaks. What can I do?¶

See Debugging memory leaks.

Also, Python has a built-in memory leak issue, which is described in
Leaks without leaks.

How can I make Scrapy consume less memory?¶

See previous question.

Can I use Basic HTTP Authentication in my spiders?¶

Yes, see HttpAuthMiddleware.
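
The middleware takes the credentials from two spider attributes. A minimal
sketch (the spider name and credentials are placeholders):

from scrapy.contrib.spiders import CrawlSpider

class IntranetSpider(CrawlSpider):
    # HttpAuthMiddleware picks up http_user/http_pass and sends them
    # as Basic HTTP Authentication credentials
    name = 'intranetspider'
    http_user = 'someuser'
    http_pass = 'somepass'
    # [ ... rest of the spider code ... ]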

Why does Scrapy download pages in English instead of my native language?¶

Try changing the default Accept-Language request header by overriding the
DEFAULT_REQUEST_HEADERS setting.
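
For example, to prefer Spanish pages, put something like this in your
project's settings.py. Note that overriding the setting replaces the whole
default dict, so keep the Accept header too (the language code is just an
example):

DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'es',
}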

Where can I find some example Scrapy projects?¶

See Examples.

Can I run a spider without creating a project?¶

Yes. You can use the runspider command. For example, if you have a
spider written in a file named my_spider.py, you can run it with:

scrapy runspider my_spider.py

See runspider command for more info.

I get “Filtered offsite request” messages. How can I fix them?¶

Those messages (logged with DEBUG level) don’t necessarily mean there is a
problem, so you may not need to fix them.

Those messages are logged by the Offsite Spider Middleware, which is a spider
middleware (enabled by default) whose purpose is to filter out requests to
domains outside the ones covered by the spider.

For more info see:
OffsiteMiddleware.
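
What counts as "on-site" is determined by the spider's allowed_domains
attribute; requests to any other domain are filtered out and logged. For
example:

from scrapy.spider import BaseSpider

class MySpider(BaseSpider):
    name = 'myspider'
    # OffsiteMiddleware only lets through requests to these domains
    # (and their subdomains)
    allowed_domains = ['example.com', 'example.org']
    # [ ... rest of the spider code ... ]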

What is the recommended way to deploy a Scrapy crawler in production?¶

See Scrapy Service (scrapyd).

Can I use JSON for large exports?¶

It’ll depend on how large your output is. See this warning in the JsonItemExporter
documentation.

Can I return (Twisted) deferreds from signal handlers?¶

Some signals support returning deferreds from their handlers, others don’t. See
the Built-in signals reference to know which ones.

What does response status code 999 mean?¶

999 is a custom response status code used by Yahoo sites to throttle requests.
Try slowing down the crawling speed by using a download delay of 2 (or
higher) in your spider:

class MySpider(CrawlSpider):
    name = 'myspider'
    DOWNLOAD_DELAY = 2
    # [ ... rest of the spider code ... ]

Or by setting a global download delay in your project with the
DOWNLOAD_DELAY setting.

Can I call pdb.set_trace() from my spiders to debug them?¶

Yes, but you can also use the Scrapy shell, which allows you to quickly
analyze (and even modify) the response being processed by your spider. Quite
often, this is more useful than plain old pdb.set_trace().

For more info see Invoking the shell from spiders to inspect responses.
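
A minimal sketch of a callback that drops into the shell for one particular
response (the URL check is hypothetical):

from scrapy.shell import inspect_response

def parse(self, response):
    if response.url == 'http://www.example.com/products.php':
        # open an interactive shell with this response loaded
        inspect_response(response)
    # [ ... rest of the parsing code ... ]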

What’s the simplest way to dump all my scraped items into a JSON/CSV/XML file?¶

To dump into a JSON file:

scrapy crawl myspider -o items.json -t json

To dump into a CSV file:

scrapy crawl myspider -o items.csv -t csv

To dump into an XML file:

scrapy crawl myspider -o items.xml -t xml

For more information see Feed exports.

What’s this huge cryptic __VIEWSTATE parameter used in some forms?¶

The __VIEWSTATE parameter is used in sites built with ASP.NET/VB.NET. For
more info on how it works see this page. Also, here’s an example spider
which scrapes one of these sites.

What’s the best way to parse big XML/CSV data feeds?¶

Parsing big feeds with XPath selectors can be problematic since they need to
build the DOM of the entire feed in memory, and this can be quite slow and
consume a lot of memory.

To avoid parsing the entire feed at once in memory, you can use the
xmliter and csviter functions from the scrapy.utils.iterators module. In
fact, this is what the feed spiders (see Spiders) use under the hood.
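
For example, here is a minimal sketch that iterates over a big XML feed node
by node, assuming hypothetical <product> records with a <name> field:

from scrapy.utils.iterators import xmliter

def parse(self, response):
    # yields one selector per <product> node, without building the DOM
    # of the whole feed in memory
    for node in xmliter(response, 'product'):
        name = node.select('name/text()').extract()
        # [ ... build and return items ... ]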

Does Scrapy manage cookies automatically?¶

Yes, Scrapy receives and keeps track of cookies sent by servers, and sends them
back on subsequent requests, like any regular web browser does.

For more info see Requests and Responses and CookiesMiddleware.

How can I see the cookies being sent and received from Scrapy?¶

Enable the COOKIES_DEBUG setting.
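
For example, in your project's settings.py:

COOKIES_DEBUG = True

With this enabled, Scrapy logs every cookie sent in requests and received in
responses.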

How can I instruct a spider to stop itself?¶

Raise the CloseSpider exception from a callback. For
more info see: CloseSpider.
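
A minimal sketch, raising the exception from a callback when a hypothetical
stop condition is met:

from scrapy.exceptions import CloseSpider

def parse(self, response):
    if 'Bandwidth exceeded' in response.body:
        # stop the whole crawl, not just this callback
        raise CloseSpider('bandwidth_exceeded')
    # [ ... rest of the parsing code ... ]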

How can I prevent my Scrapy bot from getting banned?¶

Some websites implement certain measures to prevent bots from crawling them,
with varying degrees of sophistication. Getting around those measures can be
difficult and tricky, and may sometimes require special infrastructure.

Here are some tips to keep in mind when dealing with these kinds of sites:

  • rotate your user agent from a pool of well-known browser user agents
    (google around to get a list of them)
  • disable cookies (see COOKIES_ENABLED) as some sites may use
    cookies to spot bot behaviour
  • use download delays (2 or higher); see the DOWNLOAD_DELAY setting and
    the settings sketch after this list
  • if possible, use Google cache to fetch pages, instead of hitting the sites
    directly
  • use a pool of rotating IPs (for example, the free Tor project)
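
For instance, the cookie and download-delay tips above translate into two
lines in your project's settings.py (the values are examples, not
one-size-fits-all recommendations):

COOKIES_ENABLED = False  # some sites use cookies to spot bot behaviour
DOWNLOAD_DELAY = 2       # seconds to wait between consecutive requests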

If you are still unable to prevent your bot getting banned, consider contacting
commercial support.
