Use a different scrapy project per set of spiders+pipelines (might be appropriate if your spiders are different enough to warrant being in different projects)
On the scrapy tool command line, change the pipeline setting with scrapy settings in between each invocation of your spider
Isolate your spiders into their own scrapy tool commands, and define the default_settings['ITEM_PIPELINES'] on your command class to the pipeline list you want for that command. See line 6 of this example.
In the pipeline classes themselves, have process_item() check what spider it's running against, and do nothing if it should be ignored for that spider (a rough sketch of this approach is shown below). See the example using resources per spider to get you started. (This seems like an ugly solution because it tightly couples spiders and item pipelines; you probably shouldn't use this one.)
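For illustration, here is a minimal sketch of that last approach, keyed off spider.name (the spider names are hypothetical placeholders):

class SaveItemsPipeline:
    # hypothetical spider names -- adjust to your project
    allowed_spiders = {'products_spider', 'reviews_spider'}

    def process_item(self, item, spider):
        if spider.name not in self.allowed_spiders:
            # not one of our spiders: skip this pipeline step entirely
            return item
        # ... do the real work (saving, validating, etc.) here ...
        return item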
Building on the solution from Pablo Hoffman, you can use the following decorator on the process_item method of a Pipeline object so that it checks the pipeline attribute of your spider for whether or not it should be executed. For example:
import functools

from scrapy import log  # old-style Scrapy logging, matching the BaseSpider example below


def check_spider_pipeline(process_item_method):

    @functools.wraps(process_item_method)
    def wrapper(self, item, spider):

        # message template for debugging
        msg = '%%s %s pipeline step' % (self.__class__.__name__,)

        # if class is in the spider's pipeline, then use the
        # process_item method normally.
        if self.__class__ in spider.pipeline:
            spider.log(msg % 'executing', level=log.DEBUG)
            return process_item_method(self, item, spider)

        # otherwise, just return the untouched item (skip this step in
        # the pipeline)
        else:
            spider.log(msg % 'skipping', level=log.DEBUG)
            return item

    return wrapper
For this decorator to work correctly, the spider must have a pipeline attribute with a container of the Pipeline objects that you want to use to process the item, for example:
from scrapy.spider import BaseSpider

import pipelines

class MySpider(BaseSpider):
    pipeline = set([
        pipelines.Save,
        pipelines.Validate,
    ])

    def parse(self, response):
        # insert scrapy goodness here
        return item
And then in a pipelines.py file:
# check_spider_pipeline is the decorator defined above; import it from wherever you keep it

class Save(object):

    @check_spider_pipeline
    def process_item(self, item, spider):
        # do saving here
        return item


class Validate(object):

    @check_spider_pipeline
    def process_item(self, item, spider):
        # do validating here
        return item
All Pipeline objects should still be defined in ITEM_PIPELINES in settings (in the correct order -- it would be nice if the order could be specified on the Spider, too).
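For reference, a minimal sketch of that settings entry, assuming the pipelines live in a myproject.pipelines module (in recent Scrapy versions ITEM_PIPELINES is a dict mapping class paths to order values; older versions used a plain list):

ITEM_PIPELINES = {
    'myproject.pipelines.Validate': 100,
    'myproject.pipelines.Save': 200,
}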
The other solutions given here are good, but I think they could be slow, because we are not really using a pipeline per spider; instead, we are checking whether a pipeline applies every time an item is returned (and in some cases this could reach millions of items).
A good way to completely disable (or enable) a feature per spider is to use custom_settings and from_crawler, which work for all extensions, like this:
pipelines.py
from scrapy.exceptions import NotConfigured


class SomePipeline(object):

    def __init__(self):
        pass

    @classmethod
    def from_crawler(cls, crawler):
        if not crawler.settings.getbool('SOMEPIPELINE_ENABLED'):
            # if this isn't specified in settings, the pipeline will be completely disabled
            raise NotConfigured
        return cls()

    def process_item(self, item, spider):
        # change my item
        return item
settings.py
ITEM_PIPELINES = {
    'myproject.pipelines.SomePipeline': 300,
}

SOMEPIPELINE_ENABLED = True  # you could have the pipeline enabled by default
spider1.py
from scrapy import Spider

class Spider1(Spider):
    name = 'spider1'
    start_urls = ["http://example.com"]

    custom_settings = {
        'SOMEPIPELINE_ENABLED': False
    }
As you can see, we have specified custom_settings, which overrides the values in settings.py, and we have disabled SOMEPIPELINE_ENABLED for this spider.
Now when you run this spider, check for something like:
[scrapy] INFO: Enabled item pipelines: []
Now Scrapy completely disables the pipeline and does not bother with it for the whole run. Note that this also works for Scrapy extensions and middlewares.
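The same pattern carries over to middlewares; here is a minimal sketch, assuming a hypothetical downloader middleware and a SOMEMIDDLEWARE_ENABLED setting:

from scrapy.exceptions import NotConfigured


class SomeDownloaderMiddleware(object):

    @classmethod
    def from_crawler(cls, crawler):
        if not crawler.settings.getbool('SOMEMIDDLEWARE_ENABLED'):
            # disabled for this crawl: Scrapy will not instantiate the middleware at all
            raise NotConfigured
        return cls()

    def process_request(self, request, spider):
        # change the outgoing request here
        return None  # continue processing this request normally

You would register it in DOWNLOADER_MIDDLEWARES and toggle SOMEMIDDLEWARE_ENABLED per spider via custom_settings, exactly as above.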
You can simply set the item pipeline settings inside the spider itself, like this:
class CustomSpider(Spider):
    name = 'custom_spider'
    custom_settings = {
        'ITEM_PIPELINES': {
            '__main__.PagePipeline': 400,
            '__main__.ProductPipeline': 300,
        },
        'CONCURRENT_REQUESTS_PER_DOMAIN': 2
    }
I can then split up a pipeline (or even use multiple pipelines) by adding a value to the loader/returned item that identifies which part of the spider sent items over. This way I won’t get any KeyError exceptions and I know which items should be available.
...
    def scrape_stuff(self, response):
        pageloader = PageLoader(
            PageItem(), response=response)

        pageloader.add_xpath('entire_page', '/html//text()')
        pageloader.add_value('item_type', 'page')
        yield pageloader.load_item()

        productloader = ProductLoader(
            ProductItem(), response=response)

        productloader.add_xpath('product_name', '//span[contains(text(), "Example")]')
        productloader.add_value('item_type', 'product')
        yield productloader.load_item()
class PagePipeline:
    def process_item(self, item, spider):
        if item['item_type'] == 'product':
            pass  # do product stuff here

        if item['item_type'] == 'page':
            pass  # do page stuff here

        return item
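The '__main__.PagePipeline' paths above suggest the whole thing lives in a single script; a minimal sketch of running such a spider with CrawlerProcess, under that assumption:

from scrapy.crawler import CrawlerProcess

if __name__ == '__main__':
    process = CrawlerProcess()
    process.crawl(CustomSpider)  # custom_settings (and thus the per-spider pipelines) are picked up here
    process.start()  # blocks until the crawl finishes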
Overriding 'ITEM_PIPELINES' with custom settings per spider, as others have suggested, works well. However, I found I had a few distinct groups of pipelines I wanted to use for different categories of spiders. I wanted to be able to easily define the pipeline for a particular category of spider without a lot of thought, and I wanted to be able to update a pipeline category without editing each spider in that category individually.
So I created a new file called pipeline_definitions.py in the same directory as settings.py. pipeline_definitions.py contains functions like this:
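A minimal sketch of what such a function might look like (the pipeline names here are hypothetical):

def product_pipelines():
    # one place to define the pipeline group used by all "product" spiders
    return {
        'myproject.pipelines.ValidateProduct': 100,
        'myproject.pipelines.SaveProduct': 200,
    }

Each spider in that category could then set 'ITEM_PIPELINES': product_pipelines() in its custom_settings, so updating the group means editing only this one function.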