After an item has been scraped by a spider, it is sent to the Item Pipeline, which processes it through several components that are executed sequentially.
Item pipelines are usually implemented on a per-project basis. Typical uses for item pipelines include cleansing HTML data, validating scraped data (checking that the items contain certain fields), checking for (and dropping) duplicate items, and storing the scraped item in a database.
Writing your own item pipeline is easy. Each item pipeline component is a single Python class that must define the following method:
process_item(spider, item)

Parameters:
- spider – the spider which scraped the item
- item (Item object) – the item scraped
This method is called for every item pipeline component and must either return an Item (or any descendant class) object or raise a DropItem exception. Dropped items are no longer processed by further pipeline components.
Let’s take a look at the following hypothetical pipeline, which adjusts the price attribute for those items that do not include VAT (price_excludes_vat attribute) and drops those items which don’t contain a price:
from scrapy.core.exceptions import DropItem

class PricePipeline(object):

    vat_factor = 1.15

    def process_item(self, spider, item):
        if item['price']:
            # prices that don't include VAT yet get the factor applied
            if item['price_excludes_vat']:
                item['price'] = item['price'] * self.vat_factor
            return item
        else:
            # items without a price are discarded
            raise DropItem("Missing price in %s" % item)
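To make the behaviour concrete, here is a minimal sketch (run by hand, not part of Scrapy itself) of how this pipeline reacts to different items; the plain dicts and the None spider argument are stand-ins used only for illustration:

pipeline = PricePipeline()

# an item whose price excludes VAT gets the VAT factor applied
item = pipeline.process_item(None, {'price': 100, 'price_excludes_vat': True})
assert round(item['price'], 2) == 115.0   # 100 * 1.15

# an item without a price is dropped
try:
    pipeline.process_item(None, {'price': None, 'price_excludes_vat': False})
except DropItem:
    pass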
To activate an Item Pipeline component you must add its class to the ITEM_PIPELINES list, as in the following example:
ITEM_PIPELINES = [
    'myproject.pipeline.PricePipeline',
]
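Since pipeline components run sequentially, in the order they appear in the list, a project will often enable several of them at once. The following hypothetical setting combines the two pipelines shown on this page; the module path myproject.pipeline is assumed from the example above:

ITEM_PIPELINES = [
    'myproject.pipeline.PricePipeline',       # adjust prices and drop priceless items first
    'myproject.pipeline.DuplicatesPipeline',  # then discard duplicates (defined below)
]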
Sometimes you need to keep resources about the items processed grouped per spider, and delete those resources when a spider finishes.
An example is a filter that looks for duplicate items and drops those items that were already processed. Let's say that our items have a unique id, but our spider returns multiple items with the same id:
from scrapy.xlib.pydispatch import dispatcher
from scrapy.core import signals
from scrapy.core.exceptions import DropItem

class DuplicatesPipeline(object):

    def __init__(self):
        # one set of seen ids per running spider
        self.duplicates = {}
        dispatcher.connect(self.spider_opened, signals.spider_opened)
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def spider_opened(self, spider):
        self.duplicates[spider] = set()

    def spider_closed(self, spider):
        # release the per-spider set once the spider finishes
        del self.duplicates[spider]

    def process_item(self, spider, item):
        if item.id in self.duplicates[spider]:
            raise DropItem("Duplicate item found: %s" % item)
        else:
            self.duplicates[spider].add(item.id)
            return item
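As a rough illustration of this behaviour (again outside a real crawl, so the signal handlers are invoked by hand), consider the following sketch; FakeSpider and FakeItem are hypothetical stand-ins for a real spider and item:

class FakeSpider(object):
    pass

class FakeItem(object):
    def __init__(self, id):
        self.id = id

pipeline = DuplicatesPipeline()
spider = FakeSpider()

pipeline.spider_opened(spider)                  # normally triggered by the spider_opened signal
pipeline.process_item(spider, FakeItem(1))      # first time this id is seen: passes through
try:
    pipeline.process_item(spider, FakeItem(1))  # same id again: dropped
except DropItem:
    pass
pipeline.spider_closed(spider)                  # the per-spider set is released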
Here is a list of item pipelines bundled with Scrapy.
The File Export Pipeline exports all scraped items into a file, using different formats.
It is a simple but convenient wrapper for using Item Exporters as Item Pipelines. If you need more custom/advanced functionality, you can write your own pipeline or subclass the Item Exporters.
It supports the following settings:
If any mandatory setting is not set, this pipeline will be automatically disabled.
Here are some usage examples of the File Export Pipeline.
To export all scraped items into an XML file:
EXPORT_FORMAT = 'xml'
EXPORT_FILE = 'scraped_items.xml'
To export all scraped items into a CSV file (with all fields in headers line):
EXPORT_FORMAT = 'csv'
EXPORT_FILE = 'scraped_items.csv'
To export all scraped items into a CSV file (with specific fields in headers line):
EXPORT_FORMAT = 'csv_headers'
EXPORT_FILE = 'scraped_items_with_headers.csv'
EXPORT_FIELDS = ['name', 'price', 'description']
EXPORT_FORMAT
The format to use for exporting. Each format corresponds to an Item Exporter; available formats include 'xml', 'csv' and 'csv_headers' (see the examples above). Click on the respective Item Exporter to get more info. This setting is mandatory in order to use the File Export Pipeline.

EXPORT_FILE
The name of the file where the items will be exported. This setting is mandatory in order to use the File Export Pipeline.

EXPORT_FIELDS
Default: None
The names of the item fields that will be exported. This will be used for the fields_to_export Item Exporter attribute. If None, all fields will be exported.
EXPORT_EMPTY
Whether to export empty (non-populated) fields. This will be used for the export_empty_fields Item Exporter attribute.
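Putting the settings together, a hypothetical project configuration for the File Export Pipeline might look like the sketch below. The EXPORT_EMPTY name follows the setting described above; double-check the exact names against your Scrapy version's settings reference.

EXPORT_FORMAT = 'csv_headers'                       # CSV exporter with a headers line
EXPORT_FILE = 'scraped_items.csv'                   # write the items to this file
EXPORT_FIELDS = ['name', 'price', 'description']    # only export these fields, in this order
EXPORT_EMPTY = False                                # skip fields that were not populated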