Extracting data from the Internet with Scrapy

Hello!

I am Israël Hallé

Cofounder of

Security Enthusiast

Malware Analyst

NorthSec Workshop

Vulnerability Research

Rock Climber!

What is Scraping?

To extract data by automated means from a format not intended to be machine-readable, such as a screenshot or a formatted web page - Wiktionary

Why Scraping?

  • Price comparators
  • Training corpus
  • Monitor communities
  • Aggregate different sources

Is this legal?

It Depends.

IANAL

I Am Not a Lawyer


Is This Legal?

  • Canadian company got sued
  • ToS are enforceable
  • Ask permission
  • Do no harm
  • Get a lawyer


What is Scrapy?

Scrapy = Scraping + Python

Batteries Included

  • HTTP Proxy
  • Cookies
  • Retries
  • Redirects
  • Robots.txt
  • Data format and storage
  • And much more...
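Most of that list maps straight onto settings. A minimal settings.py sketch exercising a few of the built-ins (values are illustrative, not recommendations):

# settings.py - every knob below ships with Scrapy itself
ROBOTSTXT_OBEY = True        # fetch and honor robots.txt
RETRY_TIMES = 2              # retry failed downloads twice
REDIRECT_MAX_TIMES = 20      # give up after 20 redirects
COOKIES_ENABLED = True       # per-spider cookie jar
FEED_FORMAT = 'jsonlines'    # data format and storage:
FEED_URI = 'items.jl'        # one JSON item per line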


Extensible

Widely Used and Stable

Fast by Default!

Let’s Build Something

So Many [Great] Conferences

Conferencify.io

  • Follow conferences
  • Follow speakers
  • Notify of changes
  • Efficient schedule
  • Discover relevant conferences

It Needs Data!

No API :(

Let’s Scrape Them!

First Things First

$ sendmail info@confoo.ca


Our First Project

$ pip install scrapy

$ scrapy startproject confs

$ scrapy genspider -t crawl \
confoo confoo.ca
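For reference, the crawl template generates roughly this skeleton in spiders/confoo.py (the exact output varies across Scrapy versions):

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class ConfooSpider(CrawlSpider):
    name = 'confoo'
    allowed_domains = ['confoo.ca']
    start_urls = ['http://confoo.ca/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        # extraction logic goes here
        return item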


Be Polite

$ vim settings.py

USER_AGENT = \
    'confs (+http://confoo.isra17.xyz)'


DOWNLOAD_DELAY = 3


Our First Spider


$ vim spiders/confoo.py

$ scrapy crawl confoo

DEBUG: Crawled (200) <GET https://confoo.ca/robots.txt>

[...]


DEBUG: Redirecting (301) [...]

DEBUG: Crawled (200) <GET https://confoo.ca/en>


Find our Start URLs

Listings of what we are looking for

https://confoo.ca/en/yul2017/sessions

Start URLs


class ConfooSpider(CrawlSpider):
    name = 'confoo'
    allowed_domains = ['confoo.ca']
    start_urls = [
        'https://confoo.ca/en/yul2018/sessions',
        'https://confoo.ca/en/yul2018/speakers',
    ]

Crawling Spiders

Spider yields Items or Requests

def parse(self, response):
    yield {
        'title': response.css('title::text').extract_first()
    }
    for link in response.css('a::attr(href)'):
        yield response.follow(link)

Crawling Spiders

Helpers are available, such as Rule

rules = (
    Rule(
        LinkExtractor(allow=r'/en/yul2018/session/'),
        callback='parse_session'),
)
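How the rule slots into the spider, as a sketch (parse_session is only a stub here; the real one comes later):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class ConfooSpider(CrawlSpider):
    name = 'confoo'
    allowed_domains = ['confoo.ca']
    start_urls = ['https://confoo.ca/en/yul2018/sessions']

    rules = (
        # every link matching the pattern is followed
        # and its response handed to the callback
        Rule(LinkExtractor(allow=r'/en/yul2018/session/'),
             callback='parse_session'),
    )

    def parse_session(self, response):
        yield {'url': response.url}  # stub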

Our First Crawl

$ scrapy crawl confoo

[...]

DEBUG: Scraped from <200 https://confoo.ca/en/yul2018/session/10-things-the-media-hasn-t-told-you-about-react-native>


Scraping Content

Scrapy Items

  • Use the Item class
  • Catch typos (unknown fields raise KeyError)
  • Work with other Scrapy tools
  • Quack like a Dict

Our First Item

class Session(scrapy.Item):
    id = scrapy.Field()
    edition = scrapy.Field()
    title = scrapy.Field()
    summary = scrapy.Field()
    tags = scrapy.Field()
    scheduled_at = scrapy.Field()
    language = scrapy.Field()
    level = scrapy.Field()
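The dict-quacking and typo-checking in action (interpreter session, traceback abridged):

>>> session = Session(title='Extracting data from the Internet')
>>> session['title']           # quacks like a dict
'Extracting data from the Internet'
>>> session['titel'] = 'oops'  # typos fail loudly instead of silently
KeyError: 'Session does not support field: titel'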

Scraping Content

  • As simple as yielding Items
  • Can extract properties from response.url or response.body for raw content

yield SomeItem(
    id=re.search(r'/(\d+)$', response.url).group(1),
    data=json.loads(response.body)['data'],
)

What is HTML?

Image source: http://pautasso.info/lectures/w13/sa3/3-js/javascript-html5.html

You can't parse [X]HTML with regex. Because HTML can't be parsed by regex. Regex is not a tool that can be used to correctly parse HTML. As I have answered in HTML-and-regex questions here so many times before, the use of regex will not allow you to consume HTML. Regular expressions are a tool that is insufficiently sophisticated to understand the constructs employed by HTML. HTML is not a regular language and hence cannot be parsed by regular expressions. Regex queries are not equipped to break down HTML into its meaningful parts. so many times but it is not getting to me. Even enhanced irregular regular expressions as used by Perl are not up to the task of parsing HTML. You will never make me crack. HTML is a language of sufficient complexity that it cannot be parsed by regular expressions. Even Jon Skeet cannot parse HTML using regular expressions. Every time you attempt to parse HTML with regular expressions, the unholy child weeps the blood of virgins, and Russian hackers pwn your webapp. Parsing HTML with regex summons tainted souls into the realm of the living. HTML and regex go together like love, marriage, and ritual infanticide. The <center> cannot hold it is too late. The force of regex and HTML together in the same conceptual space will destroy your mind like so much watery putty. If you parse HTML with regex you are giving in to Them and their blasphemous ways which doom us all to inhuman toil for the One whose Name cannot be expressed in the Basic Multilingual Plane, he comes. HTML-plus-regexp will liquify the n​erves of the sentient whilst you observe, your psyche withering in the onslaught of horror. Rege̿̔̉x-based HTML parsers are the cancer that is killing StackOverflow it is too late it is too late we cannot be saved the trangession of a chi͡ld ensures regex will consume all living tissue (except for HTML which it cannot, as previously prophesied) dear lord help us how can anyone survive this scourge using regex to parse HTML has doomed humanity to an eternity of dread torture and security holes using regex as a tool to process HTML establishes a breach between this world and the dread realm of c͒ͪo͛ͫrrupt entities (like SGML entities, but more corrupt) a mere glimpse of the world of reg​ex parsers for HTML will ins​tantly transport a programmer's consciousness into a world of ceaseless screaming, he comes, the pestilent slithy regex-infection wil​l devour your HT​ML parser, application and existence for all time like Visual Basic only worse he comes he comes do not fi​ght he com̡e̶s, ̕h̵i​s un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment, HTML tags lea͠ki̧n͘g fr̶ǫm ̡yo​͟ur eye͢s̸ ̛l̕ik͏e liq​uid pain, the song of re̸gular exp​ression parsing will exti​nguish the voices of mor​tal man from the sp​here I can see it can you see ̲͚̖͔̙î̩́t̲͎̩̱͔́̋̀ it is beautiful t​he final snuffing of the lie​s of Man ALL IS LOŚ͖̩͇̗̪̏̈́T ALL I​S LOST the pon̷y he comes he c̶̮omes he comes the ich​or permeates all MY FACE MY FACE ᵒh god no NO NOO̼O​O NΘ stop the an​*̶͑̾̾​̅ͫ͏̙̤g͇̫͛͆̾ͫ̑͆l͖͉̗̩̳̟̍ͫͥͨe̠̅s ͎a̧͈͖r̽̾̈́͒͑e n​ot rè̑ͧ̌aͨl̘̝̙̃ͤ͂̾̆ ZA̡͊͠͝LGΌ ISͮ̂҉̯͈͕̹̘̱ TO͇̹̺ͅƝ̴ȳ̳ TH̘Ë͖́̉ ͠P̯͍̭O̚​N̐Y̡ H̸̡̪̯ͨ͊̽̅̾̎Ȩ̬̩̾͛ͪ̈́̀́͘ ̶̧̨̱̹̭̯ͧ̾ͬC̷̙̲̝͖ͭ̏ͥͮ͟Oͮ͏̮̪̝͍M̲̖͊̒ͪͩͬ̚̚͜Ȇ̴̟̟͙̞ͩ͌͝S̨̥̫͎̭ͯ̿̔̀ͅ

XPath or CSS

.article > h1 + span

input[type="hidden"]

//h4[starts-with(./text(), 'Comments ')]
/following-sibling::table

//table//td[3]/text()
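Both languages can usually express the same query. A quick sketch with Scrapy's Selector on a made-up snippet:

from scrapy.selector import Selector

html = '<table><tr><td>a</td><td>b</td><td>c</td></tr></table>'
sel = Selector(text=html)

# CSS and XPath reach the same cell
sel.css('td:nth-child(3)::text').extract_first()    # 'c'
sel.xpath('//table//td[3]/text()').extract_first()  # 'c'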

Scraping Content

  • response.xpath(p) for XPath
  • response.css(p) for CSS
  • Support chaining (.xpath(p1).css(p2))
  • Finally, use .extract() or .re(regex)

response.css('#name::text').extract_first()

response.xpath("//h3/text()").re(r'\d: (.+)')

response.css('.article').xpath('.//img[@alt]')\
.extract()

Writing Selectors

Expectation

<article>
  <header>
    <h1>HTML is nice</h1>
    <div class="author">Israel</div>
    <div class="posted-at">2018-02-01</div>
  </header>
  <p>This is the article content</p>
</article>

Reality

<div>
  <p class="big">HTML is nice</p>
  <span class="small">Israel</span>
  <br/>
  Posted at
  <span class="small">2018-02-01</span>
  <br/><br/>
  This is the article content
<div>
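Selectors still cope with the "reality" markup. A hedged sketch, reusing the class names from the snippet above:

from scrapy.selector import Selector

reality = '''
<div>
  <p class="big">HTML is nice</p>
  <span class="small">Israel</span>
  <br/>
  Posted at
  <span class="small">2018-02-01</span>
  <br/><br/>
  This is the article content
</div>
'''

sel = Selector(text=reality)
title = sel.css('p.big::text').extract_first()             # 'HTML is nice'
author, posted_at = sel.css('span.small::text').extract()  # 'Israel', '2018-02-01'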

A Confoo Page

[Screenshot of a Confoo session page, annotated with the fields to extract: title, summary, speaker, level, language, tags, time]

REPL-Oriented Programming

Interactive Scraping

$ scrapy shell --spider confoo $MY_URL

>>> view(response)

Interactive Scraping

<div class="...">
  <h1>Extracting data from …</h1>
</div>

>>> response.css('h1::text').extract_first()
'Extracting data from the…'

Interactive Scraping

<div class="e-description">
  While exposing data to…
</div>

>>> response.css('.e-description::text').extract_first()
'While exposing data to…'

Interactive Scraping

<p>
  <span class="dt-date">
    March 7, 2018
  </span> @
  <span class="dt-time">10:00</span>
  <br>
  <b class="p-room">Fontaine E</b>
  <br>
  English session - Beginner
</p>

Interactive Scraping

<p>
  [...]English session - Beginner
</p>

>>> response.css('.well p').re(r'(\S+)\s+session\s+-\s+(\S+)')
['English', 'Beginner']

Interactive Scraping

<div class="well">
  <p>
    <span class="... {'id':'js'}">JavaScript</span>
    <span class="... {'id':'mobile'}">Mobile</span>
  </p>
</div>

>>> response.css('.well .tag::attr(class)').re(r":'([^']+)'\}")
['js', 'mobile']

Putting it all Together

def parse_session(self, response):
    language, level = response.css('.well p')\
        .re(r'(\S+)\s+session\s+-\s+(\S+)')
    return {
        'id': response.url.strip('/').split('/')[-1],
        'title': response.css('h1::text').extract_first(),
        'summary': response.css('.e-description::text')
            .extract_first()
            .strip(),
        'tags': response.css('.well .tag::attr(class)')
            .re(r":'([^']+)'\}"),
        'scheduled_at': datetime.strptime(
            ' '.join(response.css('.dt-date::text, .dt-time::text')
                     .extract()),
            '%B %d, %Y %H:%M'),
        'location': response.css('.well .p-room::text').extract_first(),
        'language': language,
        'level': level,
        'speakers': response.css('.speakers .btn')
            .xpath("./self::a[.='Read More']/@href")
            .re('/([^/]+)$'),
    }

Saving our Work

Scrapy Pipelines

  • Post-processes items from Spiders
  • Use its own Threadpool
  • Can Modify, Drop or Read items
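One step the slides gloss over: a pipeline only runs once it is registered in settings.py. The number (0-1000) sets the order when several pipelines are chained:

# settings.py
ITEM_PIPELINES = {
    'confs.pipelines.ConfsPipeline': 300,
}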

Our First Pipeline

class ConfsPipeline(object):

    def open_spider(self, spider):
        self.db = sqlite3.connect('./confs.db')

Our First Pipeline

def process_item(self, item, spider):
    with self.db:
        if isinstance(item, Session):
            self.process_session(item)
        elif isinstance(item, Speaker):
            self.process_speaker(item)
    return item
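The Speaker item never appears on a slide; judging from the fields read in process_speaker below, it presumably looks like this:

class Speaker(scrapy.Item):
    id = scrapy.Field()
    fullname = scrapy.Field()
    bio = scrapy.Field()
    country = scrapy.Field()
    personal_url = scrapy.Field()
    facebook = scrapy.Field()
    twitter = scrapy.Field()
    flickr = scrapy.Field()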

Our First Pipeline

def process_speaker(self, item):
    self.db.execute(INSERT_SPEAKER, (
        item['id'],
        item['fullname'],
        item['bio'],
        item['country'],
        item['personal_url'],
        item.get('facebook'),
        item.get('twitter'),
        item.get('flickr'),
    ))
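INSERT_SPEAKER is elided on the slide. With the eight fields above, it would be something like the statement below, plus a close_spider to release the connection; both are assumptions, the deck never shows them:

# Hypothetical - the deck never shows either of these
INSERT_SPEAKER = '''
    INSERT OR REPLACE INTO speakers
    (id, fullname, bio, country, personal_url, facebook, twitter, flickr)
    VALUES (?, ?, ?, ?, ?, ?, ?, ?)
'''

def close_spider(self, spider):
    # goes on ConfsPipeline; `with self.db:` already commits each item
    self.db.close()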

To Go Deeper

  • Contracts
  • Middlewares
  • Extensions / Signals
  • Monitoring
  • Broad-crawl frameworks

That’s It!

~ 200 LOC to export confoo.ca into a SQL database

[Charts built from the scraped data: speakers' countries, talk subjects, and subjects over time]

The End

Thanks!

Any questions?

Find the slides at http://confoo.isra17.xyz

You can find me at:

@IsraelHalle

isra017@gmail.com

Code at https://github.com/isra17/confoo18-scrapy
