Web Scraping with Scrapy
In the previous post about Web Scraping with Python we talked a bit about Scrapy. In this post we are going to dig a little bit deeper into it.
Scrapy is a wonderful open source Python web scraping framework. It handles the most common use cases when doing web scraping at scale:
- Multithreading
- Crawling (going from link to link)
- Extracting the data
- Validating
- Saving to different formats / databases
- Many more
The main difference between Scrapy and other commonly used libraries like Requests / BeautifulSoup is that it is opinionated. It allows you to solve the usual web scraping problems in an elegant way.
The downside of Scrapy is that the learning curve is steep: there is a lot to learn, but that is what we are here for :)
In this tutorial we will create two different web scrapers, a simple one that will extract data from an E-commerce product page, and a more “complex” one that will scrape an entire E-commerce catalog!
Basic overview
You can install Scrapy using pip. Be careful though: the Scrapy documentation strongly suggests installing it in a dedicated virtual environment in order to avoid conflicts with your system packages.
I'm using Virtualenv and Virtualenvwrapper to create a dedicated environment and install Scrapy inside it.
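A minimal sketch of the setup, assuming an environment named scrapy_env (the name is illustrative):

```
pip install --user virtualenv virtualenvwrapper

# after sourcing virtualenvwrapper.sh, create and activate
# a dedicated environment, then install Scrapy inside it
mkvirtualenv scrapy_env
pip install scrapy
```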
You can now create a new Scrapy project with this command:
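The project name below is an assumption, reused throughout the rest of this tutorial:

```
scrapy startproject product_scraper
```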
This will create all the necessary boilerplate files for the project.
Here is a brief overview of these files and folders:
- items.py is a model for the extracted data. You can define a custom model (like a Product) that will inherit from the Scrapy Item class.
- middlewares.py Middleware used to change the request / response lifecycle. For example, you could create a middleware to rotate user-agents, or to use an API like ScrapingBee instead of doing the requests yourself.
- pipelines.py In Scrapy, pipelines are used to process the extracted data, clean the HTML, validate the data, and export it to a custom format or save it to a database.
- /spiders is a folder containing Spider classes. With Scrapy, Spiders are classes that define how a website should be scraped, including which links to follow and how to extract the data from those links.
- scrapy.cfg is a configuration file to change some settings
Scraping a single product
In this example we are going to scrape a single product from a dummy E-commerce website. Here is the first product we are going to scrape:
https://clever-lichterman-044f16.netlify.com/products/taba-cream.1/
We are going to extract the product name, picture, price and description.
Scrapy Shell
Scrapy comes with a built-in shell that helps you try and debug your scraping code in real time. You can quickly test your XPath expressions / CSS selectors with it. It's a very cool tool to write your web scrapers and I always use it!
You can configure Scrapy Shell to use another console, such as IPython, instead of the default Python console. You will get autocompletion and other nice perks like colorized output.
In order to use it in your Scrapy shell, you need to add this line to your scrapy.cfg file:
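The line goes under the [settings] section:

```
[settings]
shell = ipython
```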
Once it's configured, you can start using scrapy shell:
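```
scrapy shell
```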
We can start fetching a URL by simply:
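fetch() is a built-in shell helper; here we pass it the product URL from above:

```python
fetch('https://clever-lichterman-044f16.netlify.com/products/taba-cream.1/')
```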
This will start by fetching the /robots.txt file. In this case there isn't any robots.txt, which is why we see a 404 HTTP code. If there were a robots.txt, Scrapy would follow its rules by default.
You can disable this behavior by changing this setting in settings.py:
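```python
# settings.py
ROBOTSTXT_OBEY = False
```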
You should then see the request logged in your console.
You can now see your response object and response headers, and try different XPath expressions / CSS selectors to extract the data you want.
You can see the response directly in your browser with:
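view() is another built-in shell helper that opens the response in your default browser:

```python
view(response)
```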
Note that the page will render badly inside your browser, for lots of different reasons: CORS issues, JavaScript code that didn't execute, or relative URLs for assets that won't work locally.
The Scrapy shell is like a regular Python shell, so don't hesitate to load your favorite scripts/functions in it.
Extracting Data
Scrapy doesn't execute any JavaScript by default, so if the website you are trying to scrape uses a frontend framework like Angular / React.js, you could have trouble accessing the data you want.
Now let's try some XPath expressions to extract the product title and price:
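For example, grabbing the page title in the shell (//title is one of the two expressions we use later in this post for the product title):

```python
response.xpath('//title/text()').get()
```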
In order to extract the price, we are going to use an XPath expression: we select the first span after the div with the class my-4.
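A sketch of that expression in the shell (it assumes my-4 is the div's full class attribute):

```python
response.xpath("//div[@class='my-4']/span/text()").get()
```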
I could also use a CSS selector:
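A rough CSS equivalent of the same selection:

```python
response.css('.my-4 span::text').get()
```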
Creating a Scrapy Spider
With Scrapy, Spiders are classes where you define your crawling (what links / URLs need to be scraped) and scraping (what to extract) behavior.
Here are the different steps used by a spider to scrape a website:
- It starts by looking at the class attribute start_urls, and calls these URLs with the start_requests() method. You could override this method if you need to change the HTTP verb or add some parameters to the request (for example, sending a POST request instead of a GET).
- It will then generate a Request object for each URL, and send the response to the callback function parse().
- The parse() method will then extract the data (in our case, the product price, image, description, title) and return either a dictionary, an Item object, a Request or an iterable.
You may wonder why the parse method can return so many different objects. It's for flexibility. Let's say you want to scrape an E-commerce website that doesn't have any sitemap. You could start by scraping the product categories, so this would be a first parse method.
This method would then yield a Request object for each product category, pointing to a new callback method parse2(). For each category you would need to handle pagination. Then, for each product, a third parse function would perform the actual scraping and generate an Item.
With Scrapy you can return the scraped data as a simple Python dictionary, but it is a good idea to use the built-in Scrapy Item class. It's a simple container for our scraped data, and Scrapy will look at this item's fields for many things, like exporting the data to different formats (JSON / CSV…), the item pipeline, etc.
So here is a basic Product class:
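A minimal sketch, with one Field per piece of data we want to extract (the field names are assumptions):

```python
import scrapy

class Product(scrapy.Item):
    title = scrapy.Field()
    price = scrapy.Field()
    img_url = scrapy.Field()
    description = scrapy.Field()
```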
Now we can generate a spider, either with the command line helper:
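A sketch, reusing the spider name and domain from the rest of this post:

```
scrapy genspider ecom_spider clever-lichterman-044f16.netlify.com
```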
Or you can do it manually and put your Spider's code inside the /spiders directory.
There are different types of Spiders in Scrapy to solve the most common web scraping use cases:
- Spider, which we will use here. It takes a start_urls list and scrapes each one with a parse method.
- CrawlSpider, which follows links defined by a set of rules.
- SitemapSpider, which extracts URLs defined in a sitemap.
- Many more
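Here is a sketch of the spider the next paragraphs describe, reusing the XPath expressions from the shell session (the image selector and the items import path are assumptions):

```python
import scrapy
from product_scraper.items import Product

class EcomSpider(scrapy.Spider):
    name = 'ecom_spider'
    allowed_domains = ['clever-lichterman-044f16.netlify.com']
    start_urls = ['https://clever-lichterman-044f16.netlify.com/products/taba-cream.1/']

    def parse(self, response):
        item = Product()
        item['title'] = response.xpath('//section[1]//h2/text()').get()
        item['price'] = response.xpath("//div[@class='my-4']/span/text()").get()
        # the image selector is an assumption about the page markup
        item['img_url'] = response.xpath('//section[1]//img/@src').get()
        return item
```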
In this EcomSpider class, there are two required attributes:
- name, which is our Spider's name (you can run it using scrapy runspider spider_name)
- start_urls, which defines the starting URLs

The allowed_domains attribute is optional, but important when you use a CrawlSpider that could follow links on different domains.
Then I've just populated the Product fields by using XPath expressions to extract the data I wanted, as we saw earlier, and we return the item.
You can run this code as follows to export the result into JSON (you could also export to CSV):
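The spider file and output names below are assumptions:

```
scrapy runspider ecom_spider.py -o product.json
```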
You should then get a nice JSON file containing the scraped product.
Item loaders
There are two common problems that you can face while extracting data from the Web:
- For the same website, the page layout and underlying HTML can be different. If you scrape an E-commerce website, you will often have a regular price and a discounted price, with different XPath / CSS selectors.
- The data can be dirty and need some kind of post-processing; again, for an E-commerce website, it could be the way the prices are displayed, for example ($1.00, $1, $1,00).
Scrapy comes with a built-in solution for this: ItemLoaders. It's an interesting way to populate our Product object.
You can add several XPath expressions to the same Item field, and it will test them sequentially. By default, if several XPaths match, it will load all of them into a list.
You can find many examples of input and output processors in the Scrapy documentation.
It's really useful when you need to transform/clean the data you extract. For example: extracting the currency from a price, or converting one unit into another (centimeters into meters, degrees Celsius into Fahrenheit)…
In our webpage we can find the product title with two different XPath expressions: //title and //section[1]//h2/text()
Here is how you could use an ItemLoader in this case:
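A sketch of the parse method using an ItemLoader, assuming the Product item from earlier:

```python
from scrapy.loader import ItemLoader

# inside the spider class
def parse(self, response):
    l = ItemLoader(item=Product(), response=response)
    # both expressions are added; by default, every match ends up in a list
    l.add_xpath('title', '//title/text()')
    l.add_xpath('title', '//section[1]//h2/text()')
    return l.load_item()
```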
Generally you only want the first matching XPath, so you will need to add an output_processor=TakeFirst() to your item's field constructor.
In our case we only want the first matching XPath for each field, so a better approach would be to create our own ItemLoader and declare a default output_processor to take the first matching XPath:
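A sketch of such a loader; depending on your Scrapy version, the processors live in scrapy.loader.processors or in the itemloaders package:

```python
from scrapy.loader import ItemLoader
from itemloaders.processors import TakeFirst, MapCompose

def remove_dollar_sign(value):
    # hypothetical helper: strip the dollar sign from the raw price text
    return value.replace('$', '')

class ProductLoader(ItemLoader):
    # only keep the first matching XPath for every field
    default_output_processor = TakeFirst()
    # input processor for the price field (see the _in convention below)
    price_in = MapCompose(remove_dollar_sign)
```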
I also added a price_in, which is an input processor to delete the dollar sign from the price. I'm using MapCompose, which is a built-in processor that takes one or several functions to be executed sequentially. You can add as many functions as you like. The convention is to add _in or _out to your Item field's name to add an input or output processor to it.
There are many more processors; you can learn more about them in the documentation.
Scraping multiple pages
Now that we know how to scrape a single page, it's time to learn how to scrape multiple pages, like the entire product catalog. As we saw earlier, there are different kinds of Spiders.
When you want to scrape an entire product catalog, the first thing you should look at is a sitemap. Sitemaps are built exactly for this: to show web crawlers how the website is structured.
Most of the time you can find one at base_url/sitemap.xml. Parsing a sitemap can be tricky, and again, Scrapy is here to help you with this.
In our case, you can find the sitemap here: https://clever-lichterman-044f16.netlify.com/sitemap.xml
If we look inside the sitemap, there are many URLs that we are not interested in, like the home page and blog posts.
Fortunately, we can filter the URLs to parse only those that match some pattern; it's really easy. Here we only want URLs that have /products/ in them:
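A sketch of such a spider using Scrapy's SitemapSpider (the class and callback names are assumptions):

```python
from scrapy.spiders import SitemapSpider

class ProductSitemapSpider(SitemapSpider):
    name = 'sitemap_spider'
    sitemap_urls = ['https://clever-lichterman-044f16.netlify.com/sitemap.xml']
    # only URLs matching /products/ are sent to the callback
    sitemap_rules = [('/products/', 'parse_product')]

    def parse_product(self, response):
        # extract the product fields exactly as in EcomSpider.parse()
        ...
```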
You can run this spider as follows to scrape all the products and export the result to a CSV file: scrapy runspider sitemap_spider.py -o output.csv
Now what if the website didn't have any sitemap? Once again, Scrapy has a solution for this!
Let me introduce you to the… CrawlSpider.
The CrawlSpider will crawl the target website by starting from a start_urls list. Then, for each URL, it will extract all the links based on a list of Rule objects. In our case it's easy: products have the same URL pattern /products/product_title, so we only need to filter these URLs.
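A sketch of a CrawlSpider for this catalog (the class and callback names are assumptions):

```python
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class EcomCrawlSpider(CrawlSpider):
    name = 'ecom_crawl_spider'
    allowed_domains = ['clever-lichterman-044f16.netlify.com']
    start_urls = ['https://clever-lichterman-044f16.netlify.com']
    rules = [
        # follow every link matching /products/ and send it to the callback
        Rule(LinkExtractor(allow='/products/'), callback='parse_product'),
    ]

    def parse_product(self, response):
        # same extraction logic as before
        ...
```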
As you can see, all these built-in Spiders are really easy to use. It would have been much more complex to do it from scratch.
With Scrapy you don't have to think about the crawling logic, like adding new URLs to a queue, keeping track of already parsed URLs, multi-threading…
Conclusion
In this post we saw a general overview of how to scrape the web with Scrapy and how it can solve your most common web scraping challenges. Of course we only touched the surface and there are many more interesting things to explore, like middlewares, exporters, extensions, pipelines!
If you've been doing web scraping more “manually” with tools like BeautifulSoup / Requests, it's easy to understand how Scrapy can help save time and build more maintainable scrapers.
I hope you liked this Scrapy tutorial and that it will motivate you to experiment with it.
For further reading don't hesitate to look at the great Scrapy documentation.
We have also published our custom integration with Scrapy; it allows you to execute JavaScript with Scrapy, so do not hesitate to check it out.
You can also check out our web scraping with Python tutorial to learn more about web scraping.
Happy Scraping!
In a perfect world, every website provides free access to data with an easy-to-use API… but the world is far from perfect. However, it is possible to use web scraping techniques to manually extract data from websites by brute force. The following lesson examines two different types of web scrapers and implements them with NodeJS and Firebase Cloud Functions.
Frontend Integrations
This lesson is integrated with multiple frontend frameworks. Choose your favorite flavor 🍧.
Initial Setup
Let’s start by initializing Firebase Cloud Functions with JavaScript.
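Assuming you have the Firebase CLI installed, the setup might look like this (the package choices follow the libraries used below):

```
firebase init functions
cd functions
npm install cheerio node-fetch get-urls puppeteer
```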
Strategy A - Basic HTTP Request
The first strategy makes an HTTP request to a URL and expects an HTML document string as the response. Retrieving the HTML is easy, but there are no browser APIs in NodeJS, so we need a tool like cheerio to process DOM elements and find the necessary metatags.
The advantage 👍 of this approach is that it is fast and simple, but the disadvantage 👎 is that it will not execute JavaScript and/or wait for dynamically rendered content on the client.
Link Preview Function
💡 It is not possible to generate link previews entirely from the frontend due to cross-origin (CORS) restrictions in the browser.
An excellent use-case for this strategy is a link preview service that shows the name, description, and image of a 3rd-party website when a URL is posted into an app. For example, when you post a link into an app like Twitter, Facebook, or Slack, it renders a nice looking preview.
Link previews are made possible by scraping the meta tags from the <head> of an HTML page. The code requests a URL, then looks for Twitter and OpenGraph metatags in the response body. Several supporting libraries are used to make the code more reliable and simple.
- cheerio is a NodeJS implementation of jQuery.
- node-fetch is a NodeJS implementation of the browser Fetch API.
- get-urls is a utility for extracting URLs from text.
Let’s start by building a function that takes some text, extracts the URLs from it, and scrapes the metatags from each one.
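A minimal sketch of such a function; the name scrapeMetatags and the exact tag-lookup order are assumptions:

```js
const fetch = require('node-fetch');
const cheerio = require('cheerio');
const getUrls = require('get-urls');

const scrapeMetatags = (text) => {
  // pull every URL out of the raw text
  const urls = Array.from(getUrls(text));

  const requests = urls.map(async (url) => {
    const res = await fetch(url);
    const html = await res.text();
    const $ = cheerio.load(html);

    // prefer a plain metatag, then fall back to OpenGraph and Twitter variants
    const getMetatag = (name) =>
      $(`meta[name=${name}]`).attr('content') ||
      $(`meta[property="og:${name}"]`).attr('content') ||
      $(`meta[name="twitter:${name}"]`).attr('content');

    return {
      url,
      title: $('title').first().text(),
      description: getMetatag('description'),
      image: getMetatag('image'),
    };
  });

  return Promise.all(requests);
};
```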
HTTP Function
You can use the scraper in an HTTP Cloud Function.
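A sketch of the wiring; the exported name scraper matches the URL on the next line, and the request body shape is an assumption:

```js
const functions = require('firebase-functions');

exports.scraper = functions.https.onRequest(async (request, response) => {
  // expects a body like { text: 'check out https://example.com' }
  const data = await scrapeMetatags(request.body.text);
  response.send(data);
});
```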
At this point, you should receive a response by opening http://localhost:5000/YOUR-PROJECT/REGION/scraper
Strategy B - Puppeteer for Full Browser Rendering
What if you want to scrape a single page JavaScript app, like Angular or React? Or maybe you want to click buttons and/or log into an account before scraping? These tasks require a fully emulated browser environment that can parse JS and handle events.
Puppeteer is a tool built on top of headless Chrome, which allows you to run the Chrome browser on the server. In other words, you can fully interact with a website before extracting the data you need.
Instagram Scraper
Instagram on the web uses React, which means we won't see any dynamic content until the page is fully loaded. Puppeteer is available in the Cloud Functions runtime, allowing you to spin up a Chrome browser on your server. It will render JavaScript and handle events just like the browser you're using right now.
First, the function logs into a real Instagram account. The page.type method will find the corresponding DOM element and type characters into it. Once logged in, we navigate to a specific username and wait for the img tags to render on the screen, then scrape the src attribute from them.
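A minimal sketch, assuming credentials in environment variables and Instagram's login markup at the time of writing (the selectors change frequently):

```js
const puppeteer = require('puppeteer');

const scrapeImages = async (username) => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // log into a real Instagram account first
  await page.goto('https://www.instagram.com/accounts/login/');
  await page.type('input[name=username]', process.env.IG_USERNAME);
  await page.type('input[name=password]', process.env.IG_PASSWORD);
  await Promise.all([
    page.waitForNavigation(),
    page.click('button[type=submit]'),
  ]);

  // navigate to the profile and wait for the images to render
  await page.goto(`https://www.instagram.com/${username}/`);
  await page.waitForSelector('img');

  // scrape the src attribute from every img tag on the page
  const srcs = await page.evaluate(() =>
    Array.from(document.querySelectorAll('img')).map((img) => img.src)
  );

  await browser.close();
  return srcs;
};
```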