Python is a high-level programming language used for web development, mobile application development, and web scraping.
Python is widely considered one of the best programming languages for web scraping because it handles the crawling process smoothly. When you combine the capabilities of Python with the protection of a web proxy, you can perform all your scraping activities without the fear of an IP ban.
Scrapy is a powerful Python web scraping and web crawling framework. It provides many features to download web pages asynchronously, process them, and save them, and it handles multithreading, crawling (the process of going from link to link to find every URL in a website), sitemap crawling, and more. The web scraping process can be broadly categorized into three steps: understand and inspect the web page to find the HTML markers associated with the information you want, extract that information with a parser such as Beautiful Soup, and store the results.
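To give a feel for the framework style, here is a minimal Scrapy spider sketch. The target site and its CSS selectors come from the public quotes.toscrape.com scraping sandbox, not from the original article:
***start of code***
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # Yield one record per quote block on the page
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow the pagination link so Scrapy crawls every page
        yield from response.follow_all(response.css("li.next a"), callback=self.parse)
***end of code***
You can run a standalone spider like this with scrapy runspider spider.py -o quotes.json, and Scrapy takes care of scheduling and downloading the requests asynchronously.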
In this article, you will understand how proxies are used for web scraping with Python. But first, let’s understand the basics.
What is web scraping?
Web scraping is the process of extracting data from websites. Generally, it is done either by sending HyperText Transfer Protocol (HTTP) requests directly or with the help of a web browser.
Web scraping works by first crawling the URLs and then downloading the page data one by one. The extracted data can then be stored in a spreadsheet. You save tons of time when you automate the process of copying and pasting data, and you can extract data from thousands of URLs to stay ahead of your competitors.
Example of web scraping
An example of web scraping would be downloading a list of all pet owners in California. You could scrape a web directory that lists the names and email addresses of people in California who own a pet. Web scraping software can do this task for you: it crawls all the required URLs, extracts the required data, and stores it in a spreadsheet. A sketch of this workflow appears below.
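Here is a minimal sketch of that workflow using Requests and Beautiful Soup (both introduced later in this article). The directory URL, the markup (a .listing block with .name and .email children), and the output filename are all hypothetical:
***start of code***
import csv
import requests
from bs4 import BeautifulSoup

# Hypothetical directory page; replace with a real URL
URL = "http://example.com/california-pet-owners"

response = requests.get(URL)
soup = BeautifulSoup(response.text, "lxml")

with open("pet_owners.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "email"])
    # Assumed markup: one .listing block per person
    for listing in soup.select(".listing"):
        name = listing.select_one(".name").text.strip()
        email = listing.select_one(".email").text.strip()
        writer.writerow([name, email])
***end of code***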
Why use a proxy for web scraping?
- A proxy lets you bypass content geo-restrictions because you can choose an exit location of your choice.
- You can place a high number of connection requests without getting banned.
- It can improve the speed at which you request and copy data, since issues related to your ISP throttling your connection are reduced.
- Your crawling program can run smoothly and download the data without the risk of getting blocked.
Now that you understand the basics of web scraping and proxies, let’s learn how you can perform web scraping through a proxy with the Python programming language.
Configure a proxy for web scraping with Python
Scraping with Python starts by sending an HTTP request. HTTP is based on a client/server model: your Python program (the client) sends a request to the server for the contents of a page, and the server returns a response.
The basic method of sending an HTTP request is to open a socket and send the request manually:
***start of code***
import socket

HOST = 'www.mysite.com'  # Server hostname or IP address
PORT = 80  # Standard HTTP port

client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = (HOST, PORT)
client_socket.connect(server_address)

# Note the \r\n line endings required by the HTTP protocol
request_header = b'GET / HTTP/1.0\r\nHost: www.mysite.com\r\n\r\n'
client_socket.sendall(request_header)

response = b''
while True:
    recv = client_socket.recv(1024)
    if not recv:
        break
    response += recv

print(response.decode())
client_socket.close()
***end of code***
You can also send HTTP requests in Python using the built-in urllib module (or urllib2 in Python 2). However, these modules aren’t particularly easy to use.
Hence, there is a third option called Requests, which is a simple HTTP library for Python.
You can easily configure proxies with Requests.
Here is the code to enable the use of a proxy in Requests:
***start of code***
import requests

proxies = {
    "http": "http://10.XX.XX.10:8000",
    "https": "http://10.XX.XX.10:8000",
}
r = requests.get("http://toscrape.com", proxies=proxies)
***end of code***
In the proxies dictionary, you specify the proxy address and port for each scheme.
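If your proxy requires authentication, you can embed the credentials in the proxy URL; the username and password here are placeholders:
***start of code***
import requests

proxies = {
    "http": "http://user:password@10.XX.XX.10:8000",
    "https": "http://user:password@10.XX.XX.10:8000",
}
r = requests.get("http://toscrape.com", proxies=proxies)
***end of code***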
If you wish to use sessions and a proxy at the same time, use the code below:
***start of code***
import requests

s = requests.Session()
s.proxies = {
    "http": "http://10.XX.XX.10:8000",
    "https": "http://10.XX.XX.10:8000",
}
r = s.get("http://toscrape.com")
***end of code***
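To reduce the risk of bans when placing many requests, a common pattern (not covered in the original text) is to rotate across a pool of proxies. Here is a minimal sketch; the pool addresses are placeholders like the ones above:
***start of code***
import random
import requests

# Placeholder pool of proxy endpoints
PROXY_POOL = [
    "http://10.XX.XX.10:8000",
    "http://10.XX.XX.11:8000",
    "http://10.XX.XX.12:8000",
]

def fetch(url):
    # Pick a different proxy for each request
    proxy = random.choice(PROXY_POOL)
    return requests.get(url, proxies={"http": proxy, "https": proxy})

r = fetch("http://toscrape.com")
***end of code***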
However, scraping with plain Requests can be slow because it is synchronous: each request is sent only after the previous one completes. If you have to scrape 100 URLs, you end up sending 100 requests one after the other.
To solve this problem and speed up the process, there is another package called grequests that lets you send multiple requests at the same time. Grequests is an asynchronous wrapper around Requests built on top of gevent.
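Grequests isn’t part of the standard library; you can install it from PyPI:
***start of code***
$ pip install grequests
***end of code***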
Here is code that shows how grequests works. We keep all the URLs to scrape in a list. Suppose we have to scrape 100 URLs: we keep all of them in the list and set the batch length to 10, so the requests go out in 10 concurrent batches of 10 instead of 100 sequential requests.
***start of code***
import grequests

BATCH_LENGTH = 10
# A list holding the 100 URLs to scrape
urls = [...]
# Responses will be collected in this list
results = []

while urls:
    # Take the next batch of 10 URLs
    batch = urls[:BATCH_LENGTH]
    # Create a set of unsent requests
    rs = (grequests.get(url) for url in batch)
    # Send the whole batch at the same time
    batch_results = grequests.map(rs)
    # Append this batch's responses to the main results list
    results += batch_results
    # Remove the fetched URLs from the list
    urls = urls[BATCH_LENGTH:]

print(results)
# [<Response [200]>, <Response [200]>, ..., <Response [200]>]
***end of code***
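Because grequests wraps Requests, you can pass the same proxies argument shown earlier to each request in a batch. A short sketch, using the same placeholder proxy address as above:
***start of code***
import grequests

proxies = {
    "http": "http://10.XX.XX.10:8000",
    "https": "http://10.XX.XX.10:8000",
}
urls = ["http://toscrape.com", "http://books.toscrape.com"]

# Route every request in the batch through the proxy
rs = (grequests.get(url, proxies=proxies) for url in urls)
results = grequests.map(rs)
***end of code***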
Final Thoughts
Web scraping is a necessity for several businesses, especially eCommerce companies. Real-time data needs to be captured from a variety of sources to make better business decisions at the right time. Python offers different frameworks and libraries that make web scraping easy, so you can extract data quickly and efficiently. Moreover, it is crucial to use a proxy to hide your machine’s IP address and avoid blacklisting. Python, along with a secure proxy, is a solid base for successful web scraping.
Sometimes we need to extract information from websites. We can often do this through a site’s available APIs, but there are websites where no API is available.
Here, web scraping comes into play!
Python is widely used for web scraping because of the ease with which the core logic can be written. Whether you are a data scientist, developer, engineer, or someone who works with large amounts of data, web scraping with Python is of great help.
Without a direct way to download the data, you are left with web scraping, and Python can extract massive quantities of data without hassle and in a short period of time.
In this tutorial, we shall look into scraping using some very powerful Python-based libraries: BeautifulSoup and Selenium.
BeautifulSoup and urllib
BeautifulSoup is a Python library for pulling data out of HTML and XML files. But it does not fetch web pages itself, so we will use the urllib library to download the page.
First, we need to install the BeautifulSoup4 package (and the lxml parser) using the following commands:
$ sudo pip install beautifulsoup4
$ pip install lxml
OR
$ sudo apt-get install python3-bs4
$ sudo apt-get install python-lxml
Here I am going to extract the case studies page of the website https://www.botreetechnologies.com.
from urllib.request import urlopen
from bs4 import BeautifulSoup
We import the packages that we are going to use in our program. Now we fetch our web page using the following:
response = urlopen('https://www.botreetechnologies.com/case-studies')
Beautiful Soup cannot work directly on the raw response we just fetched; we need to parse it as HTML/XML data.
data = BeautifulSoup(response.read(),'lxml')
Here we parsed our webpage’s HTML content using the lxml parser.
As you can see, there are many case studies available on this page, and I want to read all of them.
Each case study has a title at the top and some details related to that case below it; I want to extract all of that information.
We can locate an element by tag, class, id, XPath, etc.
You can find the class of an element by right-clicking on it and selecting Inspect Element.
case_studies = data.find('div', { 'class' : 'content-section' })
If there are multiple elements of this class on the page, find() returns only the first one. If you want all the elements having this class, use the findAll() method instead:
case_studies = data.findAll('div', { 'class' : 'content-section' })
Now we have the div with class ‘content-section’ and its child elements. For each case study (case_stud), we will use the <h2> tag to get the ‘TITLE’ and the <ul> tag to get its children, the <li> elements:
case_stud.find('h2').find('a').text
case_stud_details = case_stud.find('ul').findAll('li')
Now we have the list of all the <li> children of the <ul> element.
To get the first element from the list, simply write:
case_stud_details[0]
We can also extract any attribute of an element; for example, we can get the text of an element using:
case_stud_details[2].text
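Likewise, a tag’s attributes are available as a dictionary through attrs, or individually through get(). A small sketch, assuming the case_stud element from above:
link = case_stud.find('h2').find('a')
print(link.attrs)        # all attributes as a dict, e.g. {'href': '...'}
print(link.get('href'))  # a single attribute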
But here I want to click on the ‘TITLE’ of each case study and open its details page to get all the information.
Since we want to interact with the website to get the dynamic content, we need to imitate normal user interaction. Such behaviour cannot be achieved using BeautifulSoup or urllib, hence we need a webdriver.
A webdriver basically creates a new browser window which we can control programmatically. It also lets us trigger user events like clicks and scrolls.
Selenium is one such webdriver.
Selenium Webdriver
Selenium WebDriver accepts commands, sends them to a browser, and retrieves the results.
You can install Selenium on your system using the following simple command:
$ sudo pip install selenium
To use it, we need to import Selenium in our Python script:
from selenium import webdriver
I am using the Firefox webdriver in this tutorial. Now we are ready to fetch our web page, which we can do using the following:
self.url = 'https://www.botreetechnologies.com/'
self.browser = webdriver.Firefox()
self.browser.get(self.url)  # open the page in the new browser window
Now we need to click on ‘CASE-STUDIES’ to open that page.
We can click on a Selenium element using the following piece of code:
self.browser.find_element_by_xpath("//div[contains(@id,'navbar')]/ul[2]/li[1]").click()
Now we are taken to the case studies page, where all the case studies are listed with some information.
Here, I want to click on each case study and open its details page to extract all the available information.
So I created a list of links for all the case studies and loaded them one after the other.
To go back to the previous page, you can use the following piece of code:
self.browser.execute_script('window.history.go(-1)')
A final script tying these pieces together would look something like the sketch below.
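The original post’s full script is not reproduced here, so the following is a minimal reconstruction assembled from the snippets above; the class structure, the navbar XPath, and the 'content-section' markup are assumptions carried over from those fragments:

from urllib.parse import urljoin
from bs4 import BeautifulSoup
from selenium import webdriver

class CaseStudyScraper:
    def __init__(self):
        self.url = 'https://www.botreetechnologies.com/'
        self.browser = webdriver.Firefox()

    def scrape(self):
        self.browser.get(self.url)
        # Open the CASE-STUDIES page from the navbar
        self.browser.find_element_by_xpath("//div[contains(@id,'navbar')]/ul[2]/li[1]").click()
        # Parse the listing page and collect each case study's title link
        data = BeautifulSoup(self.browser.page_source, 'lxml')
        case_studies = data.findAll('div', {'class': 'content-section'})
        links = [cs.find('h2').find('a').get('href') for cs in case_studies]
        # Visit each details page one after the other
        for href in links:
            self.browser.get(urljoin(self.url, href))
            details = BeautifulSoup(self.browser.page_source, 'lxml')
            print(details.title.text)  # extract whatever details you need here
        self.browser.quit()

CaseStudyScraper().scrape()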
And we are done. Now you can extract static web pages or interact with dynamic ones using a script like the one above.
Conclusion: Web Scraping with Python is an Essential Skill
Today, more than ever, companies are working with huge amounts of data, and learning how to scrape it in Python will take you a long way. In this tutorial, you learned web scraping with Beautiful Soup.
Along with that, web scraping with Selenium is also a useful skill. Companies need data engineers who can extract data and deliver it for gathering useful insights, and working on Python web scraping projects gives you a high chance of success in data extraction.
If you want to hire Python developers for web scraping, then contact BoTree Technologies. We have a team of engineers who are experts in web scraping. Give us a call today.