Automating Data Collection from Web Sources
Introduction

A web scraper is a tool used to extract data from websites. It works by sending an HTTP request to a website's server, retrieving the HTML content of the webpage, and then parsing that content to extract the data of interest.

The requests library is a popular Python library used for making HTTP requests. It allows you to send HTTP requests using Python and receive the server's response.

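For example, a minimal request might look like the sketch below (the URL is just a placeholder for illustration, not part of the course project):

```python
import requests

# Send a GET request to a placeholder URL
response = requests.get("https://example.com")

# The status code tells us whether the request succeeded (200 means OK)
print(response.status_code)

# The HTML content of the page is available as text
print(response.text[:200])
```
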
Beautiful Soup is a Python library for parsing HTML and XML documents. It can be used to extract data from HTML content by searching for specific tags, attributes, or patterns in the HTML code.

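As a quick illustration, here is a small sketch that parses a hard-coded HTML snippet (the snippet itself is made up for this example):

```python
from bs4 import BeautifulSoup

# A small HTML snippet used only for illustration
html = "<html><body><h1>Hello</h1><p class='intro'>Welcome to scraping.</p></body></html>"

# Parse the HTML with Python's built-in parser
soup = BeautifulSoup(html, "html.parser")

# Extract data by tag name or by attributes such as class
print(soup.h1.text)                         # Hello
print(soup.find("p", class_="intro").text)  # Welcome to scraping.
```
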
Together, the requests library and Beautiful Soup can be used to build a web scraper in Python. The requests library is used to send the HTTP request and retrieve the HTML content, while Beautiful Soup is used to parse the HTML and extract the data of interest.

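Putting the two together, a minimal scraper sketch might look like this (again, the URL is only a placeholder; the actual target site is introduced later in the project):

```python
import requests
from bs4 import BeautifulSoup

# Step 1: retrieve the HTML content of a page (placeholder URL)
response = requests.get("https://example.com")

# Step 2: parse the HTML so it can be searched
soup = BeautifulSoup(response.text, "html.parser")

# Step 3: extract the data of interest, e.g. the page title and all links
print(soup.title.text)
for link in soup.find_all("a"):
    print(link.get("href"))
```
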
With that being said, let's start with our project!
