This project is a web scraping application built using Selenium. It automates the process of searching for items on a specified website, scrolling through lazy-loaded content, and extracting relevant data such as card name, price, category, rarity, and ID number.
selenium-scraper-app
├── src
│   ├── main.py            # Entry point of the application
│   ├── scraper            # Module for scraping logic
│   │   ├── __init__.py    # Initialization file for the scraper module
│   │   └── scraper.py     # Contains the Scraper class with scraping methods
│   └── utils              # Module for utility functions
│       ├── __init__.py    # Initialization file for the utils module
│       └── helpers.py     # Contains helper functions for the scraper
├── requirements.txt       # Lists project dependencies
├── .env                   # Contains environment variables
└── README.md              # Documentation for the project
- Clone the repository:

  git clone https://github.com/yourusername/selenium-scraper-app.git
  cd selenium-scraper-app

- Install the required dependencies:

  pip install -r requirements.txt
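The dependency list in requirements.txt might look like the following sketch. Only selenium is implied by the project description; python-dotenv is an assumption, included here as a common way to load the `.env` file:

```text
selenium
python-dotenv
```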
- Configure environment variables:
  - Update the `.env` file with any required environment variables.
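If you would rather avoid an extra dependency for this step, simple KEY=VALUE `.env` files can be loaded with the standard library alone. The sketch below is illustrative and not part of the project; for quoting, interpolation, or multi-line values, a library such as python-dotenv is the better choice:

```python
import os

def load_env(path=".env"):
    """Load simple KEY=VALUE pairs from a .env file into os.environ.

    A minimal sketch: skips blank lines and # comments, and keeps any
    value already set in the real environment (setdefault).
    """
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Ignore blanks, comments, and lines without a KEY=VALUE shape.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Calling `load_env()` once at the top of `src/main.py` before the scraper starts would make the variables available via `os.environ`.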
To run the application, execute the following command:

python src/main.py

- Automated Searching: Searches for items on a specified website.
- Lazy Loading Support: Scrolls through lazy-loaded content to ensure all items are loaded.
- Data Extraction: Extracts detailed information about each item, including:
  - Card Name
  - Price
  - Category
  - Rarity
  - ID Number
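The lazy-loading support described above boils down to scrolling to the bottom of the page until the document height stops growing. A minimal sketch of that loop, assuming a Selenium WebDriver instance (the function name and parameters are illustrative, not the project's actual API):

```python
import time

def scroll_until_loaded(driver, pause=1.0, max_rounds=30):
    """Scroll to the page bottom until its height stops changing.

    `driver` is any object exposing Selenium's execute_script method;
    `pause` gives lazy-loaded content time to render between scrolls;
    `max_rounds` caps the loop so a page that grows forever cannot hang us.
    """
    last_height = driver.execute_script("return document.body.scrollHeight")
    for _ in range(max_rounds):
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            break  # no new content appeared; everything is loaded
        last_height = new_height
```

With a real browser session this would be called as `scroll_until_loaded(driver)` right after navigating to the search results page, before extracting the card data.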
This project is licensed under the MIT License.
For questions or support, please contact kisstudios98@gmail.com.