Use Case Specification: Web Crawler
The web crawler is an important component of our project. This use case specification describes the component's main task: crawling predefined web pages for current prices and saving them into a database.

At startup, the crawler process reads its configuration file and loads the products to be crawled from the database. If either is invalid or unavailable, the process terminates. If both are valid, the load balancer distributes the crawling tasks across all registered crawler instances. Each instance then iterates over its assigned list of products. For each product, it iterates over all vendors offering that product and fetches the current price. After all prices have been fetched, the price entries are sent to the database. Once all products have been crawled, the program finishes and can optionally report the status of the execution to an HTTP endpoint.
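The flow above can be sketched in Python as follows. This is a minimal illustration under stated assumptions, not the project's actual implementation: the names `load_config`, `fetch_price`, and `crawl` are hypothetical, the configuration key `status_endpoint` is invented for the example, and vendor lookups are stubbed in memory rather than issued as HTTP requests.

```python
import json
from dataclasses import dataclass


@dataclass
class PriceEntry:
    """One crawled price observation for a (product, vendor) pair."""
    product: str
    vendor: str
    price: float


def load_config(raw: str) -> dict:
    """Parse the crawler configuration; an invalid config makes startup fail,
    mirroring the spec's requirement that the process terminate."""
    config = json.loads(raw)
    if "status_endpoint" not in config:  # hypothetical required key
        raise ValueError("invalid configuration: missing status_endpoint")
    return config


def fetch_price(product: str, vendor: str) -> float:
    """Stub: a real crawler instance would request the vendor's product page here."""
    stub_prices = {("laptop", "vendorA"): 999.0, ("laptop", "vendorB"): 949.0}
    return stub_prices[(product, vendor)]


def crawl(assigned_products: dict) -> list:
    """Iterate over the assigned products, then over each product's vendors,
    collecting one price entry per (product, vendor) pair."""
    entries = []
    for product, vendors in assigned_products.items():
        for vendor in vendors:
            entries.append(PriceEntry(product, vendor, fetch_price(product, vendor)))
    # In the real system these entries are written to the database, and the
    # execution status may then be posted to the configured HTTP endpoint.
    return entries
```

The load-balancing step is omitted here; in the sketch, `assigned_products` stands for the subset of products one registered instance receives.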