This project demonstrates an MLflow setup with two components:
- Deployment Folder: Contains a `Dockerfile` and `launch.sh` for deploying the MLflow server.
- Experimentation Folder: Contains a Jupyter Notebook to showcase experiments and MLflow tracking.
```
mlflow-demo/
├── deployment/
│   ├── Dockerfile
│   ├── launch.sh
├── experimentation/
│   ├── mlflow_experiment_notebook.ipynb
├── README.md
```
Navigate to the deployment/ folder and build the Docker image:
```bash
docker build . -t mlflowserverlocal
```

Run the container locally with the following command:

```bash
docker run \
  -p 5000:5000 \
  -v $(pwd)/:$(pwd)/ \
  mlflowserverlocal
```

- Port Mapping: Maps the container's internal port 5000 to port 5000 on your local machine.
- Volume Mapping: Mounts the current directory to the container, enabling file sharing.
Once the container is running, you can access the MLflow UI at:
http://localhost:5000
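As a quick sanity check, you can point the MLflow Python client at the local server and log a test run. This is a minimal sketch with placeholder names and values, not the contents of the notebook in `experimentation/`:

```python
import mlflow

# Point the client at the locally running MLflow server.
mlflow.set_tracking_uri("http://localhost:5000")

# The experiment name is a placeholder; it is created if it does not exist.
mlflow.set_experiment("demo-experiment")

# Log a parameter and a metric; the run should appear in the UI at localhost:5000.
with mlflow.start_run(run_name="smoke-test"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.93)
```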
To deploy this project on the cloud:
Tag your image with your registry URL and push it:
```bash
docker tag mlflowserverlocal <your-cloud-registry>/mlflowserver
docker push <your-cloud-registry>/mlflowserver
```
You will also need the following cloud resources:

- Cloud Storage: For saving and accessing MLflow artifacts and models (see the sketch after this list). Examples:
  - AWS S3
  - Google Cloud Storage
  - Azure Blob Storage
- Backend Store: Create a database for the backend store URI needed by MLflow, i.e. a SQLAlchemy-compatible URI such as `postgresql://<user>:<password>@<host>/<dbname>` (you can create the database on the same cloud provider used for the storage).
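The snippet below is a rough sketch of how the artifact storage comes into play from the client side, assuming the server is already deployed and the client has credentials for the bucket; the server URL, experiment name, and bucket path are placeholders. The backend store URI itself is supplied to the MLflow server process, not to client code.

```python
import mlflow

# Assumes the tracking server is already deployed and reachable; the URL is a placeholder.
mlflow.set_tracking_uri("http://<your-mlflow-server>:5000")

# Runs under this experiment write their artifacts to the cloud bucket below.
# The s3:// path is a placeholder; gs:// or wasbs:// paths work the same way.
experiment_id = mlflow.create_experiment(
    "cloud-demo-experiment",
    artifact_location="s3://<your-bucket>/mlflow-artifacts",
)

with mlflow.start_run(experiment_id=experiment_id):
    with open("model_summary.txt", "w") as f:
        f.write("placeholder artifact")
    # The file is uploaded to the artifact store configured above.
    mlflow.log_artifact("model_summary.txt")
```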
Modify the environment variables in the application to point to your cloud resources:
- `MLFLOW_TRACKING_URI`: URL of your deployed API or MLflow server.
- `ARTIFACT_STORE`: Cloud storage path for artifacts.
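A minimal sketch of how the application can pick these up, replacing the hard-coded URLs used in the local examples above (the variable names are the ones listed here; the fallback bucket path is a placeholder):

```python
import os
import mlflow

# MLFLOW_TRACKING_URI and ARTIFACT_STORE are the variable names listed above.
mlflow.set_tracking_uri(os.environ["MLFLOW_TRACKING_URI"])

# Used as the artifact_location when creating experiments (see the earlier sketch).
artifact_store = os.environ.get("ARTIFACT_STORE", "s3://<your-bucket>/mlflow-artifacts")
```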
Deploy the containerized app to a cloud platform like AWS ECS, GCP Cloud Run, or Azure Container Instances.
- If you want to know more about cloud deployment, check out my article published on Towards Data Science.