ML Model Lifecycle using Apolo Console
This comprehensive guide will walk you through creating a complete machine learning workflow using Apolo's platform. You'll learn how to:
Set up a Jupyter Lab environment for model development
Train a simple classification model with scikit-learn
Track experiments and models with MLflow
Convert models to ONNX format for deployment
Deploy your model as an inference service using NVIDIA Triton
Prerequisites
Basic familiarity with Python and machine learning concepts
1. Accessing the Apolo Console
Log in to the Apolo Console
Verify your project selection in the top-right corner dropdown
2. Setting Up Your Jupyter Lab Environment
Navigate to the Apps section in the left sidebar and make sure the All Apps tab is selected to view available applications
Locate and click on the Jupyter Lab card
Click the Install button

Configure your Jupyter Lab instance:
Under Resources, select a preset (we'll use `cpu-small` for this tutorial)
Under Metadata, name your instance (e.g., `jupyter-lab-demo`)
Click Install App

Wait for the status to change from Pending to Succeeded

Learn more about launching Jupyter in Apolo on our Jupyter Notebook page.
3. Setting Up MLflow for Experiment Tracking
Return to the Apolo Console
Navigate to Apps > All Apps
Find and install the MLflow application:
Select a resource preset (e.g., `cpu-small`)
Name your instance (e.g., `mlflow-demo`)
Click Install App



Wait for the MLflow instance to reach the Succeeded state
4. Setting Up Your Development Environment
Return to Apps > Installed Apps and find your Jupyter Lab instance
Click the Open button to launch Jupyter Lab in a new tab

Open a terminal by clicking Terminal under the "Other" section in the launcher

Navigate to the persistent storage location:
cd /var/storage
Clone the example repository:
git clone https://github.com/neuro-inc/model-lifecycle-example
5. Training Your Machine Learning Model
Navigate to the cloned repository through the file browser:
Open the `model-lifecycle-example` directory
Open the `notebooks` directory
Open `training-demo.ipynb`

Run the notebook cells sequentially (using Shift+Enter or the Run button)
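For reference, the notebook's training step can be sketched roughly as below. This is an illustrative reconstruction, not the notebook's exact code: the registered model name `onnx_iris_perceptron` suggests a scikit-learn Perceptron trained on the Iris dataset, with metrics logged to your MLflow instance.

```python
# Minimal sketch of the assumed training flow in training-demo.ipynb:
# a simple Perceptron classifier on the Iris dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = Perceptron(max_iter=1000, random_state=42)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"test accuracy: {acc:.2f}")

# In the actual notebook, the run would also be tracked in MLflow and the
# model exported to ONNX, along the lines of (placeholders, not real URLs):
#   mlflow.set_tracking_uri("<your mlflow-demo app URL>")
#   with mlflow.start_run():
#       mlflow.log_metric("accuracy", acc)
#       mlflow.onnx.log_model(onnx_model, "model",
#                             registered_model_name="onnx_iris_perceptron")
```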
6. Reviewing Your Model in MLflow
Return to the Apolo Console
Navigate to Apps > Installed Apps
Find your MLflow instance and click Open
Explore the experiment run:
Click on the most recent run
Review the logged parameters, metrics, and artifacts

Promote your ONNX model to production:
Click on the Models tab in the MLflow UI
Select the `onnx_iris_perceptron` model
Click on the latest version
Important: Ensure the New model registry UI toggle is turned off

Change the Stage from "None" to "Production"
Confirm the stage transition
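If you prefer to script this instead of clicking through the UI, the same stage transition can be done with the MLflow client API. A minimal sketch, assuming your `mlflow-demo` app's tracking URL:

```python
# Programmatic alternative to the UI steps above: promote the latest
# registered version of the model to the "Production" stage.

def promote_latest_to_production(tracking_uri, model_name="onnx_iris_perceptron"):
    # Imported lazily so this sketch can be loaded without mlflow installed.
    from mlflow.tracking import MlflowClient

    client = MlflowClient(tracking_uri=tracking_uri)
    # Pick the newest version still in the "None" stage.
    latest = client.get_latest_versions(model_name, stages=["None"])[0]
    client.transition_model_version_stage(
        name=model_name, version=latest.version, stage="Production"
    )
    return latest.version

# Usage (replace with your MLflow app's URL from the Apolo Console):
# promote_latest_to_production("https://<your-mlflow-demo-url>")
```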

7. Deploying Your Model with Apolo Deploy
Return to the Apolo Console
Navigate to Apps > All Apps
Find and install Apolo Deploy:
Select a resource preset (e.g., `cpu-small`)
Under Integrations, select your MLflow instance
Name your deployment (e.g., `apolo-deploy-demo`)
Click Install App

Wait for Apolo Deploy to reach the Running state
Open Apolo Deploy and configure your model deployment:
Locate the `onnx_iris_perceptron` model in Production stage
Click the dropdown in the Deployment column
Configure the deployment:
Set Server type to `Triton`
Set Create new server instance to `True`
Set an optional server name (default: `triton`)
Select a resource preset
Set Force Platform Auth to `False` (for demo purposes only)
Click Deploy
Click Deploy


Wait for the deployment to complete

8. Testing Your Deployed Model
Return to your Jupyter Lab application
Open the notebook called `inference-demo.ipynb`
Run the cells to test your deployed model
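Under the hood, querying a Triton server means POSTing a KServe v2 inference request to the model's `/infer` endpoint. The sketch below builds such a request body; the input tensor name `float_input` is an assumption (it depends on how the ONNX model was exported), and the URL is a placeholder for the one shown in Apolo Deploy.

```python
import json

def build_infer_request(features):
    """Build a KServe v2 (Triton) inference request body for one iris sample."""
    return {
        "inputs": [
            {
                "name": "float_input",  # ONNX input tensor name (assumption)
                "shape": [1, 4],        # one sample, four iris features
                "datatype": "FP32",
                "data": [features],
            }
        ]
    }

body = build_infer_request([5.1, 3.5, 1.4, 0.2])
print(json.dumps(body))

# Sending it requires network access to the deployed server, e.g.:
#   import urllib.request
#   url = "https://<your-triton-host>/v2/models/onnx_iris_perceptron/infer"
#   req = urllib.request.Request(url, data=json.dumps(body).encode(),
#                                headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read())
```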

Conclusion
Congratulations! You've successfully:
Set up a Jupyter Lab environment on Apolo
Trained a simple classification model
Tracked your experiment and model with MLflow
Converted your model to ONNX format
Deployed your model using NVIDIA Triton via Apolo Deploy
Tested your deployed model endpoint
This workflow demonstrates a complete MLOps pipeline that you can adapt for your own machine learning projects.