ML Model Lifecycle using Apolo Console
This comprehensive guide will walk you through creating a complete machine learning workflow using Apolo's platform. You'll learn how to:
Set up a Jupyter Lab environment for model development
Train a simple classification model with scikit-learn
Track experiments and models with MLflow
Convert models to ONNX format for deployment
Deploy your model as an inference service using NVIDIA Triton
Prerequisites: basic familiarity with Python and machine learning concepts
Log in to the Apolo Console
Verify your project selection in the top-right corner dropdown
Navigate to the Apps section in the left sidebar, make sure you have the All Apps tab selected to view available applications
Locate and click on the Jupyter Lab card
Click the Install button
Configure your Jupyter Lab instance:
Under Resources, select a preset (we'll use cpu-small for this tutorial)
Under Metadata, name your instance (e.g., jupyter-lab-demo)
Click Install App
Wait for the status to change from Pending to Succeeded
Return to the Apolo Console
Navigate to Apps > All Apps
Find and install the MLflow application:
Select a resource preset (e.g., cpu-small)
Name your instance (e.g., mlflow-demo)
Click Install App
Wait for the MLflow instance to reach the Succeeded state
Return to Apps > Installed Apps and find your Jupyter Lab instance
Click the Open button to launch Jupyter Lab in a new tab
Open a terminal by clicking Terminal under the "Other" section in the launcher
Navigate to the persistent storage location:
Clone the example repository:
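The two terminal steps above can be sketched as follows; the storage path and repository URL are placeholders, so substitute the values shown in your Apolo environment:

```shell
# Move to the persistent storage mount (path is an assumption — check
# where project storage is mounted in your Jupyter Lab instance).
cd /var/storage

# Clone the tutorial repository (placeholder URL — use the one from the
# Apolo documentation or your own fork).
git clone <example-repo-url> model-lifecycle-example
```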
Navigate to the cloned repository through the file browser:
Open the model-lifecycle-example directory
Open the notebooks directory
Open training-demo.ipynb
Run the notebook cells sequentially (using Shift+Enter or the Run button)
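The core training step in the notebook looks roughly like this: a scikit-learn Perceptron classifier trained on the Iris dataset. This is a minimal sketch only; the actual notebook additionally logs parameters, metrics, and the trained model to MLflow and exports it to ONNX, which is omitted here.

```python
# Minimal sketch of the notebook's training step: a Perceptron classifier
# on the Iris dataset. MLflow logging and ONNX export (handled inside the
# notebook itself) are omitted from this sketch.
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load the Iris dataset and hold out 20% for evaluation.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Train a simple linear Perceptron classifier.
model = Perceptron(max_iter=1000, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out split.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"test accuracy: {accuracy:.2f}")
```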
Return to the Apolo Console
Navigate to Apps > Installed Apps
Find your MLflow instance and click Open
Explore the experiment run:
Click on the most recent run
Review the logged parameters, metrics, and artifacts
Promote your ONNX model to production:
Click on the Models tab in the MLflow UI
Select the onnx_iris_perceptron model
Click on the latest version
Important: Ensure the New model registry UI toggle is turned off
Change the Stage from "None" to "Production"
Confirm the stage transition
Return to the Apolo Console
Navigate to Apps > All Apps
Find and install Apolo Deploy:
Select a resource preset (e.g., cpu-small)
Under Integrations, select your MLflow instance
Name your deployment (e.g., apolo-deploy-demo)
Click Install App
Wait for Apolo Deploy to reach the Running state
Open Apolo Deploy and configure your model deployment:
Locate the onnx_iris_perceptron model in the Production stage
Click the dropdown in the Deployment column
Configure the deployment:
Set Server type to Triton
Set Create new server instance to True
Set an optional server name (default: triton)
Select a resource preset
Set Force Platform Auth to False (for demo purposes only)
Click Deploy
Wait for the deployment to complete
Return to your Jupyter Lab application
Open the notebook called inference-demo.ipynb
Run the cells to test your deployed model
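Under the hood, the inference notebook talks to Triton over its KServe-v2 HTTP API. The sketch below builds such a request body; the endpoint URL and the input tensor name ("float_input" is typical for skl2onnx exports, but yours may differ) are assumptions, so check your deployment's details before sending a real request.

```python
# Sketch of an inference request against Triton's KServe-v2 HTTP API.
# The input tensor name and endpoint URL are assumptions — verify them
# against your deployed model's metadata.
import json

# One iris sample: sepal length/width, petal length/width (cm).
sample = [5.1, 3.5, 1.4, 0.2]

payload = {
    "inputs": [
        {
            "name": "float_input",  # assumed ONNX input name
            "shape": [1, 4],
            "datatype": "FP32",
            "data": sample,
        }
    ]
}
body = json.dumps(payload)

# To actually call the endpoint, POST the body (uncomment and fill in
# the host once your Triton deployment is up):
# import requests
# resp = requests.post(
#     "https://<triton-host>/v2/models/onnx_iris_perceptron/infer",
#     data=body, headers={"Content-Type": "application/json"},
# )
# print(resp.json())
print(body)
```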
Congratulations! You've successfully:
Set up a Jupyter Lab environment on Apolo
Trained a simple classification model
Tracked your experiment and model with MLflow
Converted your model to ONNX format
Deployed your model using NVIDIA Triton via Apolo Deploy
Tested your deployed model endpoint
This workflow demonstrates a complete MLOps pipeline that you can adapt for your own machine learning projects.
Learn more about launching Jupyter on Apolo in our documentation.