ML Model Lifecycle using Apolo Console


This comprehensive guide will walk you through creating a complete machine learning workflow using Apolo's platform. You'll learn how to:

  • Set up a Jupyter Lab environment for model development

  • Train a simple classification model with scikit-learn

  • Track experiments and models with MLflow

  • Convert models to ONNX format for deployment

  • Deploy your model as an inference service using NVIDIA Triton

Prerequisites

  • Basic familiarity with Python and machine learning concepts

1. Accessing the Apolo Console

  1. Log in to the Apolo Console

  2. Verify your project selection in the top-right corner dropdown

2. Setting Up Your Jupyter Lab Environment

  1. Navigate to the Apps section in the left sidebar and make sure the All Apps tab is selected to view available applications

  2. Locate and click on the Jupyter Lab card

  3. Click the Install button

  4. Configure your Jupyter Lab instance:

    • Under Resources, select a preset (we'll use cpu-small for this tutorial)

    • Under Metadata, name your instance (e.g., jupyter-lab-demo)

    • Click Install App

  5. Wait for the status to change from Pending to Succeeded

3. Setting Up MLflow for Experiment Tracking

  1. Return to the Apolo Console

  2. Navigate to Apps > All Apps

  3. Find and install the MLflow application:

    • Select a resource preset (e.g., cpu-small)

    • Name your instance (e.g., mlflow-demo)

    • Click Install App

  4. Wait for the MLflow instance to reach the Succeeded state

4. Setting Up Your Development Environment

  1. Return to Apps > Installed Apps and find your Jupyter Lab instance

  2. Click the Open button to launch Jupyter Lab in a new tab

  3. Open a terminal by clicking Terminal under the "Other" section in the launcher

  4. Navigate to the persistent storage location:

    cd /var/storage
  5. Clone the example repository:

    git clone https://github.com/neuro-inc/model-lifecycle-example

5. Training Your Machine Learning Model

  1. Navigate to the cloned repository through the file browser:

    • Open the model-lifecycle-example directory

    • Open the notebooks directory

    • Open training-demo.ipynb

  2. Run the notebook cells sequentially (using Shift+Enter or the Run button); the sketch after this list outlines the key steps the notebook performs
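
Behind the scenes, the notebook trains, tracks, and converts the model. For reference, here is a minimal sketch of those steps, assuming the Iris dataset and a scikit-learn Perceptron (consistent with the onnx_iris_perceptron model name used later), with a placeholder tracking URI; treat the notebook itself as the authoritative code.

    # Minimal sketch of the steps performed by training-demo.ipynb;
    # the notebook in the example repository is the source of truth.
    import mlflow
    import mlflow.onnx
    from skl2onnx import to_onnx
    from sklearn.datasets import load_iris
    from sklearn.linear_model import Perceptron
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Placeholder: point MLflow at your mlflow-demo instance.
    mlflow.set_tracking_uri("http://<your-mlflow-instance-url>")
    mlflow.set_experiment("iris-demo")

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    with mlflow.start_run():
        model = Perceptron(max_iter=1000)
        model.fit(X_train, y_train)
        accuracy = accuracy_score(y_test, model.predict(X_test))
        mlflow.log_param("max_iter", 1000)
        mlflow.log_metric("accuracy", accuracy)

        # Convert the fitted model to ONNX and register it under the
        # name used in the deployment steps below.
        onnx_model = to_onnx(model, X_train[:1].astype("float32"))
        mlflow.onnx.log_model(
            onnx_model,
            artifact_path="model",
            registered_model_name="onnx_iris_perceptron",
        )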

6. Reviewing Your Model in MLflow

  1. Return to the Apolo Console

  2. Navigate to Apps > Installed Apps

  3. Find your MLflow instance and click Open

  4. Explore the experiment run:

    • Click on the most recent run

    • Review the logged parameters, metrics, and artifacts

  5. Promote your ONNX model to production:

    • Click on the Models tab in the MLflow UI

    • Select the onnx_iris_perceptron model

    • Click on the latest version

    • Important: Ensure the New model registry UI toggle is turned off

    • Change the Stage from "None" to "Production"

    • Confirm the stage transition
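
If you prefer to script this promotion instead of clicking through the UI, MLflow's Python client exposes the same stage transition. A minimal sketch, assuming version 1 of the registered model and a placeholder tracking URI:

    from mlflow.tracking import MlflowClient

    # Placeholder: use the URL of your mlflow-demo instance.
    client = MlflowClient(tracking_uri="http://<your-mlflow-instance-url>")

    # Move the version shown in the UI into the Production stage;
    # adjust the version number to match yours.
    client.transition_model_version_stage(
        name="onnx_iris_perceptron",
        version=1,
        stage="Production",
    )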

7. Deploying Your Model with Apolo Deploy

  1. Return to the Apolo Console

  2. Navigate to Apps > All Apps

  3. Find and install Apolo Deploy:

    • Select a resource preset (e.g., cpu-small)

    • Under Integrations, select your MLflow instance

    • Name your deployment (e.g., apolo-deploy-demo)

    • Click Install App

  4. Wait for Apolo Deploy to reach the Running state

  5. Open Apolo Deploy and configure your model deployment:

    • Locate the onnx_iris_perceptron model in Production stage

    • Click the dropdown in the Deployment column

    • Configure the deployment:

      • Set Server type to Triton

      • Set Create new server instance to True

      • Set an optional server name (default: triton)

      • Select a resource preset

      • Set Force Platform Auth to False (for demo purposes only)

    • Click Deploy

  6. Wait for the deployment to complete

8. Testing Your Deployed Model

  1. Return to your Jupyter Lab application

  2. Open the notebook called inference-demo.ipynb

  3. Run the cells to test your deployed model; the sketch below illustrates the kind of request the notebook sends
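
For illustration, here is a minimal sketch of such a request using the tritonclient package. The endpoint and the input/output tensor names are assumptions; check your deployment's URL in Apolo Deploy and the model's metadata for the real names.

    import numpy as np
    import tritonclient.http as httpclient

    # Placeholder: host and port of the Triton server created by
    # Apolo Deploy.
    client = httpclient.InferenceServerClient(url="<triton-host>:<port>")

    # One Iris sample: sepal length/width, petal length/width.
    sample = np.array([[5.1, 3.5, 1.4, 0.2]], dtype=np.float32)

    # Assumed tensor names; client.get_model_metadata() reports the
    # actual input and output names for the deployed model.
    infer_input = httpclient.InferInput("input", list(sample.shape), "FP32")
    infer_input.set_data_from_numpy(sample)

    result = client.infer(
        model_name="onnx_iris_perceptron",
        inputs=[infer_input],
    )
    print(result.as_numpy("output_label"))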

Conclusion

Congratulations! You've successfully:

  • Set up a Jupyter Lab environment on Apolo

  • Trained a simple classification model

  • Tracked your experiment and model with MLflow

  • Converted your model to ONNX format

  • Deployed your model using NVIDIA Triton via Apolo Deploy

  • Tested your deployed model endpoint

This workflow demonstrates a complete MLOps pipeline that you can adapt for your own machine learning projects.

Additional Resources

Learn more about launching Jupyter on Apolo by visiting our Jupyter Notebook page.

  • Apolo Documentation
  • MLflow Documentation
  • NVIDIA Triton Inference Server
  • ONNX Model Format