Available apps


Last updated 2 months ago


| App name | Description |
| --- | --- |
| Terminal | A web-based remote shell instrumented with the Apolo CLI, providing immediate access to dedicated compute resources. |
| LLM Inference | A highly available LLM inference service with an OpenAI-compatible API, capable of efficiently serving both standard and quantized models available on HuggingFace. |
| PostgreSQL | An industry-standard relational database system that includes pgvector for advanced semantic search. |
| Text Embeddings Inference | Text Embeddings Inference (TEI) is a toolkit for efficiently deploying and serving open-source text embedding models, enabling high-performance extraction for the most popular models. |
| Jupyter Notebook | An interactive tool for creating and sharing documents with live code, visualizations, and narrative text. |
| Jupyter Lab | An interactive development environment for managing notebooks, code, and data, enabling seamless creation and sharing of dynamic documents. |
| VS Code | A lightweight, powerful source code editor with a rich ecosystem of extensions for many languages and runtimes. |
| PyCharm Community Edition | A Python IDE for data science and web development. |
| ML Flow | A tool that streamlines the full lifecycle of machine learning projects, enhancing manageability, traceability, and reproducibility. |
| Apolo Deploy | A simple model deployment service that uses Triton and MLflow as its core inference servers. |
| Dify | An open-source LLM app development platform. |
| Weaviate | A robust, open-source vector database enabling semantic search. Store and query data by meaning through its GraphQL, REST, and gRPC APIs, with various modules for extended functionality. |
| Fooocus | A free, offline, open-source image generator that creates images from prompts without manual tweaking and requires minimal GPU memory (4 GB). |
| Stable Diffusion | An open-source image generation and editing platform powered by advanced latent diffusion models. |
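As a sketch of what "OpenAI-compatible API" means for the LLM Inference app: any OpenAI-style client or plain HTTP request works against the app's chat-completions endpoint. The endpoint URL and model name below are placeholders, not values from this page — substitute the ones shown for your app instance.

```python
# Hypothetical sketch: querying an LLM Inference app through its
# OpenAI-compatible chat-completions endpoint. ENDPOINT and the model
# id are placeholders -- use the values from your own app instance.
import json
from urllib import request

ENDPOINT = "https://my-llm-app.example.org/v1/chat/completions"  # placeholder

def build_chat_request(prompt: str, endpoint: str = ENDPOINT) -> request.Request:
    """Build a standard OpenAI-style chat-completions request."""
    payload = {
        "model": "meta-llama/Llama-3.1-8B-Instruct",  # placeholder HF model id
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 32,
    }
    return request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the app running, send it like this:
# with request.urlopen(build_chat_request("Hello!")) as resp:
#     reply = json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the API shape is the standard OpenAI one, existing OpenAI SDKs can be pointed at the app by overriding their base URL.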
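To illustrate the pgvector semantic search mentioned for the PostgreSQL app: rows are ordered by vector distance between a stored embedding and a query embedding. The `documents` table and `embedding` column below are hypothetical names, and the helper mirrors the math behind pgvector's cosine-distance operator.

```python
# Hypothetical sketch of pgvector-style semantic search in the PostgreSQL app.
# `documents` and `embedding` are assumed names; `<=>` is pgvector's
# cosine-distance operator.
import math

NEAREST_SQL = """
SELECT id, content
FROM documents
ORDER BY embedding <=> %(query_vec)s  -- pgvector cosine distance
LIMIT 5;
"""

def cosine_distance(a, b):
    """The quantity pgvector's <=> operator orders by: 1 - cosine similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Rows whose embeddings point the same way as the query rank first:
# distance 0.0 for identical directions, 1.0 for orthogonal vectors.
```

In practice the query vector would come from an embedding model (for example, the Text Embeddings Inference app) and be passed as a parameter to the SQL above.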
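A similar sketch for the Text Embeddings Inference app, assuming TEI's usual `/embed` route that accepts an `{"inputs": ...}` payload and returns one vector per input text. The endpoint URL is a placeholder.

```python
# Hypothetical sketch: requesting embeddings from a Text Embeddings Inference
# (TEI) app. EMBED_ENDPOINT is a placeholder; the /embed route and payload
# shape are assumed from TEI's standard API.
import json
from urllib import request

EMBED_ENDPOINT = "https://my-tei-app.example.org/embed"  # placeholder

def build_embed_request(texts, endpoint=EMBED_ENDPOINT) -> request.Request:
    """Build a TEI /embed request for a list of input texts."""
    return request.Request(
        endpoint,
        data=json.dumps({"inputs": texts}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the app running:
# with request.urlopen(build_embed_request(["semantic search"])) as resp:
#     vectors = json.loads(resp.read())  # one embedding per input text
```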