LLM Inference
vLLM is a high-performance and memory-efficient inference engine for large language models. It uses a novel GPU KV cache management strategy to serve transformer-based models at scale, supporting multiple GPUs (including NVIDIA and AMD) with ease. vLLM enables fast decoding and efficient memory utilization, making it suitable for production-level deployments of large LLMs.
High Throughput Inference: Novel GPU KV caching enables faster token generation compared to traditional implementations.
Multi-GPU Support: Scales seamlessly to multiple GPUs, including AMD (MI200s, MI300s) and NVIDIA (V100/H100/A100) resource pools.
Easy Model Downloading: Built-in integration with Hugging Face model repositories.
Flexible Configuration: Control precision (--dtype), context window size, parallelism (--tensor-parallel-size, --pipeline-parallel-size), and more (see the example after this list).
Lightweight & Extensible: Minimal overhead for deployment and easy to integrate with existing MLOps or monitoring solutions.
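To illustrate these flags outside of Apolo, a standalone vLLM server can be launched directly; the model name and flag values below are placeholders, not part of the Apolo app configuration:

# Launch vLLM's OpenAI-compatible server with explicit precision,
# context window, and tensor parallelism across 2 GPUs (placeholder values).
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --dtype bfloat16 \
  --max-model-len 8192 \
  --tensor-parallel-size 2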
You can deploy vLLM on the Apolo platform using the vLLM app. Apolo automates resource allocation, persistent storage, ingress, GPU detection, and environment variable injection, so you can focus on model configuration.
Highlights of the Apolo Installation Flow:
Resource Allocation: Choose an Apolo preset (e.g. gpu-xlarge, mi210x2) that specifies CPU, memory, and GPU resources.
GPU Auto-Configuration: If your preset includes multiple GPUs, environment variables (e.g. CUDA_VISIBLE_DEVICES or HIP_VISIBLE_DEVICES) are automatically set, along with a sensible default for parallelization (see the sketch after this list).
Ingress Setup: Enable an ingress to expose vLLM’s HTTP endpoint for external access.
Integration with Hugging Face: You can pass your Hugging Face token via an environment variable to pull private models.
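For a two-GPU preset, the injected configuration might look like the following sketch; the exact values depend on the GPU vendor and preset, and the parallelism default can be overridden through the server arguments:

# Illustrative values for an NVIDIA two-GPU preset
CUDA_VISIBLE_DEVICES=0,1
# Illustrative values for an AMD two-GPU preset
HIP_VISIBLE_DEVICES=0,1
ROCR_VISIBLE_DEVICES=0,1
# Matching vLLM parallelism default unless overridden
--tensor-parallel-size 2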
The following parameters can be set with Apolo's CLI (apolo run --pass-config ... install ... --set <key>=<value>). Many are optional but can be used to customize your deployment:
Resource Preset: Required. Apolo preset for resources, e.g. gpu-xlarge, H100X1, mi210x2. Sets CPU, memory, GPU count, and GPU provider.
Hugging Face Model: Required. Provide a model name in the specified field, and a Hugging Face token if the model is gated. E.g. sentence-transformers/all-mpnet-base-v2.
Enable HTTP Ingress: Exposes the application externally over HTTPS.
Hugging Face Tokenizer Name: Name or path of the Hugging Face tokenizer to use. If unspecified, the model name or path will be used.
Server Extra Args: Optional. Specify additional arguments for the vLLM server; see the vLLM documentation for the available options.
Cache Config: Optional. Configure the storage cache path used to persist your model; important for autoscaling purposes. If not specified, a PV will be created automatically and attached to the application.
Any additional chart values can also be provided through --set flags, but the above are the most common.
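For example, an extra server argument could in principle be passed as an additional chart value; the serverExtraArgs key below is hypothetical, and the real key name should be taken from the chart's values:

# Hypothetical extra chart value: raise the context window via a vLLM server argument.
# "serverExtraArgs" is an assumed key name; --max-model-len is a real vLLM flag.
apolo run --pass-config ... install ... \
  --set "serverExtraArgs[0]=--max-model-len=8192"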
Step 1 - Select the preset you want to use (currently only GPU-accelerated presets are supported).
Step 2 - Select the model from the Hugging Face repositories. If the model is gated, provide your Hugging Face token as an Apolo secret (see the example after these steps).
Step 3 - Install and wait for the outputs in the Outputs section of the app.
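As a sketch of how that token could be stored, assuming the apolo CLI offers a secret subcommand like its predecessor (the subcommand and the secret name hf-token are assumptions; check your platform documentation):

# Assumption: "apolo secret add" exists and takes a name and a value.
# The secret name "hf-token" is an example, not a required name.
apolo secret add hf-token <YOUR_HF_TOKEN>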
Explanation of the example deployment shown below:
preset.name=gpu-l4-x1 requests 1 GPU (AMD MI210). Apolo automatically sets HIP_VISIBLE_DEVICES=0,1, ROCR_VISIBLE_DEVICES=0,1, and default parallelization flags unless overridden.
model_hf_name: "meta-llama/Llama-3.1-8B-Instruct": The Hugging Face model to load.
ingress_http: Creates a public domain (e.g. vllm-large.apps.<YOUR_CLUSTER_NAME>.org.neu.ro) pointing to the vLLM deployment (see the example request below).
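Because vLLM serves an OpenAI-compatible HTTP API, the published domain can be queried once the app is running. The request below reuses the example domain and model from above; /v1/chat/completions is vLLM's standard chat endpoint, and any authentication required by your ingress is omitted:

# Test chat completion against the deployed vLLM endpoint (illustrative domain).
curl https://vllm-large.apps.<YOUR_CLUSTER_NAME>.org.neu.ro/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'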
Below is a streamlined example of a command that deploys the vLLM app to an NVIDIA preset:
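The exact application name and chart keys may differ between Apolo releases, so treat the following as a sketch assembled from the CLI skeleton and the values discussed in the explanation above, not a copy-paste recipe:

# Sketch only: deploy the vLLM app with the example values from the explanation.
# "ingress_http.enabled" is a guessed key path for enabling the HTTP ingress.
apolo run --pass-config ... install ... \
  --set preset.name=gpu-l4-x1 \
  --set model_hf_name=meta-llama/Llama-3.1-8B-Instruct \
  --set ingress_http.enabled=true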
See also the Apolo documentation (for the usage of apolo run and resource presets) and the Hugging Face Hub (for discovering or hosting models).