Implementation
1. Setting Up Apolo
```bash
pip install apolo-all
apolo login
apolo config show
```
The Apolo platform is the backbone of this workflow, providing:
Compute Resources: GPUs for running ML models.
Storage: To manage raw data, embeddings, and processed outputs.
Job Management: To orchestrate the pipeline.
2. Data Preparation
Upload your sample data (the PDF documents) to Apolo storage.
The uploaded PDFs will be used to extract text and images for embedding.
3. Data Ingestion
Run the ingestion job to process the PDFs and store their embeddings in LanceDB.
The ingestion process involves:
Extracting images and text from each page of a PDF.
Generating embeddings for these components using ColPali.
Storing the embeddings in LanceDB.
The processed data, including embeddings and metadata, is stored in LanceDB, a vector database optimized for high-speed search and retrieval.
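A minimal sketch of this ingestion loop is shown below. It is illustrative rather than the pipeline's actual code: it assumes the pdf2image, colpali_engine, and lancedb packages, uses an example ColPali checkpoint, storage path, and table name, and mean-pools ColPali's multi-vector page embeddings into a single vector per page to keep the LanceDB schema simple.
```python
# Illustrative ingestion sketch: PDF pages -> ColPali embeddings -> LanceDB.
# Paths, table name, and checkpoint are examples; mean-pooling is a simplification
# of ColPali's multi-vector (late-interaction) representation.
import torch
import lancedb
from pdf2image import convert_from_path
from colpali_engine.models import ColPali, ColPaliProcessor

model_name = "vidore/colpali-v1.2"                 # example checkpoint
model = ColPali.from_pretrained(model_name).eval()
processor = ColPaliProcessor.from_pretrained(model_name)

db = lancedb.connect("/data/lancedb")              # example path on mounted storage
records = []

pages = convert_from_path("sample_report.pdf")     # one PIL image per PDF page
for page_num, image in enumerate(pages):
    batch = processor.process_images([image])
    with torch.no_grad():
        multi_vec = model(**batch)                 # shape: (1, n_patches, dim)
    vector = multi_vec.mean(dim=1).squeeze(0).float().tolist()
    records.append({"vector": vector, "pdf": "sample_report.pdf", "page": page_num})

table = db.create_table("visual_rag_pages", data=records, mode="overwrite")
print(f"Ingested {table.count_rows()} pages")
```
In the actual pipeline this loop runs inside an Apolo job over every uploaded PDF; keeping ColPali's full multi-vector embeddings with late-interaction scoring would be more faithful to the model, at the cost of a more involved index layout.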
4. Deploy the Generative LLM
Once the data is ingested and stored in LanceDB, deploy the generative LLM server for processing multimodal queries. This server runs the Llama 3.2 Vision-Instruct model, enabling responses based on both text and visual data.
What Happens in This Step:
Deploying the Server: The command sets up the generative LLM server within Apolo’s infrastructure, running the meta-llama/Llama-3.2-11B-Vision-Instruct model.
Secure Storage Integration: The model weights are accessed securely via the mounted storage:visual_rag directory.
Multimodal Inference: The server is configured to handle multimodal queries, such as combining text and images for processing.
With this setup, your generative LLM is ready to serve multimodal queries, providing the backbone for the Visual RAG pipeline. The system can now combine the document pages retrieved from LanceDB with user queries, using the model to generate comprehensive and accurate responses.
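Once the server reports ready, a quick smoke test can confirm that it accepts multimodal requests. The snippet below is a sketch that assumes the server exposes an OpenAI-compatible API (as vLLM-based deployments do); the endpoint URL, API key, and image URL are placeholders.
```python
# Illustrative smoke test against the deployed LLM server.
# Assumes an OpenAI-compatible endpoint; URL, key, and image URL are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://<llm-server-host>/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize the chart on this page."},
            {"type": "image_url", "image_url": {"url": "https://example.com/page_3.png"}},
        ],
    }],
    max_tokens=256,
)
print(response.choices[0].message.content)
```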
5. Querying the System
With the ingestion pipeline and LLM server running, you can query the system using the ask_data function.
Here’s how it works:
Query Embedding: The user query is embedded using ColPali in get_query_embedding.
Database Search: search_db retrieves the most relevant images based on embeddings.
Response Generation: A vision-enabled LLM (e.g., Llama 3.2) processes the prompt and images via run_vision_inference.
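The helper functions named above are not reproduced in this section, so the following is an illustrative reconstruction of how they could fit together rather than the pipeline's exact code. It reuses the model, processor, and table objects from the ingestion sketch and the client from the deployment smoke test, and it sends the retrieved pages to the server as base64-encoded images.
```python
# Illustrative reconstruction of the query path. Function bodies are assumptions;
# `model`, `processor`, `table`, and `client` come from the earlier sketches.
import base64
import io

import torch
from pdf2image import convert_from_path

def get_query_embedding(query: str) -> list[float]:
    """Embed the user query with ColPali (mean-pooled, to match the ingestion sketch)."""
    batch = processor.process_queries([query])
    with torch.no_grad():
        multi_vec = model(**batch)
    return multi_vec.mean(dim=1).squeeze(0).float().tolist()

def search_db(query_vector: list[float], k: int = 3) -> list[dict]:
    """Return the k most similar page records from the LanceDB table."""
    return table.search(query_vector).limit(k).to_list()

def run_vision_inference(prompt: str, images) -> str:
    """Send the prompt plus page images to the deployed Llama 3.2 Vision-Instruct server."""
    content = [{"type": "text", "text": prompt}]
    for image in images:
        buffer = io.BytesIO()
        image.save(buffer, format="PNG")
        encoded = base64.b64encode(buffer.getvalue()).decode()
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{encoded}"}})
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.2-11B-Vision-Instruct",
        messages=[{"role": "user", "content": content}],
        max_tokens=256,
    )
    return response.choices[0].message.content

def ask_data(query: str) -> str:
    """End-to-end: embed the query, retrieve matching pages, ask the vision LLM."""
    hits = search_db(get_query_embedding(query))
    # Re-render the matching pages; a production pipeline would typically cache these images.
    images = [convert_from_path(hit["pdf"])[hit["page"]] for hit in hits]
    return run_vision_inference(query, images)

print(ask_data("What is the market share by region?"))
```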
6. Visualizing the Results
To enhance usability, integrate a Streamlit-based dashboard for querying and visualizing responses. The dashboard includes:
PDF Viewer: Displays available documents for context.
Search Input: Allows users to submit natural language queries.
Results Panel: Shows the retrieved images and the LLM-generated responses.
For example, querying “What is the market share by region?” retrieves visuals related to market share and generates a concise, context-aware response.
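A minimal Streamlit page along these lines might look like the sketch below. The layout and widget choices are illustrative, and the import assumes the query helpers live in a module from the previous step (the module name rag_pipeline is hypothetical).
```python
# Illustrative Streamlit dashboard: search input, LLM answer, and retrieved pages.
import streamlit as st
from pdf2image import convert_from_path
from rag_pipeline import get_query_embedding, search_db, run_vision_inference  # hypothetical module

st.title("Visual RAG over PDF documents")

query = st.text_input("Ask a question about the uploaded documents")

if query:
    with st.spinner("Retrieving pages and generating an answer..."):
        hits = search_db(get_query_embedding(query))
        images = [convert_from_path(hit["pdf"])[hit["page"]] for hit in hits]
        answer = run_vision_inference(query, images)

    st.subheader("Answer")
    st.write(answer)

    st.subheader("Retrieved pages")
    for hit, image in zip(hits, images):
        st.image(image, caption=f'{hit["pdf"]}, page {hit["page"]}')
```
Saved as, for example, app.py, it can be launched with streamlit run app.py in the same environment as the querying code.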