DriveXpert AI Assistant: Helping users quickly solve their car-related queries
In this project, we will build an AI-powered system that helps answer car-related questions. This system uses an AI model to provide answers from Tata Motors’ car manuals, specifically for different car models like Altroz, Harrier, Nexon, etc. Users will enter their queries, and the system will provide detailed answers based on the manual content.
The project combines several modern technologies like:
- Qdrant: A vector database that stores text embeddings and searches through them efficiently.
- Ollama: A tool to run large language models locally, without needing an internet connection.
- Gradio: A framework to build easy-to-use web interfaces.
Prerequisites
Before you can use or modify this project, you need a few things set up on your computer.
- Install Qdrant: Qdrant is a vector database that stores embeddings (numeric vector representations of text) so that semantic search is fast. You can start it by running a single Docker command:
docker run -p 6333:6333 qdrant/qdrant
- Install Ollama: Ollama allows you to run large AI models on your own computer. You can download it from Ollama’s official website. Once installed, you can run models such as Llama 3.2 locally.
- Install Python Libraries: You will need a few Python libraries for the project. The BGE embeddings used later are loaded through an embeddings class provided by langchain_huggingface, not installed as a separate package. Install the libraries with the following command (a quick sanity check of the setup is shown right after this list):
pip install gradio langchain_huggingface sentence-transformers qdrant-client
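Before moving on, it can help to confirm that both Qdrant and Ollama are reachable. The following is a minimal sketch, assuming Qdrant runs on its default port 6333 and that you have pulled the Llama 3.2 model with `ollama pull llama3.2` (the port, model, and file name are assumptions, not requirements of the project):
# sanity_check.py -- hypothetical helper, not part of the original project
import subprocess

from qdrant_client import QdrantClient

# Qdrant: list collections on the default local port (empty on a fresh install)
client = QdrantClient(url="http://localhost:6333")
print("Qdrant collections:", client.get_collections())

# Ollama: show the locally available models
print(subprocess.run(["ollama", "list"], capture_output=True, text=True).stdout)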
How the System Works
Step 1: Store Car Manual Data
The first step is to process the text from Tata Motors’ car manuals. These manuals are stored as .txt files. We process each file, convert the text into AI embeddings (vectors), and store them in Qdrant.
Code:
# Create embeddings and upload car-manual chunks to Qdrant
import os

from langchain_huggingface import HuggingFaceEmbeddings
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

data_folder = "data"  # folder containing the .txt manuals (example path)
collection_name = "car_manuals"  # example collection name
client = QdrantClient(url="http://localhost:6333")
embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-small-en-v1.5")  # example BGE model

def create_and_upload_embeddings():
    global_idx = 0  # running point ID across all files and chunks
    for filename in os.listdir(data_folder):
        if filename.endswith('.txt'):
            car_model = filename.split('.')[0]
            with open(os.path.join(data_folder, filename), 'r', encoding='utf-8', errors='ignore') as file:
                chunks = file.readlines()  # treat each line of the manual as a chunk
            # Create an embedding for each non-empty chunk and upload it to Qdrant
            for chunk in chunks:
                chunk = chunk.strip()
                if not chunk:
                    continue
                embedding = embeddings.embed_documents([chunk])[0]
                metadata = {'car_model': car_model, 'chunk_text': chunk}
                client.upsert(collection_name=collection_name, points=[PointStruct(id=global_idx, vector=embedding, payload=metadata)])
                global_idx += 1
Here, we read each text file, treat every non-empty line as a chunk, convert those chunks into embeddings, and store them in Qdrant along with the car model and the original chunk text as payload.
Once this runs, the embeddings for all 10 user manuals are uploaded to the Qdrant database.
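Note that the upload code above assumes the Qdrant collection already exists. A minimal sketch for creating it before the upload and confirming the point count afterwards might look like this (the collection name and the 384-dimension vector size match the example BGE model above; adjust them to your own setup):
from qdrant_client.models import Distance, VectorParams

# Create the collection once, sized for the embedding model (384 dims for bge-small-en-v1.5)
client.recreate_collection(
    collection_name=collection_name,
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

create_and_upload_embeddings()

# Confirm how many chunks were stored
print(client.count(collection_name=collection_name, exact=True))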
Step 2: User Query and Search for Relevant Information
When the user asks a question, the system first converts the question into an embedding. It then searches Qdrant to find the most relevant chunks of the manual that answer the query.
Code:
from qdrant_client.models import FieldCondition, Filter, MatchValue

def similarity_search(query: str, car_model: str):
    # Embed the query and restrict the search to chunks from the selected car model
    query_embedding = embeddings.embed_documents([query])[0]
    model_filter = Filter(must=[FieldCondition(key="car_model", match=MatchValue(value=car_model))])
    search_result = client.search(collection_name=collection_name, query_vector=query_embedding, query_filter=model_filter, limit=3)
    return search_result
This code embeds the user’s query, restricts the search to the selected car model, and returns the three most relevant chunks from the database.
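As a quick illustration of what the search returns, the sketch below calls similarity_search with a made-up query and prints the score and payload text of each hit (the query and car model here are just examples):
# Example usage (hypothetical query); each hit is a Qdrant ScoredPoint
hits = similarity_search("How do I reset the infotainment system?", "Nexon")
for hit in hits:
    print(round(hit.score, 3), hit.payload["chunk_text"][:80])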
Step 3: Generate the Response Using AI
Once the system finds the relevant chunks from the manual, it sends them along with the user’s question to the AI model (using Ollama). The AI then generates a response based on the context provided by the manual.
Code:
import subprocess

def query_to_llm_with_ollama(query, results):
    # Build a context string from the payload text of the retrieved Qdrant hits
    context = "\n".join([f"Chunk {i+1}: {result.payload['chunk_text']}" for i, result in enumerate(results)])
    prompt = f"Answer this question using the following context:\n\nQuestion: {query}\n\nContext: {context}"
    # Send the prompt to the local Llama 3.2 model through the Ollama CLI
    result = subprocess.run(["ollama", "run", "llama3.2:latest", prompt], capture_output=True, text=True)
    return result.stdout.strip()
This part of the code uses the local AI model (Ollama) to generate an answer to the user’s query based on the retrieved context from the manual.
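Putting the two pieces together, a full retrieve-then-answer round trip might look like the following (the question is again just an example):
# Hypothetical end-to-end example: retrieve manual chunks, then ask the local LLM
question = "How often should the engine oil be changed?"
results = similarity_search(question, "Harrier")
print(query_to_llm_with_ollama(question, results))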
Step 4: Gradio Interface
Finally, we build a simple interface using Gradio. This allows users to select a car model, enter their query, and receive a response from the AI.
Code:
import gradio as gr

def process_query(car_model, query):
    # Retrieve the relevant manual chunks, then ask the local LLM to answer from them
    results = similarity_search(query, car_model)
    return query_to_llm_with_ollama(query, results)

def create_gradio_interface():
    with gr.Blocks() as interface:
        gr.Markdown("# AI Assistant for Tata Motors Car Queries")
        # Dropdown for selecting the car model
        car_model_dropdown = gr.Dropdown(choices=['Altroz', 'Harrier', 'Nexon'], label="Select Car Model")
        query_input = gr.Textbox(label="Enter your query")
        # Submit button
        submit_button = gr.Button("Submit")
        # Display the response
        llm_output = gr.Textbox(label="Response")
        submit_button.click(process_query, inputs=[car_model_dropdown, query_input], outputs=[llm_output])
    interface.launch()
This code creates a web interface where the user can select the car model and type their query. Once the query is submitted, the system returns the answer generated by the AI.
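To tie the script together, app.py presumably ends with an entry point along these lines (a minimal sketch, assuming create_gradio_interface is defined as above):
# Assumed entry point for app.py: build the UI and start the Gradio server
if __name__ == "__main__":
    create_gradio_interface()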
How to Run the Project
1. Clone the GitHub Repository:
You can download the code from the GitHub repository: GitHub Repository – DriveXpert
2. Run Qdrant:
Start Qdrant by running:
docker run -p 6333:6333 qdrant/qdrant
3. Run the Python Code:
Make sure you have installed all required dependencies (pip install gradio langchain_huggingface sentence-transformers qdrant-client). Then run the two Python scripts to start the system:
python embeddings.py
python app.py
4. Interact with the Interface:
Once the script is running, Gradio serves the interface at a local URL printed in the terminal. Open it in your browser, select a car model, enter your query, and get an answer from the AI.
Conclusion
This AI-powered system demonstrates how advanced technologies like Qdrant, Ollama, and Gradio can work together to solve real-world problems. In this case, we’ve created a system that can answer questions about Tata Motors’ car manuals using AI.
We hope this documentation helps you understand how to build and run an AI-powered assistant. By following this guide, you can try out the system and even modify it to include more car models or enhance the AI responses.