This repository is a comprehensive, hands-on walkthrough for building production-grade LLM applications using LangChain, Groq, OpenAI, and ChromaDB. It includes everything from a basic LCEL chain to a full-fledged chatbot with memory, and culminates with a Retrieval-Augmented Generation (RAG) pipeline using vector stores and retrievers.
This repo is organized into three main notebooks (or modules):
- `01_simple_llm_app.ipynb` – A minimalist LangChain Expression Language (LCEL) pipeline that translates text into another language.
- `02_chatbots.ipynb` – A conversational chatbot that remembers previous user inputs using `RunnableWithMessageHistory`.
- `03_vectorretriever.ipynb` – An introduction to document embeddings, vector stores, retrievers, and a basic RAG implementation using LangChain components.
- How to make LLM calls via LangChain with OpenAI or Groq (Gemma / LLaMA 3).
- Crafting prompts with `ChatPromptTemplate`.
- Chaining components using LangChain Expression Language (LCEL).
- Parsing outputs with `StrOutputParser`.
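In `01_simple_llm_app.ipynb` these pieces combine into a single pipeline. A minimal sketch, assuming a `.env` file with `GROQ_API_KEY` set; the Groq model name here is one example of a hosted model:

```python
# Minimal LCEL translation app: prompt -> model -> string parser.
# Assumes GROQ_API_KEY is available via .env; the model name is illustrative.
from dotenv import load_dotenv
from langchain_groq import ChatGroq
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()  # reads GROQ_API_KEY (and any other keys) from .env

model = ChatGroq(model="gemma2-9b-it")

prompt = ChatPromptTemplate.from_messages([
    ("system", "Translate the following into {language}:"),
    ("user", "{text}"),
])

# LCEL: the | operator pipes each stage's output into the next stage.
chain = prompt | model | StrOutputParser()

chain.invoke({"language": "French", "text": "Hello, how are you?"})
```

Because every stage is a Runnable, the same chain also supports `.stream()` and `.batch()` with no extra code.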
- Implementing chat history with `RunnableWithMessageHistory`.
- Managing session-based interactions.
- Trimming messages to fit LLM context windows.
- Injecting system-level instructions dynamically.
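A sketch of the memory pattern from `02_chatbots.ipynb`: a plain dict maps each `session_id` to its own in-memory history. The `store`/`get_session_history` names and the Groq model are illustrative, and `GROQ_API_KEY` is assumed to be set:

```python
# Session-scoped chat memory with RunnableWithMessageHistory.
# Assumes GROQ_API_KEY is set; the model name is illustrative.
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.messages import HumanMessage
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_groq import ChatGroq

model = ChatGroq(model="gemma2-9b-it")

store = {}  # session_id -> chat history

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    # Create a fresh history the first time a session_id appears.
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

with_message_history = RunnableWithMessageHistory(model, get_session_history)

# Each session_id keeps its own transcript across calls:
# with_message_history.invoke(
#     [HumanMessage(content="Hi, I'm Faraz!")],
#     config={"configurable": {"session_id": "chat3"}},
# )
```

Switching the `session_id` in the config starts a separate conversation with no shared state.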
- Creating semantic search pipelines with HuggingFace embeddings.
- Using `ChromaDB` to store and retrieve relevant chunks.
- Wrapping retrievers into LCEL-compatible chains.
- Building basic Retrieval-Augmented Generation (RAG) systems.
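The flow in `03_vectorretriever.ipynb` can be sketched end to end as below. This assumes the `langchain-chroma` and `langchain-huggingface` integration packages plus a `GROQ_API_KEY`; the sample documents, prompt wording, and `k=1` are illustrative:

```python
# RAG sketch: embed documents, store them in Chroma, retrieve, then generate.
from langchain_chroma import Chroma
from langchain_core.documents import Document
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_groq import ChatGroq
from langchain_huggingface import HuggingFaceEmbeddings

docs = [
    Document(page_content="Dogs are great companions, known for their loyalty."),
    Document(page_content="Cats are independent pets that enjoy their own space."),
]

# Embed the chunks and index them in a local Chroma store.
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
vectorstore = Chroma.from_documents(docs, embedding=embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 1})

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n\n{context}\n\nQuestion: {question}"
)

model = ChatGroq(model="gemma2-9b-it")

# LCEL RAG chain: the question fans out to the retriever (context) and
# passes through unchanged (question) before filling the prompt.
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)
# rag_chain.invoke("tell me about dogs")
```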
- LangChain (core + community integrations)
- Groq (Gemma 2B / LLaMA 3 via `langchain_groq`)
- OpenAI GPT-4 / GPT-3.5
- ChromaDB for local vector storage
- HuggingFace Embeddings (`all-MiniLM-L6-v2`)
- LangSmith (optional) – for tracing/debugging
- Python 3.10+
```bash
# 1. Clone the repository
git clone https://github.com/FarazF19/Langchain-Chatbots_RAG.git
cd Langchain-Chatbots_RAG

# 2. Install dependencies
pip install -r requirements.txt

# 3. Create a .env file for your API keys
touch .env
```

Add your keys to `.env`:

```bash
OPENAI_API_KEY=your_openai_key
GROQ_API_KEY=your_groq_key
HF_TOKEN=your_huggingface_token
```

Example invocations from the notebooks:

```python
# 01: translate text via the LCEL chain
chain.invoke({"language": "French", "text": "Hello, how are you?"})

# 02: chat with session-scoped memory
with_message_history.invoke(
    [HumanMessage(content="What is my name?")],
    config={"configurable": {"session_id": "chat3"}},
)

# 03: query the RAG chain
rag_chain.invoke("tell me about dogs")
```

| Feature | Description |
| ---------------------------- | --------------------------------------------------------- |
| `ChatPromptTemplate` | Modular prompt templates for structured messaging |
| `StrOutputParser` | Clean extraction of string responses from raw LLM outputs |
| `RunnableWithMessageHistory` | Maintains chat state across multiple LLM calls |
| `Chroma.from_documents` | Creates a vector store from document chunks with embeddings |
| `as_retriever()` | Converts a vector store into a retriever for use in RAG |
| LCEL (`\|` operator) | Expressive pipeline chaining of LangChain components |
- Add LangGraph and CrewAI multi-agent support
- LangSmith Tracing & Observability Integration
- RAG Fusion (hybrid search techniques)
- Frontend deployment using LangServe / FastAPI
Muhammad Faraz
AI Full Stack Developer
LinkedIn