# AI Chatbot

A simple, intelligent chatbot built with Streamlit and Groq's language models. This chatbot maintains conversation history and uses system instructions to define its behavior.
## Features

- 💬 Real-time Chat Interface - Built with Streamlit for an interactive experience
- 🧠 Powered by Groq LLM - Uses the fast `llama-3.3-70b-versatile` model
- 📝 Conversation History - Maintains chat history throughout the session
- 🎯 Customizable System Instructions - Define the AI's role and behavior
- ⚠️ Error Handling - Full call stack debugging for troubleshooting
## Prerequisites

- Python 3.8+
- pip (Python package manager)
## Installation

1. **Clone or download the project**

   ```bash
   cd Project_1_AI_Chatbot
   ```

2. **Create a virtual environment** (optional but recommended)

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. **Install dependencies**

   ```bash
   pip install -r requirements.txt
   ```
4. **Set up your Groq API key**

   - Get your API key from the Groq Console
   - Create a `.env` file in the project root:

     ```
     GROQ_API_KEY=your_api_key_here
     ```
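The project loads this file with `python-dotenv`. For illustration only, here is a stdlib-only sketch of what loading a `.env` file amounts to (the `load_env_file` helper is hypothetical, not part of the project):

```python
import os

def load_env_file(path=".env"):
    """Minimal .env loader: one KEY=value per line; blank lines and
    '#' comments are skipped. Variables already set in the environment
    are left untouched (mirroring python-dotenv's default behavior)."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

In the real app, `from dotenv import load_dotenv; load_dotenv()` does the same job.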
## Usage

1. **Run the chatbot**

   ```bash
   streamlit run Src/Chatbot.py
   ```

2. **Access the app**

   - The app will open automatically in your browser at `http://localhost:8501`
   - If it doesn't open, navigate to that URL manually

3. **Interact with the chatbot**

   - Type your questions in the chat input box
   - The chatbot responds with AI-generated answers
   - Your conversation history is maintained throughout the session
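Under the hood, a Streamlit chat app typically keeps the history in `st.session_state` as a growing list of role/content messages. A minimal plain-Python sketch of that bookkeeping (the `append_turn` helper is illustrative, not the app's actual code):

```python
def append_turn(history, user_text, assistant_text):
    """Record one user/assistant exchange in the running chat history,
    stored as {"role": ..., "content": ...} dicts, oldest first."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

history = []  # in the app this would live in st.session_state
append_turn(history, "Hi!", "Hello! How can I help?")
append_turn(history, "What is Groq?", "Groq provides fast LLM inference.")
# history now holds four messages in order
```

Keeping the full list means every new request can include prior turns, which is how the chatbot "remembers" the conversation within a session.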
## Customization

### System Instructions

Edit the `SYSTEM_INSTRUCTIONS` variable in `Src/Chatbot.py`:

```python
SYSTEM_INSTRUCTIONS = """You are a helpful and friendly AI assistant.
Your role is to:
- Answer questions accurately and helpfully
- Provide clear and concise explanations
...etc"""
```

### Changing the Model

Update the `model_name` parameter in the `ChatGroq` initialization:

```python
llm = ChatGroq(
    temperature=0.7,
    model_name="llama-3.1-8b-instant",  # Use a different model
)
```

Available Groq models:

- `llama-3.3-70b-versatile` - Large, powerful model (default)
- `llama-3.1-8b-instant` - Smaller, faster model
- `gpt-oss-120b` - High-performance open-source model
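If you switch models often, one option (not part of the project as shipped; the `GROQ_MODEL` variable name and `pick_model` helper are made up for this sketch) is to read the model name from an environment variable instead of editing the source:

```python
import os

DEFAULT_MODEL = "llama-3.3-70b-versatile"  # the project's default model

def pick_model():
    """Return the model name from the GROQ_MODEL env var, else the default."""
    return os.environ.get("GROQ_MODEL", DEFAULT_MODEL)

# Then in Chatbot.py you could write:
# llm = ChatGroq(temperature=0.7, model_name=pick_model())
```

This lets you try `GROQ_MODEL=llama-3.1-8b-instant streamlit run Src/Chatbot.py` without touching the code.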
### Adjusting Temperature

The `temperature` parameter controls AI creativity (0-1):

- Lower (0.3) - More focused and deterministic
- Higher (0.9) - More creative and varied

```python
llm = ChatGroq(
    temperature=0.3,  # More focused responses
    model_name="llama-3.3-70b-versatile",
)
```

## Project Structure

```
Project_1_AI_Chatbot/
├── Src/
│   └── Chatbot.py        # Main chatbot application
├── .env                  # Environment variables (API keys)
├── requirements.txt      # Python dependencies
└── README.md             # This file
```
## Troubleshooting

**Model not found or deprecated**

- Update the `model_name` in `Chatbot.py` to a current Groq model
- Check available models in the Groq Models Docs

**API key errors**

- Ensure your `.env` file exists and contains `GROQ_API_KEY=your_key_here`
- Restart the Streamlit app after creating/updating the `.env` file
**Port issues**

- The app runs on `http://localhost:8501` by default
- If port 8501 is in use, Streamlit will use the next available port
- Check the terminal output for the correct URL
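If you want to pin the app to a specific port instead, Streamlit reads an optional `.streamlit/config.toml` in the project root (the port value below is just an example):

```toml
# .streamlit/config.toml — run the app on a fixed port
[server]
port = 8502
```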
**Stopping the app**

- Press `Ctrl+C` in the terminal where Streamlit is running
## Error Handling

The chatbot includes error handling with a full call stack display:

- Errors appear with the message "❌ Error occurred"
- The full Python traceback is shown below the message for debugging
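The pattern behind this display can be sketched with the standard library's `traceback` module (the `safe_call` wrapper is illustrative, not the app's exact code):

```python
import traceback

def safe_call(fn, *args):
    """Run fn(*args); on success return (result, None), on failure return
    the error marker plus the full traceback text, mirroring how the app
    surfaces errors in the UI."""
    try:
        return fn(*args), None
    except Exception:
        return "❌ Error occurred", traceback.format_exc()

result, tb = safe_call(lambda x: 1 / x, 0)
# result == "❌ Error occurred"; tb contains the ZeroDivisionError traceback
```

Showing `traceback.format_exc()` rather than just `str(e)` is what makes the full call stack visible in the app.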
## Debugging (VS Code)

- Press F5 to start debugging
- Select "Python" as the debugger
- Set breakpoints by clicking on line numbers
- View the call stack in the left sidebar under "Call Stack"
## Dependencies

- `streamlit` - Web UI framework
- `langchain-groq` - Groq LLM integration
- `langchain-core` - LangChain core utilities
- `python-dotenv` - Environment variable management

See `requirements.txt` for exact versions.
## License

This project is created for learning and testing purposes.
**Questions or issues?** Check the Troubleshooting section or review the code comments in `Src/Chatbot.py` for more details.