🧠 MindTrack – Multimodal Emotion Detection System

MindTrack is an AI-powered mental health monitoring system that analyzes both text and facial expressions to detect a user's emotional state.
By combining Natural Language Processing (NLP) and Computer Vision, MindTrack provides real-time insights that help track emotional well-being and identify early signs of mental distress.


🌟 Key Features

  • 🔤 Text Emotion Analysis
    Detects emotions such as joy, sadness, anger, fear, and more from user-written text.

  • 🙂 Image Emotion Analysis
    Identifies facial expressions using a deep learning model trained on emotion datasets.

  • 🎯 Multimodal Fusion
    Accepts text and image inputs, either individually or combined.

  • ⚡ Real-Time Prediction
    Fast inference using TensorFlow/Keras models.

  • 🗂️ Lightweight & Portable
    Runs from .h5 models and .pkl encoders without requiring a heavy backend (see the loading sketch below).
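
As a quick illustration of this lightweight setup, the minimal sketch below loads the saved text model and its pickled encoders directly from the models/ folder and runs a single prediction. The file names come from the project structure below; the padding length MAX_LEN and the helper predict_text_emotion are illustrative assumptions, not part of the project's documented API.

import pickle
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Load the saved Keras model and the pickled preprocessing objects.
model = load_model("models/text_emotion_model.h5")
with open("models/tokenizer.pkl", "rb") as f:
    tokenizer = pickle.load(f)
with open("models/label_encoder.pkl", "rb") as f:
    label_encoder = pickle.load(f)

MAX_LEN = 100  # assumed padding length; must match the value used during training

def predict_text_emotion(text: str) -> str:
    """Return the most likely emotion label for a piece of text."""
    seq = tokenizer.texts_to_sequences([text])
    padded = pad_sequences(seq, maxlen=MAX_LEN)
    probs = model.predict(padded, verbose=0)
    return label_encoder.inverse_transform([int(np.argmax(probs, axis=-1)[0])])[0]

print(predict_text_emotion("I finally finished my project and I feel great!"))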


🧰 Tech Stack

Component              Technology
---------------------  ----------------------------
Programming Language   Python
Text Model             LSTM / RNN / Deep Learning
Image Model            CNN-based Emotion Classifier
Frameworks             TensorFlow, Keras
NLP Tools              Tokenizer, Label Encoders

📁 Project Structure

MindTrack/
│
├── src/
│   ├── app.py                # Main execution script (text/image prediction)
│   ├── text_model.py         # Text emotion detection pipeline
│   ├── image_model.py        # Image-based emotion detection pipeline
│
├── models/
│   ├── text_emotion_model.h5
│   ├── image_emotion_model.h5
│   ├── tokenizer.pkl
│   ├── label_encoder.pkl
│   ├── image_label_encoder.pkl
│
├── requirements.txt          # Python dependencies
├── README.md                 # Project documentation
├── .gitignore                # Ignore cache & environment files

🚀 Getting Started

1️⃣ Install Dependencies

pip install -r requirements.txt

2️⃣ Run the Application

python src/app.py

3️⃣ Provide Input

Depending on which inputs you supply:

  • Upload an image → get the predicted facial emotion
  • Type text → get the predicted text emotion
  • Provide both → get a combined multimodal result (a possible fusion approach is sketched below)
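
How the two modalities are combined is not documented here, so the sketch below shows one plausible approach: a simple late fusion that averages the class-probability vectors produced by the two models. The fuse_predictions helper, the label ordering, and the equal weighting are all assumptions for illustration, not the project's actual fusion logic.

import numpy as np

def fuse_predictions(text_probs: np.ndarray,
                     image_probs: np.ndarray,
                     labels: list[str],
                     text_weight: float = 0.5) -> str:
    """Late fusion: weighted average of the two models' class-probability vectors.

    Assumes both vectors are aligned to the same `labels` ordering,
    which is an assumption for this sketch, not a documented fact.
    """
    combined = text_weight * text_probs + (1.0 - text_weight) * image_probs
    return labels[int(np.argmax(combined))]

# Hypothetical usage with six emotion classes:
labels = ["Happy", "Sad", "Angry", "Fear", "Neutral", "Surprise"]
text_probs = np.array([0.70, 0.05, 0.05, 0.05, 0.10, 0.05])
image_probs = np.array([0.50, 0.10, 0.10, 0.05, 0.15, 0.10])
print(fuse_predictions(text_probs, image_probs, labels))  # -> "Happy"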

🧠 Model Details

Text Emotion Model

  • Trained on a labeled emotional text dataset
  • Tokenizer + sequence padding for preprocessing
  • Uses LSTM/RNN layers
  • Outputs class probabilities (an illustrative architecture is sketched below)
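
A minimal Keras model matching this description (padded token sequences into an Embedding layer, an LSTM, and a softmax output) might look like the sketch below. The vocabulary size, sequence length, layer widths, and class count are placeholder assumptions, not values read from text_emotion_model.h5.

from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 10000   # assumed tokenizer vocabulary size
MAX_LEN = 100        # assumed padded sequence length
NUM_CLASSES = 6      # assumed number of emotion labels

model = keras.Sequential([
    keras.Input(shape=(MAX_LEN,)),                       # padded token-id sequences
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=128),
    layers.LSTM(64),                                      # recurrent layer over the sequence
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),      # class probabilities
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()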

Image Emotion Model

  • CNN trained on a facial expression dataset (an illustrative architecture is sketched below)
  • Classifies emotions such as:
    • Happy
    • Sad
    • Angry
    • Fear
    • Neutral
    • Surprise
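
For reference, a representative CNN of this kind is sketched below, assuming 48×48 grayscale face crops (the format used by common facial-expression datasets) and the six classes listed above. The input size and layer configuration are assumptions, not values read from image_emotion_model.h5.

from tensorflow import keras
from tensorflow.keras import layers

EMOTIONS = ["Happy", "Sad", "Angry", "Fear", "Neutral", "Surprise"]

model = keras.Sequential([
    keras.Input(shape=(48, 48, 1)),                  # assumed grayscale face crops
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(128, activation="relu"),
    layers.Dense(len(EMOTIONS), activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()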

🎥 Future Enhancements

  • Add a full web dashboard
  • Add a chatbot-style interface
  • Improve facial pre-processing
  • Add audio emotion detection
  • Deploy on Hugging Face or Streamlit Cloud

🧩 Use Cases

  • Mental health tracking
  • Student emotional monitoring
  • Wellness apps
  • AI therapy assistants
  • Real-time emotion-based recommendations

⭐ If You Like This Project

Please star ⭐ the repository — it helps others discover the project.

