
AutoNoteMind 🧠📝

Transform YouTube videos into beautiful, structured notes with AI-powered analysis and mind mapping.

AutoNoteMind is a modern web application that converts YouTube video content into comprehensive study notes, complete with AI-generated summaries, keyword extraction, and interactive mind maps.

✨ Features

Core Features

  • 📺 YouTube Integration: Extract transcripts from any YouTube video that has captions
  • 🤖 AI-Powered Notes: Generate structured notes using GPT-4
  • 🎨 Multiple Note Styles: Academic, casual, bullet points, or detailed formats
  • 📊 Mind Maps: Interactive visual representations of content
  • 📑 Export Options: Download as PDF, Markdown, or plain text
  • 🔍 Smart Summaries: Concise overviews of video content
  • 🏷️ Keyword Extraction: Automatic identification of key terms

Advanced Features

  • 🌐 Multi-language Support: Process videos in multiple languages
  • ⏱️ Timestamp Navigation: Jump to specific video sections
  • 💾 Session Storage: Keep your notes during the session
  • 📱 Responsive Design: Works seamlessly on all devices
  • 🎯 Clean UI: Modern, intuitive interface built with React and Tailwind

🚀 Quick Start

Prerequisites

  • Python 3.8+
  • Node.js 16+
  • OpenAI API key

Installation

  1. Clone the repository

    git clone https://github.com/yourusername/autonotemind.git
    cd autonotemind
  2. Set up the backend

    cd backend
    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
    pip install -r requirements.txt
  3. Set up the frontend

    cd ../frontend
    npm install
  4. Configure environment variables. Create a .env file in the backend directory:

    OPENAI_API_KEY=your_openai_api_key_here
    CORS_ORIGINS=http://localhost:3000
  5. Run the application

    Backend (Terminal 1):

    cd backend/app
    uvicorn main:app --reload --port 8000

    Frontend (Terminal 2):

    cd frontend
    npm start
  6. Open your browser and navigate to http://localhost:3000

πŸ—οΈ Project Structure

AutoNoteMind/
├── frontend/                 # React + Tailwind app
│   ├── public/
│   ├── src/
│   │   ├── components/
│   │   │   ├── Header.jsx
│   │   │   ├── Footer.jsx
│   │   │   └── MindMapView.jsx
│   │   ├── pages/
│   │   │   ├── Home.jsx
│   │   │   └── Notes.jsx
│   │   └── App.jsx
│   └── package.json
│
├── backend/                  # Python FastAPI
│   ├── app/
│   │   ├── main.py          # API routes and server
│   │   ├── gpt_notes.py     # AI note generation
│   │   └── transcript.py    # YouTube transcript extraction
│   └── requirements.txt
│
├── .vscode/                 # VS Code settings
│   └── settings.json
├── README.md
├── .gitignore
└── deploy/                  # Deployment configs
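
For orientation, transcript extraction in backend/app/transcript.py can be approximated with youtube-transcript-api roughly as follows. This is an illustrative sketch, not the repository's exact code, and extract_video_id is a hypothetical helper:

# Minimal sketch of transcript extraction (illustrative, not the repo's exact code).
import re
from youtube_transcript_api import YouTubeTranscriptApi

def extract_video_id(url: str) -> str:
    # Hypothetical helper: pull the 11-character video ID out of a watch URL.
    match = re.search(r"(?:v=|youtu\.be/)([A-Za-z0-9_-]{11})", url)
    if not match:
        raise ValueError("Could not find a video ID in the URL")
    return match.group(1)

def get_transcript(video_url: str, language: str = "en") -> str:
    video_id = extract_video_id(video_url)
    # Returns a list of {"text", "start", "duration"} segments.
    segments = YouTubeTranscriptApi.get_transcript(video_id, languages=[language])
    return " ".join(segment["text"] for segment in segments)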

🔧 API Endpoints

Extract Transcript

POST /extract-transcript
Content-Type: application/json

{
  "video_url": "https://www.youtube.com/watch?v=VIDEO_ID",
  "language": "en"
}

Generate Notes

POST /generate-notes
Content-Type: application/json

{
  "transcript": "video transcript text...",
  "style": "academic"
}

Get Available Languages

POST /available-languages
Content-Type: application/json

{
  "video_url": "https://www.youtube.com/watch?v=VIDEO_ID"
}
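
For reference, here is a minimal client-side flow against these endpoints, assuming the backend is running locally on port 8000. The response schemas are not documented here, so the JSON is simply printed:

# Illustrative end-to-end call: extract a transcript, then generate notes from it.
import requests

API = "http://localhost:8000"

transcript_resp = requests.post(
    f"{API}/extract-transcript",
    json={"video_url": "https://www.youtube.com/watch?v=VIDEO_ID", "language": "en"},
)
transcript_resp.raise_for_status()
print(transcript_resp.json())

notes_resp = requests.post(
    f"{API}/generate-notes",
    json={"transcript": "video transcript text...", "style": "academic"},
)
notes_resp.raise_for_status()
print(notes_resp.json())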

🎨 Note Styles

Choose from different note-taking styles:

  • Academic: Formal, detailed notes with scholarly language
  • Casual: Easy-to-read notes with conversational tone
  • Bullet: Concise bullet points and key takeaways
  • Detailed: Comprehensive notes with examples and context
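
Under the hood, the style value presumably selects a different instruction for the model. A hypothetical mapping, not the actual prompts used in gpt_notes.py:

# Hypothetical style-to-instruction mapping; the real prompts in gpt_notes.py may differ.
STYLE_PROMPTS = {
    "academic": "Write formal, detailed study notes with scholarly language.",
    "casual": "Write easy-to-read notes in a conversational tone.",
    "bullet": "Write concise bullet points covering the key takeaways.",
    "detailed": "Write comprehensive notes with examples and context.",
}

def build_prompt(transcript: str, style: str) -> str:
    instruction = STYLE_PROMPTS.get(style, STYLE_PROMPTS["detailed"])
    return f"{instruction}\n\nTranscript:\n{transcript}"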

🧠 Mind Map Features

  • Interactive node-based visualization
  • Hierarchical content organization
  • Zoom and pan functionality
  • Export as image or data
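
The mind map is driven by hierarchical data; below is a purely illustrative node layout that a view like MindMapView.jsx could render. The real payload shape may differ:

# Illustrative hierarchical mind map payload; the actual structure may differ.
mind_map = {
    "label": "Video topic",
    "children": [
        {"label": "Key concept 1", "children": [{"label": "Supporting detail"}]},
        {"label": "Key concept 2", "children": []},
    ],
}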

🔐 Environment Variables

Backend (.env)

OPENAI_API_KEY=your_openai_api_key
CORS_ORIGINS=http://localhost:3000,https://yourdomain.com
DEBUG=true
LOG_LEVEL=info
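
As a sketch of how the backend might consume CORS_ORIGINS with FastAPI's CORSMiddleware (not necessarily how main.py is written):

# Sketch: reading CORS_ORIGINS and registering FastAPI's CORS middleware.
import os
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
origins = os.getenv("CORS_ORIGINS", "http://localhost:3000").split(",")
app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_methods=["*"],
    allow_headers=["*"],
)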

Frontend (.env)

REACT_APP_API_URL=http://localhost:8000
REACT_APP_VERSION=1.0.0

📦 Dependencies

Backend

  • FastAPI - Modern web framework
  • OpenAI - AI note generation
  • youtube-transcript-api - YouTube transcript extraction
  • python-multipart - File upload support
  • uvicorn - ASGI server

Frontend

  • React 18 - UI framework
  • Tailwind CSS - Styling
  • Axios - HTTP client
  • React Router - Navigation
  • Lucide React - Icons

🚀 Deployment

Using Render (Recommended)

  1. Backend: Deploy to Render as a Web Service

    • Build Command: pip install -r requirements.txt
    • Start Command: uvicorn app.main:app --host 0.0.0.0 --port $PORT
  2. Frontend: Deploy to Vercel or Netlify

    • Build Command: npm run build
    • Publish Directory: build

Using Docker

# Build and run with Docker Compose
docker-compose up --build

🔍 Development

Code Formatting

# Python (Black)
black backend/app/

# JavaScript (Prettier)
npm run format

Testing

# Backend tests
cd backend
pytest

# Frontend tests
cd frontend
npm test

Linting

# Python
flake8 backend/app/

# JavaScript
npm run lint

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📋 TODO

  • User authentication and accounts
  • Note history and saved notes
  • Export to Notion/Obsidian
  • Video timestamp synchronization
  • Batch processing for playlists
  • Custom AI prompts
  • LaTeX equation rendering
  • Collaborative note sharing
  • Mobile app version
  • Offline mode support

πŸ› Troubleshooting

Common Issues

1. Transcript not found

  • Ensure the video has captions/subtitles enabled
  • Try videos with auto-generated captions
  • Check if the video is publicly accessible

2. API Rate Limits

  • OpenAI API has usage limits
  • Implement retry logic with exponential backoff
  • Consider using different models for different features
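
A generic retry helper with exponential backoff, as suggested above (illustrative; wrap whatever function performs the OpenAI call):

# Sketch: retry a callable with exponential backoff on transient failures.
import time

def with_backoff(func, max_retries: int = 5, base_delay: float = 1.0):
    for attempt in range(max_retries):
        try:
            return func()
        except Exception:
            # In practice, catch the specific rate-limit/timeout exceptions you expect.
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))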

3. Large Video Processing

  • Videos longer than 2 hours may hit token limits
  • Consider chunking long transcripts
  • Implement progress indicators
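
A simple word-based chunking sketch for long transcripts (token-aware splitting would be more precise, but this shows the idea):

# Sketch: split a long transcript into roughly fixed-size chunks for separate AI calls.
def chunk_transcript(transcript: str, max_words: int = 2000):
    words = transcript.split()
    for start in range(0, len(words), max_words):
        yield " ".join(words[start:start + max_words])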

4. CORS Issues

  • Ensure CORS_ORIGINS is properly configured
  • Check that frontend and backend URLs match

📊 Performance Tips

  • Use caching for repeated video processing
  • Implement request queuing for multiple simultaneous requests
  • Consider using lighter models for real-time features
  • Optimize transcript chunking for better AI processing
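
For the caching tip, a minimal in-memory cache keyed by video URL and language (a sketch; production deployments would more likely use Redis, as noted under Scaling):

# Sketch: naive in-memory cache so repeated requests for the same video skip re-extraction.
_transcript_cache = {}

def get_transcript_cached(video_url: str, language: str = "en") -> str:
    key = (video_url, language)
    if key not in _transcript_cache:
        # get_transcript here refers to the extraction sketch shown earlier in this README.
        _transcript_cache[key] = get_transcript(video_url, language)
    return _transcript_cache[key]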

🔒 Security Considerations

  • Never commit API keys to version control
  • Use environment variables for all sensitive data
  • Implement rate limiting on API endpoints
  • Validate all user inputs
  • Use HTTPS in production
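
For input validation, a sketch that rejects non-YouTube URLs at the API boundary (illustrative; the actual request models in main.py may differ):

# Sketch: validate that a submitted URL actually points at a YouTube video.
import re
from fastapi import HTTPException

YOUTUBE_URL = re.compile(r"^https?://(www\.)?(youtube\.com/watch\?v=|youtu\.be/)[A-Za-z0-9_-]{11}")

def require_youtube_url(video_url: str) -> str:
    if not YOUTUBE_URL.match(video_url):
        raise HTTPException(status_code=400, detail="Not a valid YouTube video URL")
    return video_url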

📈 Scaling

For production deployment:

  1. Database: Add PostgreSQL/MongoDB for user data
  2. Caching: Implement Redis for session management
  3. Queue: Use Celery for background processing
  4. CDN: Serve static assets via CDN
  5. Monitoring: Add logging and error tracking
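
A Celery task stub matching the queue suggestion above (a sketch, assuming Redis as the broker; the task body would call the existing note-generation code):

# Sketch: move note generation into a Celery background task backed by Redis.
from celery import Celery

celery_app = Celery("autonotemind", broker="redis://localhost:6379/0")

@celery_app.task
def generate_notes_task(transcript: str, style: str) -> str:
    # Call the existing note-generation logic here (e.g., gpt_notes).
    ...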

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • OpenAI for GPT-4 API
  • YouTube Transcript API contributors
  • React and FastAPI communities
  • Tailwind CSS team

📞 Support


Made with ❤️ by the AutoNoteMind team
