A Formula 1-style sim racing leaderboard web application built with Node.js, Express, and PostgreSQL.
- F1-Themed UI: Modern, responsive design inspired by Formula 1 aesthetics
- Google OAuth Authentication: Secure sign-in with role-based access control
- Driver Management: Add, edit, and delete drivers with profile pictures
- Race Management: Set next race location, date/time with automatic timezone handling
- File Uploads: Profile pictures and circuit images with validation
- Timezone Support: PST input with local timezone display and auto-expiration
- Admin Controls: Restricted access for authorized users only
- Node.js (v14 or higher)
- Docker & Docker Compose
- Google Cloud Console account (for OAuth)
git clone <your-repo-url>
cd sim-leader
npm install
# Copy the example environment file
cp .env.example .env
# Edit the .env file with your actual values
nano .env
Required Environment Variables:
- GOOGLE_CLIENT_ID: Your Google OAuth 2.0 Client ID
- GOOGLE_CLIENT_SECRET: Your Google OAuth 2.0 Client Secret
- SESSION_SECRET: A secure random string for session encryption
- AUTHORIZED_EMAILS: Comma-separated list of admin email addresses
Optional Environment Variables (for production):
- DOMAIN: Your domain name (e.g., yourdomain.com)
- CADDY_EMAIL: Email for Let's Encrypt certificates
- DIGITALOCEAN_API_TOKEN: DigitalOcean API token for the DNS challenge
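For reference, a fully populated .env might look like the following sketch (all values below are placeholders, not working credentials):
# Required
GOOGLE_CLIENT_ID=1234567890-example.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=GOCSPX-example-secret
SESSION_SECRET=replace-with-a-long-random-hex-string
AUTHORIZED_EMAILS=admin@example.com,teammate@example.com
# Optional (production / HTTPS mode only)
DOMAIN=yourdomain.com
CADDY_EMAIL=you@example.com
DIGITALOCEAN_API_TOKEN=dop_v1_example_token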
- Go to Google Cloud Console
- Create a new project or select an existing one
- Enable the Google+ API
- Create OAuth 2.0 credentials:
- Application type: Web application
- Authorized redirect URIs:
http://localhost:3001/auth/google/callback
- Copy the Client ID and Client Secret to your .env file
The application includes a Caddy load balancer with two different configuration files. Choose the appropriate Caddyfile for your deployment mode by editing the docker-compose.yml file.
Caddyfile: config/Caddyfile.http
# In docker-compose.yml, use:
- ./config/Caddyfile.http:/etc/caddy/Caddyfile:ro
- Serves on http://localhost
- No SSL/TLS encryption
- Perfect for local development and testing
- No additional configuration required
Caddyfile: config/Caddyfile.https-production
# In docker-compose.yml, use:
- ./config/Caddyfile.https-production:/etc/caddy/Caddyfile:ro
- Serves on your actual domain with Let's Encrypt certificates
- Automatic SSL/TLS with trusted certificates
- Requires domain, email, and DigitalOcean API token in .env file
- Production-ready with HSTS and security headers
- Edit docker-compose.yml: Uncomment the desired Caddyfile line in the caddy service volumes section
- For production mode: Ensure DOMAIN, CADDY_EMAIL, and DIGITALOCEAN_API_TOKEN are set in your .env file
- Restart services: Run
docker compose down && docker compose up --build -d
# In docker-compose.yml caddy service:
volumes:
# Comment out HTTP mode:
# - ./config/Caddyfile.http:/etc/caddy/Caddyfile:ro
# Uncomment production mode:
- ./config/Caddyfile.https-production:/etc/caddy/Caddyfile:ro
Then restart: docker compose down && docker compose up --build -d
Development (Local):
# Start only the database
docker compose up db -d
# Run the app locally for debugging
npm start
Production (Full Stack with Caddy):
- Configure Caddy mode: Edit docker-compose.yml to use the desired Caddyfile
- For production mode: Ensure DOMAIN, CADDY_EMAIL, and DIGITALOCEAN_API_TOKEN are set in .env
- Build and start:
# Build and start the full stack
docker compose up --build
# Or run in detached mode
docker compose up --build -d
Access Points:
The application is served through a Caddy load balancer for better performance and SSL handling:
- Development Direct: http://localhost:3001 (bypasses load balancer)
- HTTP Mode: http://localhost (via Caddy load balancer)
- HTTPS Local: https://localhost (via Caddy, self-signed cert - browser warning expected)
- HTTPS Production: https://yourdomain.com (via Caddy, Let's Encrypt trusted certificate)
OAuth Configuration Note: When using the load balancer, update your Google OAuth redirect URIs to match your chosen mode (e.g., http://localhost/auth/google/callback for HTTP mode or https://yourdomain.com/auth/google/callback for production).
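As a quick sanity check that the callback route is reachable through the load balancer, you can request it directly; with a typical Passport Google strategy setup it should answer with a redirect toward Google's sign-in page (exact behavior depends on the app's route configuration):
# Expect a 3xx redirect toward accounts.google.com if OAuth is wired up
curl -I http://localhost/auth/google/callback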
If you prefer to run the Node.js app locally:
# Start only the database
docker compose up db -d
# Then run the app locally
npm start
# Generate a secure session secret
node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
Copy the output to SESSION_SECRET in your .env file.
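If you prefer a one-liner, the generated value can be appended straight to .env (this simply adds a new line; check that SESSION_SECRET isn't already defined):
echo "SESSION_SECRET=$(node -e 'console.log(require("crypto").randomBytes(32).toString("hex"))')" >> .env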
For automatic HTTPS with DigitalOcean DNS challenge:
- Get API Token:
  - Go to DigitalOcean API
  - Generate a new token with write access
  - Copy it to DIGITALOCEAN_API_TOKEN in your .env file
- Configure Domain:
  - Set your domain in DOMAIN=yourdomain.com
  - Set your email in CADDY_EMAIL=your@email.com
  - Point your domain's DNS to your server's IP
- DNS Records (in DigitalOcean):
  A   yourdomain.com     -> your_server_ip
  A   www.yourdomain.com -> your_server_ip
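If you manage DNS from the command line, the same records can be created with doctl (shown only as an illustration and assuming doctl is installed and authenticated; the DigitalOcean control panel works just as well):
doctl compute domain records create yourdomain.com --record-type A --record-name @ --record-data your_server_ip
doctl compute domain records create yourdomain.com --record-type A --record-name www --record-data your_server_ip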
- Add new drivers with profile pictures
- Edit existing driver information
- Delete drivers from the leaderboard
- Set next race location and date/time
- Upload circuit images
- View the leaderboard
- See driver profile pictures
- View next race information
- Sign in with Google
sim-leader/
├── src/
│ ├── index.js # Main server file
│ └── models/ # Database models
├── public/
│ ├── index.html # Main UI
│ ├── styles.css # F1-themed styling
│ ├── script.js # Frontend logic
│ └── uploads/ # User-uploaded files (local dev only)
├── config/
│ ├── Caddyfile.http # HTTP mode configuration
│ └── Caddyfile.https-production # HTTPS production mode configuration
├── docker-compose.yml # Multi-service orchestration
├── .env.example # Environment template
└── README.md # This file
The application uses Docker named volumes for persistent data storage:
- uploads_shared: Shared volume between the app and Caddy services for user-uploaded files
  - App container: /usr/src/app/public/uploads (read/write)
  - Caddy container: /var/www/html/uploads (read-only)
  - Ensures file synchronization and eliminates conflicts
- db_data: PostgreSQL database files
- caddy_data: Caddy certificates and configuration data
- caddy_config: Caddy runtime configuration
- caddy_logs: Caddy access and error logs
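To see where these volumes live on the host, you can list and inspect them (the sim-leader_ prefix is an assumption based on the default Compose project name; yours may differ):
# List the project's named volumes
docker volume ls | grep sim-leader
# Show the mountpoint and driver details for the uploads volume
docker volume inspect sim-leader_uploads_shared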
- App ↔ Database: Internal Docker network communication
- Caddy → App: Load balancer proxies requests to app service
- External → Caddy: Public HTTP/HTTPS endpoints
- Backend: Node.js, Express.js, Sequelize ORM
- Database: PostgreSQL
- Load Balancer: Caddy with automatic HTTPS
- Authentication: Passport.js with Google OAuth 2.0
- File Uploads: Multer
- Frontend: Vanilla JavaScript, CSS3
- Styling: F1-inspired responsive design
- Infrastructure: Docker for database and reverse proxy
- SSL/TLS: Let's Encrypt via DigitalOcean DNS challenge
The application includes a comprehensive test suite with unit tests, integration tests, and end-to-end tests.
# Run all tests
npm test
# Run tests with coverage report
npm run test:coverage
# Run tests in watch mode (for development)
npm run test:watch
- Unit Tests: Models, authentication, utilities, database operations
- Integration Tests: API endpoints with mocked dependencies
- End-to-End Tests: Complete workflow scenarios
- Coverage: 81.25% overall
- Jest: Testing framework and test runner
- Supertest: HTTP testing for API endpoints
- SQLite: In-memory database for isolated testing
- Mocking: External dependencies (Passport, database connections)
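Because Jest is the test runner, you can also target a subset of tests directly; the pattern below is illustrative and should be adjusted to the repository's actual test layout:
# Run only tests whose file path matches "models", with coverage
npx jest models --coverage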
For detailed testing information, see Testing Guide.
The project includes comprehensive GitHub Actions workflows for automated testing, code quality checks, and deployment.
Tests Workflow (on Pull Requests and Push to Main):
- ✅ Multi-version testing: Node.js 18.x, 20.x, 22.x
- ✅ Code quality: Biome linting and formatting checks
- ✅ Security audits: npm audit for dependency vulnerabilities
- ✅ Test coverage: Automated test execution with coverage reporting
- ✅ Dependency validation: Check for outdated packages
Deploy Workflow (triggered after successful tests on Main):
- ✅ Docker image build: Multi-architecture (AMD64, ARM64) Docker images
- ✅ Container registry: Published to GitHub Container Registry (ghcr.io)
- ✅ Semantic versioning: Automatic version increments starting from v0.0.1
- ✅ Version management: Automatically updates package.json and commits changes
- ✅ GitHub releases: Automated release creation with Docker commands
- ✅ Image caching: Optimized builds with GitHub Actions cache
After each merge to main, Docker images are automatically built and published with semantic versioning:
# Pull the latest image
docker pull ghcr.io/gitgc/sim-leader:latest
# Pull a specific version
docker pull ghcr.io/gitgc/sim-leader:v0.0.1
# Run the containerized application (latest)
docker run -p 3001:3001 --env-file .env ghcr.io/gitgc/sim-leader:latest
# Run a specific version
docker run -p 3001:3001 --env-file .env ghcr.io/gitgc/sim-leader:v0.0.1
# Linting and formatting
npm run lint # Check code with Biome linter
npm run lint:fix # Fix auto-fixable linting issues
npm run format # Check code formatting
npm run format:fix # Auto-format code
npm run check # Run both linting and formatting checks
npm run check:fix    # Fix both linting and formatting issues
Code Style Configuration:
- Semicolons: Only added when required (ASI-safe)
- Quotes: Single quotes for strings
- Indentation: 2 spaces
- Line Width: 100 characters
The GitHub Actions workflows run these checks automatically on every PR, and changes merged to the main branch are deployed once the checks pass.
- .github/workflows/test.yml: Runs tests, linting, and security checks on every PR and push
- .github/workflows/deploy.yml: Updates the package.json version, then builds and publishes Docker images after successful tests on the main branch
The deployment process automatically:
- Determines the next semantic version (patch increment)
- Updates package.json with the new version
- Commits the version change to the repository
- Builds Docker images with the new version tag
- Creates a GitHub release
This ensures that the package.json version always stays in sync with the deployed Docker images and GitHub releases.
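The version bump the workflow performs is conceptually the same as npm's built-in patch increment, shown here only for illustration (the workflow does this for you; running it locally is not required):
# Bump the patch version in package.json without creating a git tag
npm version patch --no-git-tag-version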
The easiest way to deploy the application is using the pre-built Docker images from GitHub Container Registry. Images are automatically versioned with semantic versioning (starting from v0.0.1):
# Pull the latest image
docker pull ghcr.io/gitgc/sim-leader:latest
# Or pull a specific version for production stability
docker pull ghcr.io/gitgc/sim-leader:v0.0.1
# Create your environment file
cp .env.example .env
# Edit .env with your configuration
# Run with Docker (standalone) - latest
docker run -p 3001:3001 --env-file .env ghcr.io/gitgc/sim-leader:latest
# Run with Docker (standalone) - specific version
docker run -p 3001:3001 --env-file .env ghcr.io/gitgc/sim-leader:v0.0.1
# Or run with docker-compose (recommended)
# Edit docker-compose.yml to use the pre-built image:
# image: ghcr.io/gitgc/sim-leader:v0.0.1 # Use specific version for production
# image: ghcr.io/gitgc/sim-leader:latest # Or use latest for development
# Comment out the 'build: .' line
docker compose up -d
- Cloud Platforms: Deploy directly to AWS, Google Cloud, Azure, or DigitalOcean
- Container Orchestration: Use with Kubernetes, Docker Swarm, or Nomad
- PaaS Providers: Deploy to Heroku, Railway, Render, or similar platforms
- Self-hosted: Run on your own servers with Docker Compose
- Configure environment variables in .env
- Set up Google OAuth credentials
- Configure authorized emails
- Set up PostgreSQL database
- Configure domain and SSL (for production)
- Test the deployment (see the smoke test below)
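A minimal smoke test after deployment might look like this (adjust the URL to the Caddy mode you chose):
# Confirm all containers are up
docker compose ps
# Hit the app directly and through the load balancer
curl -I http://localhost:3001
curl -I https://yourdomain.com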
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
This project is for educational and personal use.
Cannot access the application:
- Verify your chosen mode matches your access URL
- Check that the Caddy container is running: docker compose ps
- For HTTPS modes, ensure CADDY_EMAIL is set in your .env file
OAuth login fails:
- Update Google OAuth redirect URIs to match your load balancer URL
- For HTTP mode: http://localhost/auth/google/callback
- For HTTPS production: https://yourdomain.com/auth/google/callback
SSL certificate issues (HTTPS modes):
- HTTPS Local: Browser warnings are expected with self-signed certificates
- HTTPS Production: Ensure domain points to your server and DigitalOcean DNS is configured
- Check Caddy logs:
docker compose logs caddy
Caddy configuration issues:
- Ensure the correct Caddyfile is mounted in docker-compose.yml
- For production mode, verify DOMAIN, CADDY_EMAIL, and DIGITALOCEAN_API_TOKEN are set in .env
- Check that only one Caddyfile volume mount is uncommented
- Restart services after changing Caddyfile:
docker compose down && docker compose up --build -d
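If Caddy keeps failing after a restart, validating the mounted Caddyfile from inside the container can pinpoint syntax or environment-variable problems (this assumes the standard Caddy image layout used in docker-compose.yml):
# Validate the currently mounted Caddyfile
docker compose exec caddy caddy validate --config /etc/caddy/Caddyfile
# Reload configuration without a full container restart
docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile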
Missing CADDY_EMAIL error:
- Ensure CADDY_EMAIL is set in your .env file for production mode
- Only required when using Caddyfile.https-production
Database connection issues:
- Ensure the PostgreSQL container is running: docker compose ps
- Check database logs: docker compose logs db
- Verify DATABASE_URL in .env matches the docker-compose service name
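You can also test connectivity from inside the database container; the user name below is a placeholder, so substitute the values from your DATABASE_URL:
# List databases from inside the db container (adjust -U to your configured user)
docker compose exec db psql -U postgres -c '\l'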
Migrating from local uploads directory:
If you have existing files in ./public/uploads/, you'll need to copy them to the new shared volume:
# Start the services to create the volume
docker compose up -d
# Copy existing uploads to the shared volume
docker cp ./public/uploads/. $(docker compose ps -q app):/usr/src/app/public/uploads/
# Restart services to ensure proper mounting
docker compose restart
Application won't start:
- Check all containers are running: docker compose ps
- View logs for specific services: docker compose logs [service-name]
- Ensure all required environment variables are set
Profile picture uploads fail:
- Check uploads directory permissions in the shared volume
- Verify the uploads_shared volume is properly mounted
- Ensure the app container has write access to the shared upload volume
- Check app logs for upload-related errors:
docker compose logs app
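A quick way to check the shared volume from inside the app container (paths match the mounts described above):
# Inspect ownership and permissions of the uploads directory
docker compose exec app ls -la /usr/src/app/public/uploads
# Verify the app can write to it (creates and removes a throwaway file)
docker compose exec app sh -c 'touch /usr/src/app/public/uploads/.write-test && rm /usr/src/app/public/uploads/.write-test'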
If you encounter any issues, please check:
- All environment variables are set correctly
- Docker containers are running
- Google OAuth credentials are valid
- Authorized emails are configured properly
- Load balancer mode matches your access method