# Hannibal Production Deployment Guide

## Frontend Docker Build Command

When deploying the frontend to production (HannibalProd), use the following command with ALL required build arguments:
```bash
ssh HannibalProd "cd /root/Hannibal/frontend && docker build \
  --build-arg NEXT_PUBLIC_API_URL=https://hannibal.forsyt.io/api \
  --build-arg NEXT_PUBLIC_WS_URL=wss://hannibal.forsyt.io/api \
  --build-arg NEXT_PUBLIC_AI_CHAT_URL=https://hannibal.forsyt.io/ai \
  --build-arg NEXT_PUBLIC_VAPID_PUBLIC_KEY=BHv_9XT4i3MaXLdxNRoJT4_EcC3tffV1U_SmySSJRdiwpAKvRVCZZkhDvQR3mKm8H8Gm21DfXlbCZGqPrnJoLn0 \
  --build-arg NEXT_PUBLIC_WEB3AUTH_CLIENT_ID=BDuwvIKfD27u5rw7RxuYoPpdolEWbE4amwitp8ec_CyYSsrye6rXJrUeP3YEvhayrH_7VdSiJDcgWXuArdkvbzs \
  -t hannibal-frontend . && \
  docker stop hannibal-frontend && \
  docker rm hannibal-frontend && \
  docker run -d --name hannibal-frontend --network hannibal_default -p 3000:3000 hannibal-frontend"
```
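These `--build-arg` values only reach `next build` if the frontend Dockerfile declares and exports them. A minimal sketch of that pattern (hypothetical fragment — the actual Dockerfile may differ):

```dockerfile
# Hypothetical Dockerfile fragment: NEXT_PUBLIC_* values are inlined into
# the client bundle at build time, so each build arg must be declared and
# exported as an env var before `next build` runs.
ARG NEXT_PUBLIC_API_URL
ARG NEXT_PUBLIC_WS_URL
ARG NEXT_PUBLIC_AI_CHAT_URL
ARG NEXT_PUBLIC_VAPID_PUBLIC_KEY
ARG NEXT_PUBLIC_WEB3AUTH_CLIENT_ID
ENV NEXT_PUBLIC_API_URL=$NEXT_PUBLIC_API_URL \
    NEXT_PUBLIC_WS_URL=$NEXT_PUBLIC_WS_URL \
    NEXT_PUBLIC_AI_CHAT_URL=$NEXT_PUBLIC_AI_CHAT_URL \
    NEXT_PUBLIC_VAPID_PUBLIC_KEY=$NEXT_PUBLIC_VAPID_PUBLIC_KEY \
    NEXT_PUBLIC_WEB3AUTH_CLIENT_ID=$NEXT_PUBLIC_WEB3AUTH_CLIENT_ID
RUN npm run build
```

Because the values are baked into the client bundle, changing any of them requires a full image rebuild, not just a container restart.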
## Required Build Arguments

| Argument | Value | Description |
|---|---|---|
| `NEXT_PUBLIC_API_URL` | `https://hannibal.forsyt.io/api` | Backend API URL |
| `NEXT_PUBLIC_WS_URL` | `wss://hannibal.forsyt.io/api` | WebSocket URL |
| `NEXT_PUBLIC_AI_CHAT_URL` | `https://hannibal.forsyt.io/ai` | AI Chat service URL (proxied through nginx) |
| `NEXT_PUBLIC_VAPID_PUBLIC_KEY` | `BHv_9XT4i3MaXLdxNRoJT4_EcC3tffV1U_SmySSJRdiwpAKvRVCZZkhDvQR3mKm8H8Gm21DfXlbCZGqPrnJoLn0` | Push notification VAPID key |
| `NEXT_PUBLIC_WEB3AUTH_CLIENT_ID` | `BDuwvIKfD27u5rw7RxuYoPpdolEWbE4amwitp8ec_CyYSsrye6rXJrUeP3YEvhayrH_7VdSiJDcgWXuArdkvbzs` | **CRITICAL**: Web3Auth client ID for authentication |
## ⚠️ IMPORTANT

`NEXT_PUBLIC_WEB3AUTH_CLIENT_ID` is CRITICAL for authentication to work. If this value is missing or incorrect, users will see:

- "Authentication service error"
- "Unable to connect to authentication service"

Always use the exact value above:

`BDuwvIKfD27u5rw7RxuYoPpdolEWbE4amwitp8ec_CyYSsrye6rXJrUeP3YEvhayrH_7VdSiJDcgWXuArdkvbzs`
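A cheap pre-build guard against exactly this failure mode is to check the value before building. A sketch — the length threshold is just a heuristic for catching copy-paste truncation, not a documented format rule:

```shell
#!/bin/sh
# Heuristic guard: fail fast if the Web3Auth client ID looks truncated.
CLIENT_ID="BDuwvIKfD27u5rw7RxuYoPpdolEWbE4amwitp8ec_CyYSsrye6rXJrUeP3YEvhayrH_7VdSiJDcgWXuArdkvbzs"
if [ "${#CLIENT_ID}" -lt 80 ]; then
  echo "ERROR: NEXT_PUBLIC_WEB3AUTH_CLIENT_ID looks truncated (${#CLIENT_ID} chars)" >&2
  exit 1
fi
echo "client id length OK (${#CLIENT_ID} chars)"
```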
## Backend Deployment

The backend is deployed separately. To rebuild and restart:
```bash
ssh HannibalProd "cd /root/Hannibal/backend && \
  docker build -t hannibal-backend . && \
  docker stop hannibal-backend && \
  docker rm hannibal-backend && \
  docker run -d --name hannibal-backend --network hannibal_default \
    --env-file .env \
    -p 3001:3001 \
    hannibal-backend"
```
## Backend Environment Variables

The backend `.env` file must include these sections:

### Core (always required)

| Variable | Description |
|---|---|
| `DATABASE_URL` | PostgreSQL connection string (use `postgres` as host for Docker) |
| `REDIS_URL` | Redis connection string (use `redis` as host for Docker) |
| `JWT_SECRET` | JWT signing secret (min 32 chars) |
| `WEB3AUTH_CLIENT_ID` | Web3Auth client ID for auth verification |
| `CORS_ORIGINS` | Comma-separated allowed origins |
| `FRONTEND_URL` | Frontend URL (e.g. https://hannibal.forsyt.io) |
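For illustration, a `.env` skeleton matching the table above. All credentials are placeholders, and the database name, user, and ports are assumptions, not values taken from this guide:

```env
# Illustrative only — substitute real credentials and secrets.
DATABASE_URL=postgresql://hannibal:CHANGE_ME@postgres:5432/hannibal
REDIS_URL=redis://redis:6379
JWT_SECRET=CHANGE_ME_to_a_random_string_of_at_least_32_chars
# Assumed to match the frontend NEXT_PUBLIC_WEB3AUTH_CLIENT_ID:
WEB3AUTH_CLIENT_ID=BDuwvIKfD27u5rw7RxuYoPpdolEWbE4amwitp8ec_CyYSsrye6rXJrUeP3YEvhayrH_7VdSiJDcgWXuArdkvbzs
CORS_ORIGINS=https://hannibal.forsyt.io
FRONTEND_URL=https://hannibal.forsyt.io
```

Note that `postgres` and `redis` as hostnames only resolve from inside the `hannibal_default` Docker network.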
### AI Agent (Claude-based command assistant)

| Variable | Description |
|---|---|
| `ANTHROPIC_API_KEY` | Anthropic API key for Claude (`sk-ant-...`) |
### Telegram Bot

| Variable | Description |
|---|---|
| `TELEGRAM_BOT_TOKEN` | Telegram Bot API token from @BotFather |
| `TELEGRAM_WEBHOOK_SECRET` | Secret for verifying webhook requests |
| `TELEGRAM_WEBHOOK_URL` | Full webhook URL: `https://hannibal.forsyt.io/api/ai/telegram/webhook` |
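`TELEGRAM_WEBHOOK_SECRET` can be any random string (Telegram accepts 1-256 characters from `A-Za-z0-9_-` for `secret_token`). One way to generate it, assuming `openssl` is available:

```shell
# Generate a 64-char hex secret suitable for Telegram's secret_token.
SECRET="$(openssl rand -hex 32)"
echo "TELEGRAM_WEBHOOK_SECRET=${SECRET}"
```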
Note: After adding the Telegram env vars, you must register the webhook with Telegram:

```bash
curl -F "url=https://hannibal.forsyt.io/api/ai/telegram/webhook" \
  -F "secret_token=YOUR_WEBHOOK_SECRET" \
  https://api.telegram.org/botYOUR_BOT_TOKEN/setWebhook
```

You can confirm the registration afterwards with the Bot API's `getWebhookInfo` method.
## Database Migrations

After pulling new code, always run migrations before rebuilding:

```bash
ssh HannibalProd "docker exec hannibal-backend npx prisma migrate deploy"
```

If the backend container isn't running yet, run migrations from a temporary container:

```bash
ssh HannibalProd "cd /root/Hannibal/backend && \
  docker run --rm --network hannibal_default \
    --env-file .env \
    hannibal-backend npx prisma migrate deploy"
```
## Full Deployment Workflow

1. Commit and push changes:

   ```bash
   git add -A && git commit -m "Your commit message" && git push origin main
   ```

2. Pull on the production server:

   ```bash
   ssh HannibalProd "cd /root/Hannibal && git pull origin main"
   ```

3. Update the backend `.env` if needed (new env vars for AI/Telegram/etc.)

4. Build and deploy the backend:

   ```bash
   ssh HannibalProd "cd /root/Hannibal/backend && \
     docker build -t hannibal-backend . && \
     docker stop hannibal-backend && docker rm hannibal-backend && \
     docker run -d --name hannibal-backend --network hannibal_default \
       --env-file .env -p 3001:3001 hannibal-backend"
   ```

5. Run database migrations (if any):

   ```bash
   ssh HannibalProd "docker exec hannibal-backend npx prisma migrate deploy"
   ```

6. Build and deploy the frontend (use the frontend docker build command from above).

7. Verify the deployment:
   - Check the site loads: https://hannibal.forsyt.io
   - Check authentication works (login with Google)
   - Check API health: `curl https://hannibal.forsyt.io/api/health`
   - Check the AI command endpoint: `POST /api/ai/command` responds
   - Check the Telegram webhook: `GET /api/ai/telegram/status` responds
## API Routes (AI & Telegram)
| Method | Endpoint | Auth | Description |
|---|---|---|---|
| POST | /api/ai/command | Yes | Send message to AI command assistant |
| POST | /api/ai/command/confirm | Yes | Confirm a pending action (bet, points) |
| POST | /api/ai/command/cancel | Yes | Cancel a pending action |
| POST | /api/ai/telegram/webhook | Webhook secret | Telegram webhook handler |
| POST | /api/ai/telegram/link | Yes | Generate Telegram account link code |
| DELETE | /api/ai/telegram/link | Yes | Unlink Telegram account |
| GET | /api/ai/telegram/status | Yes | Check Telegram link status |
## Docker Network

All containers run on the `hannibal_default` network:

- `hannibal-frontend`: Next.js frontend (port 3000)
- `hannibal-backend`: Node.js backend (port 3001)
- `hannibal-postgres`: PostgreSQL database
- `hannibal-redis`: Redis cache
- `hannibal-ai-chat`: AI Chat service (port 8000)
## Nginx Configuration

Nginx proxies requests as follows:

- `/` → frontend (port 3000)
- `/api/` → backend (port 3001); includes AI command & Telegram webhook routes
- `/ws` → backend WebSocket (port 3001)
- `/ai/` → AI chat service (port 8000)
The AI chat location requires SSE streaming settings:

```nginx
location /ai/ {
    proxy_buffering off;
    proxy_cache off;
    chunked_transfer_encoding on;
    add_header X-Accel-Buffering no;
    proxy_pass http://127.0.0.1:8000/;
    proxy_http_version 1.1;
    proxy_read_timeout 300;
}
```
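For reference, the full proxy map above might look like this as one server block. This is a sketch only: TLS directives and any extra headers your setup needs are assumed to be handled by your existing config, and the exact header set is an assumption, not taken from this guide:

```nginx
server {
    listen 443 ssl;
    server_name hannibal.forsyt.io;
    # ssl_certificate / ssl_certificate_key omitted

    # Next.js frontend
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }

    # Backend API (includes /api/ai/* command & Telegram webhook routes)
    location /api/ {
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }

    # Backend WebSocket
    location /ws {
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # /ai/ is the SSE-aware location shown earlier in this section.
}
```

Note the `proxy_pass` targets here omit a trailing URI so the original request path (including the `/api/` prefix) is forwarded unchanged, unlike the `/ai/` block, which strips its prefix via the trailing `/`.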