An intelligent Scrum Master assistant built with FastAPI and powered by multiple AI providers (Anthropic Claude, OpenAI GPT, Google Gemini, Groq, Ollama). This RESTful API helps agile teams manage sprints, facilitate standups, conduct retrospectives, and improve team productivity.
- RESTful API: Clean, modern FastAPI-based REST API
- Multi-Provider AI Support: Choose from Anthropic Claude, OpenAI GPT, Google Gemini, Groq, or local Ollama
- AI-Powered Insights: Leverage cutting-edge LLMs for intelligent scrum assistance
- Sprint Management: Create and track sprints with goals and velocity metrics
- Daily Standups: Record and analyze standup updates with blocker detection
- Retrospectives: Conduct retrospectives with AI-generated insights
- Story Estimation: Get AI-assisted story point estimation guidance
- SQLite Database: Simple, file-based persistence
- Buildpack Support: Deploy with Cloud Native Buildpacks or Google Cloud Build
- Interactive Docs: Auto-generated Swagger UI and ReDoc documentation
- Python 3.11+
- API key for your chosen AI provider:
- Anthropic (default): Get API key
- OpenAI: Get API key
- Google Gemini: Get API key
- Groq: Get API key
- Ollama: Install locally (no API key needed)
- Create and activate a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Configure environment:

```bash
cp .env.example .env
```

Edit .env and configure your AI provider:
Option 1: Anthropic Claude (default)

```bash
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your-key-here
MODEL_NAME=claude-sonnet-4-5-20250929
```

Option 2: OpenAI GPT

```bash
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
MODEL_NAME=gpt-4o
```

Option 3: Google Gemini

```bash
LLM_PROVIDER=gemini
GOOGLE_API_KEY=your-key-here
MODEL_NAME=gemini-1.5-pro
```

Option 4: Groq (fast inference)

```bash
LLM_PROVIDER=groq
GROQ_API_KEY=your-key-here
MODEL_NAME=llama-3.1-70b-versatile
```

Option 5: Ollama (local, free)

```bash
LLM_PROVIDER=ollama
MODEL_NAME=llama2
# No API key needed - make sure Ollama is running: ollama serve
```

📖 See docs/LLM_PROVIDERS.md for a detailed configuration guide.
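The LLM_PROVIDER variable selects which backend the agent calls. As a rough illustration of how that switch might look (the real selection logic lives in src/config.py and src/agent/scrum_master.py; the client classes below come from each vendor's official SDK, and the helper itself is hypothetical):

```python
# Hypothetical provider switch -- illustrative only, not the project's actual code.
import os


def make_client(provider: str):
    if provider == "anthropic":
        import anthropic
        return anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    if provider == "openai":
        import openai
        return openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    if provider == "groq":
        import groq
        return groq.Groq(api_key=os.environ["GROQ_API_KEY"])
    # Gemini and Ollama clients omitted for brevity
    raise ValueError(f"Unsupported provider: {provider}")


client = make_client(os.getenv("LLM_PROVIDER", "anthropic"))
```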
- Run the application:

```bash
python main.py
```

The API will start at http://localhost:8000
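To confirm the server is up, you can hit the health endpoint. A minimal check with the `requests` library (an assumption; it is not listed in requirements.txt, so install it first or just use curl):

```python
# Quick smoke test against the running API (assumes the default host and port).
import requests

resp = requests.get("http://localhost:8000/health")
resp.raise_for_status()
print(resp.json())  # health check response with app info
```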
To integrate with Slack:
Option 1: Socket Mode (Recommended for Development)
- No public URL required
- See Slack Setup Guide
Option 2: Webhooks with ngrok (HTTP Mode)
- Use ngrok for local development with webhooks
- See Ngrok + Webhook Setup Guide
Quick start with ngrok:

```bash
# Start FastAPI and ngrok together
./scripts/start-dev.sh

# Or start them separately
# Terminal 1:
python main.py

# Terminal 2:
./scripts/start-ngrok.sh
```

For local development with a PostgreSQL database:
- Start the PostgreSQL database:

```bash
docker-compose up -d
```

- Update the .env file with the PostgreSQL connection string:

```bash
DATABASE_URL=postgresql://scrum_user:scrum_password@localhost:5432/scrum_master
```

- Run the application locally:

```bash
python main.py
```

- Stop the database:

```bash
docker-compose down
```

- Stop and remove volumes (clean database):

```bash
docker-compose down -v
```

The docker-compose setup provides:
- PostgreSQL 16 database with a persistent volume
- Accessible at localhost:5432
- Default credentials: scrum_user / scrum_password
- Database name: scrum_master
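To confirm the container is accepting connections before starting the app, a quick SQLAlchemy check works. This is a sketch using the default credentials above; it assumes a PostgreSQL driver such as psycopg2-binary is installed:

```python
# Verify the local PostgreSQL container accepts connections.
from sqlalchemy import create_engine, text

engine = create_engine(
    "postgresql://scrum_user:scrum_password@localhost:5432/scrum_master"
)
with engine.connect() as conn:
    print(conn.execute(text("SELECT version()")).scalar())
```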
Once running, access the interactive documentation:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
- OpenAPI JSON: http://localhost:8000/openapi.json
ai-agent/
├── src/
│ ├── api/
│ │ └── routes/
│ │ ├── health.py # Health check endpoints
│ │ ├── sprints.py # Sprint CRUD operations
│ │ ├── standups.py # Standup management
│ │ └── retrospectives.py # Retrospective endpoints
│ ├── agent/
│ │ └── scrum_master.py # AI agent logic
│ ├── models/
│ │ ├── base.py # SQLAlchemy base
│ │ ├── sprint.py # Sprint model
│ │ ├── standup.py # Standup model
│ │ └── retrospective.py # Retrospective model
│ ├── schemas/
│ │ ├── sprint.py # Sprint Pydantic schemas
│ │ ├── standup.py # Standup Pydantic schemas
│ │ └── retrospective.py # Retrospective Pydantic schemas
│ ├── storage/
│ │ └── database.py # Database configuration
│ ├── config.py # Application settings
│ └── main.py # FastAPI app factory
├── main.py # Application entry point
├── requirements.txt # Python dependencies
├── Procfile # Process definition for buildpack
├── makefile # Build and deployment commands
├── docker-compose.yml # PostgreSQL for local dev
├── .env.example # Example environment variables
└── README.md # This file
Health:
- `GET /health` - Health check with app info
- `GET /` - Root endpoint with API information

Sprints:
- `POST /api/v1/sprints` - Create a new sprint
- `GET /api/v1/sprints` - List all sprints (with optional filters)
- `GET /api/v1/sprints/{sprint_id}` - Get sprint details
- `PATCH /api/v1/sprints/{sprint_id}` - Update a sprint
- `DELETE /api/v1/sprints/{sprint_id}` - Delete a sprint

Standups:
- `POST /api/v1/standups` - Submit a standup update
- `GET /api/v1/standups` - List standups (filters: user, sprint, blockers, date range)
- `GET /api/v1/standups/{standup_id}` - Get standup details
- `DELETE /api/v1/standups/{standup_id}` - Delete a standup

Retrospectives:
- `POST /api/v1/retrospectives` - Create a retrospective with AI insights
- `GET /api/v1/retrospectives` - List retrospectives (with sprint filter)
- `GET /api/v1/retrospectives/{retro_id}` - Get retrospective details
- `DELETE /api/v1/retrospectives/{retro_id}` - Delete a retrospective
Create a sprint:

```bash
curl -X POST "http://localhost:8000/api/v1/sprints" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Sprint 1",
    "goal": "Implement user authentication",
    "start_date": "2025-11-15T00:00:00",
    "end_date": "2025-11-29T00:00:00",
    "team_capacity": 80,
    "committed_points": 34
  }'
```

Submit a standup update:

```bash
curl -X POST "http://localhost:8000/api/v1/standups" \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user123",
    "user_name": "John Doe",
    "yesterday": "Completed user login API endpoint",
    "today": "Working on password reset functionality",
    "blockers": "Need database migration approval",
    "sprint_id": 1
  }'
```

Create a retrospective:

```bash
curl -X POST "http://localhost:8000/api/v1/retrospectives" \
  -H "Content-Type: application/json" \
  -d '{
    "sprint_id": 1,
    "conducted_by": "Scrum Master",
    "went_well": {
      "items": ["Good team collaboration", "Met sprint goals"]
    },
    "went_wrong": {
      "items": ["Some tasks took longer than estimated"]
    },
    "improvements": {
      "items": ["Better story breakdown", "More frequent code reviews"]
    },
    "action_items": {
      "items": ["Schedule estimation workshop", "Set up automated testing"]
    }
  }'
```

The API will automatically generate AI insights for the retrospective!
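The same calls work from any HTTP client. Here is a short Python version of the sprint-creation request, using the `requests` library (an assumption; it is not listed in requirements.txt):

```python
# Create a sprint via the REST API -- mirrors the curl example above.
import requests

sprint = {
    "name": "Sprint 1",
    "goal": "Implement user authentication",
    "start_date": "2025-11-15T00:00:00",
    "end_date": "2025-11-29T00:00:00",
    "team_capacity": 80,
    "committed_points": 34,
}
resp = requests.post("http://localhost:8000/api/v1/sprints", json=sprint)
resp.raise_for_status()
print(resp.json())
```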
Filter and paginate:

```bash
# Get only active sprints
curl "http://localhost:8000/api/v1/sprints?status_filter=active"

# Get all sprints with pagination
curl "http://localhost:8000/api/v1/sprints?skip=0&limit=10"

# Get standups with blockers
curl "http://localhost:8000/api/v1/standups?has_blockers=true"
```

The Scrum Master AI agent (src/agent/scrum_master.py) provides the following capabilities:
Standup Analysis - analyzes daily standup updates to identify:
- Progress patterns and team focus
- Potential risks or concerns
- Blocker resolution strategies
- Actionable next steps

Sprint Planning - helps with:
- Capacity and velocity planning
- Story prioritization
- Risk and dependency identification
- Sprint goal setting

Retrospective Insights - generates:
- Pattern and theme identification
- Root cause analysis
- Improvement suggestions
- Action item prioritization
- Success metrics recommendations

Story Estimation - provides:
- Story point estimates (Fibonacci scale)
- Complexity factor analysis
- Clarifying questions
- Story breakdown suggestions
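As a rough sketch of how these capabilities might be invoked in code (the class and method names here are hypothetical; check src/agent/scrum_master.py for the actual interface):

```python
# Hypothetical usage -- names are illustrative, not the module's real API.
from src.agent.scrum_master import ScrumMasterAgent  # assumed class name

agent = ScrumMasterAgent()
analysis = agent.analyze_standup(  # assumed method
    yesterday="Completed user login API endpoint",
    today="Working on password reset functionality",
    blockers="Need database migration approval",
)
print(analysis)
```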
All configuration is managed through environment variables in .env:
```bash
# FastAPI Settings
DEBUG=false                           # Enable debug mode
HOST=0.0.0.0                          # Server host
PORT=8000                             # Server port

# AI Configuration
ANTHROPIC_API_KEY=sk-ant-xxx          # Anthropic API key (required when LLM_PROVIDER=anthropic)
MODEL_NAME=claude-sonnet-4-5-20250929 # Claude model to use
MAX_TOKENS=4096                       # Max response tokens
TEMPERATURE=0.7                       # Response creativity (0-1)

# Database
# For local development with SQLite (simple, no setup required)
# DATABASE_URL=sqlite:///./scrum_master.db

# For local development with PostgreSQL (recommended, use docker-compose)
DATABASE_URL=postgresql://scrum_user:scrum_password@localhost:5432/scrum_master

# For production (use environment-specific credentials)
# DATABASE_URL=postgresql://user:password@host:port/database

# Feature Flags
ENABLE_DAILY_STANDUP=true
ENABLE_SPRINT_PLANNING=true
ENABLE_RETROSPECTIVES=true

# CORS
CORS_ORIGINS=*                        # Allowed CORS origins
```
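For reference, a settings class along these lines could load the variables above with pydantic-settings. This is a sketch, not the actual contents of src/config.py:

```python
# Illustrative settings loader -- the real one lives in src/config.py.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    debug: bool = False
    host: str = "0.0.0.0"
    port: int = 8000
    llm_provider: str = "anthropic"
    model_name: str = "claude-sonnet-4-5-20250929"
    max_tokens: int = 4096
    temperature: float = 0.7
    database_url: str = "sqlite:///./scrum_master.db"


settings = Settings()  # reads .env and the process environment
```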
```bash
# Install test dependencies
pip install pytest pytest-asyncio

# Run tests
pytest tests/
```
```bash
# Format code
black src/

# Lint code
flake8 src/

# Type checking
mypy src/
```

The application uses SQLAlchemy and creates tables automatically on startup. For production, consider using Alembic:
```bash
# Initialize Alembic
alembic init alembic

# Create migration
alembic revision --autogenerate -m "Initial migration"

# Apply migration
alembic upgrade head
```

Build and deploy to Google Artifact Registry using Cloud Build:
```bash
# Build and push to artifact registry
make gcloud-build

# Or with full command
gcloud builds submit --async --tag <region>-docker.pkg.dev/<project-id>/<repository>/<image-name>:<version>
```

Build using Paketo buildpacks locally:
```bash
# Build with buildpacks
make pack-load

# Or with full command
pack build <image-name>:<version> --builder paketobuildpacks/builder-jammy-base
```

- Set up a Python 3.11+ environment
- Install dependencies:

```bash
pip install -r requirements.txt
```

- Configure environment variables
- Run with a production server:

```bash
uvicorn src.main:app --host 0.0.0.0 --port $PORT
```
- Use a production database (PostgreSQL recommended)
- Set DEBUG=false
- Configure proper CORS origins
- Use HTTPS/TLS
- Set up monitoring and logging
- Configure rate limiting
- Use environment secrets management (e.g., Google Secret Manager)
- Set up database backups
- Deploy to Cloud Run or Google Kubernetes Engine
- Create a schema in src/schemas/:

```python
from pydantic import BaseModel


class MyFeatureCreate(BaseModel):
    name: str
    description: str
```

- Create a model in src/models/:

```python
from sqlalchemy.orm import Mapped, mapped_column

from src.models.base import Base, TimestampMixin


class MyFeature(Base, TimestampMixin):
    __tablename__ = "my_features"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]
```

- Create a router in src/api/routes/:

```python
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from src.storage.database import get_db
# Import the schema created above (adjust the module name to your file)
from src.schemas.my_feature import MyFeatureCreate

router = APIRouter()


@router.post("/")
async def create_feature(feature: MyFeatureCreate, db: Session = Depends(get_db)):
    # Implementation
    pass
```

- Register the router in src/main.py:

```python
from src.api.routes import my_feature

app.include_router(my_feature.router, prefix="/api/v1/my-features", tags=["MyFeature"])
```
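To sanity-check the new route, FastAPI's TestClient can exercise it in-process. A minimal sketch, assuming the app factory in src/main.py is named create_app (the actual name may differ):

```python
# In-process check of the hypothetical /api/v1/my-features route.
from fastapi.testclient import TestClient

from src.main import create_app  # assumed factory name

client = TestClient(create_app())
resp = client.post(
    "/api/v1/my-features/",
    json={"name": "demo", "description": "just a smoke test"},
)
assert resp.status_code == 200
```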
API doesn't start:
- Check that port 8000 is available
- Verify the Python version (3.11+)
- Ensure all dependencies are installed
Database errors:
- The SQLite database file is created automatically
- Check file permissions in the working directory
- For production, use PostgreSQL with proper connection string
AI responses fail:
- Verify ANTHROPIC_API_KEY (or your chosen provider's key) is set correctly
- Check that the API key has sufficient credits
- Ensure internet connectivity for API calls
Buildpack deployment issues:
- Ensure Python version is specified correctly (3.11+)
- Check that requirements.txt is in the root directory
- Verify the Procfile syntax is correct
- For Google Cloud Build, ensure proper project permissions
See the /docs folder for additional documentation:
- Slack Integration Guide - Complete Slack app setup with Socket Mode
- Ngrok + Webhook Setup - Setup ngrok tunnels for webhook development
- Jira API Integration
- Trello API Integration
- CrewAI Integration
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
MIT License - see LICENSE file for details
For issues and questions:
- Open an issue on GitHub
- Check the API documentation at /docs
- Review the existing documentation in the /docs folder
Built with FastAPI, Claude AI, and SQLAlchemy