A lightweight FastAPI service for serving machine learning classification models — containerized and ready for local development with S3-compatible storage (MinIO).
- ✅ FastAPI web server with a `/predict` endpoint
- ✅ Loads a serialized ML model (`.pkl`) for live predictions
- ✅ Integrates with S3-compatible storage (MinIO)
- ✅ Local Docker Compose setup for full-stack dev
- ✅ Clean Python package structure (`src/clarion/`)
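The `.pkl` loading that the features describe boils down to a pickle round-trip. Here is a minimal sketch; `DummyModel` and the `model.pkl` filename are illustrative stand-ins, not the project's actual classifier or artifact path:

```python
import pickle

class DummyModel:
    """Stand-in for the real trained classifier (illustrative only)."""
    def predict(self, rows):
        # Toy rule: first-class passengers survive.
        return [1 if row.get("Pclass") == 1 else 0 for row in rows]

# Serialize the model to a .pkl file, the format the service loads.
with open("model.pkl", "wb") as f:
    pickle.dump(DummyModel(), f)

# At startup, the service loads the artifact back and serves predict() calls.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

print(model.predict([{"Pclass": 1}, {"Pclass": 3}]))  # [1, 0]
```

Note that `pickle.load` executes arbitrary code from the file, so only load model artifacts you trust.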
```bash
git clone https://github.com/chishxd/clarion.git
cd clarion

# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install in editable mode
pip install -e .

# Run the API server with hot reload
uvicorn clarion.app:app --reload
```

Run the API server + MinIO together:
```bash
# Make sure you have a .env file with your MinIO credentials:
echo "AWS_ACCESS_KEY_ID=admin" > .env
echo "AWS_SECRET_ACCESS_KEY=root@123" >> .env

# Then build and run the containers
docker compose up --build -d
```

- Clarion API: http://localhost:80
- MinIO Console: http://localhost:9001
```bash
curl -X POST "http://localhost:80/predict" \
  -H "Content-Type: application/json" \
  -d '{"PassengerId": 123, "Pclass": 1, ...}'
```

Example response:

```json
{
  "Survived": 1
}
```

Run unit tests with pytest:

```bash
pytest
```

- Local FastAPI prediction endpoint
- Model versioning & download from MinIO
- Docker Compose setup
- Production cloud deployment (On hold — budget-dependent)
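The pytest suite invoked above can contain tests along these lines; `predict_survival` is a hypothetical helper standing in for the real model call, not the project's actual API:

```python
# test_model.py -- a sketch of unit tests pytest would collect.

def predict_survival(features: dict) -> int:
    """Stand-in for the real model call (illustrative only)."""
    return 1 if features.get("Pclass") == 1 else 0

def test_first_class_survives():
    assert predict_survival({"PassengerId": 123, "Pclass": 1}) == 1

def test_third_class_does_not():
    assert predict_survival({"PassengerId": 456, "Pclass": 3}) == 0
```

Keeping the prediction logic callable as a plain function makes it testable without spinning up the FastAPI server or MinIO.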
Distributed under the MIT License.
Clarion is built as part of the Cloud Computing for Data Science curriculum — and follows up on AutoCleanSE.
- Keep `.env` in `.gitignore` — don’t commit credentials.
- For production, use real secrets + a secure MinIO setup.
- Current version is for local dev & student projects — not hardened for prod yet.