ML Service (Python)
Getting started and overview for wordloop-ml
wordloop-ml is the platform's stateless asynchronous execution engine. It processes audio payloads, interfaces securely with external ML APIs (such as AssemblyAI), and normalizes telemetry output.
The service exposes a synchronous REST interface via FastAPI, but most work runs in a custom asynchronous worker that consumes Pub/Sub events (contracts described with AsyncAPI).
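As a hedged sketch of the worker side of this split (the queue, message shape, and `handle_message` step are assumptions, not the service's actual code), a minimal asyncio consumer loop looks like this, with an `asyncio.Queue` standing in for the real Pub/Sub subscription:

```python
import asyncio
import json

async def handle_message(payload: dict) -> dict:
    # Placeholder for the real work (e.g. submitting audio to AssemblyAI).
    return {"id": payload["id"], "status": "processed"}

async def run_worker(queue: asyncio.Queue) -> list[dict]:
    # Pull messages until a None sentinel arrives, processing each in turn.
    results = []
    while True:
        raw = await queue.get()
        if raw is None:  # sentinel: shut the worker down
            break
        results.append(await handle_message(json.loads(raw)))
        queue.task_done()
    return results

async def main() -> list[dict]:
    queue: asyncio.Queue = asyncio.Queue()
    for i in range(3):
        queue.put_nowait(json.dumps({"id": i}))
    queue.put_nowait(None)
    return await run_worker(queue)

if __name__ == "__main__":
    print(asyncio.run(main()))
```

The real worker would subscribe to the broker instead of a local queue, but the shape is the same: decode the event, hand it to a handler, acknowledge it.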
Architecture & Layout
The Python stack follows Clean Architecture principles and targets modern Python (3.12+).
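As an illustration only (these directory names are assumptions, not the repository's actual tree), a Clean Architecture layout for a service like this typically separates delivery, domain, and adapter code:

```
wordloop-ml/
├── app/
│   ├── api/        # FastAPI routers (delivery layer)
│   ├── worker/     # Pub/Sub consumer entry point
│   ├── domain/     # Entities and business rules (no framework imports)
│   ├── services/   # Use cases orchestrating domain logic and adapters
│   └── adapters/   # AssemblyAI client, Core client, telemetry
└── tests/
```

The key rule is that `domain/` depends on nothing outward: FastAPI, Pub/Sub, and API clients stay at the edges.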
Local Development Workflow
The local workflow relies entirely on uv for ultra-fast, predictable dependency management and virtual environments.
- Start Platform Dependencies (boots emulators, the observability dashboard, and the Core Go service)
- Boot the API Server
- Boot the Async Worker (Pub/Sub)
Development Guidelines
- Pydantic Everywhere: Use Pydantic models to strictly serialize, deserialize, and validate I/O boundaries.
- Service Identity & Core Interaction: When writing back to Core, ML must inject the `SERVICE_AUTH_TOKEN` generated via `./dev setup`.
Never bypass interface restrictions, and always consult the ML Architecture Rules before adding new dependencies to a workflow.
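To illustrate the Pydantic guideline, here is a hedged sketch of boundary validation (the model name and fields are assumptions, not the service's actual schema): a payload is parsed into a typed model at the I/O boundary, and anything malformed is rejected before it reaches business logic.

```python
from pydantic import BaseModel, ValidationError

class TranscriptionRequest(BaseModel):
    # Hypothetical fields for an incoming transcription payload.
    audio_url: str
    language_code: str = "en"

def parse_request(raw: dict) -> TranscriptionRequest:
    # Raises pydantic.ValidationError if the payload violates the schema.
    return TranscriptionRequest(**raw)

req = parse_request({"audio_url": "gs://bucket/clip.wav"})
print(req.audio_url, req.language_code)
```

Because the model is the single source of truth for the payload shape, serialization, deserialization, and validation all stay in one place.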
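For the Core-interaction guideline, a minimal sketch of token injection might look like the following. The endpoint path, the Bearer scheme, and the `CORE_BASE_URL` default are assumptions; only `SERVICE_AUTH_TOKEN` and `./dev setup` come from this document.

```python
import json
import os
import urllib.request

# Assumed default; the real base URL would come from service configuration.
CORE_BASE_URL = os.environ.get("CORE_BASE_URL", "http://localhost:8080")

def build_core_request(path: str, payload: dict) -> urllib.request.Request:
    # SERVICE_AUTH_TOKEN is provisioned by ./dev setup; fail loudly if absent.
    token = os.environ["SERVICE_AUTH_TOKEN"]
    return urllib.request.Request(
        url=f"{CORE_BASE_URL}{path}",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Centralizing this in one helper keeps the token out of ad-hoc call sites, so no write to Core can accidentally skip authentication.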