
ML Service (Python)

Getting started and overview for wordloop-ml


wordloop-ml is the platform's stateless asynchronous execution engine. It is responsible for processing audio payloads, interfacing securely with external ML APIs (such as AssemblyAI), and emitting normalized telemetry.

The service exposes a synchronous REST interface via FastAPI, but most work executes in a custom worker that consumes Pub/Sub events.

Architecture & Layout

The Python stack follows Clean Architecture principles and targets modern Python (3.12+).

services/wordloop-ml/
├── src/wordloop/
│   ├── core/domain/         # Pydantic state models (No logic leaks)
│   ├── core/gateways/       # typing.Protocol interface definitions
│   ├── core/services/       # Orchestration workflows
│   ├── entrypoints/         # FastAPI Routes, Pub/Sub Worker Consumers
│   └── providers/           # Concrete external integrations (AssemblyAI, GCP)
├── tests/                   # unit/ and system/
└── pyproject.toml           # `uv` managed dependencies
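The layering above means core/services depends only on the typing.Protocol interfaces in core/gateways, never on a concrete provider. A minimal sketch of that pattern (the names TranscriptionGateway, FakeTranscriber, and run_workflow are hypothetical; the real interfaces live in core/gateways):

```python
from typing import Protocol


class TranscriptionGateway(Protocol):
    """Hypothetical gateway interface; real definitions live in core/gateways."""

    def transcribe(self, audio_uri: str) -> str: ...


class FakeTranscriber:
    """A test double satisfies the Protocol structurally -- no inheritance needed."""

    def transcribe(self, audio_uri: str) -> str:
        return f"transcript for {audio_uri}"


def run_workflow(gateway: TranscriptionGateway, audio_uri: str) -> str:
    # core/services orchestrates against the Protocol, so providers/ can swap
    # AssemblyAI for a fake in tests/ without touching this code.
    return gateway.transcribe(audio_uri).upper()


print(run_workflow(FakeTranscriber(), "gs://bucket/a.wav"))
```

Because Protocol uses structural typing, providers never import anything from core; the dependency arrow always points inward.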

Local Development Workflow

Our Python architecture relies entirely on uv for ultra-fast, predictable dependency management and virtual environments.

  1. Start Platform Dependencies

    ./dev start infra core

    (Boots Emulators, Observability dashboard, and the Core Go service)

  2. Boot the API Server

    cd services/wordloop-ml
    uv run wordloop-api

  3. Boot the Async Worker (Pub/Sub)

    cd services/wordloop-ml
    uv run wordloop-worker
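The `uv run wordloop-api` and `uv run wordloop-worker` commands resolve through console-script entry points declared in pyproject.toml. A sketch of what that section likely looks like (the module paths are illustrative assumptions, not the service's actual entry points):

```toml
[project.scripts]
# Hypothetical entry points; check pyproject.toml for the real module paths.
wordloop-api = "wordloop.entrypoints.api:main"
wordloop-worker = "wordloop.entrypoints.worker:main"
```

Declaring both binaries this way lets `uv` expose them inside the managed virtual environment without any manual activation step.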

Development Guidelines

  • Pydantic Everywhere: Use Pydantic models to strictly serialize, deserialize, and validate I/O boundaries.
  • Service Identity & Core Interaction: When writing back to Core, ML must inject the SERVICE_AUTH_TOKEN generated via ./dev setup.
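The "Pydantic Everywhere" rule means every payload crossing an I/O boundary is parsed into a model before any service logic touches it. A minimal sketch using Pydantic v2 (the AudioJob model and its fields are hypothetical, for illustration only):

```python
from pydantic import BaseModel, Field


class AudioJob(BaseModel):
    """Hypothetical shape of an inbound Pub/Sub payload."""

    recording_id: str
    duration_seconds: float = Field(gt=0)  # reject zero/negative durations
    language: str = "en"


# Validate raw event data at the boundary; invalid input raises ValidationError
# instead of leaking malformed state into core/services.
job = AudioJob.model_validate({"recording_id": "rec-123", "duration_seconds": 4.2})
print(job.language)
```

Keeping validation at the entrypoints means the domain models in core/domain can assume their invariants already hold.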

Never bypass the gateway interfaces. Review the ML Architecture Rules before introducing new dependencies into a workflow.
