# ADR-006: Monolith vs Microservices for Integration Adapters
| Field | Value |
|---|---|
| Status | Open |
| Date | 2026-04-07 |
| Related SAD | SAD-001 |
| Related Research | ORCA Platform |
## Context
The xRED ELN integration layer currently deploys each integration as a separate
application within a monorepo (xred-eln-integrations). Each app in apps/ has its own
FastAPI backend, its own Dockerfile, its own K8s deployment, and its own ingress. This is
effectively a microservices approach — independent deployments per integration.
The ORCA platform team took the opposite path: they started with separate services but
converged on a modular monolith for their External Actions (the signals-fish-integration
app handles FISH, MaxSMR, IRCI Score, and SMART in a single deployable). For External Data
sources, they also consolidated into a single signals-external-sources FastAPI service
serving TaPIR, SMDI, and ChemLot through dynamic router discovery.
As xRED adds more integrations (currently: demo, poc-signals), this decision will increasingly affect developer velocity, operational cost, and architectural complexity.
## Options Under Consideration
### Option A: Microservices (current approach)
Each integration is a standalone FastAPI app with its own deployment.
```
apps/
├── demo/            → deployment + service + ingress
├── poc-signals/     → deployment + service + ingress
├── integration-3/   → deployment + service + ingress
└── integration-4/   → deployment + service + ingress
```

Arguments for:
- Complete isolation — a bug in one integration cannot crash another
- Independent scaling — scale only what needs scaling
- Independent deployment — ship one integration without touching others
- Independent tech choices — an integration could use a different framework if needed
- Simpler mental model per integration — each app is self-contained
- Aligns with the monorepo CI (only changed apps are built/tested)
Arguments against:
- Operational overhead grows linearly — each integration needs its own K8s deployment, service, ingress, ArgoCD application, Gravitee API definition, secrets
- Duplicated boilerplate — auth middleware, health endpoints, error handling, logging configuration, Signals client code repeated across every app
- Resource cost — each pod consumes a base amount of CPU/memory even at idle
- More infrastructure repo YAML to maintain per integration
- Gravitee configuration is click-ops (no IaC) — each new service means manual API setup
- Signals OAuth per service — each app needs its own Client ID and redirect URIs registered with Revvity, its own callback handler, its own token management
### Option B: Modular monolith (ORCA’s approach)
All integrations share a single deployable, with internal modularity via FastAPI routers.
```
app/
├── main.py        → shared app, middleware, health
├── core/          → shared auth, Signals client, error handling
├── integrations/
│   ├── demo/           → router + business logic
│   ├── poc_signals/    → router + business logic
│   ├── integration_3/  → router + business logic
│   └── integration_4/  → router + business logic
└── Dockerfile     → single image
```

Arguments for:
- Single deployment — one K8s deployment, one service, one ingress, one Gravitee API
- Shared infrastructure code — auth, logging, error handling, Signals client written once
- Single Signals OAuth flow — one Client ID, one set of redirect URIs, one callback handler. With microservices, each app needs its own Client ID and redirect URIs registered with Revvity, multiplying the credential management burden
- Lower resource cost — one pod serving all integrations
- Faster onboarding of new integrations — add a router module, not an entire app scaffold
- ORCA validated this approach and chose it deliberately (ADR 0021, 0026)
- Dynamic router discovery (`discover_and_include_routers`) makes it plug-and-play
Arguments against:
- Coupling risk — a bad deployment affects all integrations simultaneously
- Scaling is all-or-nothing — cannot scale one integration independently
- Larger blast radius for failures — memory leak in one router takes down the whole service
- Shared dependency versions — all integrations must agree on the same library versions
- Testing complexity — full test suite runs for any change
### Option C: Hybrid — two monoliths by integration type
Split along the Signals integration mechanism boundary: one service for External Data (Lookups), one for External Actions (Apps).
```
apps/
├── lookups/   → single FastAPI serving all External Lists/Tables/Chemical Sources
│   ├── demo/
│   ├── compound_lookup/
│   └── project_lookup/
└── actions/   → single FastAPI/React app serving all External Actions
    ├── fish_import/
    └── calculation_request/
```

Arguments for:
- Groups integrations by shared infrastructure needs (auth model, Signals contract)
- Lookups share the same Basic Auth → Gravitee path; Actions share dual OAuth + ElastiCache
- Limits blast radius to one integration type
- Two Gravitee API definitions instead of N
- Matches the architectural split in SAD-001 (Lookups vs Apps are fundamentally different)
Arguments against:
- Still some coupling within each monolith
- Requires deciding where edge cases belong (e.g. background jobs)
- Two deployment pipelines to maintain instead of one
## Decision Criteria
| Criterion | Weight | Microservices | Monolith | Hybrid |
|---|---|---|---|---|
| Operational overhead per integration | High | Worst | Best | Good |
| Failure isolation | High | Best | Worst | Good |
| Developer velocity for new integrations | High | Medium | Best | Good |
| Resource efficiency | Medium | Worst | Best | Good |
| Independent scaling | Medium | Best | Worst | Good |
| Code reuse (auth, logging, clients) | Medium | Worst | Best | Good |
| Gravitee config effort | Medium | Worst | Best | Good |
| Deployment independence | Low | Best | Worst | Good |
## Current State
The current approach is Option A (microservices) by default — the monorepo scaffold creates a new app directory per integration. With only 2 integrations deployed, the operational overhead is manageable. The decision becomes more consequential as the number of integrations grows.
## Open Questions
- How many integrations do we realistically expect in the next 12 months?
- Are there integrations that need independent scaling (e.g. high-traffic lookups)?
- How painful is the current Gravitee click-ops setup per new service?
- Would a shared `core/` library within the monorepo (without merging into one app) address the code duplication concern while keeping separate deployments?
- Should background jobs (the Jobs component in SAD-001) follow the same pattern, or remain separate by nature?
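The shared-library question above could be explored with something like the sketch below: a core package (say, under `libs/`) that each app imports instead of copy-pasting. The `SignalsClient` name and its method are hypothetical stand-ins for whatever shared code (auth, logging, Signals access) would be extracted.

```python
# Sketch of the shared-library alternative: one core package imported by
# every app, while each app keeps its own deployment.
from dataclasses import dataclass


@dataclass
class SignalsClient:
    """Thin Signals API wrapper written once and imported by every
    integration app instead of being duplicated into each one."""

    base_url: str
    token: str

    def entity_url(self, entity_id: str) -> str:
        # Shared URL construction; a real client would also issue the HTTP
        # calls and centralize auth headers, retries, and error mapping.
        return f"{self.base_url}/entities/{entity_id}"


# Each app would then depend on the shared package, e.g.:
#   from core.signals import SignalsClient
client = SignalsClient(base_url="https://signals.example.com", token="…")
```

Note this addresses only the code-duplication argument against Option A; the per-service K8s, Gravitee, and OAuth overhead would remain.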
## Decision
Pending. This ADR remains open until the team has more data on integration count and operational friction.