Research: Centralising Signals Integration onto Minerva
| Field | Value |
|---|---|
| Type | Research |
| Status | Active |
| Author | xRED Dev Team |
| Created | 2026-04-08 |
| Related ADR | ADR-005, ADR-006, ADR-009 |
| Related Research | Signals Python SDK, DataRiver Facade |
1. Problem Statement
The current architecture for programmatic Signals API access is fragmented across multiple platforms and teams:
```
AutoLab (Camunda)
  → AAB Signals (LabEx)
  → signals-python-sdk (LabEx)
  → MuleSoft (DataRiver)
  → Signals ELN (Revvity)
```
Pain points:
- MuleSoft certificates are stored in AutoLab’s Vault — xRED cannot rotate them self-service and must ask the AutoLab platform team
- Environment mappings (AutoLab DEV/QA/STG/PROD → Signals tenants) are managed by AutoLab, not xRED
- SDK and AAB are maintained by LabEx — xRED is a consumer with limited influence over priorities (e.g. only 2 of 9 templates hardcoded, no table write operations)
- Rate limit is 300 calls/hour at the MuleSoft layer — very restrictive for interactive use cases
- No caching — every call goes through the full chain, consuming the shared tenant quota
- Python 3.9 lock on the AAB, Pydantic v1 on the SDK — technology constraints imposed by upstream dependencies
- Camunda dependency — workflow orchestration is tied to AutoLab’s Camunda instance, even for simple operations that don’t need a workflow at all
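To make the shared 300 calls/hour budget concrete, here is a minimal client-side guard over a sliding one-hour window. This is illustrative only — the real limit is enforced server-side at the MuleSoft layer — but it shows how quickly a shared quota is exhausted (e.g. ten users polling an experiment every two minutes consume the entire hourly budget).

```python
import time
from collections import deque


class HourlyRateGuard:
    """Client-side guard for a shared call budget (sketch).

    Illustrative only: the actual 300 calls/hour limit is enforced
    at the MuleSoft layer; this just models the sliding window.
    """

    def __init__(self, limit: int = 300, window_s: float = 3600.0):
        self.limit = limit
        self.window_s = window_s
        self._calls: deque = deque()  # timestamps of recent calls

    def try_acquire(self, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self._calls and now - self._calls[0] >= self.window_s:
            self._calls.popleft()
        if len(self._calls) >= self.limit:
            return False  # budget exhausted — caller must wait or serve from cache
        self._calls.append(now)
        return True
```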
2. What Would Centralising Mean?
Move the Signals API integration layer onto xRED’s Minerva platform, giving the team direct control over credentials, environments, caching, and the API surface exposed to consumers.
Current state
```
AutoLab Camunda → AAB (LabEx) → SDK (LabEx) → MuleSoft → Signals
        ↕
  AutoLab Vault
  (certs, client secrets)
```
Target state
```
Any consumer (AutoLab, xRED apps, Jobs)
        ↓
xRED Signals Service (Minerva)
 ├── credentials in xRED Vault
 ├── response cache (ElastiCache)
 └── environment config (xRED-managed)
        ↓
MuleSoft (DataRiver) → Signals ELN
```
3. Options
Option A: Thin API wrapper on Minerva
Deploy a FastAPI service on Minerva that wraps the MuleSoft Experiments API. This service becomes the single point of contact for all xRED Signals API operations.
What it does:
- Holds MuleSoft credentials (client ID, secret, certificate) in xRED’s Vault
- Manages environment configuration (which MuleSoft URL for which env)
- Implements response caching (ADR-009)
- Exposes a clean API to consumers (AutoLab AAB, xRED Apps, Jobs)
- Handles token lifecycle (user Signals tokens)
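A minimal sketch of the wrapper's core responsibilities — environment resolution, credential injection, and user-token pass-through — stripped of the FastAPI layer so it stays dependency-free. All names here (`SignalsGateway`, the environment URLs, the header names) are hypothetical placeholders, not the actual MuleSoft contract.

```python
from dataclasses import dataclass

# Hypothetical environment → MuleSoft endpoint mapping, xRED-managed
# rather than AutoLab-managed (pain point in §1). URLs are placeholders.
ENV_URLS = {
    "dev": "https://mulesoft-dev.example.com",
    "qa": "https://mulesoft-qa.example.com",
    "stg": "https://mulesoft-stg.example.com",
    "prod": "https://mulesoft.example.com",
}


@dataclass
class MuleSoftCredentials:
    client_id: str
    client_secret: str  # in practice, fetched from xRED's Vault at startup


class SignalsGateway:
    """Single point of contact for Signals operations (Option A).

    Wraps environment resolution and credential injection; the real
    service would expose these methods as FastAPI routes.
    """

    def __init__(self, env: str, creds: MuleSoftCredentials):
        if env not in ENV_URLS:
            raise ValueError(f"unknown environment: {env}")
        self.base_url = ENV_URLS[env]
        self.creds = creds

    def build_request(self, path: str, user_token: str) -> dict:
        # The Signals Bearer token is user-scoped and passed through
        # unchanged; the MuleSoft client credentials belong to the service.
        return {
            "url": f"{self.base_url}{path}",
            "headers": {
                "Authorization": f"Bearer {user_token}",
                "client_id": self.creds.client_id,
                "client_secret": self.creds.client_secret,
            },
        }
```

Keeping credentials and environment mapping inside one class like this is what makes self-service cert rotation a one-place change.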
Pros:
- xRED controls credentials, caching, and environment config
- Single place to rotate MuleSoft certs
- Caching reduces calls to MuleSoft (300/hour limit) and Signals (1,000/min quota)
- Can expose a nicer API than the auto-generated SDK (better method names, Pydantic v2)
- AutoLab AAB becomes a thin client calling xRED’s service instead of MuleSoft directly
- Can add operations not in the SDK (e.g. property updates via LEM Proxy)
- Python version and dependencies under xRED’s control
Cons:
- Another service to maintain and operate
- Additional network hop (consumer → xRED service → MuleSoft → Signals)
- Need to handle user token pass-through (the Signals Bearer token is user-scoped)
- AutoLab team needs to change AAB to call xRED’s service instead of MuleSoft
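The user-scoped token has a subtle consequence for the response cache: cached responses can differ per user, so cache keys must include the user identity or one user could be served another's data. A sketch of such a key derivation (the key layout is an assumption, not ADR-009's actual scheme):

```python
import hashlib


def cache_key(user_id: str, method: str, path: str, params: dict) -> str:
    """Derive a response-cache key that is safe for user-scoped data.

    Sorting the query parameters makes the key order-independent;
    including user_id prevents cross-user cache leakage.
    """
    canonical = "&".join(f"{k}={params[k]}" for k in sorted(params))
    raw = f"{user_id}:{method}:{path}?{canonical}"
    return hashlib.sha256(raw.encode()).hexdigest()
```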
Option B: Fork the SDK into xRED
Fork signals-python-sdk into the xRED monorepo. Maintain it as a local dependency.
Pros:
- Full control over the code, dependencies, Python version
- Can add custom methods, caching wrappers, better error handling
- No new service to deploy — it’s a library
Cons:
- Still need MuleSoft certs somewhere (either in every consumer or in a shared secret)
- No centralised caching — each consumer caches independently (or doesn’t)
- Must track upstream SDK changes and merge them
- AutoLab AAB would need to switch to the forked SDK or remain on LabEx’s version
- Doesn’t solve the credential management problem
Option C: Build a custom Signals client (no SDK)
Write a purpose-built Signals API client using httpx that calls MuleSoft directly,
without the auto-generated SDK.
Pros:
- Clean, hand-written code with proper method names
- Pydantic v2, modern Python, async support
- Only implement the operations xRED actually needs
- Can include caching, retry logic, circuit breakers natively
Cons:
- Must maintain the MuleSoft API contract manually (no auto-generation)
- Same credential management issues as Option B
- More upfront development effort
- AutoLab integration still needs addressing separately
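A sketch of what the hand-written client's core could look like. The transport is injectable so the example needs no HTTP library; a real implementation would plug httpx in here (sync or async). The retry policy, error type, and method names are assumptions for illustration.

```python
import time


class TransientAPIError(Exception):
    """A retryable failure (e.g. an HTTP 429 or 503 from MuleSoft)."""


class SignalsClient:
    """Hand-written client core (Option C) with retry and backoff.

    `transport` is any callable (method, path) -> dict, so the real
    HTTP layer (httpx) can be swapped in without touching this logic.
    """

    def __init__(self, transport, max_retries: int = 3,
                 base_delay: float = 0.5, sleep=time.sleep):
        self.transport = transport
        self.max_retries = max_retries
        self.base_delay = base_delay
        self.sleep = sleep  # injectable for tests

    def get_experiment(self, experiment_id: str) -> dict:
        return self._request("GET", f"/experiments/{experiment_id}")

    def _request(self, method: str, path: str) -> dict:
        for attempt in range(self.max_retries + 1):
            try:
                return self.transport(method, path)
            except TransientAPIError:
                if attempt == self.max_retries:
                    raise
                # Exponential backoff: 0.5s, 1s, 2s, ...
                self.sleep(self.base_delay * (2 ** attempt))
```

Because retries and backoff live in one `_request` method, circuit-breaker or rate-guard logic could later be added in the same place without touching the operation methods.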
Option D: Hybrid — Minerva service with custom client
Combine Option A and C: deploy a Minerva service that uses a hand-written client (not the auto-generated SDK) to call MuleSoft.
Pros:
- Best of both: centralised credentials/caching + clean client code
- Can combine DataRiver Facade operations with LEM Proxy operations in one service
- Single API surface for all consumers
- Full control over everything
Cons:
- Most development effort
- Must maintain MuleSoft API contract manually
4. What the Service Would Expose
Based on current and planned xRED use cases:
| Operation | Source | Priority |
|---|---|---|
| Create experiment from template | DataRiver Facade | High |
| Attach files to experiment | DataRiver Facade | High |
| Close experiment | DataRiver Facade | High |
| Get experiment details | DataRiver Facade | High |
| Search experiments | DataRiver Facade | Medium |
| Read/write table rows | DataRiver Facade | Medium |
| Update experiment properties | LEM API Proxy (not in facade) | High |
| Get templates | DataRiver Facade | Medium |
| Sign experiment | LEM API Proxy (not in facade) | Future |
| User/group lookup | LEM API Proxy (not in facade) | Future |
Operations not available in the DataRiver Facade would go through the LEM API Proxy (Route 2 in ADR-005) — the xRED service abstracts this routing from consumers.
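The routing described above could be as simple as a lookup table mirroring the operations list in §4. Operation names here are illustrative, not a proposed API; the point is that consumers call one xRED API and never see which route serves them.

```python
from enum import Enum


class Route(Enum):
    DATARIVER_FACADE = "dataRiver-facade"   # Route 1 in ADR-005
    LEM_PROXY = "lem-api-proxy"             # Route 2 in ADR-005


# Routing table mirroring §4; operation names are illustrative.
OPERATION_ROUTES = {
    "create_experiment": Route.DATARIVER_FACADE,
    "attach_files": Route.DATARIVER_FACADE,
    "close_experiment": Route.DATARIVER_FACADE,
    "get_experiment": Route.DATARIVER_FACADE,
    "search_experiments": Route.DATARIVER_FACADE,
    "table_rows": Route.DATARIVER_FACADE,
    "get_templates": Route.DATARIVER_FACADE,
    "update_properties": Route.LEM_PROXY,   # not in the facade
    "sign_experiment": Route.LEM_PROXY,     # future
    "user_group_lookup": Route.LEM_PROXY,   # future
}


def resolve_route(operation: str) -> Route:
    """Internal routing decision — consumers never see this choice."""
    try:
        return OPERATION_ROUTES[operation]
    except KeyError:
        raise ValueError(f"unsupported operation: {operation}") from None
```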
5. Migration Path
A phased approach to avoid disrupting existing AutoLab workflows:
Phase 1: Deploy the service (parallel)
- Deploy xRED Signals Service on Minerva
- Store MuleSoft creds in xRED’s Vault
- Implement the core operations (create, attach, close, get)
- Add response caching
- xRED Apps and Jobs start using the new service
- AutoLab AAB continues using the SDK directly (unchanged)
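The Phase 1 response cache could follow a simple read-through pattern. The production service would back this with ElastiCache; the in-process version below just shows the pattern that shields the 300 calls/hour MuleSoft limit (TTL values and names are assumptions).

```python
import time


class TTLCache:
    """Minimal read-through response cache (sketch of the ADR-009 idea).

    Fresh entries are served locally; stale or missing entries trigger
    one upstream call (MuleSoft → Signals) and are re-cached.
    """

    def __init__(self, ttl_s: float, clock=time.monotonic):
        self.ttl_s = ttl_s
        self.clock = clock  # injectable for tests
        self._store = {}    # key -> (stored_at, value)

    def get_or_fetch(self, key, fetch):
        now = self.clock()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl_s:
            return hit[1]           # fresh — no upstream call consumed
        value = fetch()             # upstream call against the shared quota
        self._store[key] = (now, value)
        return value
```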
Phase 2: Onboard AutoLab (gradual)
- Work with LabEx to point AAB at xRED’s service instead of MuleSoft directly
- Or: AutoLab calls xRED’s service via its own AAB wrapper
- AutoLab no longer needs MuleSoft certs in its Vault
Phase 3: Extend beyond the facade
- Add LEM Proxy operations (property updates, signing)
- Add operations the facade doesn’t cover
- Build towards the full API surface xRED needs
6. Open Questions
- Does AutoLab / LabEx have plans to centralise their Signals integration independently?
- Would the LEM team grant xRED a dedicated MuleSoft client with higher rate limits if we demonstrate caching reduces the call volume?
- Should this service also handle the External Data Source path (Lookups → internal systems), or only the outbound Signals API path?
- Is the 300 calls/hour MuleSoft limit per client application or per client certificate? If per-app, a new xRED client could get its own budget.
- What is the timeline for DataRiver’s planned expansion (cache, data lake, event streams)? If it’s imminent, building our own cache may be redundant.
7. Recommendation
Option A (thin API wrapper) or Option D (hybrid with custom client) depending on appetite for development effort. The key wins are:
- Credential control — MuleSoft certs in xRED’s Vault, self-service rotation
- Centralised caching — one cache serving all consumers, protecting the rate limit
- Clean API surface — consumers don’t deal with MuleSoft auth or SDK quirks
- Environment management — xRED controls the mapping, not AutoLab
Start with Option A using the existing SDK internally (least effort), migrate to Option D (custom client) if the SDK’s limitations (Pydantic v1, awkward names) become blocking.