Architecture

Technical architecture of the Trunx telecom hub including provider abstraction, event system, and data flow.

Overview

Trunx is a Hono-based Node.js application built in TypeScript (strict mode). It uses Drizzle ORM with Postgres for persistence, Redis for events and caching, and BullMQ for background jobs. A single process handles REST API routes, MCP tool serving, SSE event streaming, and job workers.

The API is defined with @hono/zod-openapi, so every route has a typed Zod schema that also generates the OpenAPI spec. The same Zod schemas are reused in MCP tool definitions and Drizzle validation.

System Diagram

[Diagram: Trunx Architecture]

Reading the diagram

  • Solid arrows show request/data flow
  • Dashed arrows show async or background connections
  • Every external call goes through the Provider Layer — route code never touches vendor SDKs
  • The Guardrails layer sits between middleware and services — it cannot be bypassed
  • BullMQ workers are stateless — they read from Postgres, write to Postgres, publish to Redis
  • The Dashboard shares the same Postgres database but connects directly (not through the Trunx API for reads)

Infrastructure

Component        Host                  Role
Trunx Server     5.161.184.212:3000    Hono API + MCP + SSE + BullMQ workers
Asterisk Server  5.161.187.81:8088     ARI WebSocket, SIP trunk → PSTN
Dashboard        5.78.64.96 (Coolify)  Next.js frontend, shares Postgres
Postgres         Neon (managed)        All persistent state
Redis            ioredis               Pub/sub, caching, queues, semaphores

Provider Abstraction

Every external service sits behind a typed interface. Route and service code never imports vendor SDKs directly -- only provider implementations inside src/providers/ touch vendor libraries.

send_sms() --> provider.sms.send()
               |-- TwilioSmsProvider
               \-- (future providers)

create_call() --> provider.voice.call()
                  |-- TwilioVoiceProvider
                  \-- (future providers)

amd.detect() --> AmdResult
                 |-- TwilioAmdProvider (cloud-side)
                 |-- AsteriskAmdProvider (local)
                 \-- (future: custom ML)

The provider registry resolves the active provider by name or by DID. Swapping a vendor means implementing the interface and updating the ACTIVE_*_PROVIDER environment variable. No route code changes.
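The registry pattern described above can be sketched as follows. The interface and class names here (SmsProvider, TwilioSmsProvider, ProviderRegistry) are illustrative, not the actual Trunx source in src/providers/:

```typescript
// Minimal provider-registry sketch. Route code only ever sees the
// SmsProvider interface; vendor SDK calls live inside implementations.
interface SmsProvider {
  readonly name: string;
  send(to: string, body: string): Promise<{ id: string }>;
}

class TwilioSmsProvider implements SmsProvider {
  readonly name = "twilio";
  async send(to: string, body: string) {
    // A real implementation would call the vendor SDK here.
    return { id: `sms_${Date.now()}` };
  }
}

class ProviderRegistry {
  private sms = new Map<string, SmsProvider>();

  registerSms(p: SmsProvider) {
    this.sms.set(p.name, p);
  }

  // Resolve the active provider from config (ACTIVE_SMS_PROVIDER).
  activeSms(env: Record<string, string>): SmsProvider {
    const name = env.ACTIVE_SMS_PROVIDER ?? "twilio";
    const p = this.sms.get(name);
    if (!p) throw new Error(`unknown SMS provider: ${name}`);
    return p;
  }
}

const registry = new ProviderRegistry();
registry.registerSms(new TwilioSmsProvider());
const provider = registry.activeSms({ ACTIVE_SMS_PROVIDER: "twilio" });
```

Swapping vendors under this pattern means registering a new implementation and changing the environment variable; callers keep resolving through the same interface.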

Event System

All events flow through Redis pub/sub. No in-process event emitters.

Emit: Service code calls bus.publish(channel, event). The event is published to a Redis channel (e.g., events:sms, events:campaign:camp_abc123) and written to the events table.

Fan out: The SSE endpoint subscribes to Redis channels and pushes events to connected HTTP clients. Webhook delivery workers subscribe to the same channels independently.

Replay: If a client reconnects with a Last-Event-ID header, the SSE endpoint replays missed events from the events table before resuming the live stream.

This design supports multiple server instances (each subscribes to Redis independently), fully decoupled consumers, and guaranteed event replay on reconnect.
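The emit/fan-out/replay cycle can be sketched with in-memory stand-ins for Redis pub/sub and the Postgres events table. The function and field names here are illustrative:

```typescript
// Emit writes a durable copy, then fans out to subscribers; replay
// filters the durable copy by Last-Event-ID.
type Event = { id: number; channel: string; payload: unknown };

const eventsTable: Event[] = [];                               // stands in for Postgres
const subscribers = new Map<string, ((e: Event) => void)[]>(); // stands in for Redis pub/sub

let nextId = 1;
function publish(channel: string, payload: unknown): Event {
  const event: Event = { id: nextId++, channel, payload };
  eventsTable.push(event);                                     // durable copy for replay
  for (const fn of subscribers.get(channel) ?? []) fn(event);  // fan out to live consumers
  return event;
}

function subscribe(channel: string, fn: (e: Event) => void) {
  const list = subscribers.get(channel) ?? [];
  list.push(fn);
  subscribers.set(channel, list);
}

// Replay: everything after the client's Last-Event-ID, before resuming live.
function replaySince(lastEventId: number, channel: string): Event[] {
  return eventsTable.filter((e) => e.channel === channel && e.id > lastEventId);
}

const live: Event[] = [];
subscribe("events:sms", (e) => live.push(e));   // SSE endpoint or webhook worker
publish("events:sms", { status: "sent" });
publish("events:sms", { status: "delivered" });
const missed = replaySince(1, "events:sms");    // client reconnected after event 1
```

Because each consumer subscribes independently and replay reads from the durable table, adding a server instance or a new consumer requires no coordination with the publishers.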

State Management

State lives in Postgres, not in memory. BullMQ jobs are stateless reactors.

Concept              Storage                   Query
DID health score     did_health_events table   SQL aggregation over last 50 calls per DID
Campaign progress    campaign_prospects table  Count by status (pending, called, completed)
IVR definitions      Postgres + Redis cache    Cache-on-write, read from Redis at call time
Rate limit counters  Redis                     Atomic Lua scripts

If a job worker restarts, nothing is lost. The next event triggers a fresh query against Postgres. No in-memory sliding windows, no state recovery logic.
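The DID health score illustrates the stateless pattern: recompute from the last 50 rows on every query instead of maintaining an in-memory window. The scoring formula below is illustrative, not the production one:

```typescript
// Stateless health-score sketch: the score is a pure function of rows
// fetched from Postgres, so a worker restart loses nothing.
type DidHealthEvent = { did: string; outcome: "answered" | "failed" | "blocked" };

function healthScore(rows: DidHealthEvent[]): number {
  const window = rows.slice(-50);  // last 50 calls for this DID
  if (window.length === 0) return 1;
  const answered = window.filter((r) => r.outcome === "answered").length;
  return answered / window.length; // 0..1, higher is healthier
}

const rows: DidHealthEvent[] = [
  { did: "+15551230001", outcome: "answered" },
  { did: "+15551230001", outcome: "failed" },
  { did: "+15551230001", outcome: "answered" },
  { did: "+15551230001", outcome: "answered" },
];
const score = healthScore(rows);
```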

SIP Channel Budgets

A shared SIP trunk is divided into reserved channel pools using semaphore-based allocation in Redis.

Pool        Channels  Purpose
Campaign    60        Outbound dialer
IVR         15        Inbound IVR (always reserved)
API ad-hoc  8         One-off voice calls via API

Backpressure behavior:

  • Campaign pool full: Dialer automatically slows its call rate until channels free up.
  • IVR pool full: Inbound calls rejected with 503 (Service Unavailable).
  • API pool full: API returns 429 (Too Many Requests) with a Retry-After header.

Channel counts are configurable via environment variables. The defaults above are tuned for a single SIP trunk with 83 total channels.
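The pool accounting and per-pool backpressure responses can be sketched as below. Production uses atomic Redis operations; this in-memory version only illustrates the logic, with pool sizes matching the defaults above:

```typescript
// Channel-pool semaphore sketch with the backpressure mapping described above.
const pools = { campaign: 60, ivr: 15, api: 8 } as const;
type Pool = keyof typeof pools;

const inUse: Record<Pool, number> = { campaign: 0, ivr: 0, api: 0 };

// Returns true if a channel was reserved, false if the pool is full.
function acquire(pool: Pool): boolean {
  if (inUse[pool] >= pools[pool]) return false;
  inUse[pool] += 1;
  return true;
}

function release(pool: Pool) {
  inUse[pool] = Math.max(0, inUse[pool] - 1);
}

// Map a full pool to the HTTP behavior described above.
function onPoolFull(pool: Pool): number {
  if (pool === "ivr") return 503; // reject inbound call
  if (pool === "api") return 429; // Too Many Requests + Retry-After
  return 0;                       // campaign: dialer slows its rate instead
}

// Fill the API pool, then observe backpressure.
for (let i = 0; i < pools.api; i++) acquire("api");
const gotChannel = acquire("api");
const status = gotChannel ? 200 : onPoolFull("api");
release("api");                        // a call ends, freeing one channel
const afterRelease = acquire("api");
```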

Auth Flow

External API requests go through a cached auth pipeline:

  1. A request arrives with an Authorization: Bearer tk_live_... header.
  2. The token is hashed with SHA-256.
  3. The hash is checked against the Redis cache (60-second TTL).
  4. On a cache miss, the hash is verified against the database; the result and associated scopes are cached in Redis.
  5. A scope check runs on every request to verify the key has the required permission.

Internal service calls (e.g., the campaign dialer calling the voice endpoint) bypass external auth using a service-internal token. This avoids unnecessary hashing and database lookups on the hot path.
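The hash-then-cache pipeline can be sketched with Node's built-in crypto and an in-memory stand-in for the Redis cache and Postgres key table. The token value and data shapes here are illustrative:

```typescript
import { createHash } from "node:crypto";

// Auth-cache sketch: hash the bearer token, check a TTL cache, fall back
// to a (stubbed) database lookup on miss.
type CachedAuth = { scopes: string[]; expiresAt: number };

const authCache = new Map<string, CachedAuth>(); // stands in for Redis
const TTL_MS = 60_000;                           // 60-second TTL

// Stub for the Postgres lookup, keyed by token hash (never the raw token).
const apiKeysDb = new Map<string, string[]>();

function sha256(token: string): string {
  return createHash("sha256").update(token).digest("hex");
}

function authenticate(token: string, now = Date.now()): string[] | null {
  const hash = sha256(token);
  const cached = authCache.get(hash);
  if (cached && cached.expiresAt > now) return cached.scopes; // cache hit
  const scopes = apiKeysDb.get(hash);                         // cache miss → DB
  if (!scopes) return null;
  authCache.set(hash, { scopes, expiresAt: now + TTL_MS });   // warm the cache
  return scopes;
}

// Seed one key (illustrative token) and authenticate with it.
apiKeysDb.set(sha256("tk_live_example"), ["sms:send", "voice:call"]);
const scopes = authenticate("tk_live_example");
```

Storing only the SHA-256 hash means a leaked cache or database dump never exposes usable tokens, and the 60-second TTL bounds how long a revoked key can keep authenticating.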

IVR Cache Strategy

IVR definitions are cached in Redis using a cache-on-write strategy:

  1. When an IVR is created or updated via the API, the definition is written to Postgres and simultaneously cached in Redis.
  2. When Asterisk receives an inbound call, the ARI handler reads the IVR definition from Redis.
  3. The Trunx hub is never in the inbound call critical path -- Asterisk reads directly from Redis.

IVR definitions are small (typically under 10KB) and change rarely, making this strategy efficient. Cache invalidation happens only on explicit update.
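The cache-on-write flow above can be sketched with two in-memory Maps standing in for Postgres and Redis. The IVR shape and function names are illustrative:

```typescript
// Cache-on-write sketch: writes go to both stores together; call-time
// reads hit only the cache, keeping the hub out of the critical path.
type IvrDefinition = { id: string; greeting: string; menu: Record<string, string> };

const postgres = new Map<string, IvrDefinition>(); // durable source of truth
const redis = new Map<string, IvrDefinition>();    // read path for Asterisk

function saveIvr(def: IvrDefinition) {
  postgres.set(def.id, def); // persist
  redis.set(def.id, def);    // cache on write; invalidated only by the next update
}

// What the ARI handler does at call time: Redis only, no hub round-trip.
function loadIvrForCall(id: string): IvrDefinition | undefined {
  return redis.get(id);
}

saveIvr({ id: "ivr_main", greeting: "Welcome", menu: { "1": "sales" } });
const def = loadIvrForCall("ivr_main");
```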

Guardrails Layer

Guardrails are infrastructure-level enforcement. They cannot be bypassed by API callers -- every outbound action passes through the guardrails layer before reaching a provider.
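One way to make guardrails unbypassable is to wrap the provider call so every outbound action runs the checks first. The rule names and blocklist below are illustrative, not the actual Trunx rules:

```typescript
// Guardrail-wrapper sketch: checks run before the provider is invoked,
// so route code cannot reach a vendor without passing them.
const blocklist = new Set<string>(["+15550000000"]); // illustrative entry

type Guardrail = (to: string) => string | null; // returns a reason to block, or null

const guardrails: Guardrail[] = [
  (to) => (to.startsWith("+") ? null : "destination must be E.164"),
  (to) => (blocklist.has(to) ? "destination is on the blocklist" : null),
];

function guardedSend(
  to: string,
  send: (to: string) => void
): { ok: boolean; reason?: string } {
  for (const check of guardrails) {
    const reason = check(to);
    if (reason) return { ok: false, reason }; // blocked before any provider call
  }
  send(to); // only reached when every guardrail passes
  return { ok: true };
}

let sent = 0;
const allowed = guardedSend("+15551234567", () => { sent += 1; });
const blocked = guardedSend("+15550000000", () => { sent += 1; });
```

Because the only path to the provider is through guardedSend, a caller cannot opt out of enforcement; adding a rule means appending to the guardrails array.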
