This page captures the forward-looking roadmap for KaireonAI. It is not a delivery contract — dates are deliberately omitted and priorities shift as we learn from pilot deployments. Near-term items are what the team is actively working on; mid-term is queued; parked items are intentionally deferred until we see pull for them. For what’s already shipped, see the changelog.
Near-term (active backlog)
Work either in flight or next up. Everything here is gated by its own success criteria rather than a fixed date.
Fairness hard-gate (block flow activation on disparate impact)
Today the platform reports fairness metrics — /api/v1/fairness/evaluate computes disparate impact, equal opportunity, demographic parity, and counterfactual fairness; /api/v1/fairness/report returns historical trends. Operators see the numbers, but nothing prevents a decision flow from going live when those metrics breach a configured threshold. Public-sector, healthcare, and regulated-finance personas (credit, underwriting) cannot deploy without this hard block.
Intended scope:
- Per-tenant fairness policy (configurable thresholds per metric + per protected-attribute set) stored alongside the existing ranking profile so policy changes ship through the same governance flow.
- Block-on-publish enforcement in decision-flows/[id]/publish — evaluate the policy against the latest decision-trace sample; if any threshold is breached, return 422 with a structured violation report and refuse to mark the flow published.
- Continuous re-evaluation via a scheduled fairness-check cron that flips a published flow back to paused if metrics drift past the threshold post-launch (operator opt-in).
- Override workflow — an explicit four-eyes governance approval (re-using the existing approvals workflow) that lets a documented human override unblock publish or keep-live for cases where the threshold breach is justified.
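As a sketch only, the block-on-publish check could reduce to a pure policy evaluation over the latest metric sample. Every type and field name below (FairnessPolicy, MetricSample, the threshold layout) is an illustrative assumption, not the shipped schema:

```typescript
// Hypothetical shapes: the real per-tenant policy schema is not finalised.
type FairnessMetric =
  | "disparateImpact"
  | "equalOpportunity"
  | "demographicParity"
  | "counterfactualFairness";

interface FairnessPolicy {
  // Per-metric thresholds, keyed by protected attribute (e.g. "gender").
  thresholds: Partial<Record<FairnessMetric, Record<string, number>>>;
}

interface MetricSample {
  metric: FairnessMetric;
  protectedAttribute: string;
  value: number; // lower value = more disparity, for this illustration
}

interface Violation {
  metric: FairnessMetric;
  protectedAttribute: string;
  value: number;
  threshold: number;
}

// Evaluate the latest decision-trace sample against the tenant policy.
// A non-empty result would map to the 422 structured violation report.
function evaluatePolicy(policy: FairnessPolicy, samples: MetricSample[]): Violation[] {
  const violations: Violation[] = [];
  for (const s of samples) {
    const threshold = policy.thresholds[s.metric]?.[s.protectedAttribute];
    if (threshold !== undefined && s.value < threshold) {
      violations.push({ ...s, threshold });
    }
  }
  return violations;
}
```

The same evaluation would serve both paths: the publish handler refuses with a 422 when the list is non-empty, and the scheduled re-check flips the flow to paused.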
Dedicated /api/v1/ai/explain HTTP route
The Executive Dashboard’s Explain button on each anomaly row today
falls back to a deterministic summary built from the anomaly tuple’s
fields. The full LLM-narrated explanation pipeline is already
implemented internally but is not yet exposed as a public HTTP endpoint.
Wiring up /api/v1/ai/explain unlocks richer, causally-framed
explanations on demand for both the dashboard and external integrations.
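For context, today's deterministic fallback can be pictured as a pure function over the anomaly tuple. The field names below (metric, observed, expected, window) are illustrative guesses at that tuple, not the actual schema:

```typescript
// Illustrative anomaly tuple: real field names may differ.
interface Anomaly {
  metric: string;
  observed: number;
  expected: number;
  window: string;
}

// Deterministic summary built from the tuple's fields, used when the
// LLM-narrated explanation is unavailable.
function fallbackExplanation(a: Anomaly): string {
  const delta = a.observed - a.expected;
  const direction = delta >= 0 ? "above" : "below";
  const pct =
    a.expected !== 0 ? Math.abs((delta / a.expected) * 100).toFixed(1) : "n/a";
  return `${a.metric} was ${Math.abs(delta)} (${pct}%) ${direction} the expected value over ${a.window}.`;
}
```

The planned /api/v1/ai/explain route would replace this template output with the LLM-narrated, causally-framed explanation while keeping the deterministic path as a degradation fallback.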
Pilot guardrails for reports + alerts
A set of safety rails to ship before enabling EventBridge automation by default:
- Per-tenant report-schedule cap — prevent a single tenant from scheduling 50 daily LLM-narrated reports.
- Minimum schedule cadence — reject cron expressions that would fire more often than once per hour.
- Default LLM-narrative opt-out — new schedules default to narrative-disabled; operators explicitly opt in.
- Per-tenant monthly LLM spend cap — track narrator invocations against a budget; degrade gracefully to narrativeless reports when exceeded.
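The minimum-cadence guardrail could start as a conservative heuristic over standard five-field cron expressions. This is a sketch under that assumption; a production validator would parse the full cron grammar:

```typescript
// Heuristic check for a standard 5-field cron expression
// (minute hour day-of-month month day-of-week).
// Returns true when the schedule would fire more often than once per hour
// and should therefore be rejected.
function firesMoreThanHourly(cron: string): boolean {
  const fields = cron.trim().split(/\s+/);
  if (fields.length !== 5) return true; // reject malformed input conservatively
  const minute = fields[0];
  if (minute === "*") return true;       // every minute
  if (minute.includes("/")) return true; // step values, e.g. */15
  if (minute.includes(",") || minute.includes("-")) return true; // lists/ranges
  return false; // single fixed minute => at most once per hour
}
```

So `*/15 * * * *` would be rejected while `0 * * * *` (top of every hour) and `30 2 * * *` (daily) pass.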
EventBridge automation wire-up
Once guardrails are in place, move alerts and report schedules from on-demand (Run Now, manual tick) to automated background evaluation via AWS EventBridge → /api/cron/tick. See
EventBridge Setup for the mechanics;
the wire-up is optional during pilot and will remain optional for
self-hosted deployments.
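A minimal sketch of the due-check such a tick handler might run per schedule; the Schedule shape and intervalMinutes field are hypothetical, not the real data model:

```typescript
// Hypothetical schedule record evaluated on each /api/cron/tick invocation.
interface Schedule {
  lastRunAt: Date | null;  // null = never run
  intervalMinutes: number; // desired cadence
}

// A schedule is due when it has never run, or when at least one full
// interval has elapsed since its last run.
function isDue(s: Schedule, now: Date): boolean {
  if (s.lastRunAt === null) return true;
  const elapsedMs = now.getTime() - s.lastRunAt.getTime();
  return elapsedMs >= s.intervalMinutes * 60_000;
}
```

The same check works whether the tick arrives from EventBridge or from a manual Run Now, which is what keeps the wire-up optional for self-hosted deployments.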
S3 artifact storage for large report runs
Today, ReportRun.artifactPayloads stores artifacts inline as
base64-encoded JSON. This is fine for small PDFs and CSVs but becomes a
database-bloat risk for large runs. Moving artifacts to an S3 bucket
(with a signed-URL download path) keeps the database lean and handles
arbitrary payload sizes. Gated on a planned REPORTS_S3_BUCKET env var; inline
mode remains the default.
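The gating described above could be a small storage-mode decision keyed off the env var. Only REPORTS_S3_BUCKET itself comes from the roadmap; the key layout and return shape here are guesses:

```typescript
// Pick where a report-run artifact should live. Inline base64 stays the
// default; S3 mode activates only when the planned env var is set.
type StoragePlan =
  | { mode: "inline" }
  | { mode: "s3"; bucket: string; key: string };

function planArtifactStorage(
  runId: string,
  artifactName: string,
  env: Record<string, string | undefined>,
): StoragePlan {
  const bucket = env["REPORTS_S3_BUCKET"];
  if (!bucket) return { mode: "inline" }; // unchanged default behaviour
  // Hypothetical key layout; a signed-URL download path would be derived
  // from this key at read time.
  return { mode: "s3", bucket, key: `report-runs/${runId}/${artifactName}` };
}
```

Keeping the decision in one place means self-hosted deployments that never set the variable see no behaviour change.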
Mid-term
Queued work. Not in progress yet, but sequenced to land after the near-term items.
Slack / Teams interactive components
Approve / dismiss buttons directly in the notification payload so on-call engineers can acknowledge an alert or mark a report as reviewed without leaving the conversation. Today both adapters render read-only messages.
Scheduled screenshot-style dashboard PDFs
Current exports are data-driven — the runner regenerates the report from the underlying data sources at run time. Complement this with layout-capture PDFs that mirror the dashboard’s rendered UI pixel-for-pixel. Useful for executive packs where the rendered visualisation is the deliverable.
Custom dashboard builder
A drag-and-drop dashboard composer where operators pick widgets (KPI card, time-series chart, table, heatmap) from a palette and bind each to a data source. Closes the loop started by the existing dashboard Export and Save as Report affordances — any custom dashboard instantly becomes a scheduled report.
Public share links with auth-free viewing
Time-limited, rotatable URLs that render a read-only dashboard (or report run) without requiring a login. Useful for ad-hoc external sharing (board decks, investor updates) without provisioning new user accounts.
Terraform / CDK modules for EventBridge and App Runner
Codify the manual AWS Console setup documented in EventBridge Setup as reusable IaC modules. Covers the scheduler rule, IAM role, Parameter Store entry, and App Runner environment-variable wiring so a new pilot can promote to fully automated in one terraform apply.
Parked / future
Intentionally deferred. Listed so there’s no ambiguity about what we’re not working on — if any of these become a hard blocker for a pilot, let us know and we’ll re-prioritise.
Mobile SDKs
Native iOS / Android SDKs for client-side Recommend + Respond. Parked until we see meaningful pull from mobile-first pilots; the REST APIs are fully usable from native HTTP clients today.
Shared read-only demo tenant
A single always-on tenant with pre-loaded decisioning scenarios so prospective users can evaluate the platform without creating an account. Parked until user volume on playground.kaireonai.com makes
the read-only sandbox cost-effective to maintain.