The Score node in a Decision Flow assigns each candidate offer a numeric score that the downstream Rank node uses to decide what wins. KaireonAI ships three scoring strategies, each suited to a different stage of operational maturity:
| Strategy | Uses ML? | Best for |
|---|---|---|
| priority_weighted | No | Day-1 deployments with no interaction history; pure configuration-driven ranking. |
| propensity | Yes | Once you have learned models or adaptation data — picks the offer most likely to convert. |
| formula (PRIE) | Yes (component) | Production decisioning that balances likelihood, contextual fit, business value, and operator emphasis. |
The strategy is set on the Score node:
```jsonc
{
  "type": "score",
  "config": {
    "method": "formula",          // priority_weighted | propensity | formula
    "modelKey": "logistic-v3",    // only used by propensity / formula
    "formula": {                  // only used by formula
      "propensityWeight": 0.4,
      "relevanceWeight": 0.2,
      "impactWeight": 0.3,
      "emphasisWeight": 0.1
    }
  }
}
```

How each strategy computes a score

1. priority_weighted — deterministic, no model required

The candidate’s score is the product of three inputs — two set by the operator on the offer, one set upstream:
score = (priority / 100) × (weight / 100) × fitMultiplier
  • priority (0–100) lives on the offer. Higher priority → higher score.
  • weight (0–100) lives on the offer. Lets you bias a specific offer up or down without touching priority (useful for short campaigns).
  • fitMultiplier is set upstream by the Qualify node based on soft (fit) rules: a candidate that passed all hard rules but failed a soft fit rule keeps fitMultiplier < 1 and is demoted.
Use it when you have no interaction history yet, you want fully predictable ranking, or you’re A/B-comparing a model against a deterministic baseline. It is also the safe fallback when a model is missing or returns degraded scores.
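The arithmetic above is simple enough to sketch directly. A minimal TypeScript illustration (the interface and function names are ours, not engine source):

```typescript
// Sketch of the priority_weighted computation. Field names (priority,
// weight, fitMultiplier) follow this page; the function is illustrative.
interface OfferDials {
  priority: number; // 0–100, lives on the offer
  weight: number;   // 0–100, campaign-level bias dial
}

function priorityWeightedScore(offer: OfferDials, fitMultiplier: number): number {
  // fitMultiplier comes from the upstream Qualify node (< 1 if a soft rule failed)
  return (offer.priority / 100) * (offer.weight / 100) * fitMultiplier;
}

// Travel Card from the worked example below: priority 80, weight 100, fit 1.0
priorityWeightedScore({ priority: 80, weight: 100 }, 1.0); // → 0.8
```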

2. propensity — model-driven, single objective

Returns the predicted probability the customer will convert on this candidate. Resolution is hierarchical — most-specific signal wins:
  1. Per-offer adaptation with ≥ 50 positive/negative interactions → use the learned positiveRate directly.
  2. Per-offer adaptation with 1–49 interactions → blend learned rate with category/global fallback using smoothingWeight (default 10).
  3. Per-category adaptation with ≥ 20 interactions → use category warm-start prior.
  4. Global adaptation with ≥ 10 interactions → use global prior.
  5. No adaptation data → fall back to the modelKey model (scoreWithModel or ONNX runner).
Final: score = propensityScore × fitMultiplier. Use it when you want the engine to pick the most likely converter and you’ve configured a model (or accumulated enough adaptation data). The downside vs. PRIE is that two equally-likely offers tie regardless of whether one earns 10× more revenue.
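The resolution ladder above can be sketched as follows. Tier thresholds (50 / 20 / 10) and the smoothingWeight default come from this page; the exact blend formula is an assumption (standard additive smoothing), so treat this as an illustration rather than the engine's implementation:

```typescript
interface AdaptationStats {
  interactions: number;
  positiveRate: number; // learned positive-outcome rate
}

function resolvePropensity(
  offerStats: AdaptationStats | null,
  categoryStats: AdaptationStats | null,
  globalStats: AdaptationStats | null,
  modelScore: number | null, // from the modelKey model, if configured
  smoothingWeight = 10,
): number {
  // Tiers 3–5: category prior, then global prior, then model, then 0.5.
  const fallback =
    categoryStats && categoryStats.interactions >= 20 ? categoryStats.positiveRate :
    globalStats && globalStats.interactions >= 10 ? globalStats.positiveRate :
    modelScore ?? 0.5;

  if (offerStats && offerStats.interactions >= 50) {
    return offerStats.positiveRate; // tier 1: enough per-offer evidence
  }
  if (offerStats && offerStats.interactions >= 1) {
    // Tier 2: blend thin per-offer evidence with the fallback
    const n = offerStats.interactions;
    return (n * offerStats.positiveRate + smoothingWeight * fallback) / (n + smoothingWeight);
  }
  return fallback;
}
```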

3. formula — PRIE composite (multi-objective)

The recommended production strategy. Computes a weighted geometric mean of four 0–1 components:
score = P^Wp × R^Wr × I^Wi × E^We
| Component | What it captures | Source |
|---|---|---|
| P — Propensity | Predicted positive rate | Same hierarchical resolution as the propensity strategy (adaptation → model → 0.5 fallback). |
| R — Relevance | Contextual fit at request time | Channel match (+0.2 if the candidate’s creative matches the request channel) and recency boost (offers updated in the last 7 days). |
| I — Impact | Per-offer business value | businessValue (40%) + clipped margin (30%) + clipped revenue (30%) when financial fields are set, otherwise pure businessValue. |
| E — Emphasis | Operator priority | priority / 100. The same dial as priority_weighted but exponentiated by emphasisWeight. |
Weights default to 0.4 / 0.2 / 0.3 / 0.1 and are configurable per Score node OR per Ranking Profile. Each component is clamped to 1e-6 to avoid log(0) while preserving the “any 0 → 0” hard-stop semantic. For the theoretical grounding behind the four-factor split, see PRIE — Design rationale.
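A minimal sketch of the composite, assuming the 1e-6 clamp described above (the function shape is illustrative, not engine source):

```typescript
interface PrieWeights { p: number; r: number; i: number; e: number; }

function prieScore(
  p: number, r: number, i: number, e: number,
  w: PrieWeights = { p: 0.4, r: 0.2, i: 0.3, e: 0.1 }, // default weights
): number {
  // The 1e-6 clamp keeps the math finite, while a zero component still
  // collapses the score to ~0 (the "any 0 → 0" hard-stop semantic).
  const clamp = (x: number) => Math.max(x, 1e-6);
  return (
    Math.pow(clamp(p), w.p) *
    Math.pow(clamp(r), w.r) *
    Math.pow(clamp(i), w.i) *
    Math.pow(clamp(e), w.e)
  );
}
```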

A worked example: same candidates, three rankings

Three candidate offers reach the Score node for the same customer:
| Offer | priority | weight | businessValue | margin | model propensity | fitMultiplier |
|---|---|---|---|---|---|---|
| Travel Card 1.5x | 80 | 100 | 90 | 180 | 0.30 | 1.00 |
| Cashback Card 2% | 50 | 100 | 60 | 120 | 0.65 | 1.00 |
| No-Annual-Fee Card | 90 | 100 | 40 | 40 | 0.20 | 1.00 |
(Assume no adaptation data, channel match boosts R to 0.7 for the channel-targeted Travel Card and stays at 0.5 for the others, and no recent updates.)

Under priority_weighted

```
Travel Card     = 0.80 × 1.00 × 1.00 = 0.800
Cashback Card   = 0.50 × 1.00 × 1.00 = 0.500
No-Annual-Fee   = 0.90 × 1.00 × 1.00 = 0.900   ← winner
```
Highest-priority offer wins. The model’s belief that Cashback is 3× more likely to convert is ignored, and so is the fact that Travel Card earns 4.5× the margin of No-Annual-Fee.

Under propensity

```
Travel Card     = 0.30 × 1.00 = 0.30
Cashback Card   = 0.65 × 1.00 = 0.65   ← winner
No-Annual-Fee   = 0.20 × 1.00 = 0.20
```
The likeliest converter wins. Business value and operator priority drop out entirely — you can’t ship “highest-margin among the likely ones” without changing strategy.

Under formula (default weights 0.4 / 0.2 / 0.3 / 0.1)

Impact uses (businessValue / 100) · 0.4 + min(margin / 200, 1) · 0.3 (revenue is unset for all three offers), and Emphasis is priority / 100:
```
Travel:    P=0.30  R=0.70  I=0.63  E=0.80   →  score = 0.30^.4 × 0.70^.2 × 0.63^.3 × 0.80^.1 ≈ 0.490
Cashback:  P=0.65  R=0.50  I=0.42  E=0.50   →  score ≈ 0.527   ← winner
No-Fee:    P=0.20  R=0.50  I=0.22  E=0.90   →  score ≈ 0.287
```
PRIE ranks Cashback first (high propensity wins out), Travel second (lower propensity, but big business value and channel match keep it competitive), and No-Annual-Fee last (high priority but weak business value and weak model belief drag it down).
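You can reproduce the three composite scores in a few lines (a verification sketch of the arithmetic above, not engine code):

```typescript
// Default PRIE weights 0.4 / 0.2 / 0.3 / 0.1
const W = { p: 0.4, r: 0.2, i: 0.3, e: 0.1 };
const prie = ({ p, r, i, e }: Record<string, number>) =>
  p ** W.p * r ** W.r * i ** W.i * e ** W.e;

const candidates = {
  travel:   { p: 0.30, r: 0.70, i: 0.63, e: 0.80 },
  cashback: { p: 0.65, r: 0.50, i: 0.42, e: 0.50 },
  noFee:    { p: 0.20, r: 0.50, i: 0.22, e: 0.90 },
};

for (const [name, c] of Object.entries(candidates)) {
  console.log(name, prie(c).toFixed(3)); // travel 0.490, cashback 0.527, noFee 0.287
}
```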

Summary — same inventory, three different winners

Swap only the strategy (and PRIE profile) and the winner moves:
| Strategy | PRIE weights | Travel | Cashback | No-Fee | Winner |
|---|---|---|---|---|---|
| priority_weighted | n/a | 0.800 | 0.500 | 0.900 | No-Fee |
| propensity | n/a | 0.300 | 0.650 | 0.200 | Cashback |
| formula — default | 0.40 / 0.20 / 0.30 / 0.10 | 0.490 | 0.527 | 0.287 | Cashback |
| formula — aggressive-margin | 0.15 / 0.10 / 0.70 / 0.05 | 0.577 | 0.460 | 0.253 | Travel |
| formula — priority-led | 0.10 / 0.10 / 0.10 / 0.70 | 0.699 | 0.504 | 0.634 | Travel |
The Travel Card never wins under priority_weighted or propensity (it has neither the highest priority nor the highest propensity), but under PRIE with margin-heavy or priority-led weights it climbs to first because the other components compound to overcome its weaker propensity. This is the practical point of the four-factor model: the winner depends on what you choose to value, not on a fixed scoreboard.

Strategy overrides (per-channel, per-category, per-profile)

You can change strategy on a per-candidate basis without writing two flows.

Channel overrides

ScoreNodeConfig.channelOverrides[] lets you pin a different method, modelKey, or formula for candidates whose channelId matches. Example: keep propensity for the in-app channel where the model is mature, but use priority_weighted for direct mail where you have no signal:
```json
{
  "method": "propensity",
  "modelKey": "logistic-v3",
  "channelOverrides": [
    { "channelId": "ch_direct_mail", "method": "priority_weighted" }
  ]
}
```

Ranking Profile (strategy profile)

strategyProfileId references a Ranking Profile that owns the four PRIE weights (mapped: conversion → Wp, recency → Wr, margin → Wi, fairness → We). Swapping profiles re-balances the formula without editing the flow:
```jsonc
{
  "method": "formula",
  "strategyProfileId": "rp_aggressive_margin"
  // PRIE weights come from RankingProfile.weights — inline `formula` is ignored
}
```

Strategy overrides (most specific match wins)

strategyOverrides[] lets you pick a different profile per productType, category, or channel. First match wins (in the order productType → category → channel):
```json
{
  "method": "formula",
  "strategyProfileId": "rp_balanced",
  "strategyOverrides": [
    { "scope": "category", "value": "loans", "profileId": "rp_aggressive_margin" },
    { "scope": "channel",  "value": "ch_sms", "profileId": "rp_high_priority" }
  ]
}
```
A loan offer reaching the Score node gets rp_aggressive_margin’s weights; everything else gets rp_balanced. Routing happens per candidate: same flow, different scoring lens.
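A hypothetical resolver for the "first match wins" order described above (the override shape mirrors the JSON example; the function itself is illustrative, not engine source):

```typescript
interface StrategyOverride {
  scope: "productType" | "category" | "channel";
  value: string;
  profileId: string;
}
interface Candidate {
  productType?: string;
  category?: string;
  channelId?: string;
}

function resolveProfile(
  candidate: Candidate,
  overrides: StrategyOverride[],
  defaultProfileId: string,
): string {
  // Fixed precedence: productType → category → channel, first hit wins.
  const order: Array<["productType" | "category" | "channel", string | undefined]> = [
    ["productType", candidate.productType],
    ["category", candidate.category],
    ["channel", candidate.channelId],
  ];
  for (const [scope, value] of order) {
    const hit = overrides.find(o => o.scope === scope && o.value === value);
    if (hit) return hit.profileId;
  }
  return defaultProfileId;
}
```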

Score panel UI — common traps

The Studio’s Score-node panel always renders the PRIE weight fields (P, R, I, E) regardless of the selected method. This is intentional — the weights are stored on the node so swapping back to method: "formula" doesn’t lose them — but it can mislead first-time operators. Two specific traps to watch for:
  • “Scoring Strategy = None (use inline weights above)” — this is the default and means there’s no ranking-profile override. The weights you see in the panel ARE the active PRIE weights. Pick a profile from this dropdown to switch — the profile’s weights then drive scoring and the inline values become inert (still stored, still visible, no longer used).
  • “Propensity Model = None (priority-based)” — leaving this unset routes the engine to priority-based scoring even when method is formula or propensity. This is the safety default for fresh tenants with no models; once you have a trained model, point this dropdown at it so the P component carries real signal. If you see scores in the response that match priority/100 exactly, this is the cause.
The most reliable way to confirm what the engine is actually running is to read the flow’s publishedVersions[].configSnapshot.nodes[].config via GET /api/v1/decision-flows — the panel reflects whatever was last saved, but the engine reads the latest published version. See Lifecycle & publication.

Choosing a strategy — decision guide

| Question | Strategy |
|---|---|
| Just launching, no model, no history? | priority_weighted |
| Have a trained model OR accumulated adaptation data? | propensity |
| Need to balance likelihood with revenue or operator priority? | formula |
| Different channels at different maturity? | One base strategy + channelOverrides |
| Different categories want different tradeoffs? | formula + strategyOverrides by category |
| Tuning aggressiveness without touching flows? | formula + swap strategyProfileId |
When in doubt, start with priority_weighted for the first week of traffic, switch to propensity once you have ≥ 50 interactions per offer, and graduate to formula once business stakeholders want to lean on revenue, fairness, or recency in the ranking.

Cold-start, smoothing, and the maturity ramp

Four engine behaviors guard against poor scores when evidence is thin or skewed:
  • Propensity smoothing (propensity and formula): when an offer has any adaptation evidence but below 50 interactions, the learned rate is blended with the category or global fallback using smoothingWeight (default 10, tunable per tenant via Settings.propensitySmoothingWeight).
  • Propensity score floor: even at high evidence (evidence ≥ 50), the propensity component is clamped to max(propScore, floor) so an offer with zero positive outcomes cannot score exactly zero. Default 0.05, tunable per tenant via Settings.propensityScoreFloor (clamped to [0, 0.5]). Without this floor, an offer that had been shown 50+ times without a single conversion would score 0, be eliminated by PRIE’s geometric mean (0^Wp = 0) and by the propensity multiplier (0 × fitMult = 0), and never receive another impression — a starvation failure mode that prevents the offer from ever proving itself. Set to 0 if you want classical bandit-style elimination; raise it to 0.10 or 0.15 if you want a stronger exploration tail.
  • Maturity ramp: a new offer is intentionally throttled — early scores are scaled down based on how few interactions the model has seen. Threshold tunable per tenant via Settings.modelMaturityThreshold (default 100). Applies only to propensity and formula.
  • Ranking influencers: positive outcomes against an offer’s category nudge sibling-offer scores up; negative outcomes nudge them down. Toggle per tenant via Settings.rankingInfluencersEnabled (default true).
These mechanisms mean that two flows with identical configuration can still produce different scores at different points in an offer’s lifecycle: the same Score node will pick a fresh offer less often than a mature one until the maturity threshold is crossed.
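The smoothing and floor guards can be sketched together as follows. Thresholds and the tunable defaults come from this page; the blend formula is an additive-smoothing assumption, so treat this as an illustration:

```typescript
function guardedPropensity(
  learnedRate: number,   // positives / interactions for this offer
  interactions: number,
  fallbackRate: number,  // category/global prior or model score
  smoothingWeight = 10,  // Settings.propensitySmoothingWeight
  floor = 0.05,          // Settings.propensityScoreFloor
): number {
  // Below 50 interactions, blend thin evidence with the fallback prior.
  const smoothed =
    interactions >= 50
      ? learnedRate
      : (interactions * learnedRate + smoothingWeight * fallbackRate) /
        (interactions + smoothingWeight);
  // Floor (clamped to [0, 0.5]) keeps a never-converting offer from
  // scoring exactly 0 and starving itself out of the ranking.
  return Math.max(smoothed, Math.min(Math.max(floor, 0), 0.5));
}
```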

Why the floor exists — the starvation failure mode

When an offer accumulates 50+ outcomes that are all negative (e.g. it was shown during testing but never received a convert outcome), the learned positive rate is 0 / N = 0. Without the floor:
  • propensity strategy: candidate.score = 0 × fitMult = 0 → offer drops to last in ranking, Rank top-N drops it, Group never picks it for any placement.
  • formula strategy: Math.pow(0, Wp) = 0 (or 1e-6^Wp ≈ 0.004 at the default Wp = 0.4 with the existing 1e-6 clamp) → score collapses to a near-zero value, same effective elimination.
Once the offer is permanently un-picked, it can never earn a positive outcome that would unlock it — the negative-only evidence becomes self-perpetuating. The floor guarantees a small exploration tail so a starved offer can be re-tested occasionally, and a single positive outcome lifts it out of the floor naturally.

Observing the strategy in decision traces

Every Recommend response with trace: true (or audited via /api/v1/decision-traces) records the active strategy used, the resolved model key, and — for formula — the four component values per candidate. Use this to verify that a strategy override fired as expected. See Decision Traces API.