Adaptive AI · Buildable on frontier models today

Software should learn the human, not the other way around.

The interface reshapes itself to who the user is, what they are doing, and what they need next. One product. Infinite shapes. A new contract between software and the people who use it.

  • 25+ years combined design & AI experience
  • 5 personas · watch this very page adapt to each
  • 9 industries · mapped use cases and paths forward
The adaptation loop

A continuous loop, not a static design.

The system observes, infers, reshapes, and refines — in real time, every session, for every user. Every interaction is signal. Every signal makes the next interaction better.

[Diagram: a model core ringed by the four steps of the loop · 01 Observe (signals · context · intent) · 02 Infer (who · what · what-next) · 03 Reshape (layout · copy · depth) · 04 Refine (feedback · reward · drift)]
01

Observe

Behavioural signals, telemetry, role hints, current task context. No spying; just thoughtful instrumentation.

02

Infer

A small frontier-model loop turns signals into a live representation of who you are and what you need next.

03

Reshape

Layout, content depth, copy, primary actions, even pricing — rendered for this person, this moment.

04

Refine

Every interaction trains the next render. The product gets sharper as it learns the user, the cohort, the world.
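The four steps above can be sketched as one typed loop. Everything here is illustrative: the type shapes, thresholds, and function names are assumptions made to show the mechanics, not a shipped API.

```typescript
// Illustrative types — the names and fields are assumptions for this sketch.
type Signal = { kind: string; value: unknown; at: number };
type UserModel = { persona: string; intent: string; confidence: number };
type Render = { sections: string[]; density: "sparse" | "normal" | "dense" };

// 01 Observe: fold raw events into a rolling signal buffer.
function observe(buffer: Signal[], event: Signal): Signal[] {
  return [...buffer, event].slice(-200); // keep a bounded window
}

// 02 Infer: turn signals into a live user model (a stub standing in
// for the model call).
function infer(signals: Signal[]): UserModel {
  const clicks = signals.filter((s) => s.kind === "click").length;
  return {
    persona: clicks > 10 ? "power" : "novice",
    intent: "learn",
    confidence: Math.min(1, signals.length / 50),
  };
}

// 03 Reshape: map the user model to a render schema.
function reshape(user: UserModel): Render {
  return {
    sections: user.persona === "power" ? ["code", "diagram"] : ["hero"],
    density: user.confidence > 0.5 ? "dense" : "sparse",
  };
}

// 04 Refine: an outcome nudges the next inference.
function refine(user: UserModel, outcome: "engaged" | "bounced"): UserModel {
  const delta = outcome === "engaged" ? 0.1 : -0.1;
  return { ...user, confidence: Math.max(0, Math.min(1, user.confidence + delta)) };
}
```

In production the `infer` stub is where the frontier-model call sits; the rest of the loop stays deterministic.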

This page just adapted to you

Read the same idea, your way.

We are practising what we preach. The section below changes its content, depth, and examples based on the persona you select above. Try each one.

Adapting for · Engineer You are here for the architecture. Let us go deep.

The stack we actually ship on

Adaptive AI is not a single model call. It is a small, tight pipeline of cheap classifiers, a long-context frontier model, a deterministic renderer, and a feedback bus. The render layer is the surprise — most teams over-invest in model choice and under-invest in the adaptation surface.

A representative architecture:

  • Signal layer: client SDK, server events, role hints, account graph — emits a typed event stream.
  • Inference layer: Claude Sonnet / Opus or GPT-class for reasoning; smaller models (Haiku, Mistral) for fast classification; vector store for memory.
  • Schema layer: the LLM never emits HTML. It emits a typed JSON schema your renderer trusts. Validate with Zod / Pydantic.
  • Render layer: a component library where every component declares which props it accepts. The LLM composes; your code renders.
  • Reward layer: click-throughs, completion, dwell, explicit thumbs — piped back to refine the prompt / fine-tune.
// Adaptive surface contract
type AdaptiveRender = {
  persona: 'coder' | 'pm' | 'founder' | 'student' | 'researcher';
  intent: 'learn' | 'evaluate' | 'build' | 'buy';
  density: 'sparse' | 'normal' | 'dense';
  sections: Section[];
};

type Section = {
  kind: 'hero' | 'code' | 'diagram' | 'usecases' | 'pricing';
  depth: 1 | 2 | 3 | 4 | 5; // 1 = overview, 5 = research
  props: Record<string, unknown>;
};

// The model returns a Section[], not markup.
// Your renderer maps kinds → components.
// Determinism stays in your code.
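The "validate before you trust" step can be seen in miniature with a hand-rolled guard; in practice a library like Zod would replace this, but the shape of the check is the same. The types mirror the contract above.

```typescript
type Section = {
  kind: "hero" | "code" | "diagram" | "usecases" | "pricing";
  depth: 1 | 2 | 3 | 4 | 5;
  props: Record<string, unknown>;
};

const KINDS = ["hero", "code", "diagram", "usecases", "pricing"];

// Reject anything the model emits that the renderer cannot trust.
function parseSection(raw: unknown): Section | null {
  if (typeof raw !== "object" || raw === null) return null;
  const r = raw as Record<string, unknown>;
  if (!KINDS.includes(r.kind as string)) return null;
  if (typeof r.depth !== "number" || !Number.isInteger(r.depth)) return null;
  if (r.depth < 1 || r.depth > 5) return null;
  if (typeof r.props !== "object" || r.props === null) return null;
  return {
    kind: r.kind as Section["kind"],
    depth: r.depth as Section["depth"],
    props: r.props as Record<string, unknown>,
  };
}

// Drop invalid sections rather than letting bad output reach the renderer.
function parseRender(raw: unknown[]): Section[] {
  return raw.map(parseSection).filter((s): s is Section => s !== null);
}
```

Dropping invalid sections (rather than failing the whole render) is one policy choice; a stricter surface might refuse the entire response instead.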

Model choice ≠ moat

The moat is in your event taxonomy, your component library, and your reward loop. Models swap; surfaces compound.

Latency budget

Target <800ms for visible adaptation. Cache the inference, stream the render, and reserve frontier calls for moments that matter.

Eval, eval, eval

Adaptive UIs without offline evals drift quietly. Snapshot persona × intent × depth and replay before every prompt change.
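A minimal version of snapshot-and-replay, with illustrative names: serialise the schema produced for each persona × intent × depth cell, then diff new output against the snapshot before shipping a prompt change.

```typescript
type Cell = { persona: string; intent: string; depth: number };
type Snapshot = Map<string, string>;

const key = (c: Cell) => `${c.persona}|${c.intent}|${c.depth}`;

// Record the (serialised) render schema for every cell of the grid.
function snapshot(cells: Cell[], render: (c: Cell) => unknown): Snapshot {
  return new Map(cells.map((c) => [key(c), JSON.stringify(render(c))]));
}

// Replay the grid against the new render path and report drifted cells.
function replay(cells: Cell[], render: (c: Cell) => unknown, base: Snapshot): Cell[] {
  return cells.filter((c) => JSON.stringify(render(c)) !== base.get(key(c)));
}
```

A non-empty drift list is not necessarily a regression; the point is that every drifted cell is reviewed deliberately instead of discovered by users.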

Adapting for · Product Manager Adaptive UX is a roadmap, not a feature. Here is the sequence.

From static product to adaptive product, in four investments

Most PMs treat personalisation as recommendations on a static shell. Adaptive AI is the next layer: the shell itself becomes a function of the user. You will not get there in one sprint, but the staging is clearer than it looks.

Stage 01

Surface the user model

Build a live user-context object: role, intent, journey stage, task graph. Make it observable. Show it in the UI, even if only behind a debug flag. If you cannot describe your user in one JSON object, you cannot adapt to them.
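What "describe your user in one JSON" might look like. Every field name below is an assumption made for the sketch; the right fields are whatever your product can actually observe.

```typescript
// A hypothetical live user-context object.
type UserContext = {
  role: "engineer" | "pm" | "founder" | "student" | "researcher" | "unknown";
  intent: "learn" | "evaluate" | "build" | "buy";
  journeyStage: "visitor" | "activated" | "habitual" | "champion";
  taskGraph: { current: string; blocked: string[]; completed: string[] };
  updatedAt: string; // ISO timestamp — stale context is worse than none
};

const example: UserContext = {
  role: "pm",
  intent: "evaluate",
  journeyStage: "activated",
  taskGraph: {
    current: "connect-data-source",
    blocked: ["invite-team"],
    completed: ["signup"],
  },
  updatedAt: new Date().toISOString(),
};
```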

Stage 02

Componentise depth

Every screen gets a "depth dial." Same content, three depths: overview, working, expert. Start with two surfaces (onboarding, primary dashboard) before going wider.
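The depth dial can be as simple as one prop. A sketch with invented names: the component decides what to omit at each depth, so the caller never branches.

```typescript
type Depth = "overview" | "working" | "expert";

// Same content, three depths.
function metricCard(label: string, value: number, history: number[], depth: Depth): string {
  if (depth === "overview") return `${label}: ${value}`;
  if (depth === "working") {
    const prev = history[history.length - 1] ?? value;
    const delta = value - prev;
    return `${label}: ${value} (${delta >= 0 ? "+" : ""}${delta} vs last)`;
  }
  // expert: full series for drill-down
  return `${label}: ${value} · series [${history.join(", ")}]`;
}
```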

Stage 03

Move from rules to inference

Rules-engine personalisation maxes out at 5–7 dimensions. An LLM lets you adapt on hundreds of soft signals at once. Replace your rules engine with a contract: signals in, render-schema out.
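The contract "signals in, render-schema out" in sketch form: a rules engine and a model call are interchangeable behind the same function type, which is exactly why the contract matters. The names and the scoring stub are illustrative.

```typescript
type Signal = { kind: string; value: unknown };
type RenderSchema = { sections: string[]; density: "sparse" | "dense" };

// The contract both implementations satisfy.
type Adapter = (signals: Signal[]) => RenderSchema;

// Stage-one adapter: a rules engine. Easy to reason about, caps out fast.
const ruleAdapter: Adapter = (signals) => {
  const isPower = signals.some((s) => s.kind === "used_keyboard_shortcut");
  return {
    sections: isPower ? ["code", "settings"] : ["tour"],
    density: isPower ? "dense" : "sparse",
  };
};

// Stage-three adapter: same contract, decision comes from a model.
// (Deterministic stub here — in production this wraps the inference layer.)
const modelAdapter: Adapter = (signals) => {
  const score = signals.length / 10; // pretend the model scored soft signals
  return {
    sections: score > 0.5 ? ["code"] : ["tour"],
    density: score > 0.5 ? "dense" : "sparse",
  };
};
```

Because the renderer only ever sees a `RenderSchema`, swapping the rules engine for inference is a one-line change at the call site.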

Stage 04

Close the feedback loop

Every adapted render emits an outcome. Wire it back. The product gets sharper with usage instead of decaying under feature debt.

Metrics that move when you do this right

↑ Activation

First-meaningful-action rate, because the right action is now the obvious one.

↓ Time-to-value

No more 9-step tours. The product opens at the step the user actually needs.

↑ Depth of use

Power features surface to power users; novices never see what would scare them.

↓ Support tickets

Most tickets are UI mismatch tickets. Adaptive UI removes the mismatch.

Adapting for · Founder Why adaptive design is the next moat — and a closing window.

The competitive thesis

Every category leader of the last cycle won on a single insight: remove a step the user used to take. Search removed the directory. Uber removed the dispatcher. Stripe removed the integration. Adaptive AI removes the step that has been hiding in plain sight for forty years — the user learning the product.

That step is invisible because we built our entire industry around it — tutorials, tours, onboarding, docs, support, training, certification. Strip them away, and the product itself has to do the work. Whoever ships that first in your category wins the next five years.

What it changes for your business

Lower CAC payback

Activation goes up. Refunds go down. Each acquired user is more valuable on day 7.

Pricing power

When the product proves its worth in the first session, you can charge for outcomes, not seats.

Defensible data

Your adaptation telemetry becomes the asset competitors cannot replicate. They have prompts; you have priors.

The strategic window

[Chart: differentiation over time · the window of opportunity runs 2025–2027 ("you are here") · beyond it, adaptive design becomes table-stakes, like static SaaS today]
Adapting for · Student Start here. We will keep it concrete and friendly.

What is Adaptive AI, in plain words?

Think of every app you have ever opened. The buttons were in the same place for everyone — a 70-year-old grandmother, a teenager, a surgeon, a software engineer. Everyone got the same screen, and everyone had to learn it. Adaptive AI flips that: the app looks at who you are and what you are trying to do, and changes itself to fit you.

It is the difference between a giant Swiss-Army knife with 50 tools sticking out, and a knife that quietly shows you only the blade you need right now.

Six examples you can picture immediately

🩺 A doctor's app

Opens to "today's surgeries" for a surgeon, "patient queue" for a GP, "lab results" for a pathologist — same app, three different first screens.

📚 A study app

Shows extra worked examples to a learner who got the last quiz wrong, and skips ahead to practice questions for the one who got it right.

🛒 A shopping app

For a first-time visitor, it explains sizes and returns. For a returning customer, it shows the colour they almost bought last week.

🏦 A banking app

For a salaried user, it leads with savings. For a small-business owner, it leads with invoicing. For a senior citizen, it makes text larger and removes clutter.

🎮 A game

Difficulty rises and falls based on how you are doing — not based on a setting you picked once and forgot.

🧭 A government portal

Asks two questions, then hides every form and scheme you do not qualify for. Suddenly "the system" is no longer something to fight.

How to think about it for a project or paper

The simplest experiment is to build the same screen twice for two different users, then ask: "what is the smallest signal that lets a model decide which version to show?" That question is the entire field in one sentence.
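That experiment fits in a few lines. Here the "smallest signal" is a single boolean, whether the visitor has been here before; the screens themselves are made up for the sketch.

```typescript
// Two versions of the same screen.
const firstTimeScreen = "Welcome! Here is how sizes and returns work.";
const returningScreen = "Welcome back — the jacket you viewed is still in stock.";

// The smallest possible signal: one boolean decides the render.
function chooseScreen(hasVisitedBefore: boolean): string {
  return hasVisitedBefore ? returningScreen : firstTimeScreen;
}
```

Everything else in the field is this idea scaled up: more signals, richer screens, a model instead of a ternary.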

Adapting for · Researcher The depth dial is at maximum. Everything below is verbose by design.

A formal frame for Adaptive AI

We can define an adaptive interface as a function f: (U, T, C, H) → R, where U is a representation of the user (latent, partially observed), T is the task context, C is the broader environmental context (device, locale, time, cohort drift), H is the interaction history, and R is a typed rendering schema. Three properties matter:

1. Identifiability. The user representation U must be learnable from observable signals without collapsing distinct users into a single mode. Most static personalisation systems fail here because they operate on hand-coded segments. A latent embedding learned from task-conditioned behaviour (analogous to a tower in a two-tower recommender, but conditioned on intent rather than item) tends to preserve identifiability under cohort drift.

2. Stability under noise. The mapping f must not change drastically for small perturbations of U or T — otherwise the user perceives the interface as flickering and untrustworthy. We enforce this with a temporal smoothness regulariser on R: render schemas are penalised for high edit distance from the previous render unless an interaction event of sufficient weight has occurred. This is conceptually similar to trust-region updates in policy optimisation.

3. Counterfactual reasoning. Because we only ever observe outcomes for the render we shipped, every deployed adaptive system has a logged-policy / target-policy gap. We use inverse propensity scoring and small online exploration arms (Thompson-like sampling over render variants) to keep the gradient honest, but the bias-variance trade-off here is genuinely difficult and remains an open area.
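Properties 2 and 3 both reduce to a few lines in toy form. The drift threshold, event weights, and probabilities below are synthetic; this is a sketch of the mechanics, not our production regulariser or estimator.

```typescript
// --- Property 2: stability under noise (toy smoothness penalty) ---

// Edit distance between renders, counted over their section lists.
function renderDistance(prev: string[], next: string[]): number {
  const removed = prev.filter((s) => !next.includes(s)).length;
  const added = next.filter((s) => !prev.includes(s)).length;
  return removed + added;
}

// Accept a new render only if the change is small, or the triggering
// interaction carries enough weight (e.g. an explicit mode switch).
function acceptRender(prev: string[], next: string[], eventWeight: number, maxDrift = 2): boolean {
  return renderDistance(prev, next) <= maxDrift || eventWeight >= 1.0;
}

// --- Property 3: counterfactual reasoning (vanilla IPS estimate) ---

type LoggedRender = {
  reward: number;     // observed outcome for the render we shipped
  loggedProb: number; // probability the logging policy chose it
  targetProb: number; // probability the target policy would choose it
};

// Re-weight logged rewards by the policy ratio; clipping the weight
// trades bias for variance.
function ipsEstimate(logs: LoggedRender[], clip = Infinity): number {
  const total = logs.reduce(
    (sum, l) => sum + Math.min(l.targetProb / l.loggedProb, clip) * l.reward,
    0
  );
  return total / logs.length;
}
```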

Open problems we find most interesting

Specification gaming in render rewards

A model rewarded for engagement will, given enough degrees of freedom, learn to be sticky in ways the user did not endorse. Reward shaping for adaptive UI is a special case of the alignment problem with unusually fast feedback loops — which makes it both tractable and dangerous.

The taxonomy of user state

What is the right basis for the user-state vector? Role, mood, skill, intent, fatigue, recent context — these are not orthogonal. We are interested in self-supervised pre-training of user-state representations from task trajectories across products.

Cross-session memory & consent

Memory makes adaptation deeper but raises the bar on consent, retention, and right-to-forget. The technical and ethical frontiers here advance at different speeds; we think the right answer is local-first memory with explicit user-visible state.

Evaluation without ground truth

There is no "correct" interface for a user; only better and worse. Standard A/B tests are not enough — they reward short-horizon metrics. We are exploring multi-fidelity evaluation that combines offline replay, expert review, and small-cell long-horizon trials.

A reading list, opinionated

  • Programmes as User Interfaces — Brad Myers and others; the original argument that the UI should reason about the user, not the other way around.
  • Recommender Systems Handbook — Ricci, Rokach, Shapira; the canonical reference for everything you should not redo from scratch.
  • RLHF and constitutional methods — Ouyang et al., Bai et al.; the modern reward-shaping toolkit and its limits.
  • Mixture-of-experts routing — useful as an architectural metaphor: the model picks the expert; the adaptive UI picks the render.
  • Human-AI complementarity — Bansal, Wu, et al.; what makes a human-AI team actually better than either alone.
Same product · three users

One codebase, infinite shapes.

The illustration below shows the same dashboard rendered for three roles. The components are identical. The composition is not.

[Illustration: the same dashboard, three renders]
  • Executive: a single high-stakes KPI with one decision surfaced · Revenue MTD ₹4.2 cr (▲18% vs last month) · Pipeline ₹12.8 cr · Runway 22 mo · decision needed: approve Q4 hiring plan.
  • Analyst: information-dense, drillable, time-series first · a 30-day daily revenue table by region (14-05: North ₹18.2L +12% · South ₹22.4L +8% · West ₹14.1L −3%).
  • Field rep (mobile): a single next action, fat-finger sized, offline-ready · next visit Patel Traders (2.3 km · 14 min) · log a visit by voice or photo · 3 pending pickups.
Beyond the basics

Want to go deeper?

The introduction above is free and unconstrained. Our advanced material — reference architectures, evaluation harnesses, component libraries, and the playbooks we have built for paying clients — sits behind a short conversation.

Advanced playbooks

A short call, a small engagement fee, and we share what is otherwise reserved for clients. We work with a handful of teams at a time, by design.

Start a conversation
Ready when you are

Your product, but it learns the people who use it.

Tell us what you are building. We will tell you the shortest path to an adaptive surface, whether you build it with us or alone.