# How It Works

### Step-by-Step

Questera’s personalization engine takes a multi-agent approach: dedicated AI agents are each assigned a specific customer-engagement goal, such as onboarding, retention, upsell, or reactivation.

1\. Specialized Goal-Based Agents

* Each agent specializes in achieving one critical marketing objective (e.g., onboarding, retention, upsell).
* They operate independently, continuously observing shared user data, user journeys, and campaign states in real-time.
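As a rough illustration of this pattern, here is a minimal sketch in which two hypothetical goal-based agents independently read the same user snapshot and propose actions. The class and field names (`UserSnapshot`, `propose`, etc.) are assumptions for illustration, not Questera's actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch: one agent per engagement goal, each reading
# the same user snapshot and proposing an action independently.
@dataclass
class UserSnapshot:
    days_since_signup: int
    sessions_last_week: int
    has_completed_setup: bool

class OnboardingAgent:
    goal = "onboarding"

    def propose(self, user: UserSnapshot):
        # Nudge users who signed up recently but never finished setup.
        if user.days_since_signup <= 14 and not user.has_completed_setup:
            return {"agent": self.goal, "action": "send_setup_nudge"}
        return None

class RetentionAgent:
    goal = "retention"

    def propose(self, user: UserSnapshot):
        # Re-engage users whose weekly activity has dropped to zero.
        if user.sessions_last_week == 0 and user.days_since_signup > 14:
            return {"agent": self.goal, "action": "send_reengagement_email"}
        return None

user = UserSnapshot(days_since_signup=3, sessions_last_week=2, has_completed_setup=False)
proposals = [p for a in (OnboardingAgent(), RetentionAgent()) if (p := a.propose(user))]
```

Because each agent owns exactly one goal, adding a new objective means adding a new agent rather than complicating existing logic.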

2\. Shared Memory & Collaborative Decisioning

* Agents access and act upon a shared context and memory store:
  * Real-time user behavior streams
  * Historical interactions (opens, clicks, conversions)
  * User lifecycle stage and intent data
* This common state allows agents to make informed, locally optimized decisions instantly.
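The shared store described above could be sketched roughly like this; the `SharedMemory` class, its methods, and the stage-derivation rule are all simplified assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical shared context store: agents append events and read
# the same per-user state (behavior stream, history, lifecycle stage).
class SharedMemory:
    def __init__(self):
        self.events = defaultdict(list)  # user_id -> behavior stream
        self.lifecycle = {}              # user_id -> stage label

    def record(self, user_id, event):
        self.events[user_id].append(event)
        # Derive lifecycle stage from history (deliberately simplified rule).
        opens = sum(1 for e in self.events[user_id] if e == "open")
        self.lifecycle[user_id] = "engaged" if opens >= 2 else "new"

    def context(self, user_id):
        return {"history": list(self.events[user_id]),
                "stage": self.lifecycle.get(user_id, "new")}

mem = SharedMemory()
for e in ["open", "click", "open"]:
    mem.record("u1", e)
```

The key point is that every agent reads the same `context()`, so their independent decisions are grounded in a single, consistent view of the user.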

3\. Dynamic Coordination (Scheduler Agent)

* A dedicated Scheduler Agent arbitrates decision-making when conflicts or resource constraints arise (e.g., only one email or push notification can be sent today).
* It prioritizes actions based on agent priority, user context, urgency, and potential impact—resolving contention swiftly.
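One way to picture this arbitration is a scoring rule over competing proposals. The weighting below (priority x urgency x impact) is an assumed stand-in, not the engine's actual formula.

```python
# Hypothetical scheduler: when several agents want the single daily
# send slot, pick the proposal(s) with the highest combined score.
def schedule(proposals, daily_budget=1):
    # Score = agent priority weighted by urgency and estimated impact.
    ranked = sorted(
        proposals,
        key=lambda p: p["priority"] * p["urgency"] * p["impact"],
        reverse=True,
    )
    return ranked[:daily_budget]

proposals = [
    {"agent": "upsell", "priority": 1, "urgency": 0.3, "impact": 0.9},
    {"agent": "retention", "priority": 2, "urgency": 0.8, "impact": 0.7},
]
chosen = schedule(proposals)
```

Here the retention proposal wins the slot because its combined score (1.12) beats the upsell's (0.27), matching the intuition that an at-risk user outranks a speculative upgrade prompt.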

4\. Critic Agent: Monitoring & Optimization

* A Critic Agent continuously evaluates the performance of user journeys and agent decisions:
  * Assesses effectiveness, identifies gaps, and detects patterns in successful and unsuccessful journeys.
  * Suggests improvements to other agents (e.g., “Switch channels,” “Adjust messaging tone”).
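A toy version of such a critic might look like the following; the per-channel conversion-rate check and the threshold are illustrative assumptions.

```python
# Hypothetical critic: inspects journey outcomes and suggests a
# tactic change when a channel keeps underperforming.
def critique(outcomes, failure_threshold=0.5):
    suggestions = []
    by_channel = {}
    for o in outcomes:
        stats = by_channel.setdefault(o["channel"], {"sent": 0, "converted": 0})
        stats["sent"] += 1
        stats["converted"] += o["converted"]
    for channel, s in by_channel.items():
        rate = s["converted"] / s["sent"]
        if rate < failure_threshold:
            suggestions.append(f"Switch channels away from {channel}")
    return suggestions

outcomes = [
    {"channel": "email", "converted": 0},
    {"channel": "email", "converted": 0},
    {"channel": "push", "converted": 1},
]
advice = critique(outcomes)
```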

5\. Planner Agent: Long-Term Journey Strategy

* A Planner Agent oversees strategic long-term goals for user journeys (e.g., defining a complete 14-day onboarding pathway).
* It ensures consistent user experiences aligned with broader business outcomes, guiding other agents with overarching strategic direction.
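Sticking with the 14-day onboarding example, a planner's output might be a pathway like the one below. The step names and cadence are invented for illustration; the point is that other agents execute against a precomputed strategic backbone.

```python
# Hypothetical planner: lays out a 14-day onboarding pathway that
# downstream agents follow as the strategic backbone.
def plan_onboarding(days=14):
    pathway = []
    for day in range(1, days + 1):
        if day == 1:
            step = "welcome_message"
        elif day % 7 == 0:
            step = "weekly_progress_summary"
        elif day in (3, 10):
            step = "feature_highlight"
        else:
            step = "observe"  # no outreach; agents just watch signals
        pathway.append({"day": day, "step": step})
    return pathway

pathway = plan_onboarding()
```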

6\. Reflective Learning & Continuous Improvement

* Questera employs a reflection-based learning pattern:
  * Regularly analyzes past outcomes (“Why did the last five retention attempts fail?”).
  * Adjusts tactics, channels, and messaging based on learnings.
  * Continuously refines strategies, enabling agents to learn and improve over time.
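The “last five retention attempts” question above can be sketched as a reflection pass that down-weights tactics that keep failing. The window size, penalty factor, and weight table are assumptions for illustration only.

```python
# Hypothetical reflection pass: review the last N attempts and
# reduce confidence in tactics that failed.
def reflect(attempts, weights, window=5, penalty=0.8):
    recent = attempts[-window:]
    for a in recent:
        if not a["succeeded"]:
            # Down-weight the failing tactic for future decisions.
            weights[a["tactic"]] = round(weights[a["tactic"]] * penalty, 3)
    return weights

weights = {"discount_email": 1.0, "push_reminder": 1.0}
attempts = [{"tactic": "discount_email", "succeeded": False}] * 5
updated = reflect(attempts, weights)
```

After five straight failures, `discount_email` drops well below `push_reminder`, so the next scheduling pass naturally favors a different tactic.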

| Step                                 | What Happens                                 |
| ------------------------------------ | -------------------------------------------- |
| 1️⃣ User activity flows in           | Real-time product + event data is ingested   |
| 2️⃣ Agent analyzes the journey stage | Uses behavioral patterns + funnel position   |
| 3️⃣ Next best action is chosen       | Decides if/what message is needed now        |
| 4️⃣ Message is composed + sent       | Fully contextual, dynamic, and timed         |
| 5️⃣ Outcome is tracked + learned     | Success/failure loops back into future logic |
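The five table rows above can be mirrored in one compact cycle. Everything here (the idle-days rule, the message template, the memory shape) is a simplified assumption, not production logic.

```python
# Hypothetical end-to-end pass mirroring the five steps above:
# ingest -> classify stage -> choose action -> compose -> record outcome.
def run_cycle(event, memory):
    memory.setdefault("events", []).append(event)                 # 1. ingest
    stage = "drop_off" if event["idle_days"] >= 7 else "active"   # 2. classify
    if stage == "drop_off":                                       # 3. next best action
        message = f"We miss you, {event['user']}!"                # 4. compose
    else:
        message = None
    memory["last_stage"] = stage                                  # 5. track for learning
    return message

memory = {}
msg = run_cycle({"user": "ada", "idle_days": 9}, memory)
```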

1\. Input Layer: Signals & State Awareness

* Ingests product, CRM, and behavior data
* Classifies users into journey stages (e.g., onboarding, drop-off, power user)
* Detects friction points, intent signals, and churn risks
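A minimal stage classifier for this input layer might look like the following; the thresholds and stage labels are illustrative assumptions.

```python
# Hypothetical stage classifier: map raw signals to a journey stage
# and flag churn risk.
def classify(user):
    if user["days_since_signup"] <= 7:
        stage = "onboarding"
    elif user["sessions_last_week"] == 0:
        stage = "drop_off"
    elif user["sessions_last_week"] >= 10:
        stage = "power_user"
    else:
        stage = "active"
    return {"stage": stage, "churn_risk": stage == "drop_off"}

result = classify({"days_since_signup": 30, "sessions_last_week": 0})
```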

2\. Agent Collaboration Layer

| Agent Type         | Role in Orchestration                            |
| ------------------ | ------------------------------------------------ |
| Activation Agent   | Identifies drop-offs and deploys setup nudges    |
| Churn Agent        | Monitors disengagement, re-engages at-risk users |
| Expansion Agent    | Spots upsell moments, delivers upgrade prompts   |
| Reactivation Agent | Detects dormant users, personalizes winbacks     |
| Routing Agent      | Decides the best channel (email, in-app, SMS)    |
| Coordination Agent | Resolves conflicts and prioritizes actions       |

Each agent owns a segment of the journey but can “talk” to others. For example:

Churn Agent detects risk → asks Routing Agent for best channel → Expansion Agent pauses upsell temporarily.
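That hand-off chain could be sketched as simple message passing over a shared bus. The `Bus` class, its fixed routing rule, and the method names are hypothetical, intended only to show the shape of the coordination.

```python
# Hypothetical message passing: the Churn Agent flags risk, asks the
# Routing Agent for a channel, and the Expansion Agent pauses upsell.
class Bus:
    def __init__(self):
        self.paused_upsell = set()

    def route(self, user_id):
        # Routing Agent: pick the channel (fixed rule for this sketch).
        return "in_app"

    def pause_upsell(self, user_id):
        # Expansion Agent: temporarily stand down for at-risk users.
        self.paused_upsell.add(user_id)

def churn_agent_detects_risk(bus, user_id):
    channel = bus.route(user_id)   # ask Routing Agent for best channel
    bus.pause_upsell(user_id)      # notify Expansion Agent to hold off
    return {"user": user_id, "action": "re_engage", "channel": channel}

bus = Bus()
decision = churn_agent_detects_risk(bus, "u42")
```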

3\. Execution Layer: Journey Orchestration

* Launches multi-step campaigns
* Adjusts content and channel mid-journey
* Dynamically updates based on user interaction

E.g., “User didn’t open email → resend via in-app”
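That fallback rule can be written as a small function over the journey so far; the journey record shape is an assumption for illustration.

```python
# Hypothetical mid-journey adjustment: if the email went unopened,
# resend the same message via the in-app channel.
def next_step(journey):
    last = journey[-1]
    if last["channel"] == "email" and not last["opened"]:
        return {"channel": "in_app", "message": last["message"]}
    return None  # no adjustment needed

journey = [{"channel": "email", "message": "Finish setup", "opened": False}]
fallback = next_step(journey)
```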

4\. Feedback Loop & Learning

* Agents observe what worked
* Update scoring models, message templates, and journey logic
* Next actions are smarter by default
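One simple way such a feedback loop could update a score is an exponential moving average over observed outcomes, so each result nudges the value used for the next decision. The learning rate and update rule are assumptions, not the engine's actual model.

```python
# Hypothetical feedback update: exponential moving average over
# outcomes (1.0 = success, 0.0 = failure).
def update_score(score, outcome, lr=0.2):
    return round(score + lr * (outcome - score), 4)

score = 0.5
for outcome in [1.0, 1.0, 0.0]:
    score = update_score(score, outcome)
```

Two successes pull the score up to 0.68 and one failure pulls it back to 0.544, so recent evidence always shifts what the agent tries next.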

This is what makes it a compounding system — not just a static one.
