
Architecture

Understanding how Atheon differs from traditional analytics tools is key to unlocking its full potential.

The Blind Spot of Traditional Analytics

Standard analytics tools (such as Google Analytics or Mixpanel) track clicks and pageviews from the user's browser. They were designed around page-and-click interactions, which means:

  1. They have zero visibility into the actual conversational intent of your users.
  2. They cannot track backend AI Agent performance (latency, token usage, tool calls, success rates).
  3. Attempting to scrape DOM elements to understand AI responses risks exposing raw PII and sensitive chat logs.

The Atheon Solution: Server-Side Decisioning

Atheon splits the responsibility between your server and the client to provide deep insights without interfering with live requests.

flowchart TD
    User[User] -->|Asks a question| Frontend[Frontend]

    Frontend -->|User Query| Backend[Backend]
    Backend -->|provider, model, input, output, tokens| AtheonSDK[Atheon Codex SDK]

    AtheonSDK -->|Intent Fingerprinting & Agent Analytics| Dashboard[Atheon Gateway Dashboard]

    Backend -->|interaction_id| Frontend

    Frontend -->|Wrapped in atheon-container| Container[Atheon Container]
    Container -->|Drop-offs & Journey Tracking| Analytics[Business Intelligence]

1. The Logic Layer (Server-Side)

When an LLM interaction occurs, your backend instruments it using the Codex SDK. Events are enqueued immediately and flushed to the Gateway in a background thread — your response time is never affected.
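The queue-and-flush technique behind this can be illustrated with a minimal standard-library sketch. This shows the pattern, not the SDK's actual implementation; the event fields and the `flush_fn` hook are illustrative:

```python
import queue
import threading

class BackgroundTracker:
    """Minimal sketch of a fire-and-forget event queue.

    Events are enqueued on the request path (cheap, no I/O) and
    flushed to the analytics backend by a daemon thread, off the
    hot path.
    """

    def __init__(self, flush_fn):
        self._queue = queue.Queue()
        self._flush_fn = flush_fn  # e.g. an HTTP POST to the Gateway (assumption)
        threading.Thread(target=self._worker, daemon=True).start()

    def track(self, event: dict) -> None:
        # Called from your request handler: just an enqueue.
        self._queue.put(event)

    def _worker(self) -> None:
        while True:
            event = self._queue.get()
            try:
                self._flush_fn(event)  # network call happens here, not in the request
            finally:
                self._queue.task_done()

# Usage: the request handler returns immediately after .track()
sent = []
tracker = BackgroundTracker(flush_fn=sent.append)
tracker.track({"provider": "example", "model": "example-model", "tokens": 120})
tracker._queue.join()  # demo only; production code never blocks on the queue
```

Because the worker thread is a daemon, it never keeps the process alive; a real implementation would also batch events and flush on shutdown.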

  • Fire-and-forget tracking: atheon.track() enqueues a completed interaction in microseconds. Use atheon.begin() / interaction.finish() for streaming or multi-turn flows where latency is measured automatically.
  • Tool & Agent tracking: The @atheon.tool and @atheon.agent decorators hook automatically into the active interaction via Python's ContextVar — no manual plumbing needed.
  • Intent Fingerprinting: The Gateway analyzes the interaction and extracts rich metadata (Persona, Context, Problem), building your Knowledge Graph without storing raw, sensitive logs.
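The ContextVar mechanism the decorators rely on can be sketched as follows. This is a simplified illustration of the technique, not the SDK's source; the names `current_interaction`, `Interaction`, and `tool` are hypothetical:

```python
import contextvars
import functools

# Holds the interaction active in the current execution context, so
# nested tool calls can find it without being passed a handle explicitly.
current_interaction = contextvars.ContextVar("current_interaction", default=None)

class Interaction:
    def __init__(self):
        self.tool_calls = []

def tool(fn):
    """Sketch of a @tool-style decorator: records each call on whatever
    interaction is active in the ambient context, if any."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        interaction = current_interaction.get()
        result = fn(*args, **kwargs)
        if interaction is not None:
            interaction.tool_calls.append(fn.__name__)
        return result
    return wrapper

@tool
def search(query):
    return f"results for {query}"

# Begin an interaction, then call tools normally: no manual plumbing.
interaction = Interaction()
token = current_interaction.set(interaction)
search("pricing")
current_interaction.reset(token)
```

Because `ContextVar` is scoped per async task and per thread, this wiring stays correct even when many interactions are in flight concurrently, which is why no interaction handle needs to be threaded through your call stack.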

2. The Presentation Layer (Client-Side)

Your backend returns the interaction_id alongside the LLM response. The frontend passes it to the <atheon-container> web component — there is no heavy analytics script blocking the thread.

  • Business Intelligence: By wrapping your UI in <atheon-container interaction-id="...">, Atheon securely tracks user journey funnels, drop-off reasons, and actual viewability. It bridges the gap between what the backend generated and how the user actually engaged with it.
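Returning the id alongside the model output might look like this on the backend. This is a sketch: the response shape, field names, and the stubbed LLM call are illustrative, not a prescribed contract, and the id would come from the SDK rather than being generated inline:

```python
import json
import uuid

def handle_chat(user_query: str) -> str:
    # 1. Call your LLM provider (stubbed here for illustration).
    llm_output = f"Answer to: {user_query}"

    # 2. Track the interaction; here we generate a placeholder id.
    interaction_id = str(uuid.uuid4())

    # 3. Return both to the frontend, which passes the id to
    #    <atheon-container interaction-id="...">.
    return json.dumps({
        "answer": llm_output,
        "interaction_id": interaction_id,
    })

payload = json.loads(handle_chat("What plans do you offer?"))
```

The key point is that the id travels with the normal response payload, so the frontend needs no extra round trip to correlate what it renders with the backend interaction.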

Why this matters

  • Privacy First: Metadata and Intent Fingerprinting give you a clear picture of user intent without ever exposing sensitive chat histories.
  • Non-blocking: The background queue means analytics never add latency to your API responses.
  • Full Context: You correlate frontend retention cohorts directly with backend prompt logic, token usage, tool call patterns, and AI agent success rates.