Quick Start
Track your first AI interaction in under 5 minutes.
Prerequisites
- An Atheon Gateway Account.
- A Project API Key (get it from the Project Settings page).
Step 1: Install
pip install atheon-codex
Step 2: Initialise & Track (Backend)
Call atheon.init() once at startup, then call atheon.track() after every LLM call. Both are non-blocking, so your API response time is unaffected.
import os
import atheon

# Call once at application startup
atheon.init(os.environ["ATHEON_API_KEY"])

# Your chat handler
def chat_handler():
    user_query = "How do I center a div?"
    llm_output = "You can use Flexbox..."  # response from your LLM call

    interaction_id = atheon.track(
        provider="openai",
        model_name="gpt-4o",
        input=user_query,
        output=llm_output,
        tokens_input=14,
        tokens_output=32,
        finish_reason="stop",
    )

    # Return the interaction_id to your frontend
    return {
        "reply": llm_output,
        "interaction_id": str(interaction_id),
    }
Step 3: Render (Frontend)
Load the Atheon script and wrap your output in <atheon-container>, passing the interaction_id returned by your backend.
<script
data-atheon-publisher-key="YOUR_PUBLISHER_KEY"
src="https://js.atheon.ad/atheon.js"
defer
></script>
<atheon-container id="chat-bubble">
  <div id="text-content"></div>
</atheon-container>

<script>
  // backendResponse is the JSON object returned by your chat handler
  const { reply, interaction_id } = backendResponse;
  document.getElementById('text-content').innerText = reply;
  document.getElementById('chat-bubble').setAttribute('interaction-id', interaction_id);
</script>
Step 4: Shut Down Gracefully
Call atheon.shutdown() when your process exits to flush any remaining queued events.
atheon.shutdown()
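If your process can exit from several code paths, one convenient pattern is to register the flush with the standard library's atexit module so it runs on any clean interpreter exit. A sketch with a stand-in flush function (in your app you would register atheon.shutdown itself):

```python
import atexit

def flush_events() -> str:
    # Stand-in for atheon.shutdown(); in a real app register the SDK call:
    #   atexit.register(atheon.shutdown)
    return "flushed"

# Register once at startup; runs automatically on clean interpreter exit.
atexit.register(flush_events)
```

Note that atexit handlers run on normal termination only; a SIGKILL or hard crash still skips the flush.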
That's it — your interaction is now tracked, fingerprinted, and visible in the Atheon Dashboard.
Next Steps
- Streaming & multi-turn flows → use atheon.begin() / interaction.finish() so latency is measured automatically. See Server-Side Integration →
- Tool & agent tracking → use the @atheon.tool and @atheon.agent decorators. See Server-Side Integration →
- Async frameworks (FastAPI, etc.) → use atheon.async_init() and atheon.async_track(). See Server-Side Integration →