What are Observability Providers?
Observability providers connect Noxus to external tracing backends so you can inspect every LLM call made by your flows and agents — inputs, outputs, token usage, latency, fallback routing, and human feedback scores — all in one place.

Noxus instruments all AI activity using OpenTelemetry and the OpenInference semantic conventions. Any backend that speaks OTLP receives rich, structured traces without additional configuration.
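To make the instrumentation concrete, here is a rough sketch of the kind of flat attribute map an OpenInference-style LLM span carries. The attribute names follow the published OpenInference semantic conventions; the exact attribute set Noxus emits is an assumption here, and the helper function is illustrative, not part of any Noxus API.

```python
# Sketch: attributes on a single OpenInference LLM span.
# Attribute names follow the OpenInference semantic conventions;
# the exact set Noxus emits may differ.

def llm_span_attributes(model: str, prompt_tokens: int, completion_tokens: int) -> dict:
    """Build a flat attribute map for one LLM invocation span."""
    return {
        "openinference.span.kind": "LLM",
        "llm.model_name": model,
        "llm.token_count.prompt": prompt_tokens,
        "llm.token_count.completion": completion_tokens,
        "llm.token_count.total": prompt_tokens + completion_tokens,
    }

attrs = llm_span_attributes("gpt-4o", 812, 164)
print(attrs["llm.token_count.total"])  # 976
```

Because these are plain key–value attributes, any OTLP-capable backend can index and filter on them without knowing anything about Noxus.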
Supported Backends
- Arize Phoenix
- Langfuse
- Generic OpenTelemetry

Arize Phoenix
Open-source LLM observability — ideal for self-hosted deployments or teams that want full control over their trace data.
- Self-hostable or available as Arize Phoenix Cloud
- Rich UI for trace inspection, span timelines, and prompt analysis
- Supports human feedback annotations directly from the Noxus chat interface
- Authentication via Bearer token (optional for local instances)
Connecting a Provider
1. Open provider settings
   Navigate to Organization Settings > Providers and click Add Provider. Select the observability category.
2. Enter connection details
   Fill in the endpoint URL, project name, and any required credentials. Noxus tests the connection by sending a probe span before saving.
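The endpoint URL is a common misconfiguration point, since Noxus sends spans to exactly the URL you enter. This helper is a sanity check you could run yourself before saving; it is purely illustrative and not part of Noxus.

```python
# Sketch: pre-flight check that an endpoint URL already includes an ingest
# path (e.g. ends in something like /v1/traces) rather than being a bare host.
from urllib.parse import urlparse

def looks_like_full_otlp_endpoint(url: str) -> bool:
    """Return True if the URL carries a non-empty path component."""
    path = urlparse(url).path
    return path not in ("", "/")

print(looks_like_full_otlp_endpoint("https://phoenix.example.com"))            # False
print(looks_like_full_otlp_endpoint("https://phoenix.example.com/v1/traces"))  # True
```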
The endpoint must be the full OTLP path — Noxus does not append /v1/traces automatically. Double-check your backend’s documentation for the exact ingest URL.

What Gets Traced
Every AI execution in Noxus produces a structured trace with the following spans:

| Span | Description |
|---|---|
| Agent | Top-level span for the entire node execution. Captures the full prompt and final response. |
| LLM call | One span per model invocation, including the model ID used, token counts, and latency. |
| Fallback routing | Recorded when the platform falls back from one model to another within a preset. |
| Cache hit | Recorded when a cached response is served, with the cache key and saved latency. |
| Tool call | Spans for each tool or function call made during an agent turn. |
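The span hierarchy from the table can be pictured as a small tree: one Agent span with child spans per model call, fallback, cache hit, or tool invocation. The dict layout below is an illustrative stand-in; real traces carry OTLP span IDs, timestamps, and full OpenInference attributes.

```python
# Sketch: the span hierarchy for one agent execution, modeled as nested dicts.
# Field names are illustrative, not Noxus's wire format.

trace = {
    "span": "Agent",  # top-level span for the entire node execution
    "children": [
        {"span": "LLM call", "model": "gpt-4o", "latency_ms": 420},
        {"span": "Fallback routing", "from": "gpt-4o", "to": "backup-model"},
        {"span": "Tool call", "tool": "web_search"},
    ],
}

def span_names(node: dict) -> list:
    """Flatten the tree into the order a backend's timeline view would show."""
    return [node["span"]] + [c["span"] for c in node.get("children", [])]

print(span_names(trace))  # ['Agent', 'LLM call', 'Fallback routing', 'Tool call']
```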
Human Feedback Scoring
For Phoenix and Langfuse providers, users can submit thumbs-up / thumbs-down feedback on agent responses directly from the Noxus chat interface. Feedback is sent to the backend as a span annotation or score linked to the originating trace, letting you filter and analyze low-quality responses in your observability tool.

Multiple Providers
You can connect more than one observability provider at the same time. Noxus fans out every trace to all active backends in parallel — each backend receives an identical copy of the span data. This is useful for:

- Sending traces to both a self-hosted Phoenix instance and a team Langfuse project
- Running a new backend alongside an existing one during evaluation
- Separating production and staging observability into different projects on the same backend
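The fan-out behavior described above can be sketched in a few lines: each active backend gets its own copy of the span batch, and exports run concurrently so a slow backend doesn't delay the others. The exporter callables here are stand-ins; Noxus's internal dispatch is not shown in this doc.

```python
# Sketch: fan-out of one span batch to every active backend in parallel.
# Exporters are illustrative stand-ins for real OTLP exporters.
from concurrent.futures import ThreadPoolExecutor

def fan_out(span_batch: list, exporters: list) -> list:
    """Send an identical copy of the batch to each exporter concurrently."""
    with ThreadPoolExecutor(max_workers=len(exporters)) as pool:
        futures = [pool.submit(exp, list(span_batch)) for exp in exporters]
        return [f.result() for f in futures]

received = []

def phoenix_exporter(batch):
    received.append(("phoenix", len(batch)))

def langfuse_exporter(batch):
    received.append(("langfuse", len(batch)))

fan_out([{"span": "Agent"}, {"span": "LLM call"}],
        [phoenix_exporter, langfuse_exporter])
print(sorted(received))  # both backends received the same two spans
```

Because each exporter gets its own copy of the batch, one backend mutating or dropping data cannot affect what the others receive.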