
What are Observability Providers?

Observability providers connect Noxus to external tracing backends so you can inspect every LLM call made by your flows and agents — inputs, outputs, token usage, latency, fallback routing, and human feedback scores — all in one place.
Noxus instruments all AI activity using OpenTelemetry and the OpenInference semantic conventions. Any backend that speaks OTLP receives rich, structured traces without additional configuration.
Each provider you connect receives a full copy of every trace. You can connect multiple backends simultaneously — useful for routing to both an internal Phoenix instance and a team Langfuse project at the same time.
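For reference, the kind of OTLP export Noxus performs internally can be sketched with the OpenTelemetry SDK. This is a minimal configuration sketch, not Noxus code; the endpoint URL and API key below are placeholders:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Point an OTLP/HTTP exporter at the backend's full trace-ingest URL.
exporter = OTLPSpanExporter(
    endpoint="https://phoenix.example.com/v1/traces",  # placeholder URL
    headers={"Authorization": "Bearer <api-key>"},     # omit for unauthenticated local instances
)

# Batch spans and export them in the background.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```

Any backend that accepts OTLP over HTTP can be targeted this way, which is why a generic OpenTelemetry provider works alongside the named integrations.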

Supported Backends

Phoenix

Open-source LLM observability — ideal for self-hosted deployments or teams that want full control over their trace data.
  • Self-hostable or available as Arize Phoenix Cloud
  • Rich UI for trace inspection, span timelines, and prompt analysis
  • Supports human feedback annotations directly from the Noxus chat interface
  • Authentication via Bearer token (optional for local instances)

Langfuse and generic OpenTelemetry (OTLP) endpoints are also supported; pick whichever backend fits your stack.

Connecting a Provider

1. Open provider settings. Navigate to Organization Settings > Providers and click Add Provider. Select the observability category.

2. Choose your backend. Pick Phoenix, Langfuse, or OpenTelemetry from the provider list.

3. Enter connection details. Fill in the endpoint URL, project name, and any required credentials. Noxus tests the connection by sending a probe span before saving.

4. Activate. Once the connection test passes, the provider is marked active and tracing begins immediately for all new flow and agent executions.
The endpoint must be the full OTLP path — Noxus does not append /v1/traces automatically. Double-check your backend’s documentation for the exact ingest URL.
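Because Noxus sends spans to the URL exactly as entered, a quick sanity check on the path can save a failed connection test. Below is a small illustrative helper (the /v1/traces suffix is the OTLP/HTTP default; some backends use a different ingest path):

```python
from urllib.parse import urlparse

def looks_like_otlp_endpoint(url: str, expected_path: str = "/v1/traces") -> bool:
    """Return True if the URL already includes the trace-ingest path."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and parsed.path.rstrip("/").endswith(expected_path)

# A bare host is not enough; the full ingest path must be present.
print(looks_like_otlp_endpoint("https://phoenix.example.com"))            # False
print(looks_like_otlp_endpoint("https://phoenix.example.com/v1/traces"))  # True
```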

What Gets Traced

Every AI execution in Noxus produces a structured trace with the following spans:
  • Agent: Top-level span for the entire node execution. Captures the full prompt and final response.
  • LLM call: One span per model invocation, including the model ID used, token counts, and latency.
  • Fallback routing: Recorded when the platform falls back from one model to another within a preset.
  • Cache hit: Recorded when a cached response is served, with the cache key and saved latency.
  • Tool call: One span for each tool or function call made during an agent turn.
All spans follow the OpenInference semantic conventions, making them compatible with any OpenInference-aware UI.
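As a concrete illustration, an LLM-call span carries flat key/value attributes like the ones below. The attribute names follow the OpenInference conventions; the values are invented for the example:

```python
# Illustrative OpenInference attributes for a single LLM-call span.
llm_span_attributes = {
    "openinference.span.kind": "LLM",       # span type per the OpenInference conventions
    "llm.model_name": "gpt-4o",             # example model ID, not a Noxus default
    "llm.token_count.prompt": 412,          # tokens consumed by the prompt
    "llm.token_count.completion": 87,       # tokens in the model's response
    "input.value": "Summarize the meeting notes.",
    "output.value": "The team agreed to ship on Friday.",
}

# Flat, well-known attribute names are what make the span queryable
# in any OpenInference-aware UI.
total_tokens = (llm_span_attributes["llm.token_count.prompt"]
                + llm_span_attributes["llm.token_count.completion"])
print(total_tokens)  # 499
```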

Human Feedback Scoring

For Phoenix and Langfuse providers, users can submit thumbs-up / thumbs-down feedback on agent responses directly from the Noxus chat interface. Feedback is sent to the backend as a span annotation or score linked to the originating trace, letting you filter and analyze low-quality responses in your observability tool.
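For a sense of the shape of such a score, here is a sketch of a payload builder in the style of Langfuse's public scores API. The endpoint path and field names are taken from Langfuse's documentation at the time of writing and should be verified against your Langfuse version; the score name is an invented example:

```python
import json

def feedback_score_payload(trace_id: str, thumbs_up: bool, comment: str = "") -> str:
    """Build a Langfuse-style score body linking user feedback to a trace.

    Field names follow Langfuse's public scores API (POST /api/public/scores);
    check your Langfuse version's docs before relying on them.
    """
    body = {
        "traceId": trace_id,              # the originating trace to annotate
        "name": "user-feedback",          # illustrative score name
        "value": 1 if thumbs_up else 0,   # thumbs-up / thumbs-down as 1 / 0
    }
    if comment:
        body["comment"] = comment
    return json.dumps(body)

print(feedback_score_payload("trace-123", thumbs_up=False))
```

Because the score references the trace ID, the backend can join feedback to the full span tree, which is what enables filtering traces by response quality.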

Multiple Providers

You can connect more than one observability provider at the same time. Noxus fans out every trace to all active backends in parallel — each backend receives an identical copy of the span data. This is useful for:
  • Sending traces to both a self-hosted Phoenix instance and a team Langfuse project
  • Running a new backend alongside an existing one during evaluation
  • Separating production and staging observability into different projects on the same backend
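The fan-out itself is a simple pattern: each finished span is handed to every configured backend independently. A minimal stdlib sketch of the idea (a conceptual illustration, not Noxus's actual implementation):

```python
class FanOutExporter:
    """Forward every span to all configured backends."""

    def __init__(self, *backends):
        self.backends = list(backends)

    def export(self, span: dict) -> None:
        for backend in self.backends:
            # Each backend receives its own copy, so one backend can never
            # mutate what the others see.
            backend.append(dict(span))

# Two lists standing in for a Phoenix and a Langfuse backend.
phoenix, langfuse = [], []
exporter = FanOutExporter(phoenix, langfuse)
exporter.export({"name": "Agent", "latency_ms": 1240})

print(len(phoenix), len(langfuse))  # 1 1
```

In OpenTelemetry terms, the same effect is achieved by registering one span processor per exporter on a single tracer provider.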