Overview

Trace Forwarding lets your organization push OpenTelemetry (OTLP) trace data from Elementum environments to external platforms: observability tools, AI evaluation suites, or any system that accepts OTLP over gRPC or HTTP. You can configure multiple endpoints per channel to route traces to different teams or platforms in parallel. To configure it, go to Organization Settings > Platform > Trace Forwarding. Only Organization Administrators can view and modify these settings.

Trace channels

Two independent channels are available:
| Channel | What it forwards | Default protocol | Example destinations |
| --- | --- | --- | --- |
| General Traces | Operational telemetry from all Elementum activities | gRPC | Datadog, Grafana, Jaeger, Honeycomb |
| GenAI Traces | Detailed LLM and agent spans from AI-powered operations | HTTP | Weave (W&B), LangSmith, Arize |

Quick start: add your first endpoint

  1. Go to Organization Settings > Platform > Trace Forwarding.
  2. Click Add General Endpoint or Add GenAI Endpoint depending on the type of traces you want to forward.
  3. Enter a Name, your Endpoint URL, and confirm the Protocol matches what your destination platform expects.
  4. Select an Authorization Type and fill in the required credentials.
  5. Optionally, add Resource Attributes to tag every span with metadata such as environment or team.
  6. Click Save & Test. A green Message delivered result confirms the endpoint is reachable.
  7. Click Enable to activate the endpoint.
  8. Go to Organization Settings > Platform > Environments, click Edit on an environment card, select your endpoint under Trace Forwarding, and click Save Changes.

Add or edit an endpoint

Click Add General Endpoint or Add GenAI Endpoint to open the endpoint dialog. To edit an existing endpoint, click the pencil icon in its row. Both actions open the same dialog.

Connection

| Field | Required | Details |
| --- | --- | --- |
| Name | Yes | A human-readable label that identifies the endpoint in the list. |
| Endpoint URL | Yes | The full URL of your OTLP receiver. Must be a valid HTTP or HTTPS URL. |
| Protocol | Yes | gRPC (binary, efficient) or HTTP (REST-based). Defaults to gRPC for General Traces and HTTP for GenAI Traces. |
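As a rough illustration of the Endpoint URL requirement, a validity check might look like the following sketch (is_valid_otlp_url is a hypothetical helper, not part of Elementum):

```python
from urllib.parse import urlparse

def is_valid_otlp_url(url: str) -> bool:
    """Return True when the string is a well-formed HTTP or HTTPS URL."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

print(is_valid_otlp_url("https://otlp.example.com:4318/v1/traces"))  # True
print(is_valid_otlp_url("otlp.example.com"))  # False: missing scheme
```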

Authentication

Select one of four options from the Authorization Type dropdown:
| Type | Behavior |
| --- | --- |
| None | No authentication headers are sent. Use only for internal or open endpoints. |
| Bearer Token | Sends an Authorization: <scheme> <token> header. Requires a Bearer Token; Bearer Scheme is optional and defaults to Bearer. |
| Basic Auth | Sends an Authorization: Basic <base64> header. Requires Username and Password. |
| Custom Headers | Sends one or more arbitrary request headers. Add as many Header Key / Header Value pairs as needed. |
Secret values (tokens, passwords, header values) are encrypted at rest. When you reopen an endpoint for editing, existing secrets are masked as ****. Click Overwrite to replace a secret, or leave it masked to keep the current value.
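The Bearer and Basic behaviors described above can be sketched as simple header builders (these helpers are illustrative, not Elementum code):

```python
import base64

def bearer_header(token: str, scheme: str = "Bearer") -> dict:
    """Bearer Token: Authorization: <scheme> <token>; scheme defaults to Bearer."""
    return {"Authorization": f"{scheme} {token}"}

def basic_header(username: str, password: str) -> dict:
    """Basic Auth: Authorization: Basic <base64 of username:password>."""
    encoded = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {encoded}"}

print(bearer_header("abc123"))       # {'Authorization': 'Bearer abc123'}
print(basic_header("user", "pass"))  # {'Authorization': 'Basic dXNlcjpwYXNz'}
```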

Optional fields

  • Additional headers — Extra HTTP headers attached to every trace export request beyond those required for authentication. Click + Add Header to add key-value pairs.
  • Resource attributes — Key-value pairs added to the OTLP resource on every exported span. Use these to tag trace data with environment, team, or deployment metadata that your external platform can filter on (e.g. environment = production, team = ai-platform).
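In OTLP's JSON encoding, resource attributes are attached to the resource on every exported span roughly as shown below (a simplified sketch of the standard OTLP/JSON resource shape, not Elementum's exact payload):

```python
def otlp_resource(attributes: dict) -> dict:
    """Render string key-value pairs in the OTLP/JSON resource shape."""
    return {
        "resource": {
            "attributes": [
                {"key": k, "value": {"stringValue": v}}
                for k, v in attributes.items()
            ]
        }
    }

print(otlp_resource({"environment": "production", "team": "ai-platform"}))
```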
Click Save & Test to save the endpoint and verify connectivity immediately. General Trace endpoints also offer a Save option that skips the connection test. GenAI endpoints require Save & Test — connectivity must be verified before the endpoint is saved.

Manage endpoints

Test a connection

A connection test sends a sample OTLP trace to the configured endpoint. You can run a test two ways:
  • Click Save & Test when creating or editing an endpoint.
  • Click the send icon (Test Connection) in the endpoint list row.
The test times out after 30 seconds. A result modal shows latency, HTTP status code, trace ID, and span ID on success, or an error message and guidance on failure. From the modal you can Edit Connection, toggle Enable / Disable, or Cancel to close.
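Conceptually, a connection test builds and posts a small OTLP/HTTP export. A minimal standard-library sketch of the request construction (the empty payload and helper are illustrative, not Elementum's actual test):

```python
import json
import urllib.request

TEST_TIMEOUT_SECONDS = 30  # matches the documented 30-second test timeout

def build_test_request(endpoint_url: str, headers: dict) -> urllib.request.Request:
    """Build (but do not send) a sample OTLP/HTTP trace export request."""
    payload = json.dumps({"resourceSpans": []}).encode()
    request = urllib.request.Request(endpoint_url, data=payload, method="POST")
    request.add_header("Content-Type", "application/json")
    for key, value in headers.items():
        request.add_header(key, value)
    return request

# A real test would then call urllib.request.urlopen(request, timeout=TEST_TIMEOUT_SECONDS)
req = build_test_request("https://otlp.example.com/v1/traces",
                         {"Authorization": "Bearer abc123"})
```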

Assign to an environment

After configuring and enabling an endpoint, assign it to one or more environments. Traces are only forwarded from environments where an endpoint is assigned.
  1. Go to Organization Settings > Platform > Environments.
  2. Find the environment you want to forward traces from and click Edit on its card.
  3. In the Trace Forwarding section, select your configured endpoint from the list.
  4. Click Save Changes.
Repeat for each environment you want to forward traces from.

Delete an endpoint

  1. Click the trash icon in the endpoint list row.
  2. Confirm the deletion in the dialog that appears.

Troubleshooting

  • Invalid credentials: the token or password is wrong or expired. Re-enter the credentials and ensure the API key has the correct scopes.
  • Incorrect endpoint URL: double-check the URL and, for HTTP endpoints, confirm the path includes /v1/traces or the equivalent path required by your platform.
  • Network or firewall issues: verify the endpoint host is reachable from the Elementum backend and check any applicable firewall rules.
  • Test passes but no traces arrive: check that the endpoint is Enabled and assigned to the environment you are testing from. A passing test only confirms connectivity, not that traces are being produced; trigger an AI operation in Elementum to generate spans.
  • Trace Forwarding settings are not visible: the feature flag is not enabled for your organization. Contact your Elementum account team to enable the AiTelemetryTableForwarding feature flag.

GenAI attribute reference

LLM call spans

These attributes appear on spans representing a single call to a language model. Span names follow the pattern chat <model>.
| Attribute | Type | Description |
| --- | --- | --- |
| gen_ai.system | string | The LLM provider (e.g. aws.bedrock, openai, anthropic). |
| gen_ai.request.model | string | The model name that was requested. |
| gen_ai.response.model | string | The model name that actually responded. |
| gen_ai.usage.input_tokens | integer | Number of tokens in the prompt. |
| gen_ai.usage.output_tokens | integer | Number of tokens in the completion. |
| gen_ai.prompt | string (JSON) | Input messages as a JSON array of {role, content} objects. |
| gen_ai.completion | string (JSON) | Output messages as a JSON array of {role, content} objects. |
| gen_ai.operation.name | string | Always "chat" for LLM call spans. |
| gen_ai.response.finish_reasons | string[] | Why the model stopped generating (e.g. ["stop"]). |
| gen_ai.response.id | string | The provider’s unique response identifier. |
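Because gen_ai.prompt and gen_ai.completion are JSON-encoded strings, a consumer decodes them before use. A sketch with hypothetical attribute values:

```python
import json

# Hypothetical attribute values from a forwarded LLM call span
span_attributes = {
    "gen_ai.system": "openai",
    "gen_ai.operation.name": "chat",
    "gen_ai.usage.input_tokens": 42,
    "gen_ai.usage.output_tokens": 7,
    "gen_ai.prompt": json.dumps([{"role": "user", "content": "Hello"}]),
    "gen_ai.completion": json.dumps([{"role": "assistant", "content": "Hi there"}]),
}

prompt_messages = json.loads(span_attributes["gen_ai.prompt"])
total_tokens = (span_attributes["gen_ai.usage.input_tokens"]
                + span_attributes["gen_ai.usage.output_tokens"])
print(prompt_messages[0]["content"], total_tokens)  # Hello 49
```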

Tool execution spans

These attributes appear on spans representing an agent executing a tool. Span names follow the pattern execute_tool <tool_name>.
| Attribute | Type | Description |
| --- | --- | --- |
| gen_ai.tool.name | string | The function name of the tool that was called. |
| gen_ai.tool.call.id | string | Links this execution back to the LLM’s tool call request. |
| gen_ai.operation.name | string | Always "execute_tool" for tool execution spans. |
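The gen_ai.tool.call.id attribute lets a consumer join a tool execution span back to the chat span that requested it. A sketch with hypothetical data:

```python
# Hypothetical: tool calls requested by a chat span, keyed by call id
chat_tool_calls = {"call_1": "search"}

# Hypothetical attributes from a tool execution span
tool_span = {
    "gen_ai.tool.name": "search",
    "gen_ai.tool.call.id": "call_1",
    "gen_ai.operation.name": "execute_tool",
}

requested_name = chat_tool_calls[tool_span["gen_ai.tool.call.id"]]
matches = requested_name == tool_span["gen_ai.tool.name"]
print(matches)  # True
```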

Span tree structure

Traces from Elementum AI operations are organized into a span hierarchy:
| Interaction type | Span structure |
| --- | --- |
| Single LLM call | root → chat |
| Tool-calling agent (one round) | root → chat → tool → chat |
| Multi-step agent (multiple rounds) | root → chat → tool → chat → tool → chat → … |
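The hierarchies above can be reconstructed from forwarded spans by following parent IDs. A minimal sketch with hypothetical span records:

```python
def render_tree(spans, parent_id=None, depth=0):
    """Render spans (id, parent_id, name) as an indented tree, depth-first."""
    lines = []
    for span in spans:
        if span["parent_id"] == parent_id:
            lines.append("  " * depth + span["name"])
            lines.extend(render_tree(spans, span["id"], depth + 1))
    return lines

# One round of a tool-calling agent: root → chat → tool → chat
spans = [
    {"id": "1", "parent_id": None, "name": "root"},
    {"id": "2", "parent_id": "1", "name": "chat gpt-4o"},
    {"id": "3", "parent_id": "2", "name": "execute_tool search"},
    {"id": "4", "parent_id": "3", "name": "chat gpt-4o"},
]
print("\n".join(render_tree(spans)))
```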