Overview
Trace Forwarding lets your organization push OpenTelemetry (OTLP) trace data from Elementum environments to external platforms — observability tools, AI evaluation suites, or any system that accepts OTLP over gRPC or HTTP. You can configure multiple endpoints per channel to route traces to different teams or platforms in parallel.

Trace channels

Two independent channels are available:

| Channel | What it forwards | Default protocol | Example destinations |
|---|---|---|---|
| General Traces | Operational telemetry from all Elementum activities | gRPC | Datadog, Grafana, Jaeger, Honeycomb |
| GenAI Traces | Detailed LLM and agent spans from AI-powered operations | HTTP | Weave (W&B), LangSmith, Arize |
Quick start: add your first endpoint
- Go to Organization Settings > Platform > Trace Forwarding.
- Click Add General Endpoint or Add GenAI Endpoint depending on the type of traces you want to forward.
- Enter a Name, your Endpoint URL, and confirm the Protocol matches what your destination platform expects.
- Select an Authorization Type and fill in the required credentials.
- Optionally, add Resource Attributes to tag every span with metadata such as environment or team.
- Click Save & Test. A green Message delivered result confirms the endpoint is reachable.
- Click Enable to activate the endpoint.
- Go to Organization Settings > Platform > Environments, click Edit on an environment card, select your endpoint under Trace Forwarding, and click Save Changes.
Add or edit an endpoint
Click Add General Endpoint or Add GenAI Endpoint to open the endpoint dialog. To edit an existing endpoint, click the pencil icon in its row. Both actions open the same dialog.

Connection
| Field | Required | Details |
|---|---|---|
| Name | Yes | A human-readable label used to identify the endpoint in the list. |
| Endpoint URL | Yes | The full URL of your OTLP receiver. Must be a valid HTTP or HTTPS URL. |
| Protocol | Yes | gRPC (binary, efficient) or HTTP (REST-based). Defaults to gRPC for General Traces and HTTP for GenAI Traces. |
Authentication
Select one of four options from the Authorization Type dropdown:

| Type | Behavior |
|---|---|
| None | No authentication headers are sent. Use only for internal or open endpoints. |
| Bearer Token | Sends an Authorization: <scheme> <token> header. Requires a Bearer Token; Bearer Scheme is optional and defaults to Bearer. |
| Basic Auth | Sends an Authorization: Basic <base64> header. Requires Username and Password. |
| Custom Headers | Sends one or more arbitrary request headers. Add as many Header Key / Header Value pairs as needed. |
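The header each Authorization Type produces can be sketched in a few lines. This is an illustrative stdlib-only sketch, not Elementum's implementation; the field names (`bearer_token`, `bearer_scheme`, etc.) mirror the dialog labels and are assumptions.

```python
import base64


def auth_headers(auth_type, **cfg):
    """Build the HTTP headers each Authorization Type would send.

    Illustrative only: keys mirror the dialog fields, not a real API.
    """
    if auth_type == "none":
        return {}  # no authentication headers at all
    if auth_type == "bearer":
        # Bearer Scheme is optional and defaults to "Bearer"
        scheme = cfg.get("bearer_scheme", "Bearer")
        return {"Authorization": f"{scheme} {cfg['bearer_token']}"}
    if auth_type == "basic":
        # Basic auth is base64("username:password")
        creds = f"{cfg['username']}:{cfg['password']}".encode()
        return {"Authorization": "Basic " + base64.b64encode(creds).decode()}
    if auth_type == "custom":
        # Arbitrary Header Key / Header Value pairs, passed through as-is
        return dict(cfg["headers"])
    raise ValueError(f"unknown auth type: {auth_type}")
```

For example, `auth_headers("bearer", bearer_token="abc", bearer_scheme="Api-Key")` yields `{"Authorization": "Api-Key abc"}`, which is why a custom Bearer Scheme lets you target receivers that expect a non-standard scheme.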
Secret values (tokens, passwords, header values) are encrypted at rest. When you reopen an endpoint for editing, existing secrets are masked as ****. Click Overwrite to replace a secret, or leave it masked to keep the current value.

Optional fields
- Additional headers — Extra HTTP headers attached to every trace export request beyond those required for authentication. Click + Add Header to add key-value pairs.
- Resource attributes — Key-value pairs added to the OTLP resource on every exported span. Use these to tag trace data with environment, team, or deployment metadata that your external platform can filter on (e.g. environment = production, team = ai-platform).
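In OTLP's JSON encoding, resource attributes land as a list of key/value pairs on the resource that accompanies every exported span. A minimal sketch of that shape, using the example attributes above:

```python
import json

# Hypothetical resource attributes as entered in the dialog.
attrs = {"environment": "production", "team": "ai-platform"}

# OTLP/JSON represents resource attributes as a list of
# {"key": ..., "value": {...}} pairs attached to the resource,
# which is why external platforms can filter spans on them.
resource = {
    "attributes": [
        {"key": k, "value": {"stringValue": v}} for k, v in attrs.items()
    ]
}

print(json.dumps(resource, indent=2))
```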
Manage endpoints
Test a connection
A connection test sends a sample OTLP trace to the configured endpoint. You can run a test two ways:

- Click Save & Test when creating or editing an endpoint.
- Click the send icon (Test Connection) in the endpoint list row.
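If you want to verify an HTTP receiver independently of the dialog, you can send a similar sample trace yourself. The sketch below builds a minimal OTLP/JSON export request; the endpoint URL, token, and IDs are placeholders, and the shape is a simplified approximation of what a connection test sends, not Elementum's exact payload.

```python
import json
import urllib.request

# A minimal OTLP/JSON trace export body: one resource, one scope, one span.
payload = {
    "resourceSpans": [{
        "resource": {"attributes": []},
        "scopeSpans": [{
            "scope": {"name": "connection-test"},
            "spans": [{
                "traceId": "5b8efff798038103d269b633813fc60c",  # placeholder hex IDs
                "spanId": "eee19b7ec3c1b174",
                "name": "test-span",
                "kind": 1,
                "startTimeUnixNano": "1700000000000000000",
                "endTimeUnixNano": "1700000000000000001",
            }],
        }],
    }]
}

req = urllib.request.Request(
    "https://collector.example.com/v1/traces",  # your OTLP/HTTP receiver
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_TOKEN",  # match your endpoint's auth
    },
)
# urllib.request.urlopen(req) would deliver the trace; a 2xx response
# corresponds to the green "Message delivered" result in the dialog.
```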
Assign to an environment
After configuring and enabling an endpoint, assign it to one or more environments. Traces are only forwarded from environments where an endpoint is assigned.

- Go to Organization Settings > Platform > Environments.
- Find the environment you want to forward traces from and click Edit on its card.
- In the Trace Forwarding section, select your configured endpoint from the list.
- Click Save Changes.
Delete an endpoint
- Click the trash icon in the endpoint list row.
- Confirm the deletion in the dialog that appears.
Troubleshooting
Test returns 401 or 403
The credentials are invalid or expired. Re-enter the token or password and ensure the API key has the correct scopes.
Test returns 404
The endpoint URL is likely incorrect. Double-check the URL and, for HTTP endpoints, confirm the path includes /v1/traces or the equivalent path required by your platform.

Test times out after 30 seconds
A network or firewall issue is preventing the connection. Verify the endpoint host is reachable from the Elementum backend and check any applicable firewall rules.
Test passes but no data appears in the platform
Check that the endpoint is Enabled and assigned to the environment you are testing from. A passing test only confirms connectivity, not that traces are being produced — trigger an AI operation in Elementum to generate spans.
Trace Forwarding is not visible in the sidebar
GenAI attribute reference
View OpenTelemetry semantic conventions used in Elementum AI spans
LLM call spans
These attributes appear on spans representing a single call to a language model. Span names follow the pattern chat <model>.

| Attribute | Type | Description |
|---|---|---|
| gen_ai.system | string | The LLM provider (e.g. aws.bedrock, openai, anthropic). |
| gen_ai.request.model | string | The model name that was requested. |
| gen_ai.response.model | string | The model name that actually responded. |
| gen_ai.usage.input_tokens | integer | Number of tokens in the prompt. |
| gen_ai.usage.output_tokens | integer | Number of tokens in the completion. |
| gen_ai.prompt | string (JSON) | Input messages as a JSON array of {role, content} objects. |
| gen_ai.completion | string (JSON) | Output messages as a JSON array of {role, content} objects. |
| gen_ai.operation.name | string | Always "chat" for LLM call spans. |
| gen_ai.response.finish_reasons | string[] | Why the model stopped generating (e.g. ["stop"]). |
| gen_ai.response.id | string | The provider’s unique response identifier. |
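Because gen_ai.prompt and gen_ai.completion are JSON-encoded strings rather than structured attributes, downstream consumers need to parse them before inspecting messages. A small sketch with illustrative values (the model name and message contents are made up):

```python
import json

# Example attributes on a single LLM call span; all values are illustrative.
span_attrs = {
    "gen_ai.system": "anthropic",
    "gen_ai.request.model": "example-model",
    "gen_ai.operation.name": "chat",
    "gen_ai.usage.input_tokens": 42,
    "gen_ai.usage.output_tokens": 7,
    # Stored as JSON strings, not nested objects:
    "gen_ai.prompt": json.dumps([{"role": "user", "content": "Hi"}]),
    "gen_ai.completion": json.dumps([{"role": "assistant", "content": "Hello!"}]),
}

# Parse the JSON strings back into message lists before use.
prompt_messages = json.loads(span_attrs["gen_ai.prompt"])
completion_messages = json.loads(span_attrs["gen_ai.completion"])

# Token totals come from the two usage attributes.
total_tokens = (span_attrs["gen_ai.usage.input_tokens"]
                + span_attrs["gen_ai.usage.output_tokens"])
```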
Tool execution spans
These attributes appear on spans representing an agent executing a tool. Span names follow the pattern execute_tool <tool_name>.

| Attribute | Type | Description |
|---|---|---|
| gen_ai.tool.name | string | The function name of the tool that was called. |
| gen_ai.tool.call.id | string | Links this execution back to the LLM’s tool call request. |
| gen_ai.operation.name | string | Always "execute_tool" for tool execution spans. |
Span tree structure
Traces from Elementum AI operations are organized into a span hierarchy:

| Interaction type | Span structure |
|---|---|
| Single LLM call | root → chat |
| Tool-calling agent (one round) | root → chat → tool → chat |
| Multi-step agent (multiple rounds) | root → chat → tool → chat → tool → chat → … |
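Reading each arrow in the table as one parent-to-child nesting step (an assumption about the layout, with made-up span names), a chain can be rendered as an indented tree:

```python
def render(chain):
    """Render an arrow chain of span names as an indented tree.

    Assumes each arrow in the table is one parent -> child step.
    """
    return "\n".join("  " * depth + name for depth, name in enumerate(chain))


# One round of a tool-calling agent: root -> chat -> tool -> chat.
print(render([
    "root",
    "chat example-model",
    "execute_tool lookup",
    "chat example-model",
]))
```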