AI in Elementum provides programmatic reasoning, data extraction, and content generation within your workflows. Configure reusable Agents for multi-step processes or lightweight AI actions for specific tasks like classification, summarization, and field extraction.

Two Ways to Add Intelligence

Common Use Cases

Support Triage Agent

Conversational agent that answers FAQs, gathers context, and triages requests via chat or phone. Creates/updates records, kicks off workflows, and escalates to L2 with a structured handoff when needed.

AP Vendor Outreach

Accounts Payable agent emails vendors for missing documents (W-9, PO, invoice details), validates responses, updates Element fields, and advances the approval workflow automatically.

Triage & Routing

Classify incoming requests, detect intent, and assign to the right team with confidence scores.

Information Extraction

Pull structured fields from unstructured content (emails, PDFs, log files) into Elements.

Summarization & Drafting

Generate summaries, replies, or knowledge base entries with human approval steps.

Search & Reasoning

Retrieve relevant context and reason over it to propose next steps or detect anomalies.

Enterprise AI Orchestration

The key to successful AI implementation is embedding non-deterministic AI capabilities within deterministic workflow structures. This approach enables reliable, auditable, and scalable AI deployment.
Core principle: Use deterministic workflows to contain AI uncertainty, ensuring predictable business outcomes regardless of AI model variability.

Governance & Controls

Workflow Boundaries

Define clear input/output contracts and validation rules that AI actions must respect.

Security & Permissions

Respect roles and data access controls when reading or writing records.

Transparency

Log prompts, context, and outputs for auditability and improvement.

Human Oversight

Require approvals for high-impact actions; build review queues into your flow.

Provider Choice

Use different providers and models (OpenAI, Gemini, Cortex, etc.) based on data sensitivity, cost, and latency.

Compliance Assurance

Built-in audit trails, approval workflows, and policy enforcement for regulatory requirements.

Orchestration Patterns

Embed AI actions within predefined workflow paths with known outcomes and fallback procedures.
Pattern: Input validation → AI processing → Confidence evaluation → Route based on score → Human review if needed → Execute action
Benefits: Predictable outcomes, risk mitigation, compliance assurance

Coordinate multiple AI agents with different specializations within a single business process.
Example: Document processing agent extracts data → Classification agent categorizes → Approval agent routes to appropriate reviewer
Implementation: Use workflow variables to pass context between agents, and maintain an audit trail of all agent interactions

Design workflows where AI handles routine processing while humans focus on exceptions and strategic decisions.
Approach: AI processes high-confidence cases automatically and routes uncertain cases to human review with full context and recommendations
Key: Provide humans with the AI's reasoning and confidence scores so they can make informed decisions
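The multi-agent coordination pattern above can be sketched in Python. This is illustrative only (the agent functions, the shared context dict standing in for workflow variables, and the audit list are all hypothetical, not Elementum APIs):

```python
# Sketch of multi-agent coordination: each agent reads and extends a shared
# context dict (standing in for workflow variables), and every step is
# appended to an audit trail. All names here are hypothetical.

def extraction_agent(ctx: dict) -> dict:
    ctx["fields"] = {"vendor": "Acme", "amount": 120.0}  # pretend extraction
    return ctx

def classification_agent(ctx: dict) -> dict:
    ctx["category"] = "invoice" if "amount" in ctx["fields"] else "unknown"
    return ctx

def approval_agent(ctx: dict) -> dict:
    ctx["reviewer"] = "ap-team" if ctx["category"] == "invoice" else "triage"
    return ctx

def run_pipeline(document: str) -> dict:
    ctx = {"document": document, "audit": []}
    for agent in (extraction_agent, classification_agent, approval_agent):
        ctx = agent(ctx)
        ctx["audit"].append(agent.__name__)   # audit trail of agent order
    return ctx

result = run_pipeline("invoice.pdf")
print(result["reviewer"])  # ap-team
```

The single shared context is what makes the chain auditable: every agent's output, plus the order of execution, survives to the end of the process.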

Confidence Scoring & Error Handling

AI actions return confidence scores that determine workflow routing and human oversight requirements.
Configure score ranges to automatically route AI outputs based on reliability:
High confidence (90-100%): Auto-approve and execute actions
Medium confidence (70-89%): Route to human review queue
Low confidence (0-69%): Escalate to exception handling or manual processing
Set thresholds based on business impact; use higher thresholds for financial or compliance-critical actions.
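As a minimal sketch, the confidence bands above can be expressed as a configurable routing table (the threshold values mirror the example; route names are illustrative and should be adjusted per action):

```python
# Confidence-based routing table: first matching threshold wins.
# Values mirror the 90/70 bands above; tune per business impact.

ROUTES = [
    (0.90, "auto_execute"),      # high confidence: auto-approve and execute
    (0.70, "human_review"),      # medium confidence: review queue
    (0.00, "manual_processing"), # low confidence: exception handling
]

def route_for(confidence: float, routes=ROUTES) -> str:
    for threshold, route in routes:
        if confidence >= threshold:
            return route
    return "manual_processing"

print(route_for(0.93))  # auto_execute
print(route_for(0.75))  # human_review
print(route_for(0.40))  # manual_processing
```

Keeping the thresholds in data rather than in branching logic makes it easy to raise the auto-execute bar for compliance-critical actions without touching code.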
Handle AI failures with fallback actions and retry policies:
Timeout handling: Set a maximum execution time for AI actions
Retry logic: Configure retry attempts for transient failures
Fallback routing: Define manual processes for when AI is unavailable
Example Workflow:
Invoice Upload → AI Classification Action
├─ Confidence ≥ 90% → Auto-approve and process
├─ Confidence 70-89% → Route to human review queue  
└─ Confidence < 70% → Escalate to manual processing

Configuration:
• Timeout: 30 seconds
• Retry attempts: 3
• Fallback: Manual classification workflow
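The retry and fallback configuration above might look like the following sketch. The 30-second timeout would be enforced by the platform or worker runtime and is outside this snippet; `call_model` and the fallback route name are hypothetical stand-ins:

```python
# Sketch of the retry/fallback policy: up to 3 attempts, then hand off
# to a manual workflow. `call_model` stands in for an AI action invocation.

def with_retries(call_model, max_attempts: int = 3) -> dict:
    for attempt in range(1, max_attempts + 1):
        try:
            result = call_model()
            return {"status": "ok", "result": result, "attempt": attempt}
        except Exception:
            if attempt == max_attempts:
                # Fallback: route to the manual classification workflow
                return {"status": "fallback", "route": "manual_classification"}

# Simulate two transient failures followed by a success:
calls = iter([Exception("transient"), Exception("transient"), "invoice"])
def flaky():
    item = next(calls)
    if isinstance(item, Exception):
        raise item
    return item

print(with_retries(flaky))  # {'status': 'ok', 'result': 'invoice', 'attempt': 3}
```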

Implementation Patterns

Configure structured data transfer between AI and human steps in workflows:
Context preservation: Store reasoning steps, input data, and confidence scores
Escalation triggers: Define when to route to human review or alternative processing
Handoff format: Standardize the data structure for consistent processing
Implementation: Use workflow variables to pass AI outputs and metadata to subsequent steps
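A standardized handoff payload could be sketched as a small dataclass. Field names here are illustrative; in practice they would map onto your workflow variables:

```python
# Hypothetical standardized AI-to-human handoff payload: preserves context
# (reasoning, confidence) and carries an explicit escalation trigger.

from dataclasses import dataclass, field, asdict

@dataclass
class AIHandoff:
    task: str                       # what the AI step was asked to do
    output: dict                    # structured AI output
    confidence: float               # 0.0 - 1.0
    reasoning: list[str] = field(default_factory=list)  # preserved context
    escalate: bool = False          # escalation trigger for downstream steps

handoff = AIHandoff(
    task="classify_invoice",
    output={"category": "invoice", "vendor": "Acme"},
    confidence=0.82,
    reasoning=["matched vendor header", "found PO number"],
    escalate=True,  # 0.82 falls in the medium band, so request review
)
print(asdict(handoff)["escalate"])  # True
```

Because every step receives the same shape, downstream human reviewers always see the AI's reasoning and confidence alongside the output itself.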
Integrate lightweight AI processing within automation rules for real-time decisions:
Event-driven: Trigger AI actions on field changes, record creation, or scheduled intervals
Bounded execution: Set timeouts and resource limits to prevent runaway processes
Idempotent design: Ensure repeated executions produce consistent results
AI actions within automations should execute quickly (under 30 seconds) to avoid delaying the workflow.
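One common way to get idempotent behavior is to derive a deterministic key from the triggering event, so a re-delivered event does not run the AI action twice. A sketch (the in-memory dict stands in for durable storage):

```python
# Idempotency sketch: hash the canonicalized event into a dedupe key and
# cache the result, so duplicate event deliveries return the stored output.

import hashlib
import json

_results: dict[str, str] = {}   # stand-in for a durable result store

def event_key(event: dict) -> str:
    canonical = json.dumps(event, sort_keys=True)   # stable serialization
    return hashlib.sha256(canonical.encode()).hexdigest()

def handle_event(event: dict, ai_action) -> str:
    key = event_key(event)
    if key in _results:             # already processed: skip the AI call
        return _results[key]
    _results[key] = ai_action(event)
    return _results[key]

calls = []
def ai_action(event):
    calls.append(event)             # count real AI invocations
    return "classified"

evt = {"record_id": 42, "field": "status"}
handle_event(evt, ai_action)
handle_event(evt, ai_action)        # duplicate delivery, served from cache
print(len(calls))  # 1
```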
Handle multi-step AI processes that require external data or human input:
State management: Persist workflow state between request and response cycles
Timeout handling: Set deadlines for responses, with escalation paths
Validation: Verify response format and content before proceeding
Use case: An agent requests missing information via email and resumes processing when it is received
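The request-and-resume cycle above can be sketched as a small state machine. All structures here are illustrative (the `workflows` dict stands in for persisted state; real deadline checks would run on a schedule):

```python
# Sketch of a paused multi-step process: persist state while waiting on an
# external response, validate and resume when it arrives, and escalate
# any workflow whose response deadline has passed.

import time

workflows: dict[str, dict] = {}   # stand-in for persisted workflow state

def request_info(wf_id: str, missing: str, deadline_s: float) -> None:
    workflows[wf_id] = {
        "status": "waiting",
        "missing": missing,
        "deadline": time.time() + deadline_s,
    }

def on_response(wf_id: str, value: str) -> str:
    if not value.strip():                  # validate before proceeding
        return "invalid_response"
    workflows[wf_id].update(status="resumed", value=value)
    return "resumed"

def check_deadlines() -> list[str]:
    escalated = []
    for wf_id, wf in workflows.items():
        if wf["status"] == "waiting" and time.time() > wf["deadline"]:
            wf["status"] = "escalated"     # route to the escalation path
            escalated.append(wf_id)
    return escalated

request_info("wf-1", missing="W-9", deadline_s=3600)
print(on_response("wf-1", "W-9 attached"))  # resumed
```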
Allow AI agents to interact with external systems and platform APIs:
Permission controls: Restrict tool access based on agent role and data sensitivity
Audit logging: Track all tool usage for compliance and debugging
Rate limiting: Prevent excessive API calls and resource consumption
Available tools: Database queries, HTTP requests, file operations, notification sending
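Tool gating can be sketched as a per-role allowlist plus a simple call counter. The role names, tool names, and limit are all illustrative:

```python
# Sketch of agent tool gating: check a role-based allowlist, enforce a
# simple call-count rate limit, and log every permitted call for audit.

ALLOWED_TOOLS = {
    "ap_agent": {"db_query", "send_email"},
    "triage_agent": {"db_query"},
}
RATE_LIMIT = 5                               # max calls per role (illustrative)
_call_counts: dict[str, int] = {}
audit_log: list[tuple[str, str]] = []

def invoke_tool(role: str, tool: str) -> str:
    if tool not in ALLOWED_TOOLS.get(role, set()):
        return "denied: not permitted for role"
    _call_counts[role] = _call_counts.get(role, 0) + 1
    if _call_counts[role] > RATE_LIMIT:
        return "denied: rate limit exceeded"
    audit_log.append((role, tool))           # audit every successful call
    return "ok"

print(invoke_tool("triage_agent", "db_query"))    # ok
print(invoke_tool("triage_agent", "send_email"))  # denied: not permitted for role
```

A production system would scope limits per time window and persist the audit log, but the shape is the same: permission check first, then quota, then record.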
Enhance AI responses by retrieving relevant data before generation:
Data sources: Search across records, documents, and external knowledge bases
Context filtering: Apply permissions and relevance scoring to retrieved data
Caching strategy: Store frequently accessed context to improve response times
Configuration: Define search scope, relevance thresholds, and context window limits
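A toy sketch of the retrieval step, with a permission filter, a relevance threshold, and a context cap. The word-overlap scorer is a deliberately naive stand-in for real relevance scoring:

```python
# Retrieval sketch: filter documents by the caller's permissions, score
# relevance (naive word overlap here), drop low scores, cap the context.

def score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def retrieve(query: str, docs: list[dict], user_teams: set[str],
             threshold: float = 0.3, max_docs: int = 2) -> list[str]:
    allowed = [d for d in docs if d["team"] in user_teams]   # permission filter
    scored = [(score(query, d["text"]), d["text"]) for d in allowed]
    relevant = sorted((s, t) for s, t in scored if s >= threshold)
    return [t for _, t in reversed(relevant)][:max_docs]     # context cap

docs = [
    {"team": "ap", "text": "vendor invoice approval process"},
    {"team": "hr", "text": "vacation policy"},
    {"team": "ap", "text": "invoice coding guide"},
]
print(retrieve("invoice approval", docs, {"ap"}))
```

Note that the permission filter runs before scoring, so content the user cannot see never reaches the model's context at all.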

Technical Implementation

Design effective prompts for consistent AI behavior and output formatting:
Structure: Use system prompts for behavior, user prompts for specific tasks
Output format: Specify JSON schemas or other structured formats for parsing
Examples: Include few-shot examples for complex classification tasks
Best practices: Keep prompts concise, test with edge cases, and version-control changes
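Putting those pieces together might look like the following sketch: a system prompt fixes the behavior and output schema, a few-shot example anchors the format, and the model's output is parsed and validated as JSON before anything routes on it. No specific provider API is assumed here:

```python
# Structured prompting sketch: schema in the system prompt, a few-shot
# example, and strict validation of the (simulated) model output.

import json

SYSTEM_PROMPT = """You are a request classifier.
Respond ONLY with JSON matching: {"category": string, "confidence": number}."""

FEW_SHOT = [
    {"input": "Where is my refund?",
     "output": '{"category": "billing", "confidence": 0.9}'},
]

def parse_output(raw: str) -> dict:
    data = json.loads(raw)                  # fails loudly on malformed output
    assert set(data) == {"category", "confidence"}, "schema mismatch"
    assert 0.0 <= data["confidence"] <= 1.0, "confidence out of range"
    return data

# Validate a simulated model response before routing on it:
print(parse_output('{"category": "billing", "confidence": 0.88}')["category"])
```

Validating at the boundary keeps malformed model output from leaking into downstream workflow steps; a schema mismatch becomes an explicit error you can retry or escalate.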
Choose appropriate AI models based on task requirements and constraints:
Factors: Latency requirements, data sensitivity, cost constraints, accuracy needs
Provider options: OpenAI (general purpose), Gemini (multimodal), Snowflake Cortex (data residency)
Model types: Classification models for routing, generation models for content creation
Start with general-purpose models, then optimize for specific use cases based on performance metrics.
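Those selection factors can be encoded as a small policy function. The routing rules below are illustrative policy only, not a recommendation, and mirror the provider characteristics listed above:

```python
# Sketch of provider selection driven by task constraints. The rules are
# illustrative: residency-sensitive data stays in-platform, multimodal
# input goes to a multimodal model, everything else uses the default.

def choose_provider(sensitive_data: bool, multimodal: bool) -> str:
    if sensitive_data:
        return "snowflake_cortex"   # keep data in-platform for residency
    if multimodal:
        return "gemini"             # image/audio inputs
    return "openai"                 # general-purpose default

print(choose_provider(sensitive_data=True, multimodal=False))  # snowflake_cortex
print(choose_provider(sensitive_data=False, multimodal=True))  # gemini
```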
Track AI action performance and reliability over time:
Metrics: Response time, confidence scores, success/failure rates, user feedback
Alerting: Set thresholds for performance degradation or high failure rates
Optimization: A/B test different prompts and models based on outcomes
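A minimal monitoring sketch: record each action's outcome, latency, and confidence, then alert when the failure rate over a sliding window crosses a threshold. The window size and threshold are illustrative:

```python
# Per-action monitoring sketch: sliding window of outcomes, alert when
# the failure rate exceeds a configured threshold.

from collections import deque

class ActionMonitor:
    def __init__(self, window: int = 100, max_failure_rate: float = 0.2):
        self.events = deque(maxlen=window)   # (ok, latency_ms, confidence)
        self.max_failure_rate = max_failure_rate

    def record(self, ok: bool, latency_ms: float, confidence: float) -> None:
        self.events.append((ok, latency_ms, confidence))

    def failure_rate(self) -> float:
        if not self.events:
            return 0.0
        return sum(1 for ok, _, _ in self.events if not ok) / len(self.events)

    def should_alert(self) -> bool:
        return self.failure_rate() > self.max_failure_rate

m = ActionMonitor(window=10)
for _ in range(7):
    m.record(ok=True, latency_ms=420, confidence=0.9)
for _ in range(3):
    m.record(ok=False, latency_ms=30000, confidence=0.2)
print(m.should_alert())  # True (failure rate 0.3 > 0.2)
```

The same window of events also feeds optimization: comparing failure rates and confidence distributions across prompt or model variants is the basis for the A/B testing mentioned above.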

Next Steps
