The Run Agent Task automation action lets you transition between structured, deterministic automation steps and autonomous agent actions within a single workflow. This combines the reliability of automations with the adaptability of AI agents.
New to Automations? Check out the Automation System guide first to understand how event-driven workflows work in Elementum.

How It Works

Traditional automation excels at structured, deterministic processes: “When X happens, do Y.” AI agents excel at unstructured tasks requiring reasoning, judgment, and problem-solving. Run Agent Task combines both approaches in a single workflow:
  1. Start with structured automation (trigger detection, data gathering)
  2. Hand off to an autonomous agent (analysis, research, decision-making)
  3. Return to structured automation (use agent output in subsequent actions)
For example:
Record Updated (structured) →
  Run Agent Task (autonomous intelligence) →
  Update Record Fields (structured) →
  Send Email Notification (structured)
Traditional automation on its own is:
  • Excellent at deterministic tasks
  • Prone to struggle with tasks requiring judgment
  • Unable to handle “figure it out” scenarios
  • Limited to predefined logic paths

Configure a Run Agent Task

To add a Run Agent Task to your workflow, open your App, click Automations, and add the Run Agent Task action to an automation that already includes a trigger.

Action Name

Provide a descriptive name for the task within your automation workflow.
Examples:
  • "Research Customer Industry"
  • "Evaluate Contract Risk"
  • "Analyze Support Ticket Complexity"

Object Selection

After naming the action, choose the Object the agent has access to. This determines which data the agent can read and act on when executing the task.

Agent Selection

Select an agent you’ve already built to execute this task. For more on creating and configuring agents, see Agent Architecture. After selecting the agent, click Configure Task and Test to proceed to the task definition.

Task Definition

The task definition tells the agent what to accomplish. It has three critical components: context, objective, and success criteria. Because agents in Run Agent Task operate in a headless environment — with no user available for follow-up questions — your task definition must be self-contained with all necessary context provided upfront through value references.
Every task definition should include:
  1. Context — All relevant data via value references
  2. Objective — What the agent should accomplish
  3. Success criteria — How the agent knows it has completed the task, including specific deliverables and format
Task: "Analyze the support ticket from {{customer.name}} regarding {{ticket.subject}}.
Customer tier: {{customer.tier}}, previous tickets: {{customer.past_tickets}}.

Objective: Determine issue complexity and routing.

Success: Provide:
- Complexity rating (Low/Medium/High)
- Estimated resolution time in hours
- Recommended team (General/Specialist/Engineering)
- 2-3 sentence reasoning for recommendations"

Output Type

Choose how the agent returns its work:
  • Text — Returns a simple narrative response. Best for summaries, explanations, and recommendations where you don’t need to branch on specific values in downstream actions.
  • Structured — Returns named output fields (such as run_agent_task.complexity_score) that downstream actions can reference and branch on. Define these fields when configuring the task.
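As a conceptual sketch, here is how the two output types might look to downstream actions, modeled as plain dictionaries (the field names and shapes here are illustrative assumptions, not the platform's actual internal representation):

```python
# Hypothetical text output: one narrative string, read as-is downstream.
text_result = {
    "success": True,
    "response": "The ticket is moderately complex; route it to the Specialist team.",
}

# Hypothetical structured output: named fields that downstream logic can branch on,
# the equivalent of IF run_agent_task.recommended_team = "Specialist" in a workflow.
structured_result = {
    "success": True,
    "complexity": "Medium",
    "estimated_hours": 4,
    "recommended_team": "Specialist",
}

if structured_result["success"] and structured_result["recommended_team"] == "Specialist":
    routing = "Specialist queue"
else:
    routing = "General queue"

print(routing)  # → Specialist queue
```

The design trade-off: text output is simpler to configure, but only structured output gives conditions and field updates specific values to work with.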

Testing and Error Handling

Before deploying, test your agent task using the Test & Preview feature:
  1. Fill in value references with actual data
  2. Run the agent task
  3. Verify the output format and quality
  4. Adjust task definition if needed
Test with multiple scenario types: typical cases, edge cases, ambiguous cases, and varying data quality. After deployment, review the first 10–20 runs manually, monitor the run_agent_task.success field, and refine your task definition based on real-world performance.
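The scenario-matrix idea above can be sketched as a simple pre-deployment check: run each scenario type through the task and verify the output carries every field your downstream actions reference (the scenario data and REQUIRED_FIELDS below are hypothetical examples, not platform APIs):

```python
# Fields that downstream actions in this hypothetical workflow branch on.
REQUIRED_FIELDS = {"complexity", "recommended_team"}

# Sample outputs from test runs across scenario types.
scenarios = {
    "typical":   {"complexity": "Low", "recommended_team": "General"},
    "edge case": {"complexity": "High", "recommended_team": "Engineering"},
    "ambiguous": {"complexity": "Medium"},  # missing a field: task needs refinement
}

# Flag any scenario whose output is missing a required field.
failures = [name for name, output in scenarios.items()
            if not REQUIRED_FIELDS.issubset(output)]

print(failures)  # → ['ambiguous']
```

A gap like the "ambiguous" case usually means the success criteria in the task definition need to name each deliverable field explicitly.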
The system includes built-in retry logic (up to 3 attempts) when agents don’t provide correctly formatted structured output:
  1. Agent attempts to provide structured output
  2. If format is incorrect, system returns error to agent with details
  3. Agent tries again with error context
  4. Repeats up to 3 times
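The retry loop above can be sketched conceptually as follows; the `run_agent` and `validate` callables stand in for platform internals that aren't exposed, so treat this as an illustration of the pattern, not the actual implementation:

```python
def run_with_retries(run_agent, validate, max_attempts=3):
    """Retry loop: re-prompt the agent with validation errors until the
    structured output is well-formed or attempts run out."""
    error_context = None
    for attempt in range(1, max_attempts + 1):
        output = run_agent(error_context)      # agent attempts structured output
        problems = validate(output)            # list of format problems, if any
        if not problems:
            return {"success": True, "output": output}
        # Feed the errors back so the agent can correct its format next try.
        error_context = f"attempt {attempt}: {'; '.join(problems)}"
    return {"success": False, "error_message": error_context}


# Toy agent: malformed on the first try, corrected once it sees the error.
attempts = []

def toy_agent(error_context):
    attempts.append(error_context)
    if error_context is None:
        return {"complexity": "very high"}      # invalid value
    return {"complexity": "High"}               # corrected

def toy_validate(output):
    if output.get("complexity") in ("Low", "Medium", "High"):
        return []
    return ["complexity must be Low/Medium/High"]

result = run_with_retries(toy_agent, toy_validate)
print(result["success"])  # → True (second attempt succeeded)
```

This is why checking run_agent_task.success still matters: after three failed attempts the action surfaces an error rather than malformed output.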
Always check run_agent_task.success before using agent output in downstream actions:
Run Agent Task

IF run_agent_task.success = true
  → Normal processing flow
OTHERWISE
  → Error handling:
    - Post Comment: "Agent task failed: {{run_agent_task.error_message}}"
    - Send Message to Teams: "@admins Agent task error"
    - Make Assignment: Route to manual review
Design workflows that remain functional even if the agent task fails:
Run Agent Task: Personalization analysis

IF run_agent_task.success = true
  → Send Email: Personalized message with {{run_agent_task.recommendations}}
OTHERWISE
  → Send Email: Standard template (still functional)

When to Use Run Agent Task

Ideal Use Cases

Use Run Agent Task when a step in your workflow requires reasoning, judgment, or synthesis of information.
Scenario: Tasks requiring information gathering and synthesis
New Lead Created → Run Agent Task
Task: "Research {{lead.company}} to identify:
- Industry and market position
- Recent news or developments
- Competitive landscape
- 3-5 key talking points for our sales team
Success: Provide actionable sales intelligence"
Output Type: Structured
Fields: industry, company_size, recent_news, talking_points, research_confidence
Scenario: Decisions requiring multiple factors and reasoning
Contract Uploaded → AI File Analysis → Run Agent Task
Task: "Evaluate this contract with terms {{contract.terms}} and value {{contract.value}}.
Consider our standard terms, risk tolerance, and relationship with {{customer.name}}.
Success: Provide risk assessment (Low/Medium/High), key concerns, and approval recommendation."
Output Type: Structured
Fields: risk_level, key_concerns, approval_recommended, negotiation_points
This pattern pairs well with AI File Reader for extracting structured data before agent evaluation. See the document review example in Workflow Examples below.
Scenario: Evaluating quality, completeness, or appropriateness of content
Application Submitted → Run Agent Task
Task: "Review this grant application from {{applicant.name}} for {{project.title}}.
Application content: {{application.content}}
Grant criteria: {{grant.criteria}}
Success: Evaluate completeness, alignment with criteria, and provide score (1-10) with feedback."
Output Type: Structured
Fields: completeness_score, criteria_alignment, overall_score, strengths, improvements_needed
Scenario: Enhancing records with synthesized information
Customer Record Created → Run Agent Task
Task: "Enrich data for {{customer.company}} in {{customer.industry}}.
Success: Provide company size estimate, key decision makers' typical titles,
common pain points in their industry, and recommended product fit."
Output Type: Structured
Fields: company_size_estimate, decision_maker_titles, industry_pain_points, product_recommendations
Scenario: Tasks requiring sequential reasoning and proactive action
Complex Issue Detected → Run Agent Task
Task: "Diagnose this system issue: {{issue.description}}
Recent changes: {{system.recent_changes}}
Error logs: {{system.errors}}
Success: Provide root cause analysis, step-by-step resolution plan, and prevention recommendations."
Output Type: Structured
Fields: root_cause, resolution_steps, estimated_fix_time, prevention_measures

When NOT to Use Run Agent Task

Use standard automation actions instead when:
  • Deterministic logic — Simple IF/THEN logic, calculations, or predefined rules. Use IF conditions or Run Calculation instead.
  • Direct data operations — Creating, updating, searching, or relating records with known values. Use Create Record, Update Record Fields, or Search Records instead.
  • Standard classifications — Categorization with clear, predefined categories. Use AI Classification instead.
  • API integrations — Direct calls to external systems with structured parameters. Use Send API Request instead.
Decision rule: Does this task require reasoning, judgment, or synthesis of information? If yes, consider Run Agent Task. If no, use standard automation actions.

Workflow Examples

These examples demonstrate the structured → agent → structured pattern in complete workflows. Each combines standard automation actions with Run Agent Task.
Scenario: Route support tickets based on nuanced assessment, not just keywords.
Support Email Received

AI Classification: Categorize ticket type

Search Records: Find customer

Find Related Records: Get customer's recent tickets and products

Run Agent Task: "Intelligent Ticket Assessment"
  Task: "Assess support ticket from {{customer.name}} about {{ticket.subject}}.
    Ticket content: {{ticket.body}}
    Customer tier: {{customer.tier}}
    Recent tickets: {{related_tickets.summaries}}
    Customer products: {{customer.products}}
    Success: Determine complexity (1-5), required expertise (General/Product/Engineering),
    urgency (Low/Medium/High/Critical), and whether this is part of a pattern."
  Output Type: Structured
  Fields: complexity_score (number), required_expertise (text), urgency (text),
          pattern_detected (checkbox), pattern_description (text),
          estimated_resolution_hours (number)

IF run_agent_task.pattern_detected = true
  → Add Watcher: Customer Success Manager
  → Post Comment: "Pattern detected: {{run_agent_task.pattern_description}}"

IF run_agent_task.urgency = "Critical"
  → Make Assignment: Senior Support (immediate)
  → Send Message to Teams: "@support-leads Critical ticket: {{ticket.subject}}"
ELSE IF run_agent_task.required_expertise = "Engineering"
  → Make Assignment: Engineering Team
  → Start Approval Process: Engineering time allocation
OTHERWISE
  → Make Assignment: General Support

Update Record Fields:
  - Complexity: {{run_agent_task.complexity_score}}
  - Estimated Hours: {{run_agent_task.estimated_resolution_hours}}

Send Email Notification: Customer confirmation with estimated timeline
This workflow uses AI Classification for basic categorization and Run Agent Task for nuanced assessment, then feeds agent output into standard routing logic.
Scenario: Automated first-pass contract review for a legal team.
Contract Attachment Added

AI File Analysis: Extract contract data

Search Records: Find customer and relationship history

Run Agent Task: "Contract Risk Assessment"
  Task: "Review contract from {{customer.name}} with value {{contract.value}}.
    Extracted terms: {{ai_file_analysis.terms}}
    Standard terms: {{company.standard_contract_terms}}
    Customer relationship: {{customer.relationship_years}} years,
                          LTV: ${{customer.lifetime_value}}
    Success: Assess risk (Low/Medium/High), identify deviations from standard,
    flag must-negotiate items, and recommend approval authority."
  Output Type: Structured
  Fields: risk_level (text), deviations (list), must_negotiate (list),
          recommended_approver (text), business_justification (text),
          expedite_recommended (checkbox)

Update Record Fields: Add risk assessment and recommendations

IF run_agent_task.risk_level = "High"
  → Start Approval Process: Legal + CFO
  → Send Message to Teams: "#legal High-risk contract requires review"
ELSE IF run_agent_task.risk_level = "Medium"
  → Start Approval Process: Legal only
OTHERWISE (Low risk)
  → IF contract.value < $10000
      Update Record Fields: Auto-approved
  → OTHERWISE
      Start Approval Process: Manager only
This pairs AI File Reader for data extraction with Run Agent Task for risk assessment that requires judgment.
Scenario: Provide personalized order handling based on customer history.
Order Created

Search Records: Get customer history

Run Agent Task: "Analyze Order Personalization"
  Task: "Analyze order from {{customer.name}} for {{order.items}}.
    Customer history: {{customer.past_orders}}, lifetime value: ${{customer.ltv}},
    preferences: {{customer.preferences}}.
    Success: Identify upsell opportunities, special handling needs, personalized
    message suggestions, and estimated satisfaction impact of personalization."
  Output Type: Structured
  Fields: upsell_items (list), special_handling (text), personalized_message (text),
          satisfaction_impact (text), include_sample (checkbox)

IF run_agent_task.include_sample = true
  → Update Record Fields: Add free sample to order

IF run_agent_task.upsell_items has values
  → Send Email: Personalized confirmation with recommendations
OTHERWISE
  → Send Email: Standard confirmation

Create Record: Log personalization actions for future learning

Advanced Patterns

For complex workflows, break work into sequential agent tasks where each builds on the previous:
Data Collected

Run Agent Task: "Initial Analysis"
  → Analyze raw data and identify key themes

Run Agent Task: "Deep Dive"
  → Task: "Based on initial themes {{agent_task_1.themes}},
          conduct detailed analysis..."

Run Agent Task: "Recommendations"
  → Task: "Given analysis {{agent_task_2.findings}},
          provide strategic recommendations..."
Use this when a single agent task would be too complex — breaking into stages improves output quality.
Use agents only when intelligence is needed, falling back to standard actions for straightforward cases:
Record Updated

IF simple_condition = true
  → Standard processing (fast, deterministic)
OTHERWISE
  → Run Agent Task (intelligent assessment)
  → Use agent insights for decision
This is the same pattern shown in the support ticket example in Workflow Examples, where AI Classification handles simple categorization and Run Agent Task handles complex cases.
Combine agent intelligence with human oversight using approval processes:
Contract Submitted

Run Agent Task: "Contract Assessment"

IF run_agent_task.risk_level = "Low" AND run_agent_task.confidence > 0.9
  → Auto-approve (agent sufficient)
OTHERWISE
  → Start Approval Process (human review)
  → Context: Agent provided {{run_agent_task.reasoning}}
Run Agent Task integrates with other automation actions:
  • AI File Reader — Extract structured data from documents, then pass to an agent for evaluation and recommendations. See the document review example in Workflow Examples.
  • API Requests — Use agent output to determine API endpoints or parameters, or feed API response data into an agent for synthesis.
  • AI Classification — Use classification for basic categorization, then route complex cases to Run Agent Task. See the support ticket example in Workflow Examples.

Agent Management

Agent Deletion Protection

When you attempt to delete an agent that’s used in automations, the system prevents deletion and shows a list of automations using that agent with direct links to each one. Before deleting an agent:
  1. Check which automations use it
  2. Update those automations to use a different agent or action
  3. Test the updated automations
  4. Then delete the agent

Getting Started Checklist

  1. Identify a use case where a workflow step requires reasoning or research
  2. Design the workflow using the structured → agent → structured pattern
  3. Create or select an agent with appropriate capabilities — see Agent Architecture
  4. Write a task definition with complete context and clear success criteria
  5. Define output fields if using structured output
  6. Test with real data using the Test & Preview feature
  7. Implement error handling with success field checks
  8. Deploy and monitor — review early executions, then iterate based on results

Next Steps

Automation Actions Reference

See all available automation actions including Run Agent Task details

AI in Automations

Learn about other AI-powered automation capabilities

Agent Architecture

Understand how agents work and integrate with workflows

Automation Best Practices

General automation and workflow best practices