Overview

This powerful workflow enables you to process external files automatically, extract data, and trigger intelligent workflows based on file content. The Snowflake stage file access workflow consists of six main steps:
  1. Create a Snowflake view for stage files
  2. Import the view as an Elementum table
  3. Build a Data Mine to monitor for new or changed files
  4. Create an automation triggered by the Data Mine
  5. Process files using the presigned URLs in your automation
  6. Add additional actions to your automation
This workflow enables you to securely access and process files stored in Snowflake stages - without duplicating the data outside of Snowflake. By default, files are not persisted or stored outside of Snowflake; your data remains protected and centralized unless you explicitly configure otherwise.

Prerequisites

Before starting this workflow, ensure you have:
  • Snowflake access with permissions to create views and access stages
  • Elementum CloudLink configured and connected to your Snowflake instance
  • Files uploaded to a Snowflake stage (e.g., @files_to_process)
  • Directory Table enabled on your Snowflake stage for file listing and metadata access
The stage you use must have a Directory Table enabled. Enable it with:
ALTER STAGE <your_stage_name> SET DIRECTORY = (ENABLE = TRUE);

Step 1: Create Snowflake View from a Stage

The first step is creating a Snowflake view that provides access to your stage files with presigned URLs for secure access. Execute this SQL in your Snowflake environment:
CREATE VIEW STAGE_FILES_VIEW AS 
SELECT RELATIVE_PATH,
        SIZE,
        LAST_MODIFIED,
        MD5,
        get_presigned_url(@files_to_process, RELATIVE_PATH, 3600) AS presigned_url
FROM DIRECTORY(@files_to_process);
  • RELATIVE_PATH: File path within the stage
  • SIZE: File size in bytes
  • LAST_MODIFIED: Timestamp of last file modification
  • MD5: File hash for integrity checking
  • presigned_url: Secure, time-limited URL for file access (valid for 3600 seconds = 1 hour)
The presigned_url is regenerated automatically each time the Data Mine runs, so URLs stay fresh and valid and expiration never interrupts your automated file processing.
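As an aside, the MD5 column can back a client-side integrity check once a file has been downloaded (see Step 5). The Python sketch below is a hypothetical helper, not part of the workflow itself; it assumes the MD5 column holds a plain hex digest of the file content, which may not hold for very large multipart uploads:

```python
import hashlib

def verify_md5(file_bytes: bytes, expected_md5: str) -> bool:
    """Compare downloaded file bytes to the MD5 reported by the directory table.

    Assumes the MD5 column is a plain hex digest of the file content.
    """
    return hashlib.md5(file_bytes).hexdigest() == expected_md5.lower()
```

A mismatch would indicate the file changed between the Data Mine run and the download, or was corrupted in transit.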

Step 2: Import View as Elementum Table

Once your Snowflake view is created, import it into Elementum as a table.
  1. Navigate to Tables → Explore Data → CloudLink
  2. Select your Snowflake connection and choose the view you created
  3. Click “Create Table” and fill out the details

Step 3: Build Data Mine for File Monitoring

Create a Data Mine to automatically detect when new files arrive or existing files change.
  1. In your table, go to Data Mining → Create Data Mine → Logic-Based Rules Mining
  2. Identifying Columns: Select RELATIVE_PATH, LAST_MODIFIED, and MD5
These columns work together to track individual files across Data Mine runs, detect when files are modified or replaced, and ensure accurate state management (ON/OFF transitions).
  3. Matching Criteria: Set filters for file types or conditions (optional)
  4. Name and Schedule: Give it a name and set check frequency
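Conceptually, the change detection that the identifying columns enable works like a snapshot diff between runs: RELATIVE_PATH identifies each file, while LAST_MODIFIED and MD5 form its fingerprint. The Python sketch below is an illustrative model of that idea, not Elementum's implementation; `diff_snapshots` and the record shapes are hypothetical:

```python
# Hypothetical snapshot shape: RELATIVE_PATH -> (LAST_MODIFIED, MD5),
# one entry per stage file, captured on each monitoring run.
Snapshot = dict[str, tuple[str, str]]

def diff_snapshots(previous: Snapshot, current: Snapshot) -> tuple[list[str], list[str]]:
    """Return (new_files, changed_files) between two monitoring runs."""
    # A path absent from the previous snapshot is a new file.
    new_files = [path for path in current if path not in previous]
    # A path whose (LAST_MODIFIED, MD5) fingerprint differs was modified or replaced.
    changed_files = [
        path for path, fingerprint in current.items()
        if path in previous and previous[path] != fingerprint
    ]
    return new_files, changed_files
```

Files that disappear from the snapshot would correspond to the Data Mine's OFF transition.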

Step 4: Create Automation with Data Mine Trigger

Build an automation that processes files when the Data Mine detects them.
Your automation will follow this logical flow: Data Mine Trigger → Process File → Take Additional Actions (e.g. AI Analysis)
  1. Navigate to Automations → Create Automation
  2. Add Data Mine Trigger and select your Data Mine
  3. Set trigger option to “Trigger when data meets requirement”

Step 5: Process Files Using API Request Action

Add an API Request action to your automation to access files stored in the Snowflake stage.

API Request action details:
  • Request URL: $PRESIGNED_URL
  • Method: GET
  • Authorization: No Auth
  • Response Type: File
Variable Reference: The $PRESIGNED_URL variable comes from the Data Mine trigger, providing access to all fields from the matching stage file record.

Step 6: Add Additional Actions

Once the API Request action has retrieved the file, any additional actions you add to the automation will have access to the file content.

Summary

This workflow provides a powerful way to automatically process files stored in Snowflake stages:
  1. Snowflake View provides secure access to stage files
  2. Elementum Table makes stage files accessible in your workspace
  3. Data Mine automatically detects new or changed files
  4. Automation provides access to the file content
  5. Additional Actions enable AI analysis, data extraction, and workflow automation
By following this guide, you can create a robust, automated file processing system that transforms your Snowflake stage into an intelligent workflow trigger, enabling your business to automatically respond to new data as it arrives.

Appendix: Quick Test Setup in Snowflake

Use the following SQL to create a stage in Snowflake for testing purposes. Replace the ALL_CAPS placeholders with your actual values. For complete configuration options, see Snowflake’s CREATE STAGE documentation.

Create Stage and View

USE DATABASE DATABASE_NAME;
USE SCHEMA SCHEMA_NAME;

-- Create internal stage with directory table enabled
CREATE OR REPLACE STAGE STAGE_NAME;
ALTER STAGE STAGE_NAME SET DIRECTORY = (ENABLE = TRUE);

-- Create view with presigned URLs (1-hour expiration)
CREATE OR REPLACE VIEW VIEW_NAME AS 
SELECT RELATIVE_PATH,
       SIZE,
       LAST_MODIFIED,
       MD5,
       get_presigned_url(@STAGE_NAME, RELATIVE_PATH, 3600) AS presigned_url
FROM DIRECTORY(@STAGE_NAME);
Upload a test file to verify the stage is working correctly:
-- Using SnowSQL CLI
PUT file://path/to/test-file.csv @DATABASE_NAME.SCHEMA_NAME.STAGE_NAME 
    OVERWRITE=TRUE 
    AUTO_COMPRESS=FALSE;
You can also upload files through the Snowflake web interface by navigating to your stage and using the “Upload Files” option.