Overview

Google Gemini delivers advanced language models and multimodal AI features through Google Cloud’s Vertex AI platform. This guide walks you through setting up Google Gemini as an AI Provider in Elementum.
Prerequisites: You’ll need a Google Cloud account with billing enabled and access to Vertex AI APIs.

Step 1: Set Up Google Cloud Project

Create or Select a Project

  1. Access Google Cloud Console
  2. Create a New Project (or select an existing one)
    • Click on the project selector at the top of the page
    • Click “New Project”
    • Enter a project name (e.g., “Elementum AI Integration”)
    • Select your billing account
    • Click “Create”
  3. Enable Billing
    • Ensure your project has billing enabled
    • Navigate to Billing in the left sidebar
    • Link a billing account if not already configured

Enable Required APIs

Enable the necessary APIs for Vertex AI access:
  1. Navigate to APIs & Services
    • In the Google Cloud Console, go to APIs & Services → Library
  2. Enable Vertex AI API
    • Search for “Vertex AI API” and click “Enable”
    • This may take a few minutes to complete
  3. Enable Cloud Resource Manager API
    • Search for “Cloud Resource Manager API” and click “Enable”
    • This is required for project access
  4. Enable Additional APIs (Optional)
    • Depending on your needs, you may also want to enable:
      • AI Platform API
      • Cloud Translation API
      • Cloud Vision API
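
If you prefer to script the enablement step, each console action above maps to a `gcloud services enable` command. A minimal sketch that builds those commands (it assumes the gcloud CLI is installed and authenticated; the project ID is a placeholder):

```python
# Sketch: build the gcloud commands that mirror the console steps above.
REQUIRED_APIS = [
    "aiplatform.googleapis.com",           # Vertex AI API
    "cloudresourcemanager.googleapis.com", # Cloud Resource Manager API
]
OPTIONAL_APIS = [
    "ml.googleapis.com",         # AI Platform API
    "translate.googleapis.com",  # Cloud Translation API
    "vision.googleapis.com",     # Cloud Vision API
]

def enable_api_commands(project_id: str, include_optional: bool = False) -> list[str]:
    """Return the gcloud commands that enable the listed APIs for a project."""
    apis = REQUIRED_APIS + (OPTIONAL_APIS if include_optional else [])
    return [f"gcloud services enable {api} --project={project_id}" for api in apis]

for cmd in enable_api_commands("elementum-ai-integration"):
    print(cmd)
```

Running the printed commands in a shell has the same effect as clicking “Enable” in the console.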

Step 2: Create Service Account

Generate Service Account

  1. Navigate to IAM & Admin
    • In the Google Cloud Console, go to IAM & Admin → Service Accounts
  2. Create Service Account
    • Click “Create Service Account”
    • Service Account Name: Enter a descriptive name (e.g., “elementum-ai-service”)
    • Service Account ID: Will be auto-generated
    • Description: Optional description for the service account
  3. Grant Permissions
    • Assign the following required roles to your service account:
      • Vertex AI User - Access to Vertex AI models
      • ML Engine Developer - Access to ML Engine APIs
    • Optional roles (for advanced features):
      • BigQuery User - If integrating with BigQuery
      • Storage Object Viewer - If accessing Cloud Storage
  4. Complete Creation
    • Click “Continue” and then “Done” to create the service account

Generate Service Account Key

  1. Access Service Account
    • In the Service Accounts list, click on your newly created service account
  2. Create Key
    • Go to the Keys tab
    • Click “Add Key” → “Create new key”
  3. Select Key Type
    • Choose JSON as the key type
    • Click “Create”
  4. Download Key File
    • The JSON key file will be downloaded automatically
    • Important: Store this file securely - it contains credentials for your service account
Security Critical: The service account key file contains sensitive credentials. Store it securely and never commit it to version control or share it publicly.
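
As a quick sanity check before configuring Elementum, you can verify that the downloaded file has the shape of a standard Google service account key. A minimal sketch (the field names follow the standard service account key format):

```python
import json

# Fields present in every Google service account JSON key.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email", "token_uri"}

def validate_key_file(path: str) -> list[str]:
    """Return a list of problems found in the key file (empty list = looks valid)."""
    with open(path) as f:
        key = json.load(f)
    problems = [f"missing field: {field}" for field in sorted(REQUIRED_FIELDS - key.keys())]
    if key.get("type") != "service_account":
        problems.append(f"unexpected type: {key.get('type')!r}")
    return problems
```

An empty result means the file is structurally sound; it does not prove the key is still active in Google Cloud.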

Step 3: Configure Gemini in Elementum

Adding the Provider

  1. Navigate to Organization Settings
    • In Elementum, go to Organization Settings
    • Select the Providers tab
  2. Create New Provider
    • Click ”+ Provider”
    • Select Gemini from the provider options
  3. Configure Provider Settings
    • Provider Name: Enter a descriptive name (e.g., “Google Gemini Production”)
    • Location: Select your Google Cloud region (e.g., “us-central1”)
    • Project ID: Enter your Google Cloud project ID
    • CloudLink: Select “All CloudLinks” unless you have specific requirements
  4. Save Configuration
    • Click “Save” to create the provider
    • The system will validate your credentials and project access
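
A common save-time failure is a mistyped project ID. Google Cloud project IDs are 6-30 characters of lowercase letters, digits, and hyphens, starting with a letter and not ending with a hyphen; a small pre-check sketch:

```python
import re

# GCP project IDs: 6-30 chars, lowercase letters/digits/hyphens,
# must start with a letter and cannot end with a hyphen.
PROJECT_ID_RE = re.compile(r"^[a-z][a-z0-9-]{4,28}[a-z0-9]$")

def looks_like_project_id(value: str) -> bool:
    """Cheap format check before saving the provider configuration."""
    return bool(PROJECT_ID_RE.fullmatch(value))
```

This catches formatting mistakes only; Elementum’s own validation confirms the project actually exists and is reachable.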

Step 4: Test Your Connection

Verification Process

  1. Connection Test
    • After saving, Elementum will automatically test the connection
    • Look for a green checkmark indicating successful connection
  2. Model Availability
    • Navigate to the Services tab
    • Click ”+ Service” to see available models
    • You should see Google Gemini models listed

Available Models

Your Gemini provider will give you access to:

Language Models (LLMs)

| Model | Primary Use Case | Speed | Intelligence | Best For |
| --- | --- | --- | --- | --- |
| Gemini 2.5 Pro | Complex reasoning and large responses | ⭐⭐ | ⭐⭐⭐⭐⭐ | Most complicated use cases, long-form content, advanced analysis |
| Gemini 2.5 | Balanced performance | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | General-purpose tasks, customer support, content creation |
| Gemini 1.5 Pro | Enhanced context understanding | ⭐⭐⭐ | ⭐⭐⭐⭐ | Large document analysis, comprehensive understanding |
| Gemini 1.5 Flash | Speed-optimized responses | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | Quick responses, simple automation, cost-effective |
Model Recommendations: Use Gemini 2.5 for most applications requiring balanced performance. Choose Gemini 2.5 Pro for the most complicated use cases and when you need large, detailed responses. Gemini 1.5 Flash is ideal for speed-critical applications.
Note: Embeddings for AI Search are handled exclusively through Snowflake Cortex. Gemini models are used for LLM services only.
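
The model recommendations above can be captured in a small routing helper. A sketch (the task categories are illustrative labels drawn from the guidance in this section, not an Elementum API):

```python
def recommend_model(task: str) -> str:
    """Map a task profile to a Gemini model, following the guidance above."""
    recommendations = {
        "complex": "Gemini 2.5 Pro",       # deep reasoning, long-form output
        "general": "Gemini 2.5",           # balanced default
        "long-context": "Gemini 1.5 Pro",  # large document analysis
        "fast": "Gemini 1.5 Flash",        # speed-critical, high-volume
    }
    # Fall back to the balanced model when the task is unclassified.
    return recommendations.get(task, "Gemini 2.5")
```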

Step 5: Create AI Services

With your Gemini provider configured, create AI Services:
  1. Navigate to Services
    • In Organization Settings, go to the Services tab
  2. Create LLM Service
    • Click ”+ Service” and select LLM (Language Model service)
    • Service Name: Give it a descriptive name (e.g., “Gemini Customer Support”)
    • Provider: Select your configured Gemini provider
    • Model: Select from available Gemini models
    • Cost Per Million Tokens: Set for cost tracking (optional)
  3. Test Services
    • Use the built-in testing interface to verify services work correctly
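
The Cost Per Million Tokens setting drives straightforward accounting: cost = tokens × rate ÷ 1,000,000. A sketch (the rate used in the example is an illustrative placeholder, not published pricing):

```python
def estimate_cost(tokens_used: int, cost_per_million: float) -> float:
    """Cost in dollars for a token count at a per-million-token rate."""
    return tokens_used * cost_per_million / 1_000_000

# Example: 250,000 tokens at a hypothetical $1.25 per million tokens.
print(estimate_cost(250_000, 1.25))  # → 0.3125
```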

Usage Guidelines

Cost Management

Google Cloud charges for Vertex AI usage. To manage costs:

Monitor Usage

  • Cloud Console: Monitor usage in Google Cloud Console
  • Billing Alerts: Set up billing alerts for cost control
  • Quotas: Review and adjust API quotas as needed
  • Usage Reports: Regularly review usage patterns

Optimize Usage

  • Model Selection: Choose appropriate models for tasks
  • Batch Requests: Process multiple requests together
  • Caching: Cache responses when possible
  • Request Optimization: Minimize unnecessary API calls
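
Response caching is often the cheapest optimization: identical prompts should not trigger repeat API calls. A minimal in-memory sketch, where `call_model` is a hypothetical stand-in for the real LLM request:

```python
import functools

def call_model(model: str, prompt: str) -> str:
    """Stand-in for the real LLM request (hypothetical)."""
    call_model.calls += 1  # count real requests to show the cache working
    return f"[{model}] response to: {prompt}"
call_model.calls = 0

@functools.lru_cache(maxsize=1024)
def cached_completion(model: str, prompt: str) -> str:
    """Identical (model, prompt) pairs hit the cache instead of the API."""
    return call_model(model, prompt)

cached_completion("gemini-1.5-flash", "Hello")
cached_completion("gemini-1.5-flash", "Hello")  # served from cache
print(call_model.calls)  # → 1
```

In production you would bound the cache by time as well as size, since model responses to the same prompt can legitimately change.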

Best Practices

  1. Model Selection
    • Use Gemini 2.5 for most general-purpose tasks and customer support
    • Use Gemini 2.5 Pro for the most complicated use cases and large responses
    • Use Gemini 1.5 Flash for speed-critical applications
  2. Security
    • Regularly rotate service account keys
    • Follow principle of least privilege
    • Monitor service account usage
    • Use appropriate IAM roles for access control
  3. Performance
    • Select regions closest to your users
    • Choose Gemini 2.5 Pro for tasks requiring detailed analysis
    • Use Gemini 1.5 Flash for high-volume, simple tasks
    • Implement retry logic for transient errors
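
Retry logic for transient errors (rate limits, brief unavailability) is typically exponential backoff with jitter. A generic sketch, not tied to any particular client library:

```python
import random
import time

def with_retries(fn, max_attempts: int = 4, base_delay: float = 0.5):
    """Call fn(), retrying on exception with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # 0.5s, 1s, 2s, ... plus up to 100ms of jitter
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

In practice you would catch only the exception types your client marks as retryable, rather than bare `Exception`.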


Advanced Configuration

Multi-Region Setup

For global deployments:
  1. Region Selection: Choose regions closest to your users
  2. Data Residency: Consider data residency requirements
  3. Failover: Implement failover strategies across regions
  4. Compliance: Ensure regional compliance requirements are met
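
A failover strategy can be as simple as trying regions in priority order. A sketch, where `call_region` is a hypothetical stand-in for a region-scoped request:

```python
def call_with_failover(regions, call_region):
    """Try each region in priority order; return the first successful response."""
    errors = {}
    for region in regions:  # e.g., ["us-central1", "europe-west4"]
        try:
            return call_region(region)
        except Exception as exc:
            errors[region] = exc  # record and fall through to the next region
    raise RuntimeError(f"all regions failed: {list(errors)}")
```

Note that data residency requirements may forbid falling back to certain regions, so the priority list itself should be compliance-aware.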

Custom Model Access

For specialized or private models:
  1. Model Registration: Register custom models in Vertex AI
  2. Access Control: Configure proper IAM permissions
  3. Performance Tuning: Optimize for your specific use case
  4. Monitoring: Set up custom monitoring and alerting

Integration with Other Google Services

BigQuery Integration

  • Direct Analysis: Run AI models directly on BigQuery data
  • Data Pipeline: Integrate with your data pipeline
  • Cost Optimization: Reduce data movement costs

Cloud Storage Integration

  • File Processing: Process files stored in Cloud Storage
  • Batch Processing: Handle large-scale file processing
  • Backup & Recovery: Secure storage for AI assets

Security Considerations

Service Account Security

  • Key Management: Regularly rotate service account keys
  • Access Control: Use IAM roles for fine-grained access
  • Monitoring: Monitor service account usage
  • Audit Logging: Enable audit logging for security
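
Key rotation is easy to automate around a key's age. A sketch that flags keys older than a chosen threshold (the 90-day window is an illustrative policy, not a Google requirement):

```python
from datetime import datetime, timedelta

def key_needs_rotation(created_at: datetime, max_age_days: int = 90) -> bool:
    """True when a service account key is older than the rotation policy allows."""
    return datetime.utcnow() - created_at > timedelta(days=max_age_days)
```

Key creation timestamps are visible on the Keys tab of the service account, or via `gcloud iam service-accounts keys list`.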

Data Privacy

  • Data Handling: Understand Google’s data handling policies
  • Compliance: Ensure compliance with data regulations
  • Encryption: Data is encrypted in transit and at rest
  • Access Logs: Monitor data access patterns

Next Steps

With Google Gemini configured as your AI Provider, you can leverage its cutting-edge models, with multimodal support and seamless Google Cloud integration, across all of Elementum’s AI features.