AI Technical Specs

AI Models & Infrastructure

Large Language Models

Monte Carlo uses Anthropic's Claude models, accessed via Amazon Web Services (AWS) Bedrock. Customer data is never used for model improvement.
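
For illustration only, the following is a minimal sketch of what a Claude call through Bedrock looks like using the AWS SDK's Converse API. The region, model ID, and prompts are placeholder assumptions, not Monte Carlo's actual implementation.

    # Illustrative only: a minimal Bedrock Converse call against a Claude model.
    # The region, model ID, system prompt, and message are placeholder assumptions.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # example region

    response = bedrock.converse(
        modelId="us.anthropic.claude-sonnet-4-5-20250929-v1:0",  # example inference profile ID
        system=[{"text": "You are a data observability assistant."}],
        messages=[{
            "role": "user",
            "content": [{"text": "Summarize the likely root cause of this freshness incident."}],
        }],
        inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
    )

    # The Converse API returns the assistant message under output.message.content
    print(response["output"]["message"]["content"][0]["text"])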

Model Availability by Region

Not all models are available in all AWS regions. The table below shows which model versions are used for Monte Carlo AI features based on your instance region:

Model Version         US    EU    APAC    Model Documentation
Claude Sonnet 4.5     ✓     ✓     –       Anthropic Trust Center
Claude Haiku 3.5      ✓     –     –       Anthropic Trust Center
Claude Sonnet 3.7     –     –     ✓       Anthropic Trust Center
Claude Haiku 3        –     ✓     ✓       Anthropic Trust Center

Model versions may be updated as newer Claude models become available in Bedrock.
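
To confirm which Claude models Bedrock exposes in a particular AWS region, the Bedrock API can be queried directly. A minimal sketch is shown below; the region name is only an example.

    # Illustrative: list the Anthropic models Bedrock offers in a given AWS region.
    # Availability differs per region, as in the table above.
    import boto3

    bedrock = boto3.client("bedrock", region_name="eu-central-1")  # example region

    models = bedrock.list_foundation_models(byProvider="Anthropic")
    for summary in models["modelSummaries"]:
        print(summary["modelId"], summary.get("modelLifecycle", {}).get("status"))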

System Requirements

Optimal Requirements

For best AI feature performance:

  • Data sampling enabled: Required for some features and enhances others
  • Query logs enabled: Provides usage patterns and lineage data for AI analysis
  • Full lineage instrumentation: Connections to upstream/downstream systems
  • Active integrations: GitHub/GitLab, dbt, Airflow, or other orchestration tools
  • Historical data: At least 7-14 days of monitoring data for pattern recognition

Features That Require Data Sampling

Some AI capabilities require data sampling to be enabled:

  • Monitoring recommendations that analyze field values
  • Data pattern detection and anomaly identification
  • Features that assess data quality at the record level

When data sampling is disabled, these features become unavailable, while metadata-only features continue working.
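
A rough sketch of that gating rule follows; the feature names and function are invented purely for illustration and are not Monte Carlo's API.

    # Hypothetical gate illustrating the rule above: value-level features need
    # data sampling, metadata-only features do not. Names are invented for the sketch.
    SAMPLING_REQUIRED = {
        "field_value_monitor_recommendations",
        "data_pattern_detection",
        "record_level_quality_checks",
    }

    def available_ai_features(all_features: set[str], sampling_enabled: bool) -> set[str]:
        """Return which AI features can run given the warehouse's sampling setting."""
        if sampling_enabled:
            return set(all_features)
        return {feature for feature in all_features if feature not in SAMPLING_REQUIRED}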

Supported Use Cases

AI features are designed for:

  • Data quality monitoring: Recommending monitors, detecting anomalies, assessing coverage gaps
  • Incident investigation: Root cause analysis, hypothesis testing, correlating changes
  • Query assistance: SQL generation, query explanation, optimization suggestions
  • Conversational troubleshooting: Follow-up questions, iterative refinement, context retention