Monitoring Agent Technical Specs

Purpose & Functionality

The agent is developed internally by Monte Carlo, including the agent orchestration, recommendation logic, context handling, and user interface.

Generates intelligent recommendations for custom data quality monitors. The agent assesses schema, metadata, query history, and data samples to detect patterns and coverage gaps, then recommends monitor logic (e.g., anomaly detection, null thresholds, formatting consistency).
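One of the recommended monitor types, a null-threshold check, can be reduced to a simple sketch like the one below. This is illustrative only; the function names, sample data, and threshold are assumptions for demonstration, not Monte Carlo's actual monitor implementation.

```python
# Minimal sketch of a null-rate threshold monitor (illustrative only).
def null_rate(values):
    """Fraction of entries that are None."""
    if not values:
        return 0.0
    return sum(v is None for v in values) / len(values)

def breaches_threshold(values, max_null_rate=0.05):
    """True when the observed null rate exceeds the configured threshold."""
    return null_rate(values) > max_null_rate

sample = ["a", None, "b", "c", None]          # 40% null
print(breaches_threshold(sample))             # → True
```

A real recommendation would also include the threshold itself, typically derived from the column's historical null-rate profile rather than a fixed default.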

Model Details & Testing

Source Foundational Models

Monte Carlo uses Anthropic Claude models accessed via Amazon Bedrock. Because not all models are available in all regions, please reference the chart below to determine which model and version will be used.

Model & Version  | US Region | APAC Region | EU Region | System (Model) Card
Claude Haiku 3.5 | X         |             |           | Link (Anthropic Trust Center)
Claude 3 Haiku   |           | X           | X         | Link (Anthropic Trust Center)
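The region-to-model routing implied by the chart above could be expressed as a small lookup. The model identifiers follow Amazon Bedrock's published naming scheme, but the mapping itself is an assumption drawn from the chart, not Monte Carlo's actual configuration.

```python
# Illustrative region-to-Bedrock-model routing implied by the chart above.
# Mapping is an assumption for demonstration, not Monte Carlo's real config.
REGION_MODELS = {
    "us":   "anthropic.claude-3-5-haiku-20241022-v1:0",  # Claude Haiku 3.5
    "apac": "anthropic.claude-3-haiku-20240307-v1:0",    # Claude 3 Haiku
    "eu":   "anthropic.claude-3-haiku-20240307-v1:0",    # Claude 3 Haiku
}

def model_for_region(region: str) -> str:
    """Resolve the Bedrock model ID used in a given deployment region."""
    return REGION_MODELS[region.lower()]
```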

Testing [Application]

Monte Carlo uses only internal test data for testing purposes. No customer data is used to test or train the model.

Testing [Foundational Model]

Testing of the foundational model is the responsibility of the AI model provider.

Fairness and Bias

Not applicable due to the domain-specific, non-human context.

Primary and Optional Use Cases

System Requirements

Works with an existing implementation of Monte Carlo operating on the customer's cloud data warehouse. Data Sampling must be enabled.

Prompt/Input Requirements

Inputs are gathered automatically by Monte Carlo; structured inputs include the data schema, historical query logs, column-level profiles, and metadata.
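The structured inputs described above might take a shape like the following. All field names and values here are hypothetical, chosen only to illustrate the kinds of inputs listed, not Monte Carlo's actual input schema.

```python
# Hypothetical shape of the agent's automatically gathered inputs
# (field names and values are illustrative, not Monte Carlo's schema).
monitor_context = {
    "schema": {
        "table": "orders",
        "columns": [{"name": "order_id", "type": "STRING"}],
    },
    "query_history": [
        {"sql": "SELECT order_id FROM orders", "run_at": "2024-01-01T00:00:00Z"},
    ],
    "column_profiles": {
        "order_id": {"null_rate": 0.0, "distinct_count": 120000},
    },
    "metadata": {"owner": "data-eng", "last_updated": "2024-01-01"},
}
```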

Continuous Monitoring Plan

Monte Carlo captures telemetry data on model behavior using OpenTelemetry. The quality of responses in production is monitored using LLM-as-a-judge monitors, and a customer feedback loop informs future iterations of the agent.
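An LLM-as-a-judge quality gate of the kind described can be sketched as follows. The judge here is a stubbed callable standing in for a real model call, and the rubric prompt, attribute names, and pass threshold are assumptions for illustration, not Monte Carlo's production setup.

```python
# Sketch of an LLM-as-a-judge quality gate (illustrative assumptions:
# rubric wording, attribute keys, and threshold are not Monte Carlo's).
from typing import Callable

def evaluate_production_response(
    prompt: str,
    response: str,
    judge: Callable[[str], float],
    pass_threshold: float = 0.8,
) -> dict:
    """Score a response with a judge model; return verdict attributes
    suitable for attaching to a telemetry span."""
    score = judge(
        "Rate from 0 to 1 how well this recommendation answers the request.\n"
        f"Request: {prompt}\nRecommendation: {response}"
    )
    return {
        "llm_judge.score": score,
        "llm_judge.verdict": "pass" if score >= pass_threshold else "review",
    }

# Stub judge standing in for a real model call:
verdict = evaluate_production_response(
    "Suggest a monitor for the orders table",
    "Null-rate check on order_id",
    judge=lambda _: 0.9,
)
```

Responses that fall below the threshold would be flagged for human review, which is one way the customer feedback loop described above could feed into future iterations.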