Monitoring Agent Technical Specs

Purpose & Functionality

Generates intelligent recommendations for custom data quality monitors; assesses schema, metadata, query history, and data samples to detect patterns and coverage gaps; recommends monitor logic (e.g., anomaly detection, null thresholds, formatting consistency).
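As a minimal sketch of the kind of monitor logic the agent might recommend, the check below implements a null-rate threshold. The function names, the sample rows, and the 5% threshold are illustrative assumptions, not Monte Carlo's actual implementation.

```python
def null_rate(rows, column):
    """Fraction of rows where `column` is None (0.0 for an empty sample)."""
    if not rows:
        return 0.0
    nulls = sum(1 for row in rows if row.get(column) is None)
    return nulls / len(rows)

def null_threshold_breached(rows, column, max_null_rate=0.05):
    """True when the observed null rate exceeds the recommended threshold."""
    return null_rate(rows, column) > max_null_rate

# Illustrative sample: one null in three rows (~33%) exceeds the 5% threshold.
sample = [{"email": "a@x.com"}, {"email": None}, {"email": "b@x.com"}]
print(null_threshold_breached(sample, "email"))  # True
```

An anomaly-detection monitor would follow the same shape, comparing a profiled statistic against a learned baseline rather than a fixed threshold.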

Model Details & Testing

Source of AI Model [Application]

Internally developed by Monte Carlo, including the agent orchestration, recommendation logic, context assembly, and user interface.

Source of AI Model [Foundational Model]

Licensed from third parties: Anthropic (accessed via Amazon Bedrock) and OpenAI.

Model Provider

Anthropic (via Amazon Bedrock) and OpenAI

Model Type

LLM-powered agent

Version

Claude Haiku 3.5 + GPT-4o

Testing [Application]

Monte Carlo uses only internal test data for testing purposes; no customer data is used to test or train.

Testing [Foundational Model]

Responsibility of the respective AI model provider.

Fairness and Bias

Not applicable due to the domain-specific, non-human context.

Primary and Optional Use Cases

System Requirements

Works with an existing implementation of Monte Carlo operating on the customer's cloud data warehouse. Data Sampling must be turned on.

Prompt/Input Requirements

Automatically gathered by Monte Carlo: structured inputs include data schema, historical query logs, column-level profiles, and metadata.
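To make the input shape concrete, here is a hypothetical sketch of the structured context described above. Every field name and value is an illustrative assumption, not Monte Carlo's internal schema.

```python
import json

# Hypothetical context payload; all names/values below are assumed examples.
monitor_context = {
    "schema": {
        "table": "analytics.orders",  # assumed example table
        "columns": [
            {"name": "order_id", "type": "BIGINT"},
            {"name": "email", "type": "VARCHAR"},
        ],
    },
    # Historical query logs, aggregated over a window.
    "query_history": [{"query_pattern": "SELECT ... FROM analytics.orders", "count_30d": 412}],
    # Column-level profiles from data sampling.
    "column_profiles": {"email": {"null_rate": 0.02, "distinct_ratio": 0.97}},
    # Table metadata.
    "metadata": {"owner": "data-eng", "last_updated": "2024-01-01"},
}

print(json.dumps(monitor_context, indent=2))
```

A payload of roughly this shape would give the agent enough signal to flag, for example, a rarely-monitored high-traffic column.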

Acceptance Rate

Approximately 60% of Monitoring Agent recommendations are accepted by users.

Continuous Monitoring Plan

Monte Carlo captures telemetry on model behavior using OpenTelemetry; response quality in production is monitored with LLM-as-a-judge evaluators; and a customer feedback loop informs future iterations of the agent.
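The LLM-as-a-judge step above can be sketched as a simple threshold check over a judge score. The `judge_fn` stub, the rubric, and the 0.7 threshold are illustrative assumptions; in production the judge would be a separate model call.

```python
def judge_response(response: str, judge_fn, min_score: float = 0.7) -> bool:
    """Return True when the judge scores the response at or above threshold."""
    score = judge_fn(response)  # judge_fn is expected to return a float in [0, 1]
    return score >= min_score

# Stub judge: flags empty or very short recommendations as low quality.
# A real judge would grade against a rubric (relevance, specificity, safety).
def stub_judge(response: str) -> float:
    return 1.0 if len(response.split()) >= 5 else 0.2

print(judge_response("Add a null-rate monitor on orders.email above 5%", stub_judge))  # True
print(judge_response("Looks fine", stub_judge))  # False
```

Scores like these would then be exported as telemetry attributes so that production quality can be tracked over time alongside the customer feedback loop.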