You can now configure monitor alert notifications and monitor failure notifications with different audiences in order to route issues to the appropriate stakeholders while minimizing noise and alert fatigue.

Last year, we released a major set of improvements to the Metric monitor, covering both anomaly detection and the user experience on the Alert page. As part of this, we released a new metric property in Monitors as Code and the API.

Since then, users have still been able to create monitors under the old “field_health” property, though its use for new monitors has dwindled.

As of March 1, 2025:

  • Users will no longer be able to create new monitors using the field_health property. Existing monitors using that property will continue to function without interruption and will still be editable. When creating new Metric monitors, users should use the metric property instead. The differences between the field_health and metric properties are modest, and a guide for navigating the two is here.
  • Users will no longer be able to use the createOrUpdateMonitor mutation to create a Metric monitor via the API/CLI. Customers should use the createOrUpdateMetricMonitor mutation instead (see the sketch below). Existing monitors will continue to function without interruption.
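
For API/CLI users, the switch is a matter of calling the new mutation. Below is a minimal sketch using Monte Carlo's pycarlo Python SDK; the mutation arguments and return fields shown are hypothetical placeholders, so consult the API reference for createOrUpdateMetricMonitor's actual input schema.

```python
# Minimal sketch: calling createOrUpdateMetricMonitor instead of the
# deprecated createOrUpdateMonitor path, via the pycarlo SDK.
from pycarlo.core import Client, Session

# Authenticate with an API key created in Monte Carlo settings.
client = Client(session=Session(mcd_id="<api-key-id>", mcd_token="<api-key-secret>"))

NEW_METRIC_MONITOR = """
mutation {
  # The arguments and return fields below are illustrative placeholders only;
  # see the Monte Carlo API reference for the real createOrUpdateMetricMonitor schema.
  createOrUpdateMetricMonitor(name: "orders_null_rate") {
    monitor { uuid }
  }
}
"""

print(client(NEW_METRIC_MONITOR))
```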

We're simplifying how users provide feedback to our anomaly detection models.

Until recently, users had the option to "Share feedback" on any Alert in Monte Carlo. If the user marked the anomaly as "Helpful" in that workflow, Monte Carlo would remove that anomaly from the set of data used to train thresholds, restoring thresholds to pre-anomaly levels.

This workflow has been deprecated and removed. Rationale:

  • This workflow duplicated the effect of marking the alert status as "Fixed". Having multiple workflows with the same effect was confusing.
  • We're transitioning to a model where anomalies are excluded from training by default, and users can add them back in through a "Mark as normal" workflow. Read more.

Read more about how to interact with our anomaly detection here.

The volume monitor has been rebuilt to address a broad set of customer feedback from the past year. This will be a phased release over the next few weeks, and it is currently available to a subset of customers. These improvements apply only to Volume, but we intend to expand them to all monitors over the coming months.

Key improvements:

  • Dramatically better anomaly detection. The thresholds are much tighter, particularly for very regular tables.
  • Thresholds don't automatically widen after an anomaly. Instead, anomalies are excluded by default from the set of training data. If users don’t want to be alerted to similar anomalies in the future, they can ‘Mark as normal’. This will re-introduce the anomalous data point to the training set and widen the threshold.
  • Users can 'select training data' directly in the chart. This gives the user control over which data is used to train the model. This is especially useful for excluding undesirable data when the monitor is initially training.
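
To make the threshold behavior concrete, here is a toy sketch of the general idea (not Monte Carlo's actual detection model): the expected range is fit only on points that have not been flagged as anomalous, unless the user re-introduces them via 'Mark as normal'.

```python
# Toy illustration of anomaly exclusion: a simple mean +/- k*sigma band fit
# only on training points. Not Monte Carlo's actual model.
from statistics import mean, stdev

def threshold_band(values, excluded, marked_as_normal, k=3.0):
    """Fit a band on all points except excluded anomalies not marked as normal."""
    training = [v for i, v in enumerate(values)
                if i not in excluded or i in marked_as_normal]
    mu, sigma = mean(training), stdev(training)
    return mu - k * sigma, mu + k * sigma

row_counts = [10_120, 10_340, 9_980, 10_200, 2_150, 10_260]  # index 4 was a big dip
anomalies = {4}  # excluded from training by default, so the band stays tight

print(threshold_band(row_counts, anomalies, marked_as_normal=set()))
# "Mark as normal" re-introduces the point, widening the band so similar dips
# no longer alert.
print(threshold_band(row_counts, anomalies, marked_as_normal={4}))
```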

Read more about these improvements.

Anomalies are now excluded from the set of training data by default, so that thresholds don't widen. 

If the user does not want to be alerted to similar anomalies in the future, they can "Mark as normal" to re-introduce the anomaly to the set of training data.

Easily exclude periods of undesirable behavior from the set of training data.

We built Data Profiler to help technical and non-technical users quickly deploy effective data quality monitoring on important assets. We're now introducing recommendations powered by large language models (LLMs) that analyze data samples, table metadata, and query logs to tailor monitoring to each asset:

  • Multi-column validations to monitor relationships between columns, e.g. start_date should be before end_date.
  • Regular expression validations to enforce arbitrary patterns and formats in data that go beyond basic checks like UUID matching or email format.
  • Segmented volume and freshness to track row count or time since last row count change for segmented data such as sales by store or inventory by warehouse.
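
As a rough illustration of the rule types above (not the Data Profiler's actual output format), here is what these checks amount to in plain Python:

```python
# Toy examples of the recommended rule types; illustrative only.
import re
from collections import Counter
from datetime import date

rows = [
    {"start_date": date(2025, 1, 3), "end_date": date(2025, 1, 10), "store": "nyc-01", "sku": "SKU-00423"},
    {"start_date": date(2025, 2, 8), "end_date": date(2025, 2, 1), "store": "nyc-01", "sku": "badsku"},
]

# Multi-column validation: start_date should be before end_date.
date_violations = [r for r in rows if not r["start_date"] < r["end_date"]]

# Regular-expression validation: enforce a custom SKU format.
sku_pattern = re.compile(r"^SKU-\d{5}$")
sku_violations = [r for r in rows if not sku_pattern.match(r["sku"])]

# Segmented volume: track row counts per segment (e.g. sales by store).
rows_per_store = Counter(r["store"] for r in rows)

print(len(date_violations), len(sku_violations), dict(rows_per_store))
```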

Several weeks ago, we shipped a big set of improvements to freshness monitoring, specifically support for week-over-week seasonality. Some people call these "bimodal" thresholds, often because they expect the weekend to have a different threshold than weekdays (two modes).

We've now released the same week-over-week seasonality support for the "Time since last row count change" monitor. This is one of the key monitors we deploy “out of the box” on a table, so the impact is very wide. Specific changes:

  • More accurate weekend recognition. We dynamically detect weekends regardless of where they fall in the week (e.g. Fri/Sat, or when there’s just a single day “off”).
  • Tighter thresholds in the “active” period of the week. 110K tables across our customer base now have thresholds at least 25% tighter than before.
  • Thoughtful thresholds in the “off” part of the week. If the table doesn’t start back up on Monday, we’ll alert.
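
The per-day-of-week idea can be sketched roughly as follows (a simplification, not the production model): group historical readings by day of week, so quiet weekend days get their own, wider expectation while active weekdays keep a tight one.

```python
# Toy sketch of week-over-week seasonal thresholds for "time since last
# row count change", grouped by day of week. Not the production model.
from collections import defaultdict
from statistics import mean, stdev

# (day_of_week, hours_since_last_row_count_change); days 5 and 6 are the weekend.
history = [(d, 2.0 + 0.3 * (w % 3)) for w in range(8) for d in range(5)]
history += [(d, 30.0 + 2.0 * (w % 2)) for w in range(8) for d in (5, 6)]

by_day = defaultdict(list)
for day, hours in history:
    by_day[day].append(hours)

# A simplistic per-day upper bound: weekdays get a tight bound (a few hours),
# weekend days a much wider one; if the table fails to resume on Monday, the
# tight weekday bound still fires.
thresholds = {day: mean(vals) + 3 * stdev(vals) for day, vals in sorted(by_day.items())}
print({day: round(t, 1) for day, t in thresholds.items()})
```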

In the future, we’ll also add bimodality for day/night behaviors.

  1. Users can now select dbt jobs for a domain, similar to how they select tables, enabling users of a particular domain to see only the jobs relevant to them.
  2. Users can now select dbt jobs for an audience, enabling them to route all alerts on the selected dbt jobs to that audience.

Users can now set up a Snowflake integration using a Snowflake Native App (SNA).

This deployment offers an alternative to the connection process outlined here, with the following benefits:

  1. No Snowflake credentials are stored in the Monte Carlo Cloud.
  2. Connectivity is initiated from your Snowflake account to Monte Carlo.
  3. No cloud deployments (e.g., in AWS or Azure) are needed, as the agent is hosted in Snowflake as a Native App.
  4. You may be able to use Snowflake credits for Monte Carlo. Contact your Monte Carlo and Snowflake representatives for more information.

Learn more about the SNA agent (public preview) at this link: https://mc-d.io/sna

SNA listing example in the Snowflake Marketplace.

Users can now segment by up to 5 fields when creating a Metric monitor. The previous limit was 2 fields. As we've increased other limits on scale in Metric monitors, it made sense to increase this one as well to support more heavily segmented use cases.

Users could previously work around this limit by concatenating those fields in the "By SQL Expression" option, but that was clunky.

Read more about segmentation in Metric monitors.