The Volume monitor has been rebuilt to address a broad set of customer feedback from the past year. This is a phased release rolling out over the next few weeks, and it is currently available to a subset of customers. These improvements apply only to Volume monitors for now, but we intend to expand them to all monitor types over the coming months.

Key improvements:

  • Dramatically better anomaly detection. The thresholds are much tighter, particularly for very regular tables.
  • Thresholds no longer widen automatically after an anomaly. Instead, anomalies are excluded from the training data by default. If users don't want to be alerted to similar anomalies in the future, they can "Mark as normal", which re-introduces the anomalous data point to the training set and widens the threshold.
  • Users can "Select training data" directly in the chart, giving them control over which data is used to train the model. This is especially useful for excluding undesirable data while the monitor is initially training.

Read more about these improvements.

Anomalies are now excluded from the set of training data by default, so that thresholds don't widen. 

If the user does not want to be alerted to similar anomalies in the future, they can "Mark as normal" to re-introduce the anomaly to the set of training data.
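The mechanics can be pictured with a small sketch. This is purely illustrative: the function names, the mean ± 3·stddev thresholding rule, and the data are assumptions for the example, not Monte Carlo's actual model.

```python
# Illustrative sketch only: how excluding anomalies keeps thresholds tight,
# and how "Mark as normal" widens them again. The mean +/- 3*stddev rule and
# all names here are assumptions, not the production algorithm.
from statistics import mean, stdev

def fit_thresholds(points, anomalies, marked_normal):
    # Anomalous points are excluded from training unless marked normal.
    training = [p for i, p in enumerate(points)
                if i not in anomalies or i in marked_normal]
    mu, sigma = mean(training), stdev(training)
    return mu - 3 * sigma, mu + 3 * sigma

points = [100, 102, 98, 101, 500, 99]   # index 4 is a volume spike
lo, hi = fit_thresholds(points, anomalies={4}, marked_normal=set())
# Excluding the spike keeps the band tight around ~100 rows.

lo2, hi2 = fit_thresholds(points, anomalies={4}, marked_normal={4})
# "Mark as normal" re-introduces the spike, widening the band.
assert (hi2 - lo2) > (hi - lo)
```

The key point: because the anomaly never enters the training set, a second spike of the same size would still breach the thresholds and alert.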


Easily exclude periods of undesirable behavior from the set of training data.

We built Data Profiler to help technical and non-technical users quickly deploy effective data quality monitoring on important assets. We're now introducing recommendations powered by large language models (LLMs) that analyze data samples, table metadata, and query logs to tailor monitoring to each asset:

  • Multi-column validations to monitor relationships between columns, e.g., start_date should be before end_date.
  • Regular expression validations to enforce arbitrary patterns and formats in data, going beyond basic checks like UUID matching or email format.
  • Segmented volume and freshness to track row count, or time since last row count change, for segmented data such as sales by store or inventory by warehouse.
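To make the first two validation types concrete, here is a minimal sketch of the checks they perform, written as plain Python. The row shape, the SKU pattern, and all names are made up for the example; they are not the format the recommendations actually produce.

```python
# Illustrative sketch of two recommended validation types. The rule
# definitions and field names are assumptions for the example, not the
# product's recommendation format.
import re
from datetime import date

rows = [
    {"start_date": date(2024, 1, 1), "end_date": date(2024, 2, 1),
     "sku": "AB-1234"},
    {"start_date": date(2024, 3, 1), "end_date": date(2024, 2, 1),  # bad order
     "sku": "ab_9999"},                                             # bad format
]

# Multi-column validation: start_date should be before end_date.
order_failures = [r for r in rows if not r["start_date"] < r["end_date"]]

# Regular expression validation: enforce an arbitrary format (here, a
# hypothetical SKU pattern) beyond basic UUID/email checks.
SKU_RE = re.compile(r"^[A-Z]{2}-\d{4}$")
format_failures = [r for r in rows if not SKU_RE.fullmatch(r["sku"])]
```

Each failing row would surface as a data quality incident on the monitored asset.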

Several weeks ago we shipped a big set of improvements to freshness monitoring, most notably support for week-over-week seasonality. Some people call these "bimodal" thresholds, typically because they expect the weekend to have a different threshold than weekdays (two modes).

We've now released the same week-over-week seasonality support for the "Time since last row count change" monitor. This is one of the key monitors we deploy “out of the box” on a table, so the impact is very wide. Specific changes:

  • We dynamically recognize weekends regardless of where they fall in the week (e.g., Fri/Sat, or when there's just a single day "off").
  • Tighter thresholds in the "active" period of the week: 110K tables across our customer base now have thresholds at least 25% tighter than before.
  • Thoughtful thresholds in the "off" part of the week, so if the table doesn't start back up on Monday, we'll alert.
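One way to picture the dynamic "off day" detection above: infer low-activity days from historical row-count changes rather than hard-coding Sat/Sun. The 10%-of-median heuristic below is an assumption for the sketch, not the production algorithm.

```python
# Sketch: detect "off" days from historical activity instead of assuming
# Sat/Sun weekends. The 10%-of-median heuristic is an assumption for the
# example, not the production algorithm.
from statistics import median

def off_days(weekly_row_changes):
    # weekly_row_changes: 7 average row-count deltas, Mon (index 0) .. Sun.
    med = median(weekly_row_changes)
    return {i for i, v in enumerate(weekly_row_changes) if v < 0.1 * med}

# A table that loads Sun-Thu and is idle Fri/Sat (indices 4 and 5):
activity = [1000, 950, 1020, 990, 0, 0, 980]
assert off_days(activity) == {4, 5}
```

Because the "off" set is learned per table, a Fri/Sat weekend or a single idle day is handled the same way as a conventional Sat/Sun weekend.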

In the future, we’ll also add bimodality for day/night behaviors.

  1. Users can now select dbt jobs for a domain, similar to how they select tables, so that users of a particular domain see only the jobs relevant to them.
  2. Users can now select dbt jobs for an audience, enabling them to route all alerts on the selected dbt jobs to that audience.

Users can now set up a Snowflake integration using a Snowflake Native App (SNA).

This deployment offers an alternative to the connection process outlined here, with the following benefits:

  1. No Snowflake credentials are stored in the Monte Carlo Cloud.
  2. Connectivity is initiated from your Snowflake account to Monte Carlo.
  3. No cloud deployments (e.g., in AWS or Azure) are needed, as the agent is hosted in Snowflake as a Native App.
  4. You may be able to use Snowflake credits for Monte Carlo. Contact your Monte Carlo and Snowflake representatives for more information.

Learn more about the SNA agent (public preview) at this link: https://mc-d.io/sna

SNA listing example in the Snowflake Marketplace

Users can now segment by up to 5 fields when creating a Metric Monitor; the previous limit was 2 fields. As we've raised other scale limits in Metric monitors, it made sense to raise this one as well to support more heavily segmented use cases.

Previously, users could approximate this by concatenating those fields in the "By SQL Expression" option, but that was clunky.
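The difference between the old workaround and native multi-field segmentation can be sketched like this; the field names and data are made up for the example.

```python
# Sketch of the old concatenation workaround vs. native multi-field
# segmentation. Field names and rows are made up for the example.
rows = [
    {"region": "EU", "store": "Berlin", "channel": "web", "sales": 10},
    {"region": "EU", "store": "Berlin", "channel": "app", "sales": 7},
    {"region": "US", "store": "NYC", "channel": "web", "sales": 12},
]

# Old workaround: glue fields into one synthetic segment key, roughly what
# a CONCAT(...) in the "By SQL Expression" option did.
concat_segments = {f"{r['region']}|{r['store']}|{r['channel']}" for r in rows}

# Now: up to 5 fields can serve as the segment key directly.
native_segments = {(r["region"], r["store"], r["channel"]) for r in rows}
```

Both yield the same segments, but the native form keeps each field addressable (for filtering, display, and alert routing) instead of burying them in one string.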

Read more about segmentation in Metric monitors.

A frustrating limitation has been removed from Metric monitors.

Previously, when using segmentation, users were limited to monitoring only a single metric. That limitation is now removed, and instead users are limited to tracking 10,000 combinations of metrics/fields/segments within a single monitor. This gives users more flexibility to meet their metric monitoring needs with less configuration.
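A quick way to reason about the new budget: the number of tracked combinations grows multiplicatively with metrics and segments. The counting rule below is an assumption for illustration; only the 10,000 limit comes from the release note.

```python
# Sketch of the 10,000-combination budget for a segmented Metric monitor.
# The simple metrics-times-segments counting rule is an assumption for the
# example; only the limit itself comes from the release note.
LIMIT = 10_000

def combinations(n_metrics, n_segments):
    return n_metrics * n_segments

assert combinations(5, 1_500) <= LIMIT    # 7,500 combinations: fits
assert combinations(10, 1_500) > LIMIT    # 15,000: rejected at creation
```

In other words, monitoring many metrics is now possible under segmentation, traded off against how finely the data is segmented.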

See this doc to learn more.


Error when exceeding the limit of 10,000 segment/metric/field combinations during monitor creation

Setting variable values for Custom SQL monitors is now more flexible. While static values have been supported for a while, variable values can now also be set at runtime via both the UI and the API.
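The idea behind runtime variables can be sketched with ordinary template substitution. The template syntax, variable names, and query below are assumptions for the example and do not reflect Monte Carlo's actual variable format or API.

```python
# Sketch of runtime variable substitution for a Custom SQL monitor.
# The $-template syntax, variable names, and query are assumptions for the
# example, not Monte Carlo's actual variable format or API.
from string import Template

sql_template = Template(
    "SELECT COUNT(*) FROM orders "
    "WHERE region = '$region' AND order_date >= '$start_date'"
)

# Previously values were fixed when the monitor was defined (static values);
# now a value can also be supplied per run, e.g. from the UI or API:
runtime_values = {"region": "EMEA", "start_date": "2024-01-01"}
query = sql_template.substitute(runtime_values)
```

The same monitor definition can thus be re-run against different regions or date windows without editing the SQL each time.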

Excluded tables are now called out separately in the upstream coverage charts across the UI. This helps users distinguish upstream tables that are explicitly excluded from monitoring by a rule in the Usage UI from those that simply have no matching rules to include or exclude them. These new details appear in upstream coverage charts in several places across our UI:

  • Data Product list page
  • Data Product Coverage tab
  • Asset Search page
  • Asset details page