Formerly known as Field Metrics, the new experience for Metric Monitors provides more flexibility and functionality to do deep quality checks on a given table, including:

  • Feature parity between metrics monitored using automated vs manual thresholds. Segmentation, scheduling options, and filtering options are now the same regardless of whether a monitor uses manual or automated thresholds.
  • Manual and automated thresholds can now be set in a single monitor. For example, you could monitor 3 fields for Unique (%) with an automated threshold, and 5 different fields for Null (%) using a manual threshold, all within a single monitor.
  • New UI to create Metric Monitors, emphasizing ease of use.

To learn more, check out our documentation on Metric Monitors.


In the Usage UI, there are now options to set Monitoring Rules at the Warehouse and Database levels. When a rule is set at one of these levels, all lower levels will show a read-only rule with a reference back to the higher level where it was originally configured. Additional include/exclude rules can be added at lower levels and are OR'ed with the inherited ones.

Use these rules to specify conditions you want inherited down to the schema level. Some examples:

  1. You want to prevent any table with "_dev" in its name from ever being monitored.
    1. Add a rule at the Warehouse level to Except tables where table name contains _dev
  2. You want to ensure that tables without recent read or write activity on them are not monitored
    1. Add a rule at the Warehouse level to Monitor tables where there was activity in the last 30 days
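As a rough mental model of how inherited and local rules combine, consider the sketch below. The function and rule names are purely illustrative (not Monte Carlo's API); it assumes "Monitor" rules include matching tables, "Except" rules exclude them, and lower-level rules are OR'ed with the inherited ones as described above.

```python
# Illustrative sketch only; names and semantics are hypothetical,
# not Monte Carlo's actual API.

def is_monitored(table, inherited_rules, local_rules):
    """A rule is a (kind, predicate) pair; kind is 'monitor' or 'except'."""
    # Local rules are OR'ed with the rules inherited from higher levels.
    rules = inherited_rules + local_rules
    excluded = any(pred(table) for kind, pred in rules if kind == "except")
    included = any(pred(table) for kind, pred in rules if kind == "monitor")
    return included and not excluded

# Warehouse-level rules mirroring the two examples above:
warehouse_rules = [
    ("except", lambda t: "_dev" in t["name"]),                  # never monitor _dev tables
    ("monitor", lambda t: t["days_since_activity"] <= 30),      # only recently active tables
]

print(is_monitored({"name": "orders", "days_since_activity": 5}, warehouse_rules, []))      # True
print(is_monitored({"name": "orders_dev", "days_since_activity": 5}, warehouse_rules, []))  # False
```

Note that the exclusion wins even when an include rule also matches, which is why the `_dev` table stays unmonitored despite its recent activity.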


We've added a rule under the Usage UI to let you select which tables to monitor based on tags. Under a Schema in the Usage UI, you can select Monitor tables where "tag" is one of: and specify a multi-select list of tags. This accompanies the rest of the rules we support; for a full list, see our documentation.

We've added two new rules to help you select which tables to monitor. Under the Schema level in the Usage UI there are two new rule types that can be added as include/exclude conditions:

  1. Tables of a certain type (e.g., View, External)
  2. Last Activity was within 30 days

Tables of a certain type

This rule type can be used to make sure a certain type of asset is included or excluded from monitoring. Supported types depend on your warehouse and integration with Monte Carlo.

Last Activity was within 30 days

This rule can be used to ensure that only recently active tables are monitored by Monte Carlo. "Recent Activity" is defined as any Read or Write activity.

Users of Data Explorer can now drill down into a field profile metric in Data Explorer to see how that metric has trended over time. This helps a data analyst validate an issue reported by a business partner. For example, if someone in marketing ops reports they're suddenly seeing a bunch of nulls in a key field, it's now much easier to come in and validate that spike in nulls without ever writing SQL or setting up a monitor.

We've made key improvements to the automatic threshold option in SQL Rules, giving users more accuracy and more control in detecting anomalies. The automatic threshold now supports:

  • Low, medium, or high sensitivity. Users can adjust this through the UI or Monitors as Code, allowing for tighter or looser thresholds.
  • Excluding historical true positives from model training, preserving the sensitivity of the thresholds. Incidents marked as fixed or helpful will be taken into account when training our ML model.
  • Refactored model based on user feedback to improve accuracy.

When configuring a SQL Rule with an automatic threshold, users can now select between Low, Medium, or High sensitivity.

For customers that have enabled the ServiceNow integration, the state of incidents in ServiceNow can now be synced back to the incident status in Monte Carlo. This reduces the toil of keeping multiple systems up to date.

Users can indicate the mapping of ServiceNow state → incident status in Settings > Integrations. See our ServiceNow docs for more details.
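The mapping itself is configured in the UI, but conceptually it behaves like a lookup from ServiceNow state to Monte Carlo incident status. The sketch below is hypothetical; the specific state and status names shown are examples, and your actual mapping is whatever you configure in Settings > Integrations.

```python
# Hypothetical example of a ServiceNow state -> incident status mapping;
# the real mapping is configured in Settings > Integrations.
STATE_MAP = {
    "New": "Investigating",
    "In Progress": "Investigating",
    "Resolved": "Fixed",
    "Closed": "Fixed",
}

def sync_status(servicenow_state, default="No status"):
    """Return the Monte Carlo status for a ServiceNow state, or a default."""
    return STATE_MAP.get(servicenow_state, default)

print(sync_status("Resolved"))  # Fixed
```

Mapping several ServiceNow states onto one incident status (as with "Resolved" and "Closed" above) is a natural way to collapse a finer-grained workflow into a coarser one.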

Users can now more easily allocate query cost from custom monitors back to the teams that set them up. This functionality was previously available via API only; it can now be done directly in the UI.

Users can:

  • Create additional connections to a warehouse and associate them with different warehouse service users (one for each team).
  • Select which connection to use in the monitor creation flow, once more than one connection has been created for a warehouse.
  • Allocate cost back to the corresponding team, since queries from that monitor are executed through the selected service user.
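Once each team's monitors run through its own service user, cost allocation reduces to grouping query cost by that user. The sketch below is illustrative only; the service-user names and the shape of the query records are hypothetical, not a Monte Carlo or warehouse API.

```python
# Illustrative sketch (hypothetical names): attributing query cost per
# service user when each team's monitors run through its own connection.
from collections import defaultdict

queries = [
    {"service_user": "mc_marketing", "credits": 0.4},
    {"service_user": "mc_finance", "credits": 1.1},
    {"service_user": "mc_marketing", "credits": 0.6},
]

cost_by_team = defaultdict(float)
for q in queries:
    cost_by_team[q["service_user"]] += q["credits"]

for user, credits in sorted(cost_by_team.items()):
    print(user, round(credits, 2))
```

In practice, this grouping is what your warehouse's query-history or billing views give you once each team has a dedicated service user.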

To learn more, check out our documentation.

Last month, we released 35+ new Field Metrics. At the time, only manual thresholds could be used on those new metrics. Today, we’ve released support for ML thresholds for many of those new metrics!

Specifically, we now support ML thresholds for the following:

  • Empty string (%)
  • NaN (%)
  • Standard Deviation
  • True (%)
  • False (%)
  • SSN (%)
  • USA Phone Number (%)
  • USA state code (%)
  • USA ZIP code (%)
  • Email (%)
  • Timestamp (%)
  • In past (%)
  • Unix time 0 (%)

In addition, you can now select the following as standalone metrics with ML thresholds (previously, they were all grouped together):

  • Min
  • 20th percentile
  • 40th percentile
  • 60th percentile
  • 80th percentile
  • Max

To see the full list of supported Field Metrics and available threshold types, see our Available Metrics documentation.

Data Explorer is a new tab on Assets that makes it easy to profile the contents of a table or view. This is helpful when investigating a data quality issue reported by a business partner, when considering which monitors to create, or when getting familiar with the contents of a table.

The experience is interactive and no-code, making it approachable for less technical roles. Users can point and click to adjust the time range of data and filter down for particular segments. In the future, we'd like to make it easy to compare multiple segments of data side-by-side.

Currently, Data Explorer is available for just a subset of customers and for the Snowflake integration only. Your Monte Carlo Agent must be version 16624 or higher. To learn more, check out our documentation.

**Filters**, **Row count**, and **Segments** sections of Data Explorer. Adjust the slider in Row count and click on values in Segments to filter the rest of the data in the dashboard.

**Field profile** and **Sample rows** sections of Data Explorer.