For dbt test failures and warnings, we have added the following to the incident details view:
i) the compiled SQL code,
ii) the ability to run that SQL and return the rows,
iii) the ability to edit those queries for ad hoc investigations.
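
For context, the compiled SQL is the query dbt actually runs against your warehouse. As a rough sketch (table and column names are hypothetical), a dbt unique test on an orders.order_id column compiles to something like:

```sql
-- Compiled form of a dbt `unique` test (illustrative names):
-- any returned row is a duplicated value, i.e. a test failure
select
    order_id as unique_field,
    count(*) as n_records
from analytics.orders
where order_id is not null
group by order_id
having count(*) > 1
```

Running this query from the incident returns the failing rows, and editing it lets you narrow in on specific records during an investigation.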

Metric monitors now allow users to define a custom metric using SQL. This allows users to create a metric that:

  • Isn't available off the shelf, such as APPROX_PERCENTILE(order_amount, 0.99)
  • References multiple fields at once, such as AVG(price + tax_collected)

Currently, only manual thresholds are supported for custom metrics. See our docs to learn more.
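
As a minimal sketch (assuming a hypothetical analytics.orders table), the custom expression is evaluated as an aggregation over the monitored table, along the lines of:

```sql
-- Illustrative only: how the two example custom metrics above
-- would be computed over a hypothetical table
SELECT
    APPROX_PERCENTILE(order_amount, 0.99) AS p99_order_amount,
    AVG(price + tax_collected)            AS avg_total_price
FROM analytics.orders;
```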

Duplicate (count) is now available within Metric monitors. This metric is a useful alternative to the Unique (%) metric, particularly if you want to ensure a column is always 100% unique. On large tables, a small number of duplicates can cause 100% unique to become 99.99999% unique, which is hard to visualize in charts and can sometimes be rounded off.

Duplicate (count) makes uniqueness much easier to visualize: if a duplicate emerges, the metric goes from 0 to 1 instead of from 100% to 99.99999% unique.

Duplicate (count) is calculated as the difference between the total number of rows with non-null values and the total number of distinct non-null values.
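
In SQL terms, that definition corresponds to something like the following (column and table names are hypothetical):

```sql
-- Duplicate (count): non-null rows minus distinct non-null values.
-- COUNT(user_id) ignores nulls; COUNT(DISTINCT user_id) does too.
SELECT COUNT(user_id) - COUNT(DISTINCT user_id) AS duplicate_count
FROM analytics.users;
```

A result of 0 means the column is fully unique (ignoring nulls); each extra occurrence of a value adds 1 to the count.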

Check out our full list of available metrics.

We have revamped the account onboarding to streamline getting started with data observability for your organization. Think of it as a fresh coat of paint, but the engine got a big tune-up too.


Monte Carlo has identified an issue with custom monitors that use Dynamic schedules on VIEW table types. Currently, Dynamic schedules require freshness information to function correctly, which isn't available for VIEW tables.

As a result, these monitors have defaulted to running every 48 hours. To ensure consistent monitoring and to avoid future issues, we are automatically updating all affected monitors to fixed schedules with a 48-hour interval, starting April 24th. Additionally, we will implement a new validation rule to prevent Dynamic schedules from being applied to VIEW tables.

  • If you are NOT using MaC: No action is needed unless you wish to adjust your monitor schedules to run more frequently than every 48 hours.
  • If you are using MaC: As of April 24th, the new validation rule prevents you from editing these monitors through MaC without switching them to a fixed schedule. If you apply without having changed any of these monitors, it will still work! However, once one of these monitors is edited, the apply will throw an error unless the schedule is also switched to fixed. We recommend updating your monitors to fixed schedules to avoid disruption.

If you have any questions about this change or would like a list of monitors impacted by this change for your account, please reach out to our support team at [email protected].

This can be used in situations like “Alert if the mean of sale_price is not between 500 and 1,000.”

The bounds are exclusive, meaning the above example can also be read as “Alert me if the mean of sale_price is < 500 or > 1,000.”
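
As an illustrative sketch (table and column names are hypothetical), the exclusive-bounds check is equivalent to:

```sql
-- Returns a row (i.e. alerts) only when the mean is strictly
-- outside the bounds; a mean of exactly 500 or 1,000 does not alert.
SELECT AVG(sale_price) AS mean_sale_price
FROM analytics.sales
HAVING AVG(sale_price) < 500 OR AVG(sale_price) > 1000;
```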


To make it simpler to create Metric monitors, we've shipped a "Recommend configuration" option. When the button is clicked, Monte Carlo runs a few queries on the table to recommend settings for the Aggregation and Set schedule sections of the monitor. Configuring these sections properly is important for the monitor to perform effective anomaly detection.

To learn more, see our documentation.

Defining a Data Product now applies a tag to all of its assets and their upstream tables in Monte Carlo. This tag can be monitored in the Usage UI to ensure monitoring coverage for that Data Product.

This provides a better way to define the "use cases", "workloads", and "workflows" that deliver a collection of trusted data. Once a Data Product is defined, all related upstream tables are automatically tagged by Monte Carlo and can be monitored in the Usage UI, ensuring full monitoring coverage for the Data Product. This lets you focus on monitoring the larger "workloads" that are important to your business, rather than hunting for individual tables to include in monitoring one at a time.

How does it work?

  1. Define the Data Product - When defining a Data Product, think about including the "interfaces" that make up your Data Product. These are the BI reports or tables that users, APIs, or other data consumers read from.
  2. Monte Carlo tags all upstream tables - Once the assets are included and the Data Product is saved, Monte Carlo will automatically look upstream using lineage to tag all tables that are connected to those assets defined in the Data Product.
  3. Add a rule to monitor the tag - You can then choose to monitor all tables with that tag from the Data Product. This will ensure that all tables that relate to that Data Product are monitored by Monte Carlo.
  4. See coverage overview and incident dashboard - The Data Product provides an overview of its monitoring coverage and a dashboard showing current incidents.

See a full video walkthrough: