We have revamped the account onboarding to streamline getting started with data observability for your organization. Think of it as a fresh coat of paint, but the engine got a big tune-up too.


Monte Carlo has identified an issue with custom monitors that use Dynamic schedules on VIEW table types. Currently, Dynamic schedules require freshness information to function correctly, which isn't available for VIEW tables.

As a result, these monitors have defaulted to running every 48 hours. To ensure consistent monitoring and to avoid future issues, we began automatically updating all affected monitors to fixed schedules with a 48-hour interval on April 24th. Additionally, we have implemented a new validation rule to prevent Dynamic schedules from being applied to VIEW tables (sketched after the list below).

  • If you are NOT using MaC: No action is needed unless you wish to adjust your monitor schedules to a frequency shorter than 48 hours.
  • If you are using MaC: As of April 24th, the new validation rule prevents you from editing these monitors through MaC without switching to a fixed schedule. Applying without having changed any of these monitors will still work! However, once one of these monitors is edited, the apply will throw an error unless the schedule is also changed. We recommend updating your monitors to fixed schedules to avoid disruption.
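
Conceptually, the new validation rule boils down to a simple check. Here is a minimal sketch in Python, assuming hypothetical ScheduleType and TableType enums; the names are illustrative, not Monte Carlo's actual API:

```python
from enum import Enum

class ScheduleType(Enum):
    DYNAMIC = "dynamic"
    FIXED = "fixed"

class TableType(Enum):
    TABLE = "table"
    VIEW = "view"

def validate_schedule(schedule: ScheduleType, table_type: TableType) -> None:
    # Dynamic schedules need freshness information, which VIEW tables lack.
    if schedule is ScheduleType.DYNAMIC and table_type is TableType.VIEW:
        raise ValueError(
            "Dynamic schedules are not supported on VIEW tables; "
            "use a fixed schedule (e.g. every 48 hours) instead."
        )
```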

If you have any questions about this change or would like a list of monitors impacted by this change for your account, please reach out to our support team at [email protected].

This can be used in situations like “Alert if the mean of sale_price is not between 500 and 1,000.”

The bounds here are not inclusive, meaning the above example is equivalent to “Alert me if the mean of sale_price is < 500 or > 1,000.”
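
In code terms, the alert condition behaves like this minimal sketch (the function name and defaults are ours, for illustration):

```python
def should_alert(mean_sale_price: float,
                 lower: float = 500.0,
                 upper: float = 1000.0) -> bool:
    # Exclusive bounds: a mean of exactly 500 or 1,000 does NOT alert.
    return mean_sale_price < lower or mean_sale_price > upper

assert should_alert(499.99)      # below the range -> alert
assert not should_alert(500.0)   # on the boundary -> no alert
assert not should_alert(750.0)   # inside the range -> no alert
assert should_alert(1000.01)     # above the range -> alert
```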


To make it simpler to create Metric monitors, we've shipped a "Recommend configuration" option. When the button is clicked, Monte Carlo runs a few queries on the table to recommend settings for the Aggregation and Set schedule sections of the monitor. Configuring these sections properly is important for the monitor to perform effective anomaly detection.
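
For intuition, here is a rough sketch of what such a recommendation step could look like; the query and heuristic are assumptions for illustration, not Monte Carlo's actual implementation:

```python
from datetime import timedelta

def recommend_aggregation(conn, table: str, time_field: str) -> str:
    # Assumes a DB-API-style connection whose driver returns datetimes.
    cur = conn.cursor()
    cur.execute(f"SELECT MIN({time_field}), MAX({time_field}), COUNT(*) FROM {table}")
    start, end, n_rows = cur.fetchone()
    if n_rows == 0 or (end - start) < timedelta(days=2):
        return "hourly"   # short history: finer buckets keep enough points
    return "daily"        # longer history: daily buckets usually suffice
```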

To learn more, see our documentation.

Defining a Data Product now tags all assets and their upstream tables with a tag in Monte Carlo. This tag can be monitored in the Usage UI to ensure monitoring coverage for that Data Product.

This provides a better way to define the "use cases", "workloads", and "workflows" that deliver a collection of trusted data. Once a Data Product is defined, all related upstream tables are automatically tagged by Monte Carlo and can be monitored in the Usage UI, ensuring full monitoring coverage for the Data Product. This lets you focus on monitoring the larger "workloads" that matter to your business, rather than hunting for individual tables to include in monitoring one at a time.

How does it work?

  1. Define the Data Product - When defining a Data Product, think about including the "interfaces" that make up your Data Product. These are the BI reports or tables that users, APIs, or other data consumers read from.
  2. Monte Carlo tags all upstream tables - Once the assets are included and the Data Product is saved, Monte Carlo automatically uses lineage to tag all tables connected to the assets defined in the Data Product (see the sketch after this list).
  3. Add a rule to monitor the tag - You can then choose to monitor all tables with that tag from the Data Product. This will ensure that all tables that relate to that Data Product are monitored by Monte Carlo.
  4. See coverage overview and dashboard - The Data Product provides an overview of its monitoring coverage and a dashboard showing current incidents.
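
For intuition, here is a minimal sketch of the upstream tagging in step 2, assuming a lineage graph represented as a mapping from each table to the tables immediately upstream of it; the graph shape and tag store are illustrative, not Monte Carlo internals:

```python
from collections import deque

def tag_upstream(lineage: dict[str, list[str]],
                 assets: list[str],
                 tag: str) -> dict[str, set[str]]:
    tags: dict[str, set[str]] = {}
    queue = deque(assets)
    seen = set(assets)
    while queue:
        table = queue.popleft()
        tags.setdefault(table, set()).add(tag)   # tag the asset itself...
        for parent in lineage.get(table, []):    # ...then walk upstream
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return tags

lineage = {"reports.revenue": ["marts.orders"], "marts.orders": ["raw.orders"]}
print(tag_upstream(lineage, ["reports.revenue"], tag="data-product:revenue"))
# all three tables end up carrying the data-product tag
```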

See a full video walkthrough:


We've improved the sampling feature for dimension monitors to be consistent with field metrics monitors. You can now view and copy a sampling query for a dimension anomaly, as well as run the query to retrieve sampled rows directly in the UI.
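
For illustration, a sampling query for a dimension anomaly is roughly of this shape; the table, field, and value below are made up:

```python
# Pull a handful of rows carrying the flagged value so they can be inspected.
SAMPLING_QUERY = """
SELECT *
FROM analytics.orders
WHERE status = 'UNKNOWN'   -- the dimension value flagged as anomalous
LIMIT 50
"""

def fetch_anomaly_sample(conn):
    # conn is any DB-API connection to the warehouse holding the table;
    # alternatively, copy the query from the UI and run it there directly.
    return conn.execute(SAMPLING_QUERY).fetchall()
```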


In addition to the existing dbt test failures and model errors, we are now sending dbt warnings as MC incidents. Unless you have relatively noisy warnings, you should already be receiving dbt warnings in the same way and place you receive failures for the same tests, without needing to configure anything.

To change the configuration for your warnings in MC, go to Integrations -> Settings -> dbt and toggle the options. You can also opt in to grouping repetitive warnings into the same incident to prevent multiple alerts.
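
To make the grouping option concrete, here is a sketch based on dbt's run_results.json artifact ("warn", "fail", and "error" are real dbt result statuses; the incident shape below is our illustration, not Monte Carlo's data model):

```python
import json
from collections import defaultdict

def dbt_warnings_as_incidents(run_results_path: str, group_repeats: bool = True):
    with open(run_results_path) as f:
        results = json.load(f)["results"]
    warnings = [r for r in results if r["status"] == "warn"]
    if not group_repeats:
        # one incident (and one alert) per warning
        return [{"test": w["unique_id"], "messages": [w.get("message")]}
                for w in warnings]
    # group repeated warnings from the same test into a single incident
    grouped = defaultdict(list)
    for w in warnings:
        grouped[w["unique_id"]].append(w.get("message"))
    return [{"test": uid, "messages": msgs} for uid, msgs in grouped.items()]
```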


Our integration with Jira makes it easy to operationalize an incident management process. Users can choose to “push” an incident to Jira through a human-in-the-loop interaction, or they can directly send incidents to Jira through notifications. The status of those issues in Jira can then be automatically synced back to Monte Carlo, to close the loop.

Previously, only Jira Cloud was supported. But now, enterprises using Jira Data Center can integrate as well. Functionality between the Jira Cloud and Jira Data Center integrations is the same.
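
For a sense of what the push-and-sync loop involves, here is a minimal sketch using Jira's standard REST endpoints, which exist on both Jira Cloud and Jira Data Center; the incident shape and auth details below are assumptions, not Monte Carlo's implementation:

```python
import requests

JIRA_BASE = "https://jira.example.com"        # Cloud or Data Center base URL
AUTH = ("bot-user", "api-token-or-password")  # assumption: basic auth

def push_incident(incident: dict, project_key: str) -> str:
    # "Push" an incident to Jira by creating an issue for it.
    resp = requests.post(
        f"{JIRA_BASE}/rest/api/2/issue",
        json={"fields": {
            "project": {"key": project_key},
            "summary": incident["title"],
            "description": incident["description"],
            "issuetype": {"name": "Bug"},
        }},
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()["key"]                 # e.g. "DATA-123"

def jira_status(issue_key: str) -> str:
    # Read the issue's status so it can be synced back to close the loop.
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/2/issue/{issue_key}",
        params={"fields": "status"},
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()["fields"]["status"]["name"]   # e.g. "Done"
```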

To learn more about our integration with Jira, and steps to integrate Jira Data Center, see our documentation.