We've expanded the functionality and simplified the experience of managing monitors as code:

  • Added a button in each monitor's UI creation flow to generate its YAML configuration
  • Added support for Data Quality Dimensions
  • Added support for Query Performance Monitors
  • Added default values for alert_conditions and schedule fields
  • Renamed fields to be consistent with the UI; this is backwards compatible, so existing configurations will not break (see the example after this list)
    • resource is now warehouse
    • comparisons is now alert_conditions
    • notify_rule_run_failure is now notify_run_failure
  • Updated our documentation: https://docs.getmontecarlo.com/docs/monitors-as-code
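
To illustrate the renamed fields and new defaults, here is a rough sketch of a custom SQL monitor definition. The surrounding structure and values are illustrative only; refer to the documentation above for the exact schema.

    montecarlo:
      custom_sql:
        - name: orders-row-count-check
          warehouse: snowflake_prod          # formerly `resource`
          sql: SELECT COUNT(*) AS row_count FROM analytics.orders
          schedule:                          # a default is now applied if this is omitted
            type: fixed
            interval_minutes: 60
          alert_conditions:                  # formerly `comparisons`; also has a default now
            - operator: LT
              threshold_value: 1000
          notify_run_failure: true           # formerly `notify_rule_run_failure`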

We've released an updated layout for notifications from the Metric Monitor. In the last 6 months, we’ve dramatically expanded the functionality of this monitor, and its notification layout was not coping well with all the new functionality, so it was due for a refresh.

We’ve paid special attention to:

  • Simplicity of language
  • Consistent use of code format
  • Consistency across different recipient types (e.g. not just optimizing for Slack)
  • Both mobile and desktop experience

Monte Carlo now supports categorizing and reporting directly on data quality dimensions!

  • Data quality dimensions in monitor creation: Assign data quality dimensions directly when creating or editing monitors. Monitors as code and the API are supported as well (see the sketch below).
  • Dimension scores on data quality dashboard: View data quality scores by dimension (Accuracy, Completeness, Consistency, Timeliness, Validity, Uniqueness).
  • Alert categorization: Identify and organize alerting monitors by data quality dimension.
  • Bulk actions: Apply tags or assign data quality dimensions to multiple monitors simultaneously for streamlined organization.

These features offer more granular control and insight into data quality, improving data management efficiency! Learn more about data quality dimensions in the docs.
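
As a sketch of what assigning a dimension in monitors as code could look like: the data_quality_dimension key and its value below are hypothetical, shown only to illustrate the idea; check the docs for the actual field name and accepted values.

    montecarlo:
      custom_sql:
        - name: customer-email-completeness
          warehouse: snowflake_prod
          sql: SELECT COUNT(*) AS null_emails FROM analytics.customers WHERE email IS NULL
          # Hypothetical key, for illustration only: tags this monitor with the Completeness dimension
          data_quality_dimension: Completeness
          alert_conditions:
            - operator: GT
              threshold_value: 0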

We've rebuilt the Metric Monitor to address many key points of customer feedback! Improvements include:

  • “Seasonal” pattern detection, with special attention to day-over-day and week-over-week patterns.
  • Tighter thresholds that ‘hug’ the trend line
  • ML thresholds available on all metrics
  • Rapidly trained thresholds, built by backfilling up to 1 year of historical metrics (learn more). Thresholds often appear once the monitor's first run and backfill are complete.
  • The minimum row count per bucket requirement has been removed.

Note: to support these changes, the backend and data model needed significant refactoring; so much so that this is a hard “v2” of the Metric Monitor. Existing Metric Monitors will continue to function without interruption, but only newly created monitors will get these improvements.

We encourage users to try it out! There are some slight differences in the configuration of a v2 Metric Monitor, compared to before. For example:

  • Users are now asked how many days or hours of recent data should be ignored, since it may not yet be mature enough for anomaly detection (see the sketch after this list).
  • To support ingestion of much more historical data, segmented metric monitors are now limited to a single alert condition, which can contain only one metric and one field.
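
As a purely illustrative sketch of these two differences, the structure and field names below are hypothetical and not the exact monitors-as-code schema; see the documentation referenced below.

    metric_monitor_v2:                     # hypothetical structure, for illustration only
      table: analytics:prod.orders
      segmented_by: [region]               # a segmented metric monitor
      ignore_recent:
        hours: 6                           # skip the most recent 6 hours of immature data
      alert_conditions:                    # segmented monitors allow a single alert condition...
        - metric: null_rate                # ...with exactly one metric...
          field: customer_id               # ...and exactly one field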

Refer to our documentation on the Metric Monitor and available metrics for more details.


An example of the thresholds from a new metric monitor. Note that they 'hug' the trendline much better and incorporate week-over-week patterns in the data.

A Databricks Workflows integration is now available. It allows users to identify which Databricks job updated a given table, review its recent run results, and see how workflow runs impact the quality of a table. Very soon, it will also alert on workflow failures.
Permissions on the job system tables are required to enable the integration. More details here.

We've made an update to the permissions for access to the Usage UI.

  • Editors will now have access to Add/Edit/Delete Ingestion and Monitoring Rules in the Usage UI.
  • Account Owners and Domain Managers will retain permissions to Add/Edit/Delete Ingestion and Monitoring Rules in the Usage UI.

This change aims to streamline permissions and ensure that Editors can make changes to what is ingested and monitored by Monte Carlo.

For more on Managed Roles and Permissions, refer to https://docs.getmontecarlo.com/docs/authorization#managed-roles-and-groups

Learn more about the Usage UI here: https://docs.getmontecarlo.com/docs/usage#access

A beta version of the Azure Data Factory integration is now available. The initial version generates cross-database lineage, lets users check which ADF pipeline updated which table, and shows run details for each pipeline, to accelerate data incident resolution. In a few weeks, we will also let users centrally manage ADF pipeline failures in Monte Carlo.

More details on the integration, including setup instructions, are in the docs here: https://docs.getmontecarlo.com/docs/azure-data-factory

When investigating data alerts, understanding the potential impact of changes in data, systems, and code is critical. The new pull request correlation feature simplifies the code investigation process by allowing you to quickly correlate any pull request from your git repositories with Monte Carlo alerts.

Pull requests that have been merged into your Git repositories are visually overlaid on alert charts, making it easy to determine from the timing whether a specific pull request may have influenced the change you are investigating.

You can further investigate by navigating to the pull requests tab, where you can filter and search through pull requests based on repository, author, PR title, or even file names. This helps you pinpoint whether a particular pull request drove the alert you are investigating. If you need more detail, you can click directly on the GitHub link to examine the code changes.

In beta, this expanded support for pull requests is only available on freshness and volume alerts. See the GitHub docs for setup instructions and pull requests (beta) for more information on using this functionality.

We’re rolling out an important update today to help you better manage and report on incidents. Now, when marking alerts as incidents, you’ll have the option to either create a new incident or merge the alerts into an existing one. This change is designed to improve your workflow and ensure more accurate incident tracking on the data operations dashboard.

Sometimes, alerts from different monitors point to the same root cause. By merging related alerts into a single incident, you can group them together for clearer insights and more precise reporting over time. This complements the existing capability of splitting alerts.

Additionally, you can now mark alerts as incidents directly from the alert feed. Learn more about marking alerts as incidents.