MC-GitLab integration is now available. Users can leverage the integration to investigate code impact on tables. More details here: https://docs.getmontecarlo.com/docs/gitlab
What's new?
On the Assets page, users interact with our out-of-the-box monitors for Freshness and Volume. It's also where they can configure an explicit threshold, if they prefer that to the machine-learning-based one.
If a user adjusts sensitivity or switches to an explicit threshold, those changes are now captured in a "Change log" that they can easily view. This makes it easy to see who adjusted the settings, when, and how. This has been a well-adopted feature in custom monitors, so we are glad to extend it here as well.
Why this change?
We're committed to continuously improving your Monte Carlo experience. This update prioritizes efficiency and actionability, ensuring you can quickly address data issues and gain insights from your data observability platform.
As always, we welcome your feedback! Please don't hesitate to reach out to your customer success manager or support team with any questions or comments.
We’re thrilled to share that Data Profiler (formerly Data Explorer) is now generally available for Snowflake, Databricks, BigQuery, and Redshift. With Monte Carlo’s data profiling and monitor recommendation capabilities, data practitioners can quickly and effectively set up data quality monitoring.
We've expanded the functionality and simplified the experience of managing monitors as code:
- New alert_conditions and schedule fields
- resource is now warehouse
- comparisons is now alert_conditions
- notify_rule_run_failure is now notify_run_failure
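To make these renames concrete, here is a minimal sketch of a monitor definition using the new field names. Only the renamed fields come from the list above; the surrounding structure (the montecarlo root key, the custom_sql monitor type, and all the values) is assumed for illustration rather than verified syntax, so check the monitors-as-code documentation before using it.

```yaml
# A minimal sketch, not verified syntax - the renamed fields follow the list
# above, while the monitor type and surrounding keys are assumptions.
montecarlo:
  custom_sql:
    - name: orders-null-status-check
      warehouse: analytics-warehouse        # formerly "resource"
      sql: SELECT COUNT(*) FROM analytics.orders WHERE status IS NULL
      alert_conditions:                     # formerly "comparisons"
        - operator: GT
          threshold_value: 0
      schedule:
        type: fixed
        interval_minutes: 60
      notify_run_failure: true              # formerly "notify_rule_run_failure"
```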
We've released an updated layout for notifications from the Metric Monitor. In the last 6 months, we've dramatically expanded the functionality of this monitor. The layout for its notifications was not coping well with all the new functionality, so it was due for a refresh.
We’ve paid special attention to details like code formatting.
Monte Carlo now supports categorizing and reporting directly on data quality dimensions!
These features offer more granular control and insight into data quality, improving data management efficiency! Learn more about data quality dimensions in the docs.
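As a rough illustration of how a dimension might be attached to a monitor defined as code, consider the sketch below. The data_quality_dimension key and the dimension values shown are assumptions on our part, not confirmed syntax; the docs linked above have the authoritative details.

```yaml
# Hypothetical sketch - "data_quality_dimension" is an assumed key name and
# the value shown is illustrative; see the docs for the supported syntax.
montecarlo:
  custom_sql:
    - name: orders-completeness-check
      warehouse: analytics-warehouse
      sql: SELECT COUNT(*) FROM analytics.orders WHERE customer_id IS NULL
      alert_conditions:
        - operator: GT
          threshold_value: 0
      data_quality_dimension: Completeness  # e.g. Accuracy, Timeliness, Validity
```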
We've rebuilt the Metric Monitor to address many key points of customer feedback!
Note – to support these changes, the backend and data model needed significant refactoring. So much so that this is a hard “v2” of the Metric Monitor. Existing Metric Monitors will continue to function without interruption, but only newly created monitors will get these improvements.
We encourage users to try it out! There are some slight differences in the configuration of a v2 Metric Monitor compared to before.
Refer to our documentation on the Metric Monitor and available metrics for more details.
An example of the thresholds from a new metric monitor. Note that they 'hug' the trendline much better and incorporate week-over-week patterns in the data.
A Databricks Workflows integration is now available. It allows users to identify which Databricks job updated a given table, review its recent run results, and see how workflow runs impact the quality of a table. Very soon it will also alert on workflow failures.
Permissions to job system tables are required to enable the integration. More details here.
We've updated the permissions required to access the Usage UI.
This change aims to streamline permissions and ensure that Editors have access to make changes to what is ingested and monitored by Monte Carlo.
For more on Managed Roles and Permissions, refer to https://docs.getmontecarlo.com/docs/authorization#managed-roles-and-groups
Learn more about the Usage UI here: https://docs.getmontecarlo.com/docs/usage#access