Our AI-powered Monitoring agent is now integrated directly into the metric and validation monitor wizards. The Monitoring agent analyzes sample data from your tables to intelligently recommend the most relevant monitors, including multi-column validations, segmented metric monitors, and regular expression validations. This integration brings the same smart recommendations that were previously available in Data profiler directly into the monitor creation workflow, making it faster and easier to set up comprehensive data quality monitoring with AI-suggested configurations tailored to your data.

We've released a number of UI updates to the Integrations Settings view:

  1. Users can now rename integrations and connections easily via the UI.
  2. Related connections are now grouped into one integration. An integration is a collection of connections that share the same warehouse and metadata; for example, a Databricks integration may have a metadata connection and multiple query connections, allowing users to run different monitors via each query connection.
  3. Each integration will have an integration details view (some will be added in the coming weeks).
  4. A brand new UI for browsing new integrations to add.
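The grouping concept can be pictured as a simple data structure: one integration holding the connections that share a warehouse. The sketch below is purely illustrative (the class and field names are hypothetical, not the Monte Carlo API):

```python
from dataclasses import dataclass, field

@dataclass
class Connection:
    # A single connection, e.g. a metadata or query connection
    name: str
    kind: str  # "metadata" or "query"

@dataclass
class Integration:
    # An integration groups connections that share one warehouse/metadata
    name: str
    warehouse: str
    connections: list = field(default_factory=list)

    def rename(self, new_name: str) -> None:
        # Mirrors the new rename-via-UI capability
        self.name = new_name

# Example: a Databricks integration with one metadata connection and
# two query connections, each usable by different monitors.
dbx = Integration("databricks-prod", "databricks")
dbx.connections += [
    Connection("dbx-metadata", "metadata"),
    Connection("dbx-queries-finance", "query"),
    Connection("dbx-queries-marketing", "query"),
]
print(len(dbx.connections))  # 3
```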

Check out the video walkthrough here.

Similar to how Data Products apply a tag to the Assets they include, Domains now do the same.

Our goal is to improve the filtering experience across Monte Carlo. When users want to slice/dice/target the right tables on Dashboards, Table Monitors, Alert feed, and more, they often need some combination of Data Products, Domains, and Tags. Going forward, we'll invest in Asset Tags as the interoperable layer for all these different filter concepts.

Last year, we improved freshness monitors to handle weekly patterns much better. This is useful if your tables update frequently during the week, but not on the weekends.

Now, we support the same for daily patterns. If your tables update frequently during the day and infrequently at night, the threshold will adjust to be much tighter during the day and much wider at night.

This is particularly common in financial services, where many data pipelines update frequently during market hours but not at night or during the weekend.

In this table, updates happen hourly during the day, and not at all during the night. With this new release, the threshold during the day is now much tighter -- just over an hour.
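The behavior can be pictured as a staleness threshold that varies with the hour of day. The toy function below is illustrative only (hours and thresholds are made up, and this is not Monte Carlo's actual detection model):

```python
def freshness_threshold_minutes(hour: int) -> int:
    """Allowed staleness window (minutes) for a table that updates
    hourly during business hours and not at all overnight.
    Illustrative values only."""
    if 9 <= hour < 18:   # daytime: updates arrive hourly,
        return 75        # so allow just over an hour
    return 16 * 60       # overnight: wide window until the next day's loads

assert freshness_threshold_minutes(11) == 75   # tight during market hours
assert freshness_threshold_minutes(2) == 960   # wide overnight
```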

Users with the Viewer role can now access data profiling and monitor recommendations, making it easier for more team members to analyze and understand their tables. Note: the Editor role is still required in order to create monitors.

Metric Investigation helps you trace anomalies detected by Monte Carlo metric monitors back to their potential root causes — faster and with more context. With this feature, you can sample and segment rows related to a metric alert, giving you immediate visibility into the underlying data that triggered an alert.

We've also extended sampling capabilities to all metrics, rather than just the subset of metrics previously supported.

Learn more in the Metric investigation documentation.

Data profiling and Monitoring agent recommendations are now available for Teradata and SQL Server in addition to previously supported warehouses (Snowflake, Databricks, BigQuery, and Redshift).

Previously, metric monitors required you to ignore the current hour's or day's worth of data. This helped ensure that the data was mature and ready for anomaly detection before the monitor sampled it, but it sometimes led to unnecessarily long time-to-detection for issues. For example, if today's data loaded at 7am, this release makes it possible to check that data shortly thereafter (e.g., 7:30am) instead of waiting until the following day.
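The difference amounts to how the sampling cutoff is computed. A hypothetical sketch (function name and logic are illustrative, not the actual product implementation):

```python
from datetime import datetime

def collection_cutoff(now: datetime, exclude_current_day: bool) -> datetime:
    """Latest timestamp of data a metric monitor will sample."""
    if exclude_current_day:
        # Old behavior: only data up to the start of today is eligible,
        # so a 7am load isn't checked until the following day.
        return now.replace(hour=0, minute=0, second=0, microsecond=0)
    # New behavior: sample right up to "now", so a 7:30am run
    # can check data that loaded at 7am.
    return now

run_time = datetime(2025, 3, 14, 7, 30)
old_cutoff = collection_cutoff(run_time, exclude_current_day=True)
new_cutoff = collection_cutoff(run_time, exclude_current_day=False)
print(new_cutoff - old_cutoff)  # 7:30:00 of data now eligible same-day
```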

We've added a Get alerts button on the Assets tab, to help users better understand where alerts on an asset are being sent.

Getting alerts in Slack, Teams, and other channels is one of the main ways users stay on top of data issues and see value from Monte Carlo. But for new users, it can be challenging to understand where alerts are being sent. The Get alerts button makes it much easier to see where notifications for that asset are being sent, so they can join those channels and stay in the loop.

In the coming months, we'll make joining relevant channels a central part of a more structured, in-product user onboarding flow. For now, this is a great start.

**Get alerts** is on the top right of an Asset page

It shows where alerts for that asset are being sent, so users can join those channels

Users can now enable monitoring for assets directly within the experience of creating a Metric, Validation, Custom SQL, Comparison, or JSON Schema monitor.

Depending on your contract structure with Monte Carlo, having monitoring enabled for an asset may be a precondition for applying other kinds of monitors (such as Metrics or Validations). Previously, users needed to navigate to the Asset page or Settings to turn on monitoring; this improvement ensures they can do so without navigating away.

If creating a monitor on an unmonitored asset, users can now enable monitoring without needing to navigate to Assets or Settings.