The section in Settings formerly called "Usage" has been renamed "Table Monitors". Users go here to manage which assets are ingested and which have table monitors applied.

This is a cosmetic change and does not impact any underlying functionality.


Users can now apply tags to ETL jobs (Airflow DAGs, Databricks workflows, Azure Data Factory pipelines, dbt jobs), just as they add tags to tables today. MC also automatically imports external job tags from Airflow DAGs, Databricks workflows, and Azure Data Factory. Importing Airflow tags requires the Python package airflow-mcd 0.3.6 or above.
This gives users the flexibility to organize their job assets the same way as tables, such as bulk-adding jobs to audiences and domains using tags.
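For Airflow, the tags you already set on a DAG are what gets imported. Here is a minimal sketch, assuming Airflow 2.4+ with airflow-mcd 0.3.6 or above installed; the DAG id, schedule, and task are illustrative:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator

# Tags defined here are what get imported as job tags
# (requires airflow-mcd 0.3.6 or above on the Airflow side).
with DAG(
    dag_id="nightly_orders_load",    # illustrative DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    tags=["finance", "tier-1"],      # imported automatically as job tags
) as dag:
    EmptyOperator(task_id="load_orders")
```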

We are removing the old "API" section from the Settings page. "API keys" now lives under the "Data and integrations" section, and "API explorer" is now under the user menu.

We've implemented some changes to how an asset looks when it is not currently enabled for monitoring by Monte Carlo. Users can now access more cards on the Summary page, open the Monitors tab, and run the Data Profiler for that table.

In addition, we've added another helpful way to get unmonitored tables enabled for monitoring. If you have access to the Usage UI, clicking "Enable" on an unmonitored table will provide a list of suggested ways to get the table enabled for monitoring. Selecting one of these options will automatically add a rule in the Usage UI and enable the table for monitoring.

We've dramatically improved the anomaly detection for "Time since last row count change" in Metric Monitors. This enables customers to do more sensitive freshness monitoring for highly segmented use cases.

The model now produces more sensitive thresholds, and generates thresholds faster as well.

For example, this would be very helpful if you have multiple versions of an app, each sending events back to the same table, and you wanted to be alerted if any version went abnormally long without sending any events.
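Conceptually, the monitor tracks how long each segment has gone without a row count change and compares that to a learned threshold. Below is a toy Python sketch of the idea, not Monte Carlo's actual model; the segment names, timestamps, and thresholds are made up:

```python
from datetime import datetime, timezone

# Illustrative only: time since the last row-count change per segment
# (e.g. per app version), compared against a per-segment threshold.
last_change_at = {
    "app_v1.2": datetime(2024, 5, 1, 8, 0, tzinfo=timezone.utc),
    "app_v1.3": datetime(2024, 5, 1, 2, 0, tzinfo=timezone.utc),
}
threshold_hours = {"app_v1.2": 6, "app_v1.3": 6}  # hypothetical learned thresholds

now = datetime.now(timezone.utc)
for segment, ts in last_change_at.items():
    hours_quiet = (now - ts).total_seconds() / 3600
    if hours_quiet > threshold_hours[segment]:
        print(f"{segment}: no new rows for {hours_quiet:.1f}h - would alert")
```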

We've improved the rules in the Usage UI to let users include/exclude tables by last activity. Users can now specify "read", "write", or "read or write" activity and choose a lookback window between 7 and 31 days.

As an example, this rule could be used to include all tables from a certain schema with recently changed data (recent write activity).
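In rough terms, such a rule behaves like the filter below. This is only an illustration of the logic; the schema name, table metadata, and 14-day lookback are hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Illustrative only: include tables from a schema with write activity
# inside a chosen lookback window (7-31 days in the UI).
lookback = timedelta(days=14)
now = datetime.now(timezone.utc)

tables = [  # hypothetical table metadata
    {"name": "analytics.orders", "last_write": now - timedelta(days=3)},
    {"name": "analytics.legacy_events", "last_write": now - timedelta(days=45)},
]

included = [
    t["name"]
    for t in tables
    if t["name"].startswith("analytics.") and now - t["last_write"] <= lookback
]
print(included)  # ['analytics.orders']
```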

For more details on the Usage UI, refer to our docs.

The settings view now has a new look! The previous tabular layout is replaced with tiles, making the UI more intuitive and easier to navigate. No functionality changes.

Following up on our recent improvements to Volume alerts, we've released a similar set of improvements for managing Freshness alerts (time since last update & time since last row count change). As with the Volume release, we're rolling this out gradually over the next few weeks, so it is currently available to only a subset of customers.

Specific improvements:

  • Freshness thresholds no longer automatically widen after an anomaly. Instead, anomalies are excluded by default from the set of training data. If users don't want to be alerted to similar anomalies in the future, they can 'Mark as normal'. This will re-introduce the anomalous data point to the training set and widen the threshold.
  • Users can 'select training data' directly in the freshness chart. This gives the user full control over which data is used to train the model, without needing to navigate to Settings to create Exclusion Windows.

With this change, all anomaly detection in Table monitors (Freshness & Volume) now behaves the same way. Anomalies are excluded by default from the data set that trains thresholds, and users can then choose to add them back in if they'd like to widen the threshold.
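As a rough illustration of this behavior (not Monte Carlo's actual detection logic), excluding anomalies keeps the learned threshold tight until a point is explicitly marked as normal:

```python
# Illustrative only: anomalous points are left out of threshold training
# unless the user marks them as normal, which re-includes them and widens
# the threshold.
observations = [60, 62, 58, 61, 240, 59]  # minutes since last update; 240 was flagged
anomalies = {240}
marked_normal = set()                      # add 240 here after "Mark as normal"

training = [x for x in observations if x not in anomalies or x in marked_normal]
threshold = max(training) * 1.5            # toy threshold rule for illustration
print(threshold)  # 93.0 if 240 stays excluded; 360.0 once it is marked normal
```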

Read more about interacting with our anomaly detection.


Anomalies are now excluded from the set of training data by default, so that thresholds don't widen.

If the user does not want to be alerted to similar anomalies in the future, they can "Mark as normal" to re-introduce the anomaly to the set of training data.


Easily exclude periods of undesirable behavior from the set of training data.


In addition to the existing workflow of sending alerts to ServiceNow via audiences, users can now minimize alert noise within their ServiceNow environments by manually reviewing alerts before forwarding them, or by linking alerts to existing ServiceNow incidents.