For dbt test failures and warnings, the incident details view now includes
i) the compiled SQL code,
ii) the ability to run that SQL and return the rows,
iii) the ability to edit those queries for ad hoc investigations.
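For illustration, a compiled dbt unique test resolves to a query along these lines; the schema and column names below are examples rather than output from your project:

select
    order_id as unique_field,
    count(*) as n_records
from analytics.orders
where order_id is not null
group by order_id
having count(*) > 1

Running this query from the incident returns the failing rows, and editing it lets you add joins or filters while investigating.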

Metric monitors now allow users to define a custom metric using SQL. This allows users to create metrics such as:
APPROX_PERCENTILE(order_amount, 0.99)
AVG(price + tax_collected)
Currently, only manual thresholds are supported for custom metrics. See our docs to learn more.
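As a sketch of how such an expression is used (the table name is illustrative, and the exact query Monte Carlo issues may differ), the custom metric is evaluated as an aggregation over the monitored table:

-- Illustrative evaluation of a custom metric expression over the monitored table.
SELECT APPROX_PERCENTILE(order_amount, 0.99) AS p99_order_amount
FROM orders;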
Duplicate (count) is now available within Metric monitors. This metric is a useful alternative to the Unique (%) metric, particularly if you are looking to ensure a column is always 100% unique. On large tables, a small number of duplicates can cause 100% unique to become 99.99999% unique, which can be hard to visualize in charts and can sometimes be rounded off.

Duplicate (count) is a much easier way to visualize uniqueness: if a duplicate emerges, the metric goes from 0 to 1 instead of from 100% to 99.99999% unique.

Duplicate (count) is calculated as the difference between the total number of rows with non-null values and the total number of distinct non-null values.
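As a rough sketch, that calculation corresponds to the following SQL, where the table and column names are illustrative rather than the exact query Monte Carlo runs:

-- COUNT(order_id) counts non-null rows; COUNT(DISTINCT order_id) counts distinct non-null values.
SELECT COUNT(order_id) - COUNT(DISTINCT order_id) AS duplicate_count
FROM orders;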
Check out our full list of available metrics.
Automated freshness and volume monitoring are now available for SAP HANA. See docs for more details.
We have revamped the account onboarding to streamline getting started with data observability for your organization. Think of it as a fresh coat of paint, but the engine got a big tune up too.
Monte Carlo has identified an issue with custom monitors that use Dynamic schedules on VIEW table types. Currently, Dynamic schedules require freshness information to function correctly, which isn't available for VIEW tables. As a result, these monitors have defaulted to running every 48 hours. To ensure consistent monitoring and to avoid future issues, we will automatically update all affected monitors to fixed schedules with a 48-hour interval, starting April 24th. Additionally, we will implement a new validation rule to prevent Dynamic schedules from being applied to VIEW tables.

If you manage these monitors as code, they will need to be moved to a fixed schedule due to the new validation rule. If you attempt to apply without having changed any of these monitors, it will still work! However, once one of these monitors is edited, if the schedule is not also modified, the apply will throw an error. We recommend updating your monitors to fixed schedules to avoid disruption. If you have any questions about this change or would like a list of monitors impacted by this change for your account, please reach out to our support team at [email protected].
This can be used in situations like “Alert if the mean of sale_price is not between 500 and 1,000.”
The behavior here is not inclusive, meaning the above example could also be read as “Alert me if the mean of sale_price is <500 or >1000.”
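To make the boundary behavior concrete, the condition is equivalent to the SQL sketch below (table and column names are illustrative); a mean of exactly 500 or 1,000 would not trigger an alert:

-- Alert when the mean falls strictly outside the 500-1,000 range.
SELECT AVG(sale_price) AS mean_sale_price
FROM sales
HAVING AVG(sale_price) < 500 OR AVG(sale_price) > 1000;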
When using parameterized values in SQL Rules, you can now pass values from up to 5 fields in sequence. The previous limit was 3. This allows users to include more context about breached rows in their notifications than they could before.
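For example, a hypothetical SQL Rule like the one below returns breached rows, and values from up to five of the selected fields can now be referenced in the notification (the table and field names are illustrative):

-- Each returned row is a breach; the selected fields supply context for the notification.
SELECT order_id, customer_id, order_status, order_amount, updated_at
FROM analytics.orders
WHERE order_amount < 0;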
To make it simpler to create Metric monitors, we've shipped the option to "Recommend configuration". When the button is clicked, Monte Carlo will run some queries on the table in order to recommend settings for the Aggregation and Set schedule sections of the monitor. Configuring these sections properly is important in order for the monitor to do effective anomaly detection.
To learn more, see our documentation.
Defining a Data Product now tags all assets and their upstream tables with a tag in Monte Carlo. This tag can be monitored in the Usage UI to ensure monitoring coverage for that Data Product.
This provides a better way to define the "use cases", "workloads", and "workflows" that deliver a collection of trusted data. Once a Data Product is defined, all related upstream tables are automatically tagged by Monte Carlo and can be monitored in the Usage UI, ensuring full monitoring coverage for the Data Product. This allows you to focus on monitoring the larger "workloads" that matter to your business, rather than hunting for individual tables to include in monitoring one at a time.
How does it work?
See a full video walkthrough: