Comparison Rules now support segmentation; full details are in the docs. Previously, users could create comparisons such as:

  • Count of rows: to compare counts, like the count of yesterday's orders
  • Single value: to compare metrics, like the sum of revenue from yesterday's orders

Now, you can also do:

  • Segmented values: to compare segmented metrics, like the sum of revenue by product from yesterday's orders (limited to 100 segments)
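
For intuition, a segmented comparison evaluates the rule once per segment value. Below is a minimal, purely illustrative Python sketch of that per-segment check; the data, the mismatch condition, and the handling of the 100-segment cap are our assumptions, not Monte Carlo's implementation:

```python
# Conceptual sketch of a segmented comparison: compare a metric
# (sum of revenue) segmented by product between a source and a target.
# Illustrative only -- Monte Carlo runs this server-side as a monitor.

MAX_SEGMENTS = 100  # segmented comparisons are limited to 100 segments

def compare_segments(source: dict[str, float], target: dict[str, float]) -> list[str]:
    """Return the segments whose values differ between source and target."""
    segments = sorted(set(source) | set(target))[:MAX_SEGMENTS]
    return [
        product for product in segments
        if source.get(product) != target.get(product)
    ]

# e.g. each dict maps product -> SUM(revenue) from yesterday's orders
source = {"widget": 1200.0, "gadget": 830.5}
target = {"widget": 1200.0, "gadget": 810.5, "gizmo": 42.0}
print(compare_segments(source, target))  # ['gadget', 'gizmo']
```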

Today we are announcing Monte Carlo's foray into data observability for Azure Synapse, starting with the ability to run SQL Rules and detect Volume anomalies. SQL Rules can be created either in the UI or programmatically via Monitors-as-Code (and via the API/SDK). These monitors can be used to send notifications to relevant stakeholders, circuit-break pipelines, and conduct RCA (e.g. via sampling). Asset entries are also created in Monte Carlo, and Custom Lineage APIs are available to write lineage from these databases to your data warehouses.
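
For the programmatic path, here is a rough sketch using pycarlo, Monte Carlo's Python SDK. The mutation and argument names below are illustrative assumptions, not the exact schema; check the API reference before using them:

```python
# Sketch: creating a SQL Rule against a Synapse warehouse via pycarlo.
# The mutation name and arguments are illustrative assumptions -- check
# the Monte Carlo API reference for the exact schema.
from pycarlo.core import Client

client = Client()  # reads the credentials set up via `montecarlo configure`

response = client(
    """
    mutation {
      createOrUpdateCustomRule(   # hypothetical mutation name
        description: "No negative order totals",
        customSql: "SELECT * FROM dbo.orders WHERE order_total < 0"
      ) {
        customRule { uuid }
      }
    }
    """
)
print(response)
```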

Today, Azure Synapse Dedicated SQL Pools (formerly SQL DW) are supported, but we are looking to expand support. If you require additional Azure support, please fill out the form here.

Users can now create rules that run two separate queries against different databases (or the same database) and compare the results. The most common use case is verifying that a sync of data from a source database to a different target database has been successful (e.g. SQL Server to Snowflake).
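
Conceptually, such a rule boils down to two independent queries whose results are compared. Here is a rough, illustrative Python equivalent of the SQL Server to Snowflake case; the connection details and table names are placeholders, and the actual rule runs inside Monte Carlo:

```python
# Rough equivalent of a cross-database comparison rule: count rows in the
# source (SQL Server) and the target (Snowflake) and flag any mismatch.
import pyodbc              # SQL Server driver
import snowflake.connector

def scalar(cursor, sql: str):
    """Run a query that returns a single value and fetch it."""
    cursor.execute(sql)
    return cursor.fetchone()[0]

# Placeholder connections -- fill in your own DSN / account details.
source = pyodbc.connect("DSN=sqlserver_dsn").cursor()
target = snowflake.connector.connect(account="...", user="...", password="...").cursor()

source_count = scalar(source, "SELECT COUNT(*) FROM dbo.orders")
target_count = scalar(target, "SELECT COUNT(*) FROM analytics.orders")

if source_count != target_count:
    print(f"Sync mismatch: source={source_count}, target={target_count}")
```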

Create these by selecting Comparison Rules in the monitor menu, or click here.

More detail is available in our docs.

With GetAccountAuditLog via the API, you can now retrieve granular user activity logs from your Monte Carlo environment. This feature is designed specifically for Security & IT teams that need to extract and retain these logs for SIEM (Security Information and Event Management) purposes.
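
As a sketch, the call could look like the following with pycarlo. The query name comes from this release, but the argument and field names shown are assumptions, so consult the API reference:

```python
# Sketch: pulling audit log events for SIEM ingestion via the
# getAccountAuditLog query. The argument and returned fields below are
# assumptions drawn from the feature description -- verify in the API docs.
from pycarlo.core import Client

client = Client()
response = client(
    """
    query {
      getAccountAuditLog(startTime: "2024-01-01T00:00:00Z") {
        timestamp
        email
        action
      }
    }
    """
)
print(response)
```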

From the /monitors tab, users can now select multiple monitors at once and then apply an audience to them. This simplifies the process of adding a new audience to many monitors, making it easier for you to route notifications to the right place.

Today, we are releasing Performance Monitors to the Monte Carlo UI, one of the most popular requests since the Performance Dashboard's creation! With this feature you'll be able to set up monitors that solve use cases such as:

  • Getting alerted if any query from a warehouse/user/database has a runtime that is X% longer than normal (see the sketch after this list)
  • Getting alerted if any query from a warehouse/user/database has a runtime that is longer than an absolute threshold
  • Getting alerted if an important query failed
  • And many more!
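
For illustration, the first two use cases reduce to a simple threshold check on a query's runtime. A minimal sketch, where the baseline and both thresholds stand in for whatever you configure in the monitor:

```python
# Sketch of the two runtime alert conditions: relative (X% over the
# query's normal runtime) and absolute (a fixed ceiling in seconds).
def breaches(runtime_s: float, baseline_s: float,
             pct_over: float = 50.0, absolute_s: float = 600.0) -> bool:
    relative_breach = runtime_s > baseline_s * (1 + pct_over / 100)
    absolute_breach = runtime_s > absolute_s
    return relative_breach or absolute_breach

print(breaches(runtime_s=90, baseline_s=50))  # True: 80% over baseline
print(breaches(runtime_s=55, baseline_s=50))  # False: only 10% over
```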

If a user is making a change to a custom monitor that might significantly alter its results, the user is now asked whether or not to retrain their anomaly detection models. Previously, Monte Carlo would automatically begin a retrain, and in some cases this was not desirable (e.g. after a small change to the filter).

Users are only prompted with this modal when making a significant change, such as editing the SQL, the schedule, segmentation, or variables. Editing a field like notes, description, or audience will not trigger it.

Today we are enabling the ability to store Monte Carlo sampling data in your Azure cloud. This new deployment option places the Object Storage bucket directly in your own cloud environment, and joins the existing options for AWS (S3) and Google Cloud (GCS).

More information on Deployment Options is now available in the documentation.

Today we are announcing Monte Carlo's foray into data observability for Oracle, starting with the ability to run SQL Rules and detect Volume anomalies. SQL Rules can be created either in the UI or programmatically via Monitors-as-Code (and via the API/SDK). These monitors can be used to send notifications to relevant stakeholders, circuit-break pipelines, and conduct RCA (e.g. via sampling). Asset entries are also created in Monte Carlo, and Custom Lineage APIs are available to write lineage from these databases to your data warehouses.
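
As with our other integrations, lineage can be written programmatically. Here is a sketch with pycarlo; the mutation and argument names are assumptions based on the Custom Lineage API description, so verify them against the API reference:

```python
# Sketch: writing a lineage edge from an Oracle table to a warehouse table
# with the Custom Lineage API. Mutation and argument names are assumptions
# based on the feature description -- verify against the API reference.
from pycarlo.core import Client

client = Client()
response = client(
    """
    mutation {
      createOrUpdateLineageEdge(   # hypothetical mutation name
        source: {objectId: "ORCL.SALES.ORDERS", objectType: "table", resourceName: "oracle-prod"},
        destination: {objectId: "analytics.orders", objectType: "table", resourceName: "snowflake-prod"}
      ) {
        edge { edgeId }
      }
    }
    """
)
print(response)
```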