AI Privacy
Monte Carlo’s use of AI is designed to maintain privacy, security, and customer control over data. AI features within the platform rely primarily on patterns from metadata, configuration information, and system-level signals to support anomaly detection, root cause analysis, and recommendations. At no point does Monte Carlo’s AI training process access, store, or use raw customer data.
The guiding principle is that customers remain in full control of their data. Any AI-driven analysis operates strictly within the scope of what customers configure for monitoring, and the outputs are never shared across organizations.
Key Privacy Practices
Customer-specific boundaries
AI functionality applies only to a customer’s own environment. Each organization’s monitored data sources, assets, and associated signals are isolated from other customers.
No training on customer data content
AI models are not trained on customer data values. Instead, they learn from patterns of anomalies and operational signals at an abstract level.
Secure data handling
Data used for AI-driven monitoring remains subject to the same encryption, permissions, and controls applied to all other aspects of the Monte Carlo platform.
Transparency and control
Customers configure what assets are monitored and have visibility into how AI is applied to those assets.