AI Security & Governance
Overview
To ensure AI-powered features are safe, secure, and privacy-respecting, we have implemented industry-standard security controls aligned with frameworks such as the NIST AI Risk Management Framework and ISO/IEC 42001.
Our AI features are subject to the same platform-wide security controls described throughout this documentation.
Governance & Risk Management
Monte Carlo maintains a documented inventory of all third-party AI models integrated into our solutions, including use-case purpose, data flow, and vendor details (a sketch of one such record follows this section).
Governance procedures assign clear accountability for AI security and privacy at every stage—implementation, operation, and incident response.
AI projects are guided by established governance frameworks. We set clear accountability for AI development and deployment, including formal review of new features by technical, security, and compliance teams before launch. Policies are regularly updated to incorporate emerging best practices and legal requirements.
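For illustration only, a model-inventory record of the kind described above might capture fields like the following. This is a minimal sketch; the `ModelInventoryEntry` class and its field names are hypothetical, not Monte Carlo's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryEntry:
    """One record in a third-party AI model inventory (hypothetical schema)."""
    model_name: str                  # vendor's model identifier
    vendor: str                      # third-party provider
    use_case: str                    # business purpose the model serves
    data_flow: str                   # what data is sent, and where it goes
    data_categories: list[str] = field(default_factory=list)
    reviewed_on: date | None = None  # last security/compliance review

inventory = [
    ModelInventoryEntry(
        model_name="example-llm-v1",
        vendor="ExampleVendor",
        use_case="Summarize incident context for alerts",
        data_flow="Pipeline metadata only; no raw customer records",
        data_categories=["pipeline metadata"],
        reviewed_on=date(2024, 1, 15),
    ),
]
```

Keeping the inventory as structured records rather than free-form notes makes it straightforward to audit which vendors receive which data categories.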
Model Development and Evaluation
We build AI solutions starting with carefully defined business objectives and use only data sources vetted for quality and relevance. Model performance is independently validated with robust testing for accuracy, reliability, and resilience against adversarial scenarios. We monitor for potential bias and address findings before features reach production.
Transparency and Explainability
We strive to ensure that AI system behavior is understandable and interpretable. Users are informed where and how AI is used in the platform, and explanations are provided for key outcomes where practical. Documentation of model logic and training processes is maintained for internal review and audit.
Data Protection & Privacy
Monte Carlo encrypts data at rest and in transit before any interaction with external AI services.
Only the minimum necessary data is sent for processing, with sensitive identifiers and confidential records anonymized wherever possible (see the sketch at the end of this section).
Monte Carlo does not use customer data or AI prompts to train external models; contractual restrictions with AI partners enforce this.
All data used for AI model development and operations is processed in secure, encrypted environments. Personal and sensitive information is protected throughout, in compliance with data privacy regulations such as GDPR and the EU-U.S. Data Privacy Framework. Access to training and operational data is tightly restricted and logged.
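As a concrete illustration of the data-minimization step described above, the sketch below pseudonymizes common identifier patterns before any text is included in a prompt. The function names (`pseudonymize`, `build_minimal_prompt`) and the regex patterns are hypothetical examples, not Monte Carlo's actual redaction pipeline.

```python
import hashlib
import re

# Patterns for identifiers that should never leave the platform.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def pseudonymize(text: str) -> str:
    """Replace sensitive identifiers with stable, non-reversible tokens."""
    def token(match: re.Match) -> str:
        digest = hashlib.sha256(match.group().encode()).hexdigest()[:8]
        return f"<redacted:{digest}>"
    return SSN_RE.sub(token, EMAIL_RE.sub(token, text))

def build_minimal_prompt(incident_summary: str) -> str:
    """Send only the fields the model needs, pseudonymized first."""
    return f"Summarize this data-quality incident:\n{pseudonymize(incident_summary)}"

print(build_minimal_prompt("Table orders failed; owner jane.doe@example.com"))
```

Hashing rather than deleting identifiers keeps repeated references consistent within a prompt while remaining non-reversible.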
Data Isolation
Strong isolation ensures that AI pipeline metadata from one customer is never commingled with another’s.
Customer data remains the property of the customer; Monte Carlo only accesses data as needed for observability.
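The isolation model described above can be sketched as tenant-scoped storage, where every read and write is keyed by customer so no cross-tenant access path exists. This toy `TenantScopedStore` is a hypothetical illustration of the principle, not Monte Carlo's implementation.

```python
class TenantScopedStore:
    """Toy key-value store where every key is namespaced by tenant ID,
    so a query scoped to one tenant can never return another tenant's rows."""

    def __init__(self) -> None:
        self._rows: dict[tuple[str, str], str] = {}

    def put(self, tenant_id: str, key: str, value: str) -> None:
        self._rows[(tenant_id, key)] = value

    def get(self, tenant_id: str, key: str) -> str | None:
        # Lookups always include the tenant ID; there is no cross-tenant path.
        return self._rows.get((tenant_id, key))

store = TenantScopedStore()
store.put("customer-a", "pipeline:orders", "last_run=ok")
assert store.get("customer-b", "pipeline:orders") is None  # isolated by construction
```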
Access Controls
Role-based and attribute-based access policies restrict who can interact with AI features and underlying data (illustrated below).
Multi-factor authentication (MFA) is required for both administrative and operational access to AI systems and model APIs.
All model access logs and AI decision logs are monitored.
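A minimal sketch of how role-based and attribute-based checks combine for AI endpoints follows. The `Principal` class, `ROLE_POLICY` table, and role names are hypothetical; they illustrate the pattern of requiring both an allowed role (RBAC) and a verified MFA session attribute (ABAC).

```python
from dataclasses import dataclass

@dataclass
class Principal:
    user_id: str
    roles: set[str]
    mfa_verified: bool

# Which roles may perform which AI-related actions (hypothetical policy).
ROLE_POLICY = {
    "invoke_model": {"ai:user", "ai:operator", "ai:admin"},
    "manage_keys":  {"ai:admin"},
}

def can_access(principal: Principal, action: str) -> bool:
    """RBAC: the caller needs a role permitted for the action.
    ABAC: the caller's session attributes (here, MFA) must also check out."""
    allowed = ROLE_POLICY.get(action, set())
    return principal.mfa_verified and bool(principal.roles & allowed)

ops = Principal("u-123", roles={"ai:operator"}, mfa_verified=True)
print(can_access(ops, "invoke_model"))  # True
print(can_access(ops, "manage_keys"))   # False: role check fails
print(can_access(Principal("u-456", {"ai:admin"}, mfa_verified=False),
                 "manage_keys"))        # False: MFA attribute fails
```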
Secure Model Interaction
Prompt and input validation is enforced for all AI features to prevent prompt injection, adversarial manipulation, and data leakage.
API keys and credentials are scoped per environment and rotated regularly, with secrets management tools ensuring safe storage and access.
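To make the input-validation point above concrete, here is a simplified sketch of pre-model input screening. The size limit and injection patterns are illustrative heuristics only; real defenses layer checks like these with output filtering and least-privilege tool access, and this is not Monte Carlo's actual validation logic.

```python
import re

MAX_PROMPT_CHARS = 4000
# Naive patterns that often indicate prompt-injection attempts.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
]

def validate_user_input(text: str) -> str:
    """Reject oversized or obviously adversarial input before it reaches a model."""
    if len(text) > MAX_PROMPT_CHARS:
        raise ValueError("input exceeds maximum prompt length")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(text):
            raise ValueError("input matched an injection heuristic")
    return text
```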
Monitoring & Incident Response
Monte Carlo continuously monitors all AI model activity, API calls, and outbound data for unusual behavior or security events.
Automated anomaly detection flags suspicious usage for investigation by our security team (a simplified example follows this section).
Monte Carlo's incident response plan includes procedures for assessing, containing, and providing notification of breaches involving third-party AI vendors.
Once deployed, AI models are subject to continuous monitoring for drift, unexpected outcomes, and vulnerabilities. Automated alerts and manual review processes are in place to catch anomalies early. Incident response procedures include rapid rollback and root-cause analysis. Updates and patches are applied promptly to address new risks.
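One simple form of the automated anomaly detection mentioned above is a statistical baseline test on usage volume. The sketch below flags an hour whose model API call count deviates sharply from recent history; the function name, window, and z-score threshold are illustrative assumptions, not the production detection logic.

```python
from statistics import mean, stdev

def flag_anomalous_volume(hourly_calls: list[int], threshold: float = 3.0) -> bool:
    """Flag the latest hour if its call volume deviates from the recent
    baseline by more than `threshold` standard deviations (z-score test)."""
    *baseline, latest = hourly_calls
    if len(baseline) < 2:
        return False  # not enough history to establish a baseline
    sigma = stdev(baseline)
    if sigma == 0:
        return latest != baseline[0]
    return abs(latest - mean(baseline)) / sigma > threshold

# A sudden spike in model API calls would be flagged for review:
print(flag_anomalous_volume([100, 110, 95, 105, 98, 480]))  # True
```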
Ongoing Improvement
We regularly review and update our AI risk management protocols in response to advancements in technology, regulatory changes, and customer feedback. Audit logs, performance metrics, and user reports are leveraged to identify and mitigate emerging risks.
Compliance
All model integrations comply with data protection regulations such as GDPR and CCPA, as applicable to customer geography and sector.
External security audits and penetration tests are conducted annually, and findings are remediated within published SLAs.
Customers can request reports of AI system interactions and data handling for review and compliance.
Subprocessors and Model Vendors
Cloud infrastructure providers may be used to process and store AI pipeline data, with strict Data Protection Agreements (DPAs) in place.
Subprocessors do not use AI pipeline metadata for their own model training or service improvement.