AI Governance

At Anecdotes, we believe that AI should empower users without ever compromising trust, privacy, or security.

As we integrate AI capabilities into our platform, we follow three foundational commitments:

  1. No Customer Data for Model Training: We do not use customer data, metadata, or any other proprietary or user-generated information to train, fine-tune, or otherwise improve underlying large language models (LLMs).
  2. Use of Trusted Cloud AI Infrastructure: We leverage Google Vertex AI to enable AI functionality within our product. Vertex AI provides enterprise-grade security, privacy controls, and compliance certifications aligned with international standards.
  3. Security by Design: AI features are architected under the same robust security, privacy, and compliance practices that govern all other parts of our platform.

Our Approach to AI

We use AI to enhance user experiences in ways that are safe, transparent, and secure. Based on each customer's selections, interactions generate structured prompts that are securely sent to Google Vertex AI. Importantly, we never send personal data, free-form text, or sensitive information.
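As a hedged illustration of what "structured prompts" can mean in practice, the sketch below serializes only whitelisted, user-selected fields into a fixed template and drops everything else. The field names, template, and whitelist are assumptions for this example, not Anecdotes' actual schema.

```python
import json

# Hypothetical whitelist of user-selectable fields; illustrative only.
ALLOWED_FIELDS = {"framework", "control_id", "evidence_type"}

def build_prompt(selections: dict) -> str:
    """Serialize only whitelisted, user-selected fields into a fixed template.

    Free-form text and unrecognized keys are dropped before anything
    is sent to the AI service.
    """
    safe = {k: v for k, v in selections.items() if k in ALLOWED_FIELDS}
    return json.dumps({"task": "summarize_control", "inputs": safe}, sort_keys=True)

prompt = build_prompt({
    "framework": "SOC 2",
    "control_id": "CC6.1",
    "free_text_notes": "should never be sent",  # silently excluded by the whitelist
})
```

The key design point is that the prompt is assembled from a closed set of fields, so no free-form or personal data can reach the model.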

AI features support our customers by:

  • Streamlining workflows.
  • Summarising and interpreting user-selected information.
  • Providing predictive insights where appropriate.

Protecting Your Data

  • No Training with Customer Data: Neither Anecdotes nor Google uses our customer data to train or improve AI models.
  • Data Minimisation: Only the minimum, structured information necessary is sent to AI services.
  • End-to-End Encryption: Data is encrypted during transmission and storage.

Built-In Security at Every Step

  • Secure API Connections: All AI service communications are authenticated and monitored.
  • Input and Output Controls: Prompts are pre-engineered by our research team, and the AI responses are filtered for security and quality.
  • Monitoring and Response: AI activities are continuously logged, monitored, and reviewed by our security team.
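To make the output-control idea above concrete, here is a minimal sketch of a response filter that rejects replies failing basic security and quality checks. The blocked patterns and length threshold are assumptions for illustration, not the platform's actual rules.

```python
import re

# Illustrative deny-list: a credential-like token and a US SSN pattern.
# These patterns and the max length are assumptions, not real policy.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)api[_-]?key"),          # credential-looking text
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # SSN-shaped numbers
]

def filter_response(text, max_len=2000):
    """Return the AI response if it passes the checks, otherwise None."""
    if not text or len(text) > max_len:
        return None  # quality check: empty or oversized output
    if any(p.search(text) for p in BLOCKED_PATTERNS):
        return None  # security check: suspicious content
    return text

clean = filter_response("All selected controls look complete.")
blocked = filter_response("debug dump: api_key=abc123")
```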

Compliance Matters

  • ISO/IEC 42001 (AI Management System) Aligned: We manage AI risks following international best practices, including impact assessments and risk mitigation frameworks.
  • NIST AI Risk Management Framework (AI RMF) Adopted: Our AI governance covers system mapping, continuous measurement, and stakeholder risk management.
  • Third-Party Risk Management: Google Vertex AI undergoes rigorous and ongoing third-party risk evaluations.

Our Commitment to Responsible AI

We operate a formal AI Governance Framework to:

  • Ensure AI-driven features are clearly disclosed and documented.
  • Ensure human oversight over AI-driven interactions.
  • Train our teams on responsible AI use.

Moving Forward Together

Our customers trust us to handle their data and AI interactions responsibly. We take that responsibility seriously, and we are committed to providing AI capabilities that are secure, ethical, and aligned with our values.

FAQs

Does Anecdotes host, train, or fine-tune AI models with customer data?
No. Anecdotes' platform does not host, train, or fine-tune AI models with customer data or any other data; our platform's AI features are powered by Google Vertex AI.

How is traffic to and from the AI service secured?
Anecdotes does not host or train AI models with customer data or any other data. HTTPS traffic to and from Google Vertex AI is encrypted, with TLS 1.2 or above enforced between all components.
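As a sketch of what "TLS 1.2 or above enforced" looks like on the client side, the snippet below builds a TLS context that refuses anything older than TLS 1.2. This is a generic Python illustration, not Anecdotes' actual connection code.

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """Create a client TLS context that enforces TLS 1.2 as the floor."""
    ctx = ssl.create_default_context()              # verifies certificates by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2    # reject TLS 1.0/1.1 handshakes
    return ctx

ctx = make_tls_context()
```

A context configured this way aborts the handshake if the peer cannot negotiate at least TLS 1.2, which is the enforcement behavior described above.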

How is customer data handled within the AI pipelines?
At present, the customer bucket is the only storage used by the AI pipeline, serving as both source and destination. To maintain completeness, integrity, accuracy, and traceability, Anecdotes does not modify any customer data provided as input to our AI pipelines.
Anecdotes does not use customer data to train AI models, and no data is leaked or exposed externally. Any output generated is stored securely within the customers' own storage buckets, not in our systems.
The risk of non-compliance with applicable regulations is mitigated by not sending or using customer data to train the AI model.

In which region is the AI service hosted, and where is the DPA?
The selected region is GCP us-central1, and Anecdotes' DPA can be accessed at https://www.anecdotes.ai/data-processing-exhibit

How do you address potential bias in AI models?
We leverage Google's Vertex AI models for key tasks like vector search, text generation, and embedding creation. While these models benefit from Google's rigorous bias evaluation processes (responsible AI practices across their model lifecycle), we recognize that bias can still manifest depending on specific use cases and data contexts. Here's how we handle it:
1. Task-Specific Bias Evaluation
Our team conducts manual reviews of model outputs during development to identify patterns of bias or poor performance that automated metrics might miss.
2. Mitigation Strategies
We implement automated post-processing to validate and adjust outputs, ensuring they are coherent, accurate, and aligned with intended use cases.
Human-in-the-loop review is integrated into our development cycle to catch and correct any unexpected behaviors or biases.
3. Customer Collaboration and Opt-In Control
Our AI features are offered only to opt-in customers, ensuring they are fully informed and aligned with the deployment.

Who defines the prompts and system instructions?
All prompts and system instructions are human-predefined, and human-in-the-loop review is integrated into our development cycle to catch and correct any unexpected behaviors or biases.

How do you evaluate AI output quality and fairness?
We actively assess performance using labeled data. Labeled datasets and manual labeling processes help us evaluate outputs for fairness, correctness, and reliability across diverse scenarios. Our team conducts manual reviews of model outputs during development to identify patterns of bias or poor performance that automated metrics might miss.
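As a toy illustration of evaluating labeled outputs across diverse scenarios, the sketch below computes per-scenario accuracy from (scenario, predicted, expected) triples. The data and scenario names are fabricated for the example and do not reflect Anecdotes' actual datasets.

```python
from collections import defaultdict

def accuracy_by_scenario(records):
    """Compute accuracy per scenario from (scenario, predicted, expected) triples.

    Reporting per scenario (rather than one global score) is what lets
    evaluation surface uneven performance across groups of inputs.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for scenario, predicted, expected in records:
        totals[scenario] += 1
        hits[scenario] += int(predicted == expected)
    return {s: hits[s] / totals[s] for s in totals}

scores = accuracy_by_scenario([
    ("policy_summary", "pass", "pass"),
    ("policy_summary", "fail", "pass"),   # one miss in this scenario
    ("risk_mapping", "high", "high"),
])
```

Here `scores` shows 0.5 accuracy for `policy_summary` and 1.0 for `risk_mapping`, flagging the weaker scenario for manual review.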

How are AI outputs validated?
We implement automated post-processing to validate and adjust outputs, ensuring they are coherent, accurate, and aligned with intended use cases. Human-in-the-loop review is integrated into our development cycle to catch and correct any unexpected behaviors or biases.

How do customers enable or disable AI features?
AI features offered by Anecdotes' platform are available within the relevant module; customers can opt in to or out of these AI-powered features in the settings using an on/off toggle.

What happens if an AI feature fails?
AI-powered features are opt-in add-ons, so if a feature fails, no suggestions are presented; this does not affect the core platform or the module in which the AI feature was enabled.
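The fail-safe behavior described above can be sketched as a simple fallback: if the AI call fails, the module returns no suggestions and keeps working. The `call_ai_service` function and payload shape are hypothetical names for this illustration.

```python
def get_suggestions(call_ai_service, payload):
    """Return AI suggestions, or an empty list if the AI service fails.

    The except-all fallback means an AI outage degrades to 'no suggestions'
    rather than breaking the module that enabled the feature.
    """
    try:
        return call_ai_service(payload)
    except Exception:
        return []  # degrade gracefully; the core module is unaffected

def broken_service(_payload):
    # Stand-in for an unavailable AI backend.
    raise TimeoutError("AI backend unavailable")

suggestions = get_suggestions(broken_service, {"module": "audit"})
```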

Where is AI-generated data stored, and what happens when a contract ends?
All AI-generated and processed data is stored within the dedicated customer storage; upon customer contract termination, customer data is purged permanently.

What AI activity is logged and monitored?
Logging and monitoring includes, but is not limited to, rejection rate, accuracy drops, and feature usage.
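As a small illustration of one monitored metric, the sketch below computes a rejection rate from a stream of events. The event shape (a boolean `rejected` flag per event) is an assumption for the example, not the platform's actual log format.

```python
def rejection_rate(events):
    """Fraction of events flagged as rejected; 0.0 for an empty stream.

    A rising rejection rate is the kind of signal that would prompt
    a review of prompt templates or output filters.
    """
    if not events:
        return 0.0
    return sum(1 for e in events if e["rejected"]) / len(events)

rate = rejection_rate([
    {"rejected": True},
    {"rejected": False},
    {"rejected": False},
    {"rejected": True},
])
```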

Does Anecdotes have a formal AI governance framework?
Yes. Anecdotes has an AI Governance and Security Framework in place which aligns with ISO/IEC 42001, ISO/IEC 27001, and the NIST AI RMF.