Ensuring Your Data is Secure with AI Language Models
Last Modified: October 1, 2024
As we integrate advanced Large Language Models (LLMs), like Google Gemini, into our services, we understand your primary concern: Can these systems be trusted with my data? Our commitment is to prioritize your data privacy and security while leveraging the best technologies to serve your needs.
This page outlines how we handle data when using LLMs, starting with general best practices for all models and then diving into specific practices for Google Gemini. We aim to provide clarity, transparency, and assurance.
Our Commitment to Data Privacy Across All LLMs
1. General Practices for AI Data Protection
Your Data is Yours:
We ensure that data you share remains under your control. No data is used to train or improve the AI models unless explicitly permitted.
Clear and Transparent Use:
We clearly define how data is processed, stored, and retained. Inputs such as text, files, or prompts are handled only for the purpose of fulfilling the task.
Minimal Retention Policy:
Any data processed by an LLM is cached only temporarily to complete your requests and is not stored permanently unless explicitly required.
Security First:
We apply industry-leading security measures, including encryption, access controls, and regular audits, to protect data during processing.
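To illustrate the minimal-retention principle above, here is a hypothetical sketch (not our production code) of a request cache with a time-to-live: once the TTL elapses, the cached entry is deleted and nothing persists. The class name, keys, and TTL value are illustrative assumptions.

```python
import time

class TTLCache:
    """Illustrative temporary cache: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def put(self, key, value):
        # Record the value together with the moment it must be discarded.
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expiry = item
        if time.monotonic() >= expiry:
            # Entry has expired: delete it so nothing outlives the TTL.
            del self._store[key]
            return None
        return value

# Usage: cache a result briefly; after the TTL it is gone.
cache = TTLCache(ttl_seconds=0.05)
cache.put("session-123", "model output")
assert cache.get("session-123") == "model output"
time.sleep(0.06)
assert cache.get("session-123") is None  # evicted after the TTL
```

In practice, the same idea applies regardless of the storage layer: data kept only to fulfill a request is bound to an expiry and removed once that window closes.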
2. Adherence to Global Standards
We work with AI providers that meet rigorous compliance requirements, such as SOC 2, GDPR, and ISO 27001. This ensures that data handling aligns with international security and privacy standards.
3. Transparency in Partnerships
When we use third-party models like Google Gemini or others, we disclose their role in data processing and provide details about their specific policies.
How We Handle Data With Google Gemini
Specific Practices for Google Gemini LLM
Google Gemini is one of the LLMs we use to provide advanced AI capabilities. Google’s policies on data handling reflect its dedication to privacy and security:
Data is Not Used to Train AI Without Permission:
Google guarantees that your data will not be used to train or fine-tune AI models unless you explicitly agree.
Strict Data Processing Terms:
Customer prompts, files, and outputs are processed only for the duration of your session, and in paid services your data is not used to improve Google's products.
Global Data Protection:
Data may be cached or processed in any country where Google operates, but only under strict security and privacy controls.
Compliance with Google's Cloud Data Processing Addendum:
Google maintains technical, organizational, and physical measures to ensure customer data is safeguarded at every step.
For more details on Google’s policies, see their Cloud Data Processing Addendum.
Future Integration of Other AI Models
As we continue to expand our AI offerings, we may integrate other LLMs to deliver the best results for you. Regardless of the provider, our commitment remains:
Consistent Standards:
We will always prioritize secure, transparent, and ethical data use.
Provider-Specific Clarity:
For each AI provider, we will outline their specific data handling policies, ensuring you know exactly how your data is managed.
Customer-First Approach:
No matter the model, your data will only be used for what you agree to. Trust and transparency will guide our AI integrations.