Each entry below lists the following fields: Service – Category; Data regions (free/consumer plans); Data regions (enterprise plans); GDPR compliance (free/consumer plans); GDPR compliance (enterprise plans); Training on user data (free/consumer plans); Training on user data (enterprise plans); Copyright in input/output (free/consumer plans); Copyright in input/output (enterprise plans); Liability for output errors (free/consumer plans); Liability for output errors (enterprise plans); Notes and additional terms; Legal terms (official links).
ChatGPT (OpenAI) – Generic AI Providers
Data regions (free/consumer): Individual user data is processed on OpenAI servers in various jurisdictions, including the United States; there is no publicly guaranteed EU data residency for consumer accounts.
Data regions (enterprise): At-rest data residency is available for eligible enterprise customers (including ChatGPT Enterprise, ChatGPT Edu, ChatGPT for Healthcare, and the API Platform), with a choice of the US, Europe, the UK, and several other countries. For the API and some services, the processing region (US or Europe) can also be selected.
GDPR compliance (free/consumer): Consumer accounts are governed by OpenAI's general Privacy Policy. Separate terms (the EU/EEA/UK terms) apply to users in the EEA, Switzerland, and the UK.
GDPR compliance (enterprise): For enterprise customers, OpenAI offers contractual commitments (a Data Processing Addendum, DPA), data residency options, and other provisions aligned with GDPR requirements.
Training on user data (free/consumer): OpenAI reserves the right to use content to maintain, develop, and improve models. Consumer users can disable the use of their own conversations for model training.
Training on user data (enterprise): ChatGPT Enterprise includes a standard provision that customer content is not used to train generic models.
Copyright in input/output (free/consumer): To the extent permitted by applicable law, you retain ownership of the Input and acquire rights to the Output.
Copyright in input/output (enterprise): The customer retains the rights to its data and Output.
Liability for output errors (free/consumer): The Service is provided "AS IS"; OpenAI limits its liability in contract and indemnity (the liability cap is the greater of the amount paid for the Service in the 12 months preceding the claim or $100). You accept the risk that the Output may be inaccurate.
Liability for output errors (enterprise): Each party's liability is limited to the total fees paid by the customer to OpenAI in the 12 months preceding the event giving rise to liability, subject to certain exceptions (including gross negligence, contractual indemnity obligations, and customer payment obligations).
Notes and additional terms:
- Content from individual accounts (e.g., Free/Plus) can be used for model training; a "do not train on my content" option and Temporary Chat mode are available, as described in "How your data is used to improve model performance."
- For business products (ChatGPT Business, ChatGPT Enterprise, ChatGPT Edu, ChatGPT for Healthcare, and the API), customer data (input and output) is not used for model training unless the organization explicitly consents.
- For eligible ChatGPT Enterprise, ChatGPT Edu, ChatGPT for Healthcare, and API customers, the following mechanisms are available: data residency in regions specified by OpenAI, retention configuration (including zero data retention for the API), and a DPA linked to the OpenAI Services Agreement.
Legal terms (official links):
- Terms of Use (ROW): https://openai.com/policies/row-terms-of-use/
- Terms of Use (EU): https://openai.com/policies/eu-terms-of-use/
- Privacy Policy (landing page, with reference to the EU/EEA/UK version): https://openai.com/policies/privacy-policy/
- How your data is used to improve model performance: https://openai.com/policies/how-your-data-is-used-to-improve-model-performance/
- Enterprise Privacy: https://openai.com/enterprise-privacy/
- Business data privacy, security, and compliance (including data residency): https://openai.com/business-data/
- OpenAI Services Agreement (business terms for API, ChatGPT Business/Enterprise): https://openai.com/policies/services-agreement/
- Data Processing Addendum (API Services, ChatGPT Enterprise): https://openai.com/policies/data-processing-addendum/
Gemini (Google) – Generic AI Providers
Data regions (free/consumer): Global/multi-region (Google consumer services are hosted globally; no guaranteed data residency for free accounts).
Data regions (enterprise): Data residency available (including, but not limited to, the US, EU, Australia, Canada, and India); region/residency choices are possible for enterprise clients.
GDPR compliance (free/consumer): Consumer services are subject to Google's general Privacy Policy and EU regulations; however, consumer accounts do not receive the same guarantees and DPA/Controller-Controller Data Protection Terms that enterprise customers do.
GDPR compliance (enterprise): Google offers a Data Processing Addendum and Controller-Controller Data Protection Terms for enterprise customers, along with data residency options in the EU and tools that support GDPR compliance.
Training on user data (free/consumer): With Gemini Apps Activity/Keep Activity enabled, prompts, files, and responses can be used to develop services and train Google's generative models; with Keep Activity disabled, future chats are not used to train models, except for content provided as feedback.
Training on user data (enterprise): Gemini Enterprise – Business Edition (including the trial period): your content from Paid Services and the Trial Period is not used to improve Google products. Gemini Enterprise – Starter Edition: user data is used to improve the service and train Google's ML technologies, with the ability to opt out in settings.
Copyright in input/output (free/consumer): You retain copyright in your input and output and grant Google a non-exclusive, worldwide, royalty-free license to operate and improve the Services, as set out in the "Your content" and "Permission to use your content" sections of the Google ToS.
Copyright in input/output (enterprise): The customer/organization retains copyright in the content (input and output) on Gemini services covered by the Google Terms of Service and additional terms; Google does not acquire ownership of this content, only a license as described in the ToS, including when the service is used in a business context.
Liability for output errors (free/consumer): The services covered by the Google Terms of Service are provided "as is," without any implied warranties (including fitness for a particular purpose) and with contractual limitations of liability; Google expressly advises that you should not rely on the services (including AI) as a source of medical, legal, financial, or other advice, and that you are solely responsible for any use of the content.
Liability for output errors (enterprise): 1) Business Edition (Paid Services and Trial Period): Google does not use Your Content to improve its products or train models. 2) Starter Edition: data is used to improve the service and train Google's ML technology, but users can opt out in their settings. 3) Starter Edition carries an explicit instruction not to submit sensitive, confidential, or personal data.
Notes and additional terms:
- Gemini Enterprise – Business Edition (paid edition): Google does NOT use customer content to train models.
- Gemini Enterprise – Starter Edition and some trial/free versions may allow uploaded content to be used to improve services (for Starter Edition: do not upload sensitive data).
- Data residency and region options are available to enterprise customers through the Google Cloud console (see the sketch below).
- Always check the applicable DPA (Controller-Controller Data Protection Terms) and privacy policies before uploading sensitive data.
- Legal terms and practices may change; refer to official Google Cloud documents for updates.
Legal terms (official links):
- Gemini Enterprise – Business Edition Additional Terms: https://cloud.google.com/terms/gemini-enterprise/business
- Gemini Apps Privacy Hub (consumer Gemini): https://support.google.com/gemini/answer/13594961
- Google Terms of Service: https://policies.google.com/terms
- Google Privacy Policy: https://policies.google.com/privacy
- Cloud Data Processing Addendum (DPA): https://cloud.google.com/terms/data-processing-addendum
- Google Controller-Controller Data Protection Terms: https://business.safety.google/controllerterms/
- Gemini Enterprise locations (data residency): https://cloud.google.com/gemini/enterprise/docs/locations
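For API-based access, the region choice mentioned above translates into a location setting at call time. The sketch below shows one way to pin an EU processing location when calling Gemini models through Vertex AI with the google-genai SDK; this is the Vertex AI path rather than the Gemini Enterprise console configuration, the project ID and region are placeholders, and model/region availability should be confirmed in Google Cloud documentation.

```python
# Hedged sketch: pinning an EU processing location when calling Gemini via
# Vertex AI with the google-genai SDK. Project ID and location are
# placeholders; supported regions vary by model and over time.
from google import genai

client = genai.Client(
    vertexai=True,
    project="my-gcp-project",   # placeholder Google Cloud project ID
    location="europe-west4",    # EU region; check availability for the chosen model
)

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Summarize our data-residency obligations in one sentence.",
)
print(response.text)
```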
Claude (Anthropic) – Generic AI Providers
Data regions (free/consumer): There is no public, precise list of processing regions for the Free/Pro/Max plans. The privacy policy indicates that data may be transferred to servers in the US or other countries outside the EEA/UK; the data controller for users in the EEA/UK/CH is Anthropic Ireland Limited, while the contracting party under the Consumer Terms remains Anthropic PBC.
Data regions (enterprise): Customer data may be processed in select countries in the US, Europe, Asia, and Australia, and is stored in US data centers by default. The Regional Compliance page describes regional residency and processing options (Europe, US, Canada, APAC) primarily available through AWS Bedrock, GCP Vertex, and Microsoft Foundry.
GDPR compliance (free/consumer): For EEA/CH consumers, the GDPR applies (with Anthropic Ireland Limited as controller). The Consumer Terms and Privacy Policy describe data processing and individual rights (access, rectification, deletion, etc.). There is no extended DPA for consumer accounts.
GDPR compliance (enterprise): A DPA is available, with the possibility of using Standard Contractual Clauses (SCCs) or other data transfer mechanisms, data residency options, and contractual obligations regarding retention and processing.
Training on user data (free/consumer): Materials (Inputs/Outputs) may be used for training and model improvement unless the user opts out of training in their account settings. Anthropic reserves the right to use materials for training in two exceptional circumstances: (1) when the user provides feedback, and (2) when the materials are flagged for security review.
Training on user data (enterprise): The Commercial Terms explicitly prohibit training models on Customer Content (Inputs/Outputs) processed within these services. Zero Data Retention is available only to certain API/Commercial Organization customers under separate agreements and applies only to eligible API endpoints and products using a Commercial Organization key (including Claude Code); it does not apply to Claude for Work or consumer plans by default.
Copyright in input/output (free/consumer): Under the Consumer Terms, Anthropic assigns to you all right, title, and interest (if any) that Anthropic may have in the Outputs; in other words, you receive the rights to the generated content, subject to third-party rights.
Copyright in input/output (enterprise): The Commercial Terms state that the customer retains rights to the Inputs and owns the Outputs; Anthropic waives its rights to the Customer Content and assigns all of its rights in the Outputs to the customer.
Liability for output errors (free/consumer): The Consumer Terms provide a full disclaimer of warranties ("as is") and limit Anthropic's aggregate liability to the greater of the amount paid for access/services in the preceding 6 months or $100.
Liability for output errors (enterprise): The Commercial Terms exclude liability for indirect and special damages and cap each party's aggregate liability at the fees paid for the Services during the preceding 12 months; indemnity obligations are excepted from the cap.
Notes and additional terms:
- Consumer plans (Claude Free/Pro/Max): users can opt out of the use of Materials (Inputs/Outputs) for model training, with exceptions for Feedback and materials flagged for security review (Consumer Terms, "Our use of Materials" section; Privacy Policy, model training section).
- Commercial products (API, Claude for Work, Claude Code): the Commercial Terms prohibit model training on Customer Content; data processing is governed by the DPA (including SCCs, the UK Addendum, and the Swiss Addendum).
- Zero Data Retention: available only to select API clients under separate agreements; applies only to qualifying API endpoints and products using a Commercial Organization key; not included in Claude for Work or consumer plans by default.
- Commercial regions: the server location article confirms processing in select countries in the US/EU/Asia/Australia and default storage in US data centers for commercial products. The Regional Compliance page describes regional residency and processing options for AWS Bedrock, GCP Vertex, and Microsoft Foundry.
Legal terms (official links):
- Consumer Terms (consumer plans): https://www.anthropic.com/legal/consumer-terms
- Privacy Policy (consumers and Commercial Services as controller): https://www.anthropic.com/legal/privacy
- Commercial Terms (API / Claude for Work / enterprise): https://www.anthropic.com/legal/commercial-terms
- Data Processing Addendum (DPA, enterprise/API): https://www.anthropic.com/legal/data-processing-addendum
- Server location (commercial products): https://privacy.claude.com/en/articles/7996890-where-are-your-servers-located-do-you-host-your-models-on-eu-servers
- Regional Compliance (data residency/inference): https://claude.com/regional-compliance
- Zero Data Retention (commercial API): https://privacy.claude.com/en/articles/8956058-i-have-a-zero-data-retention-agreement-with-anthropic-what-products-does-it-apply-to
Grok (xAI) – Generic AI Providers
Data regions (free/consumer): Data is processed in and transferred to the United States; in the TOS ("Privacy and Data Security" section), xAI states that "your personal information will be transferred to, and/or processed in, the United States."
Data regions (enterprise): No declared data residency; the DPA indicates that enterprise customer data may be processed outside of Europe, including in the US and other countries where xAI and its subprocessors conduct processing operations.
GDPR compliance (free/consumer): For users in the EEA/UK/CH, the Europe Specific Terms and the Europe Privacy Policy Addendum apply; data is processed in the USA, and for processing outside Europe, the EU-US Data Privacy Framework and/or Standard Contractual Clauses apply.
GDPR compliance (enterprise): A DPA has been published (automatically incorporated into the Enterprise Terms) which identifies xAI as a processor, governs security, and explicitly incorporates the Standard Contractual Clauses (SCCs, including Modules 2/3 and the UK Addendum) for data transfers from Europe.
Training on user data (free/consumer): User data may be used for product improvement and model training, but logged-in users can choose whether to allow their User Content to be used for training. For users who are not logged in, xAI may, where permitted, use their data for product development and training.
Training on user data (enterprise): By default, xAI does not use enterprise clients' business data (inputs and outputs) to train models; this can only happen if the client provides separate, explicit consent (e.g., in exchange for free credits).
Copyright in input/output (free/consumer): You retain ownership of your User Content, but xAI is granted a broad, irrevocable, perpetual, sublicensable, and royalty-free license to use, copy, store, modify, publish, and create derivative works from your User Content (including for product improvement and model training). In other words, you own your content, but xAI has broad rights to use that content and the output.
Copyright in input/output (enterprise): The customer retains rights to the Input, and the Output belongs to the customer; xAI receives a limited license to User Content solely for providing the services and ensuring security/compliance, and may only use anonymized/de-identified data to improve its products.
Liability for output errors (free/consumer): The Service is provided "AS IS" (see the "The Service Is Available 'As Is'" and "Limitation of Liability" sections); xAI's maximum liability to you is limited to the greater of the amount you paid or $100. You accept the risk of using the output and should verify the results.
Liability for output errors (enterprise): The Enterprise Terms exclude liability for consequential damages and cap each party's aggregate liability at the fees paid by the customer to xAI in the 12 months preceding the claim (with exceptions for, among other things, breaches of proprietary rights, confidentiality, and indemnity obligations).
Notes and additional terms:
- Consumer users' personal data is processed in the USA; for European users, a separate privacy addendum describes the transfer mechanisms (DPF/SCC).
- In consumer plans, xAI obtains a very broad license to User Content (input and output), including for model training, with the possibility of opt-out and a Private Chat mode without training.
- In business/enterprise plans, input/output data is not used for model training (except with separate, express consent from the customer); the IP in Output belongs to the customer, personal data processing and international transfers are governed by the DPA, and the security documentation describes additional measures (RBAC, SSO, encryption, environment isolation, Enterprise Vault).
- For consumers, Texas law, Tarrant County courts, and a 2-year limitation period apply, with modifications for European consumers in the Europe Specific Terms.
Legal terms (official links):
- Consumer Terms: https://x.ai/legal/terms-of-service
- Privacy Policy: https://x.ai/legal/privacy-policy
- Europe Privacy Policy Addendum: https://x.ai/legal/europe-privacy-policy-addendum
- Enterprise Terms: https://x.ai/legal/terms-of-service-enterprise
- Data Processing Addendum: https://x.ai/legal/data-processing-addendum
- Consumer FAQ: https://x.ai/legal/faq
- Enterprise FAQ: https://x.ai/legal/faq-enterprise
- Security overview: https://x.ai/security
Microsoft Copilot Chat (Free) – Generic AI Providers
Data regions (free/consumer): Data may be processed globally (including outside the EEA) in accordance with the Microsoft Privacy Statement.
Data regions (enterprise): No enterprise plan – N/A.
GDPR compliance (free/consumer): Microsoft declares compliance with applicable data protection laws (in the Microsoft Privacy Statement). EU users have rights under the GDPR, but the specific mechanisms/policies for data retention and localization are not detailed in the Terms of Use; GDPR compliance is at the level of the general commitment in the Microsoft Privacy Statement.
GDPR compliance (enterprise): No enterprise plan – N/A.
Training on user data (free/consumer): The Copilot Terms state, "We don't own Your Content, but we may use Your Content to operate Copilot and improve it," and the Microsoft Privacy Statement describes the use of data (with automated and manual processing) to develop and train AI models.
Training on user data (enterprise): No enterprise plan – N/A.
Copyright in input/output (free/consumer): Microsoft explicitly states that it does not own "Your Content" and "does not claim ownership of Prompts [and] Creations"; you retain rights to the prompts and generated responses and grant Microsoft a broad license to use them in its services.
Copyright in input/output (enterprise): No enterprise plan – N/A.
Liability for output errors (free/consumer): Strong warranty disclaimers and limitations of liability ("WE DO NOT MAKE ANY WARRANTY...", "Copilot is for entertainment purposes only..."). You are responsible for verifying the information and for the consequences of using Copilot's responses; furthermore, you agree to indemnify Microsoft for any claims arising from your use of Copilot.
Liability for output errors (enterprise): No enterprise plan – N/A.
Notes and additional terms:
- Microsoft may use "Your Content" (prompts/responses) to operate and improve Copilot; data may be processed automatically and manually (human review) for security and quality purposes.
- Copilot may use data from the internet and return responses that may be incomplete or erroneous; the Terms emphasize "Copilot can make mistakes... Always use your judgment."
- Copilot (free) is intended for personal use only ("for your own personal use"); the documents do not define separate business data isolation for this version.
- Copilot and related experiences may contain advertising, and purchases initiated in Copilot are fulfilled by third-party merchants; Microsoft is not the seller or payment processor.
- Other products, such as Microsoft 365 Copilot for commercial customers, are subject to separate terms (Product Terms, Data Protection Addendum).
Legal terms (official links):
- https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/termsofuse
- https://www.microsoft.com/en-us/privacy/privacystatement
- https://www.microsoft.com/en-us/servicesagreement
- https://www.bing.com/new/termsofuse
Microsoft 365 Copilot (Paid Enterprise) – Generic AI Providers
Data regions (free/consumer): No free plan – not applicable.
Data regions (enterprise): Microsoft 365 Copilot interaction content is stored at rest in the appropriate Local Region Geography associated with the tenant's country/region of registration. Advanced Data Residency and Multi-Geo further extend data residency commitments as documented.
GDPR compliance (free/consumer): No free plan – not applicable.
GDPR compliance (enterprise): GDPR compliance through the Data Processing Addendum (DPA) and Microsoft's standard cloud commitments. The service is covered by the applicable Microsoft data processing terms and security policies.
Training on user data (free/consumer): No free plan – not applicable.
Training on user data (enterprise): Prompts, responses, and data available through Microsoft Graph in Microsoft 365 Copilot are not used to train foundation LLMs (there is no opt-in option to train these models on customer data within this service).
Copyright in input/output (free/consumer): No free plan – not applicable.
Copyright in input/output (enterprise): The customer retains ownership of its data and content submitted to the service; Microsoft does not claim copyright in Customer Data or in any results generated from Customer Data.
Liability for output errors (free/consumer): No free plan – not applicable.
Liability for output errors (enterprise): Standard warranty disclaimers and limitations of liability in the Product Terms/Online Services Terms; responsibility for the use of Copilot results rests with the customer.
Notes and additional terms:
- The service is a paid add-on to select Microsoft 365/Office 365 plans; the official list price for the enterprise version is $30 per user/month (local prices vary by currency and region).
- Copilot interaction data (prompts, responses, semantic index) is stored at rest in the appropriate Local Region Geography consistent with the tenant's country/region; Advanced Data Residency and Multi-Geo extend data residency commitments.
- Prompts, responses, and Microsoft Graph data are not used to train foundation LLMs; optional user feedback may be processed to improve the service, but is not used to train these models.
- The customer retains rights to the input and output data; in addition, the Customer Copyright Commitment provides for the defense of the customer against certain copyright infringement claims related to Copilot output (if its conditions are met).
- The general limitations of liability and warranty disclaimers in the licensing agreement/Product Terms apply; there are no specific guarantees of output correctness beyond the standard obligations for online services.
Legal terms (official links):
- Product Terms – Online Services (Microsoft Generative AI Services, including Copilot): https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all
- Product offering – Microsoft Copilot: https://www.microsoft.com/licensing/terms/productoffering/MicrosoftCopilot
- Microsoft Products and Services Data Protection Addendum (DPA): https://www.microsoft.com/licensing/docs/view/microsoft-products-and-services-data-protection-addendum-dpa
- Microsoft Enterprise AI Services Code of Conduct: https://learn.microsoft.com/en-us/legal/ai-code-of-conduct
- Data Residency for Microsoft 365 Copilot and Copilot Chat: https://learn.microsoft.com/en-us/microsoft-365/enterprise/m365-dr-workload-copilot
- Microsoft 365 Copilot privacy note ("Data, privacy, and security for Microsoft 365 Copilot"): https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/copilot/microsoft-365-copilot-privacy.md
Mistral (Le Chat) – Generic AI Providers
Data regions (free/consumer): By default, data is hosted in the EU; a US endpoint can be used (in which case data is in the US), and temporary transfers outside the EU are possible in accordance with the list of subprocessors.
Data regions (enterprise): By default, your organization's data is hosted in the EU; transfers outside the EU are possible (with SCCs) and, for Enterprise solutions, certain features that trigger transfers outside the EU can be disabled. Zero Data Retention is not available for Le Chat (regardless of plan).
GDPR compliance (free/consumer): Mistral prioritizes suppliers in the EU that strictly comply with the GDPR.
GDPR compliance (enterprise): Mistral ensures GDPR compliance, with particular emphasis on data protection and user rights in accordance with EU regulations.
Training on user data (free/consumer): In Le Chat Free, Pro, and Student, data (input and output) is used to train models by default unless the user disables this option in the settings. Data from connectors is never used for training. Feedback (thumbs up/down with commentary) can be used for training regardless of the plan.
Training on user data (enterprise): In Le Chat Team and Le Chat Enterprise, conversation data is not used for model training. Connector data is not used for training in any plan. Feedback (thumbs up/down with comment) remains an exception and can be used for training in enterprise plans as well.
Copyright in input/output (free/consumer): The user retains all copyright in the generated results but should always verify their accuracy.
Copyright in input/output (enterprise): The user retains all copyright in the generated results but should always verify their accuracy.
Liability for output errors (free/consumer): The user is solely responsible for the use and verification of the Output; Mistral excludes liability for indirect/special/consequential damages and limits its aggregate liability (for all plans) to the greater of the fees paid for the product in question in the last 12 months or EUR 100.
Liability for output errors (enterprise): The liability rules (exclusion of indirect damages, liability cap) are the same as for other plans; in addition, Mistral undertakes to indemnify the customer only for claims that the Mistral AI Products themselves infringe intellectual property rights, with numerous exclusions (including Input and Output modifications).
Notes and additional terms:
- ANSSI II-901 (France only).
- Strong GDPR compliance.
- Data from Connectors is never used for training in any plan.
- Le Chat Team/Enterprise has input/output training disabled by default.
Legal terms (official links):
- https://mistral.ai/terms
- https://legal.mistral.ai/terms/privacy-policy
- https://legal.mistral.ai/terms/data-processing-addendum
Perplexity – Generic AI Providers
Data regions (free/consumer): Data is processed and stored in AWS regions in the United States (US East, N. Virginia).
Data regions (enterprise): Defaults to the US (AWS), with EEA and Canada data residency configuration options available for business customers.
GDPR compliance (free/consumer): Perplexity acts as a controller and ensures compliance through the Data Privacy Framework and Standard Contractual Clauses.
GDPR compliance (enterprise): Perplexity operates as a processor under a DPA, offering SOC 2 Type II certification and a BAA option for HIPAA.
Training on user data (free/consumer): By default, the data is used to train models unless the user disables the "AI Data Usage" option in the settings.
Training on user data (enterprise): Data entered by enterprise users and via the API is never used to train models.
Copyright in input/output (free/consumer): The user retains ownership of the input, and Perplexity transfers any rights it holds in the output to the user.
Copyright in input/output (enterprise): The client retains full ownership of the input and has exclusive rights to the output under the contractual provisions.
Liability for output errors (free/consumer): Supplier liability is limited to $100 or the total of charges from the last six months.
Liability for output errors (enterprise): Liability for factual errors is excluded, and compensation limits are determined individually in the Order Forms.
Notes and additional terms:
- Use of external models (e.g., GPT-4) is additionally subject to their policies, if selected in the settings.
- Copyright does not apply to Search API results, and files in the Enterprise plan are subject to a 7-day retention period.
Legal terms (official links):
- https://www.perplexity.ai/hub/legal/privacy-policy
- https://www.perplexity.ai/hub/legal/aup
- https://www.perplexity.ai/hub/legal/perplexity-api-terms-of-service-search
- perplexity.ai/hub/legal/terms-of-service
- perplexity.ai/hub/legal/enterprise-terms-of-service
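Because the API tier described above is the one carrying the no-training guarantee, a minimal call sketch may be useful. Perplexity exposes an OpenAI-compatible chat completions endpoint, so the standard OpenAI client can be pointed at it; the base URL and the "sonar" model name are assumptions to verify against Perplexity's current API documentation.

```python
# Hedged sketch: calling Perplexity's OpenAI-compatible API.
# Base URL and model name are assumptions; check Perplexity's API docs.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",  # assumed Perplexity endpoint
)

resp = client.chat.completions.create(
    model="sonar",  # assumed model name
    messages=[{"role": "user", "content": "Summarize the EU AI Act in one sentence."}],
)
print(resp.choices[0].message.content)
```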
OpenAI API – AI API Providers
Data regions (free/consumer): Data submitted to the OpenAI API may be processed on OpenAI servers in the US and other jurisdictions; there is no publicly guaranteed EU data residency for standard API accounts, and data (including abuse logs) is processed on OpenAI infrastructure.
Data regions (enterprise): Data residency for eligible API clients: the ability to store data at rest in the US, Europe, the UK, and several other countries and, for supported endpoints, to select the processing region (US or Europe) as part of advanced data controls.
GDPR compliance (free/consumer): Subject to the OpenAI Services Agreement, which incorporates the Data Processing Addendum for personal data processing; the DPA describes controller/processor roles, transfers (including SCCs), and other requirements under the GDPR.
GDPR compliance (enterprise): A DPA (Data Processing Addendum) is available for enterprise customers; a data processing agreement reflecting GDPR requirements can be signed, and data residency options and additional security measures are available in line with legal requirements.
Training on user data (free/consumer): Data sent via the API is not used to train models unless the user explicitly opts in. OpenAI may still retain data for short-term operational and security purposes.
Training on user data (enterprise): Data sent via the API is not used for model training unless the customer explicitly consents; eligible customers can additionally enable Zero Data Retention or Modified Abuse Monitoring, which applies to the retention of abuse logs and application state, not to model training.
Copyright in input/output (free/consumer): To the extent permitted by law, you/the customer retain ownership rights to the Input and Output ("you own the Output").
Copyright in input/output (enterprise): The customer retains the rights to its data and Output.
Liability for output errors (free/consumer): OpenAI's standard terms contain disclaimers, no warranties, and liability limits; use is at your own risk, and there are no extended warranties for free accounts.
Liability for output errors (enterprise): The same liability provisions of the OpenAI Services Agreement apply (the "AS IS" model, exclusion of liability for consequential damages, and a liability cap based on 12 months of fees, with specified exceptions).
Notes and additional terms:
- The main difference from ChatGPT Web: data sent to the API is not used for model training (as of March 1, 2023), with default abuse log retention of 30 days for most endpoints and endpoint-specific application state retention policies.
- Eligible customers can enable Modified Abuse Monitoring or Zero Data Retention, which limits or eliminates the recording of customer content in abuse logs and, for selected endpoints, application state.
- Data residency (at the level of processing region and at-rest storage), a DPA, and advanced retention controls are available to eligible API and ChatGPT Enterprise/Business/Edu/Healthcare customers; the documentation does not limit these to plans described as "enterprise," although in practice they require business qualification.
Legal terms (official links):
- OpenAI Services Agreement (API, ChatGPT Business/Enterprise): https://openai.com/policies/services-agreement/
- Data Processing Addendum (API Services, ChatGPT Enterprise): https://openai.com/policies/data-processing-addendum/
- Data controls in the OpenAI platform (retention, training on API data, ZDR/Modified Abuse Monitoring): https://platform.openai.com/docs/guides/your-data
- Business data privacy, security, and compliance (compliance, certifications, data residency, ZDR, DPA): https://openai.com/business-data/
- Data residency for the OpenAI API (help article on selecting a region): https://help.openai.com/en/articles/10503543-data-residency-for-the-openai-api
- General Privacy Policy (including information on data processing by OpenAI OpCo/OpenAI Ireland): https://openai.com/policies/privacy-policy/
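As a rough illustration of the residency controls above: per OpenAI's help article, data residency is configured at the project level for eligible accounts, and European processing uses a regional API endpoint. The sketch below is a minimal, hedged example; the EU base URL and the assumption that the API key belongs to an EU-resident project are both things to confirm against the linked documentation before relying on them.

```python
# Hedged sketch, not an official recipe: routing requests through an assumed
# EU endpoint for an account/project configured for European data residency.
# Verify the current endpoint and eligibility in OpenAI's data-residency docs.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],     # assumed: key scoped to an EU-resident project
    base_url="https://eu.api.openai.com/v1",  # assumed EU endpoint for European processing
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Where is this request processed?"}],
)
print(resp.choices[0].message.content)
```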
Claude API – AI API Providers
Data regions (free/consumer): Customer data may be processed on infrastructure in selected countries in the US, Europe, Asia, and Australia, with default storage in US data centers unless otherwise agreed; standard API log retention is 30 days (with exceptions described in the retention article).
Data regions (enterprise): The same default location model applies as for other commercial customers: processing is possible on infrastructure in the US, Europe, Asia, and Australia, with data stored in US data centers by default, unless otherwise agreed in the contract.
GDPR compliance (free/consumer): The Claude API is covered by the Commercial Terms and an automatically incorporated Data Processing Addendum (DPA) with SCCs; for commercial customers, Anthropic acts as a data processor, and the DPA and the documents in the Trust Center are the primary mechanism for ensuring GDPR compliance.
GDPR compliance (enterprise): Enterprise customers of the Claude API are also subject to the Commercial Terms and the same DPA (with SCCs); additional features include configurable retention periods for Enterprise plans and data processing location options ("unless otherwise agreed"). Detailed requirements and certifications are documented in the DPA and Trust Center; there is no separate public "enterprise-only" version of the terms.
Training on user data (free/consumer): Input and output data are not used for model training by default. They may only be used after explicit customer action, such as participation in the Development Partner Program, informed feedback/bug reporting, or other consent described in the documentation; otherwise, API data is not used for model training.
Training on user data (enterprise): Enterprise Claude API clients have the same default policy as other commercial clients: API input/output data is not used to train models unless the client explicitly consents (e.g., partner programs, submitted feedback).
Copyright in input/output (free/consumer): The client retains the rights to the content provided (Prompts/Customer Content), and Anthropic assigns to the client all of its rights in the generated Outputs; Anthropic cannot train models on Customer Content as part of these services.
Copyright in input/output (enterprise): The client owns all rights to the Outputs; Anthropic assigns its rights in the Outputs to the client and does not acquire any rights to the Customer Content. In addition, the client receives extended protection in the form of an indemnity for copyright infringement claims arising from the authorized use of the Services and Outputs.
Liability for output errors (free/consumer): The Commercial Terms provide broad disclaimers (the service and Outputs are provided "as is," with no guarantee of accuracy, completeness, or uninterrupted operation) and limit each party's liability to the fees paid by the customer for the services in the last 12 months, excluding indirect damages (lost profits, lost data, etc.).
Liability for output errors (enterprise): The Commercial Terms apply to Claude API enterprise customers (unless a separate, non-public agreement has been entered into); they provide no guarantee of the accuracy of Outputs and a limitation of liability to the charges from the last 12 months, excluding indemnification obligations. There are no separate, public liability terms specific to enterprise plans.
Notes and additional terms:
- The Claude API is subject to the Commercial Terms, while Claude Free/Pro/Max and other consumer services are subject to the Consumer Terms; help documents and articles are split into "consumer" and "commercial" sections.
- Zero Data Retention is only available to select enterprise API clients (and products using a Commercial Organization API key, including Claude Code); it excludes, among others, Claude for Work and beta products, unless otherwise agreed. Under ZDR, Anthropic still retains security scores for Usage Policy enforcement.
- Default retention for the Anthropic API is 30 days on the backend (with exceptions, e.g., the Files API, ZDR, Usage Policy violations, legal requirements); custom retention periods configurable at the organization level are available for enterprise clients.
- Data from commercial products (Claude API, Claude for Work) is not used for model training by default; use for training requires explicit participation in partner programs or informed feedback/bug reporting.
- Detailed compliance matters (DPA, SCCs, processor role, certifications) are largely documented publicly in the DPA, Privacy Center, and Trust Center, rather than solely in non-public commercial agreements.
Legal terms (official links):
- Main portal: https://www.anthropic.com/legal
- Commercial Terms (Claude API and other commercial offers): https://www.anthropic.com/legal/commercial-terms
- Data Processing Addendum (DPA): https://www.anthropic.com/legal/data-processing-addendum
- Consumer Terms (for consumer services, not for the API): https://www.anthropic.com/legal/consumer-terms
- Privacy Center – Commercial Customers section (Claude API, Claude for Work): https://privacy.claude.com/en/collections/10663361-commercial-customers
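For context on what "API input/output data" means in practice, here is a minimal sketch of a standard Anthropic Messages API call. The retention and Zero Data Retention behavior described above is an account/contract-level setting rather than a per-request parameter, so nothing in the request itself toggles it; the model ID shown is only an example.

```python
# Minimal sketch of a standard Anthropic Messages API call.
# Retention/ZDR is configured at the account or contract level, not per request.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model ID; use a current one
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize the Anthropic API's default data retention."}],
)
print(message.content[0].text)
```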
GitHub Copilot (Free, Pro, Pro+) – Assistant Programmer
Data regions (free/consumer): Consumer user data may be processed and stored in the U.S. and other jurisdictions (including the EU). A consumer (Pro) account does not guarantee permanent data residency in a specific region.
Data regions (enterprise): N/A; see the separate GitHub Copilot (Business and Enterprise) entry.
GDPR compliance (free/consumer): Data processing is subject to the GitHub Privacy Statement and applicable regulations (including the GDPR). For individual accounts, there is no automatic DPA; individual rights (e.g., access, deletion) are exercised in accordance with the privacy policy.
GDPR compliance (enterprise): N/A; see the GitHub Copilot (Business and Enterprise) entry.
Training on user data (free/consumer): Data collected by GitHub Copilot (including prompts, suggestions, and code snippets) can be used to train models for the Copilot Free plan if the user allows it in the settings. For Pro/Pro+, separate product materials describe the ability to opt out of sharing prompts for fine-tuning.
Training on user data (enterprise): N/A; see the GitHub Copilot (Business and Enterprise) entry.
Copyright in input/output (free/consumer): Suggestions (results generated by Copilot) are not owned by GitHub; you retain ownership of the code you create and are responsible for any use of the suggestions (check the licenses/origin of snippets if Copilot matches public code).
Copyright in input/output (enterprise): N/A; see the GitHub Copilot (Business and Enterprise) entry.
Liability for output errors (free/consumer): The suggestions provided by Copilot are provided "as is"; the user is responsible for the use of the suggestions (in particular for errors, infringements of third-party rights, etc.). GitHub limits its liability in accordance with its general Terms.
Liability for output errors (enterprise): N/A; see the GitHub Copilot (Business and Enterprise) entry.
Notes and additional terms:
- For Copilot Free: users can allow or prohibit the use of their data for model training in the settings.
- Suggestions are not "Content" until they are uploaded to GitHub.com; this means different rights/sharing depending on where they are used.
- If you enable the "Allow Copilot to provide Suggestions matching public code" option in the settings, you must comply with the license of the cited code.
- The list of available models (e.g., external AI models) and usage limits may change; information about specific models (e.g., Claude, GPT-4.x) and possible message limits is integration-dependent and not guaranteed for the free plan.
- Always verify Suggestions before using them in production and consider the legal risks (possible matches to public code).
Legal terms (official links):
- GitHub Terms of Service: https://docs.github.com/en/site-policy/github-terms/github-terms-of-service
- GitHub Terms for Additional Products and Features ("GitHub Copilot" section): https://docs.github.com/en/site-policy/github-terms/github-terms-for-additional-products-and-features
- GitHub General Privacy Statement: https://docs.github.com/en/site-policy/privacy-policies/github-general-privacy-statement
GitHub Copilot (Business and Enterprise) – Assistant Programmer
Data regions (free/consumer): N/A; see the separate GitHub Copilot (Free, Pro, Pro+) entry.
Data regions (enterprise): Data residency is available on GitHub Enterprise Cloud; the available regions for storing code and user data are the EU, US, Australia, and (from 2025) Japan.
GDPR compliance (free/consumer): N/A; see the GitHub Copilot (Free, Pro, Pro+) entry.
GDPR compliance (enterprise): For Enterprise Cloud plans, GitHub offers compliance mechanisms (Data Protection Agreement, Standard Contractual Clauses/SCCs where applicable) and tools to facilitate GDPR compliance.
Training on user data (free/consumer): N/A; see the GitHub Copilot (Free, Pro, Pro+) entry.
Training on user data (enterprise): Data (prompts, code, suggestions) from Copilot Business and Copilot Enterprise plans is not used for model training; there is no opt-in option to train models on these customers' data.
Copyright in input/output (free/consumer): N/A; see the GitHub Copilot (Free, Pro, Pro+) entry.
Copyright in input/output (enterprise): GitHub does not claim copyright in the generated suggestions. Organizations/users are responsible for verifying compliance with the licenses of the snippets used and with their internal policies.
Liability for output errors (free/consumer): N/A; see the GitHub Copilot (Free, Pro, Pro+) entry.
Liability for output errors (enterprise): Responsibility for the use of the output rests with the customer, but for paid Copilot plans the Microsoft Customer/Copilot Copyright Commitment applies: defense and indemnity for certain copyright claims relating to the output, provided the terms of the commitment are met.
Notes and additional terms: Copilot Enterprise offers, among other things, indexing of organization code and access to private, tunable models based on customer repositories; Copilot Business/Enterprise runs in an environment covered by the GitHub DPA, and customer data from these plans is not used for model training.
Legal terms (official links):
- GitHub Copilot Product Specific Terms (Copilot Business/Enterprise): https://github.com/customer-terms/github-copilot-product-specific-terms
- GitHub Customer Agreement (General Terms plus the list of Product Specific Terms): https://github.com/customer-terms/
- GitHub Data Protection Agreement (DPA): https://github.com/customer-terms/github-data-protection-agreement
- GitHub Terms for Additional Products and Features (Copilot section): https://docs.github.com/en/site-policy/github-terms/github-terms-for-additional-products-and-features
- GitHub General Privacy Statement: https://docs.github.com/en/site-policy/privacy-policies/github-general-privacy-statement
Claude Code – Assistant Programmer
Data regions (free/consumer): Data from Free/Pro/Max accounts (including Claude Code) may be transferred to and stored on the infrastructure of Anthropic and its subprocessors in the US and other countries outside the EEA/UK; the documentation does not confirm exclusive processing in the US or an EU-only option for consumer accounts.
Data regions (enterprise): For commercial products (Claude for Work, the API, and Claude Code under the Commercial Terms), data may be processed in selected countries in the USA, Europe, Asia, and Australia; by default, data is stored in data centers in the USA, unless otherwise agreed.
GDPR compliance (free/consumer): Anthropic acts as a data controller; the Privacy Policy describes GDPR rights (access, erasure, rectification, objection, etc.) and provides for data transfers to the US and other countries outside the EEA/UK using, among other things, adequacy decisions and Standard Contractual Clauses (SCCs).
GDPR compliance (enterprise): The Data Processing Addendum incorporated into the Commercial Terms applies; the DPA provides for the use of Standard Contractual Clauses (SCCs, Modules 2 and 3) for transfers outside the EEA/UK and describes technical and organizational security measures. The Claude Code documentation specifies a standard retention period of 30 days and a Zero Data Retention option for appropriately configured API keys.
Training on user data (free/consumer): User Content may be used to maintain and improve the Services and to train models unless you opt out of training in your account settings; exceptions: content submitted as Feedback or flagged for security review may be used even if you opt out.
Training on user data (enterprise): The Commercial Terms and the Claude Code documentation indicate that Anthropic does not train models on Customer Content (Inputs and Outputs) from commercial services, including Claude Code, unless the customer explicitly chooses to share the data as part of a Development Partner Program.
Copyright in input/output (free/consumer): As long as you comply with the Terms, Anthropic assigns to you all of its right, title, and interest (if any) in the Outputs, meaning you receive the rights to the generated results.
Copyright in input/output (enterprise): Under the Commercial Terms, the customer retains rights to the Inputs and Anthropic transfers to the customer all of its rights (if any) in the Outputs; this also applies to the use of Claude Code under the Commercial Terms.
Liability for output errors (free/consumer): Anthropic's aggregate liability will not exceed the greater of the amount paid for access to the Services in the last 6 months or $100; in addition, there are broad exclusions of warranties as to the accuracy, completeness, and usefulness of the Outputs.
Liability for output errors (enterprise): The Commercial Terms (applicable to, among others, Team/Enterprise and API plans, as well as Claude Code under those terms) provide broad warranty exclusions and a liability cap of the fees paid for services in the previous 12 months, excluding claims covered by mutual indemnification obligations.
Notes and additional terms:
- For Free/Pro/Max accounts (including Claude Code), users can opt in to or out of the use of their data for model training, with exceptions for materials submitted as Feedback or flagged for security review; retention for Claude Code is 5 years with training enabled and 30 days with training disabled.
- For commercial customers (Team/Enterprise, API, Claude Code under the Commercial Terms): no training on Customer Content, standard retention of 30 days, Zero Data Retention for properly configured API keys, and BAA (HIPAA) coverage can be extended to Claude Code if the customer has a BAA and enables Zero Data Retention.
- Commercial data can be processed in the US, Europe, Asia, and Australia, with default storage in the US; public documentation does not confirm guaranteed EU-only hosting.
Legal terms (official links):
- https://code.claude.com/docs/en/data-usage
- https://code.claude.com/docs/en/legal-and-compliance
- https://www.anthropic.com/legal/consumer-terms
- https://www.anthropic.com/legal/commercial-terms
- https://www.anthropic.com/legal/privacy
- https://www.anthropic.com/legal/data-processing-addendum
- https://privacy.claude.com/en/articles/7996890-where-are-your-servers-located-do-you-host-your-models-on-eu-servers
Cursor (IDE) – Assistant Programmer
Data regions (free/consumer): Infrastructure primarily on AWS in the US, with some services in Europe and Singapore; with Privacy Mode, code is not persistently stored by model providers (zero data retention), but Privacy Mode is not the default for all users.
Data regions (enterprise): Same technical regions (AWS in the US, plus select services in Europe/Singapore and model provider regions); for team accounts, Privacy Mode can be enforced.
GDPR compliance (free/consumer): The TOS does not describe a specific level of "GDPR compliance"; Cursor refers to the Privacy Policy and offers privacy mechanisms (Privacy Mode) and SOC 2.
GDPR compliance (enterprise): Only the general terms of the Privacy Policy (including transfers from the EEA based on lawful mechanisms) and a general reference to the MSA are publicly available; there is no public DPA and no detailed enterprise-specific GDPR terms.
Training on user data (free/consumer): Content (Inputs/Suggestions) is not used to train models or shared with third parties for training, except (a) in security cases, (b) in the event of user-submitted feedback, or (c) when the user explicitly consents (e.g., through the relevant privacy settings, including disabling Privacy Mode / Share Data).
Training on user data (enterprise): Privacy Mode can be enforced, which, according to the Security and Data Use pages, precludes training models on code; there are no publicly described additional contractual assurances (DPAs) beyond the general rule in the ToS that Content is not used for training without explicit consent.
Copyright in input/output (free/consumer): You retain rights to your Inputs, and Anysphere grants you all of its rights (if any) to Suggestions; in other words, you receive rights to the generated suggestions.
Copyright in input/output (enterprise): The TOS states that you retain rights to Inputs and receive rights from Anysphere to Suggestions.
Liability for output errors (free/consumer): The TOS contains strong disclaimers (no warranties, "as is") and a limitation of liability clause: maximum liability is limited to the greater of the amount paid in the preceding 6 months or $100. You are responsible for evaluating and bearing the risk of using the suggestions.
Liability for output errors (enterprise): In the absence of an MSA, the general liability cap from the ToS (the greater of the amount paid in the preceding 6 months or $100) applies. There are no public enterprise terms showing different limits.
Notes and additional terms:
- Anysphere does not use Content to train models, nor does it allow third parties to do so, except (a) for security purposes, (b) through submitted Feedback, or (c) when the user has explicitly provided consent (training/Privacy Mode settings).
- Cursor is SOC 2 Type II certified (described on the Security page).
- For teams/businesses, Privacy Mode may be organizationally enforced; there is no public information about deployments in private VPCs or self-hosted environments.
- Auto-Code Execution and Beta Services are subject to additional disclaimers and are used at the user's sole risk.
Legal terms (official links):
- https://cursor.com/terms-of-service
- https://cursor.com/privacy
- https://cursor.com/security
- https://cursor.com/data-use
Codeium (Windsurf) – Assistant Programmer
Data regions (free/consumer): Servers for individual plans are located in the USA; personal data of users outside the USA may be transferred to the USA and other countries using mechanisms such as Standard Contractual Clauses; ZDR is available as an option.
Data regions (enterprise): Standard (Windsurf servers in the US), FedRAMP High (AWS GovCloud in the US), EU (servers in Frankfurt, Germany), and Enterprise Hybrid and Self-hosted modes, where data is processed in the customer's infrastructure (on-prem/private cloud/air-gapped).
GDPR compliance (free/consumer): The Privacy Policy includes a section on "LEGAL BASES FOR PROCESSING EUROPEAN PERSONAL INFORMATION" and a description of the rights of EEA/UK users and of transfers to the US; Exafunction acts as an independent controller of your account's personal data, not a processor; there is no separate DPA for individual users.
GDPR compliance (enterprise): Exafunction indicates that it generally does not offer a classic DPA and acts as an independent controller of limited personal data; the processing of Customer Data in the enterprise is governed by the MSA (including the Data Use Guidelines) and the Privacy Policy, which describe, among other things, transfers from the EEA/UK and the legal bases.
Training on user data (free/consumer): Logs, Prompts, and Outputs can be used to train, develop, and improve AI/ML models; Autocomplete User Content is used to train discriminative (ranking) models, and Chat User Content is also used to train generative models; the user has opt-out/ZDR options that limit retention and training use.
Training on user data (enterprise): Customer Data is not used for model training; it is transmitted solely for the purpose of generating Suggestions and is deleted after generation, except when the customer selects a model marked "(no ZDR)" or features requiring persistent retention (e.g., Remote Indexing, Memories); Customer Data will not be used for model training/optimization without the customer's written consent.
Copyright in input/output (free/consumer): In the Individual & Pro TOS, the user retains the rights to their User Content (input); Suggestions are part of Exafunction's "Materials," and there is no express transfer of rights to Suggestions to the user in these terms; however, the Security page states that the user "owns all generated code," to the extent permitted by law.
Copyright in input/output (enterprise): Exafunction grants the user rights to the generated suggestions.
Liability for output errors (free/consumer): Exafunction disclaims most warranties (no warranty, no warranty of fitness for a particular purpose, etc.). Section 16 limits liability: for example, aggregate liability is limited to the greater of (a) the amount paid in the preceding 12 months or (b) $100.
Liability for output errors (enterprise): The Teams TOS and MSA provide similar warranty exclusions and a liability cap (generally up to the fees paid in the 12 months prior to the event); there is no public information about other standard liability limits for Enterprise beyond these documents.
Notes and additional terms:
- Zero data retention is enabled by default for Teams and Enterprise plans; individual users can enable it manually; exceptions include "(no ZDR)" models and features that require data persistence (Remote Indexing, Memories, Recipes, Web Retrieval, etc.).
- The ToS includes arbitration clauses and a class-action waiver for individual and Teams users.
- The Security page lists SOC 2 Type II and FedRAMP High; these certifications are not listed in the ToS itself.
Legal terms (official links):
- https://windsurf.com/terms-of-service-teams
- https://windsurf.com/terms-of-service-individual
- https://windsurf.com/privacy-policy
- https://windsurf.com/security
- https://windsurf.com/msa
- https://windsurf.com/dpa
Tabnine – Assistant Programmer
Data regions (free/consumer): The physical locations of the regions are not explicitly specified in the published Terms.
Data regions (enterprise): Self-hosted / on-premise, VPC, private cloud, or air-gapped (can be hosted in a chosen environment/region). Tabnine's documentation and marketing website declare: "deploy anywhere — cloud, on-premise, or air-gapped."
GDPR compliance (free/consumer): No "GDPR compliant" declaration; the Privacy Policy describes the controller/processor roles, the rights of EU/EEA users, the EU representative, and the possibility of lodging complaints with a supervisory authority, but without a formal "level" of compliance.
GDPR compliance (enterprise): There are no publicly available separate GDPR terms for the Enterprise plan; the same Privacy Policy applies (same rights and roles), and in self-hosted/air-gapped mode the documentation indicates that code and PII are not sent to Tabnine servers, but there is no guarantee of regulatory compliance; it depends on the client-side implementation.
Training on user data (free/consumer): Standard Tabnine models (code completion, Tabnine Protected chat) are trained exclusively on open source code with permissive licenses; the "Code Privacy" documentation declares a "no-train-no-retain" policy: user code serves only as ephemeral context for inference and is not used to train shared models.
Training on user data (enterprise): Customization of models on client code is possible: the Terms describe Tailor Made Services, where client code is used to "adjust and upgrade" the engine only for that client, without granting Tabnine ownership of the code; "Code Privacy" describes private, fine-tuned AI models trained on private code and accessible only to the client's team. Tabnine Protected models remain trained exclusively on open-source code.
Copyright in input/output (free/consumer): Tabnine-suggested code that you accept and incorporate into your code is considered part of your own code. Tabnine does not claim ownership of your code; it grants you a non-exclusive, royalty-free, and perpetual license to use the suggestions.
Copyright in input/output (enterprise): Suggestions accepted by a user become part of their code; Tabnine does not acquire ownership of the user's code. For "Tailor Made" services, the client's code is used solely to tailor the service to that client and does not grant Tabnine ownership of that code.
Liability for output errors (free/consumer): Tabnine disclaims all warranties (express and implied), makes no representations as to the accuracy or usefulness of the code suggestions, and limits liability for damages; it is the user's responsibility to verify suggestions and comply with licenses.
Liability for output errors (enterprise): Tabnine disclaims all warranties; in addition, the Multi-Model Addendum shifts the risk and liability for the use of third-party models to the customer/model provider (no indemnity for Third-Party Models).
Notes and additional terms:
- Tabnine Pre-Trained Models are trained on public, open-source (FOSS) code; see the Terms (Multi-Model Addendum).
- By default, Tabnine declares a "zero data retention" policy / full code privacy with appropriate configuration; at the same time, the "Tailor Made" service uses the provided code solely to create a personalized model with the client's consent.
- The free plan is not intended for ongoing commercial use.
- The use of external models entails separate liability and no indemnity on the part of Tabnine.
- Governing law: Israel; jurisdiction: Tel Aviv courts; deadline for filing claims: 1 year (details in the Terms).
Legal terms (official links):
- https://www.tabnine.com/terms-of-use/
- https://www.tabnine.com/privacy-policy/
Amazon Q Developer (CodeWhisperer) – Assistant Programmer
Data regions (free/consumer): Data is stored only in US regions: data from "diagnose console error" sessions in US West (Oregon) (us-west-2), and the remaining data (questions, answers, code) in US East (N. Virginia) (us-east-1). The user does not select the storage region, and inference can be performed in other regions within the same geography (cross-region inference).
Data regions (enterprise): In Amazon Q Developer Pro, data is stored in the region in which the Amazon Q Developer profile was created (e.g., us-east-1 or eu-central-1); inference can be performed in other regions within the same geography (e.g., for an EU profile: eu-central-1, eu-west-1, eu-west-3, eu-north-1).
GDPR compliance (free/consumer): As a cloud provider, AWS provides GDPR-compliant measures (a DPA, data control mechanisms, and tools for handling data subject requests). Amazon Q Developer (formerly CodeWhisperer) runs on AWS infrastructure and is subject to these terms; however, the customer (data controller) is responsible for application compliance.
GDPR compliance (enterprise): In the enterprise model, AWS provides the same legal and technical mechanisms (DPA, configuration options, security tools). Organizations should enter into the appropriate addenda/agreements and configure the service to meet the data controller's obligations.
Training on user data (free/consumer): Content (questions asked of Amazon Q, answers, and generated code) can be stored and used for service improvement, including debugging and training models/foundation models, unless the user opts out. Telemetry (without actual code) can be collected for both plans (Free and Pro), also with an opt-out.
Training on user data (enterprise): In Pro and Business, user content (including code, questions, and answers) is not used for service improvement or to train core models; however, AWS may collect customer telemetry for service improvement purposes, with opt-out options in the IDE and at the organization level (see the policy sketch below).
Copyright in input/output (free/consumer): Output generated by Amazon Q Developer is considered "Your Content," just like input (code, questions, etc.); the FAQ specifies that the user "owns the code that you write, including any code suggestions provided by Amazon Q Developer."
Copyright in input/output (enterprise): As in the Free plan, input and output are "Your Content" (no transfer of rights to AWS), with the difference that for Pro the output is additionally covered by the IP indemnity clause as "Generative AI Output" generated by an "Indemnified Generative AI Service" (Amazon Q, excluding the Amazon Q Developer Free Tier).
Liability for output errors (free/consumer): No specific guarantees of output accuracy are made for Amazon Q Developer Free; the general warranty disclaimers and limitations of liability in the AWS Customer Agreement/AWS Service Terms (including the AI Services and Amazon Q sections) apply: the service is provided "as is," and you are responsible for verifying and testing the generated code.
Liability for output errors (enterprise): No guarantee of output correctness, and the standard limitations of liability from the AWS Customer Agreement / Service Terms apply; for Amazon Q (excluding the Free Tier), IP indemnity applies for intellectual property infringements resulting from the output, but this does not change the general principle that the customer must verify the technical correctness and security of the generated code.
Notes and additional terms:
- Free: data (except diagnostics) is always stored in us-east-1, diagnostics in us-west-2; content can be used for service improvement and model training unless opt-out is used (Organizations plus IDE).
- Pro: data is stored in the profile region (e.g., us-east-1 or eu-central-1); content is not used for service improvement or FM training; IP indemnity for output and optional data encryption with the client's KMS key are available for some functions.
Legal terms (official links):
- AWS Service Terms: https://aws.amazon.com/legal/service-terms/
- Privacy/data details: https://aws.amazon.com/privacy/ and the CodeWhisperer documentation: https://docs.aws.amazon.com/codewhisperer/latest/userguide/ (see the "Sharing your data with AWS" section and opt-out settings)
- Amazon Q Developer FAQs (including data location, content usage, code ownership): https://aws.amazon.com/q/developer/faqs/
- Amazon Q Developer service improvement: https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/service-improvement.html
- Opt out of data sharing in the IDE and command line: https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/opt-out-IDE.html
- Cross-region inference in Amazon Q Developer: https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/cross-region-processing.html
- Data encryption in Amazon Q Developer: https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/data-encryption.html
- AI services opt-out policies (Organizations): https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_ai-opt-out_all.html
- AWS Service Terms (AI Services / Amazon Q / IP indemnity): https://aws.amazon.com/service-terms/
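The organization-level opt-out referenced above is implemented with an AWS Organizations "AI services opt-out" policy. The sketch below, written against boto3, follows the policy format described in the linked AWS Organizations guide; the organization root ID is a placeholder, and the AISERVICES_OPT_OUT_POLICY policy type must already be enabled for the organization.

```python
# Hedged sketch: creating and attaching an AWS Organizations AI services
# opt-out policy with boto3, so member-account content is not used for
# AI service improvement. Root/OU ID is a placeholder; the policy type must
# be enabled on the organization root beforehand.
import json
import boto3

org = boto3.client("organizations")

opt_out_policy = {
    "services": {
        "default": {                              # "default" covers all AI services
            "opt_out_policy": {"@@assign": "optOut"}
        }
    }
}

policy = org.create_policy(
    Name="ai-services-opt-out-all",
    Description="Opt out of content use for AI service improvement",
    Type="AISERVICES_OPT_OUT_POLICY",
    Content=json.dumps(opt_out_policy),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",                   # placeholder organization root or OU ID
)
```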
19
Roo Code Assistant Programmer Data is processed primarily in the United States (AWS, Snowflake, PostHog), with selected subprocessors in Europe (Stripe, Vercel). The standard terms do not guarantee data residency, but residency can be negotiated under an Entity Agreement. The Terms allow international data transfers and require GDPR compliance, but do not claim any certification or fully 'certified' compliance; according to the official Trust Center, full GDPR compliance is currently listed as "In Progress." Roo Code is SOC 2 Type II certified, while GDPR compliance remains in "Pending" status. Training models on source code and user prompts is strictly prohibited; only anonymized derived data is used to improve the services. The standard terms allow the use of derived data, but an Entity Agreement can exclude the collection of any data entirely. The Terms of Use do not explicitly grant the user full copyright to the generated output: the user retains ownership of the input data (§6.3), while the generated output is defined as the user's Confidential Information (§10.1). Copyright transfers or specific IP protection terms are negotiated as part of the overarching Entity Agreement. The service is provided on an "AS IS" basis and liability is limited to $100 or the amount paid in the last 12 months. Liability limits are standard unless the Entity Agreement specifies different indemnity terms. - A Data Processing Addendum (Schedule 1) applies to data processing subject to GDPR; Roo Code may transfer data outside the EU (including to the US). - Outputs generated by the Software are treated as Confidential Information in the Terms of Service – no clear transfer of copyright is provided in the standard TOS. - Roo Code reserves the right to use Derived Data to improve its products. - Users may not submit personal/sensitive data without Roo Code's written consent (§4.7). https://roocode.com/terms https://roocode.com/privacy https://roocode.com/legal/subprocessors https://trust.roocode.com
20
OpenClaw Assistant Programmer Depends on the server location selected by the user (e.g., a local PC or Oracle Free Tier). A choice of 138+ global locations or the customer's on-premises infrastructure. No built-in compliance; the user, as data controller, is responsible for the compliance of processing. Full compliance is ensured by the hosting provider's DPA and SOC 2/HIPAA certifications for the infrastructure. OpenClaw does not train models; the connected API (OpenAI/Anthropic) does not use API data for training by default. No training on user data; guaranteed by the enterprise contracts of the LLM providers (e.g., Azure OpenAI). The rights to the output belong to the user (according to the API provider's ToS); the input remains with the user. Ownership of output and retention of rights to input are confirmed in dedicated business agreements. Liability is fully disclaimed; the software is provided under the MIT License "as is". Limited to the hosting SLA; no responsibility for the substantive correctness of generated AI content. Requires the user's own API keys; security risk in the absence of sandboxing and gateway isolation. https://openclaw.ai/
21
Continue Assistant Programmer Data is processed locally; telemetry goes to PostHog (USA) and code to the user's chosen API provider. The Mission Control platform can be hosted in the customer's VPC or on-premises, which guarantees full data sovereignty. High; the local-first architecture ensures that code does not leave the machine without explicit configuration of external services. Full compliance through a DPA and support for isolated air-gapped environments and SOC 2 standards. No; the tool provider does not collect or store users' source code for model training. No; these plans default to a strict zero-data-retention policy. Input rights are retained; the provider makes no claims on output under the Apache 2.0 license. Rights to input are retained; intellectual property in the output is usually guaranteed in the contract. Complete disclaimer of the provider's liability under the terms of the open-source license ("AS IS"). No publicly available terms; typically negotiated in an individual business agreement. Local-first model requiring the user's own API keys or local models (e.g., Ollama); no vendor lock-in. https://www.continue.dev/privacy https://github.com/continuedev/continue/blob/main/LICENSE
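A minimal illustration of the local-first flow described above, assuming an Ollama server on its default port and a locally pulled model (both placeholders); the prompt, including any code in it, never leaves the machine unless an external provider is configured:

```python
import requests  # assumes a local Ollama server at http://localhost:11434

# Sketch only: send a code-related prompt to a local model, so no source
# code leaves the developer's machine.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codellama",  # placeholder: any locally pulled model
        "prompt": "Explain what this function does:\n\ndef add(a, b):\n    return a + b",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```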
22
Sourcegraph Cody AI Code Analysis Cloud plan users' data (including Free/Pro) is generally stored and processed in the United States; employees, contractors, and subprocessors may also process data outside the U.S. There is no publicly disclosed option to select or limit the processing region for Free/Consumer plans. Self-hosted: data (including code, prompts, and results) remains on the customer's infrastructure; Sourcegraph has no access except for support purposes. Enterprise Cloud (single-tenant): customer data may be stored and processed in various locations, with Sourcegraph stating it can host in specific regions to meet data residency requirements; no comprehensive list of regions is publicly available. Cloud plans are subject to a general Privacy Policy that describes the legal basis for processing, the rights of EU/EEA users (access, rectification, restriction, erasure, etc.), and transfers outside the EEA. Sourcegraph is CCPA compliant and operates in accordance with GDPR data protection regulations, but does not have a separate "GDPR certification." Cody Enterprise is declared GDPR compliant (Sourcegraph indicates GDPR, SOC 2 Type II, and CCPA compliance in Enterprise product materials). No model training on Enterprise customer data; there is no separate, plan-specific declaration one way or the other for Free/Pro plans. The Privacy Policy allows the use of personal data to, among other things, 'operate, maintain, improve, and provide' the Services, but does not explicitly state whether data from Free/Pro plans is used for model training. Cody Enterprise: Sourcegraph declares that customer data is NOT used for model training and offers a Zero Data Retention option. For the cloud plan: no clear, separate provision in public documentation granting users full copyright in outputs. Sourcegraph declares that the customer retains ownership of code, prompts, and results ('all code, prompts, and results are owned by the enterprise'; 'You retain ownership of all Sourcegraph inputs and outputs') in both the self-hosted and Enterprise Cloud variants. The available Enterprise license for the Software includes classic 'AS IS' provisions and a broad exclusion of liability of authors/copyright holders for damages resulting from use of the Software, but this does not directly determine liability for substantive errors in the output of cloud services for Free/Pro users. The publicly available excerpts of the AI Terms and the Enterprise materials primarily describe unlimited damages protection (uncapped indemnity) for claims of IP infringement by AI outputs ('Sourcegraph will indemnify you against any claims alleging that your use of AI Tools or any Outputs infringe third-party intellectual property', 'Uncapped indemnity…'). Enterprise declares: no model training on client data, zero retention for LLM inference (partner models do not retain code/prompts/results beyond the time necessary to generate a response), and full IP indemnity for code generated by Sourcegraph; Cody Enterprise is described as SOC 2 Type II certified, CCPA compliant, and operating in accordance with GDPR. - Zero retention is described in the documents at the LLM model/partner level (no retention on the model providers' side, processing only for the response); a separate Privacy Policy provides for standard logging/analytics on the Sourcegraph side (e.g., usage data, access logs) with appropriate retention periods. 
- In the Free/Pro plans, data is processed in the Sourcegraph cloud; the Cody FAQ documentation indicates that in self-hosted mode, code and prompt snippets are sent to an external LLM provider (Anthropic or OpenAI by default), which means third parties participate in prompt processing, albeit under contractual agreements as sub-processors. - Cross-repository (multi-repo) context features are described in product documentation and marketing, but do not introduce separate legal clauses beyond the scope described above (relevant code snippets are sent as context). ToS Index: https://sourcegraph.com/terms - Privacy Policy: https://sourcegraph.com/terms/privacy - AI Terms (AI Tools, Outputs, IP indemnity): https://sourcegraph.com/terms/ai-terms - Data Processing Addendum (DPA, including SCC): official PDF of "Sourcegraph DPA": https://info.sourcegraph.com/hubfs/Sourcegraph%20DPA.pdf - List of sub‑processors: https://sourcegraph.com/terms/subprocessors
23
Greptile Chat and AI Bot AI Code Analysis USA (AWS, Azure, DynamoDB for logs). Customer location (VPC/on-premises). Yes, while maintaining PII protection principles and standard contractual clauses. Yes, full support and a custom DPA, including the ability to disable logs. Yes, for analytics and service improvement purposes, with an opt-out option. By default, no, unless the customer consents. The customer owns the code and results; the rights belong to the customer (subject to the feedback provisions). The customer owns the code and results; the rights belong to the customer (subject to the feedback provisions). Liability is fully disclaimed on the provider's side; no guarantee of AI response accuracy. Limited in accordance with the individual agreement. - Free for open source and a 14-day trial. - Option to enable a 100% private mode. https://www.greptile.com/terms-of-service https://www.greptile.com/security
24
HuggingFace Hub AI infrastructure Repositories are hosted in the US region; there is no region selection option. Public repositories have a 'best effort' policy – documentation indicates up to about 5 TB for significant projects. Team/Enterprise plans offer Storage Regions, allowing you to choose the region where your repositories are stored (US or EU; APAC 'coming soon'). Enterprise also offers increased storage pools (e.g., 200 TB + 1 TB per seat for Enterprise; 500 TB + 1 TB per seat for Enterprise Plus). Subject to applicable data protection laws, including the GDPR for EU users (Privacy Policy); the free plan does not guarantee additional contractual mechanisms (a DPA) or data residency. Enterprise plans (including Enterprise Hub) provide a DPA and additional GDPR compliance mechanisms: the Enterprise Hub Supplemental Terms stipulate that data processed under this service is handled in accordance with the Hugging Face Data Processing Agreement, and the security documentation indicates that the GDPR DPA is available within the Enterprise Hub subscription. For the Serverless Inference API, Inference Providers, and Inference Endpoints, Hugging Face declares that it does not use request payloads or tokens to train models (a usage sketch follows this entry). In the Inference API, user data is not stored for training; tokens may be cached briefly, and logs (without additional data) are retained for up to 30 days for debugging purposes. The security documents for the Inference API/Endpoints declare no use of payloads for training and no storage of request/response content, and the DPA defines Hugging Face as a processor acting only on the client's instructions. You retain ownership of the Content you upload to the Hub; Hugging Face does not own the models or data you upload. You grant Hugging Face the license necessary to host and provide the Services and, for Content you make publicly available, a license allowing other users to use the Content consistent with the functionality of the Service and the selected repository license. The customer retains rights to models and data uploaded to the Hub (including within the Enterprise plan), and HF does not become the owner of this Content. Ownership and licensing issues in Enterprise agreements may be further clarified contractually (Supplemental Terms, DPA), but publicly available documents do not contain separate rules regarding copyright in model output for Enterprise plans. Limitation of liability in the Terms: for free services the liability limit may be up to $50 (per the clause in the Terms that sets the limit for free services); furthermore, the services are provided "as is" without warranty. The general liability limit is the amount the customer has paid in the previous 12 months (excluding damages resulting from fraud, gross negligence, criminal acts, etc.). - Public repositories: 'best effort' with a typical limit of up to 5 TB for a free user/org; private repositories have separate limits (e.g., 100 GB for free, 1 TB/seat for Team/Enterprise) and paid extensions. - Recommended technical limits (docs): fewer than ~100k files per repository and ~10k entries per folder; very large files should be split into chunks (the documentation suggests these limits/practices; the maximum size of a single file is limited in practice, and chunking is recommended). - Team/Enterprise offers, among others, Storage Regions (US/EU, APAC 'coming soon'), advanced security features, SOC 2 Type 2, a DPA (as part of the Enterprise plan), and extended storage pools (e.g., 200 TB + 1 TB/seat for Enterprise). 
- For Enterprise, Supplemental Terms and a Data Processing Addendum (DPA) are available, as well as data residency options (US/EU/APAC). https://huggingface.co/terms-of-service https://huggingface.co/privacy https://huggingface.co/docs/hub/security https://huggingface.co/docs/hub/storage-limits https://huggingface.co/docs/hub/storage-regions https://huggingface.co/docs/hub/enterprise-hub https://huggingface.co/docs/hub/api-inference/security https://huggingface.co/docs/inference-endpoints/security https://huggingface.co/docs/inference-providers/security https://cdn-media.huggingface.co/landing/assets/Data+Processing+Agreement.pdf https://cdn-media.huggingface.co/landing/assets/Supplemental+Terms+-+Enterprise+Hub.pdf
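A minimal sketch of calling a hosted model through the `huggingface_hub` client, to which the no-training declaration above applies; the model ID and token are placeholders:

```python
from huggingface_hub import InferenceClient

# Sketch only: serverless inference call; per the security docs referenced
# above, request payloads are not used for model training.
client = InferenceClient(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder model ID
    token="hf_xxx",                              # placeholder access token
)
answer = client.text_generation(
    "Summarize the GDPR in one sentence.",
    max_new_tokens=60,
)
print(answer)
```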
25
Google Cloud AI (Vertex AI) AI infrastructure Global; no specific location is guaranteed. Data may circulate throughout Google's infrastructure for optimization purposes. 42 operational regions (as of 2025), with 49 planned by the end of 2025; strict residency in the US or EU is possible. Standard. Google acts as a controller; data may be subject to human review. Fully, via the Cloud Data Processing Addendum (CDPA); Google acts as a processor, and a formal opt-in is required in the console. Possible. Data is used to improve services and models, unless local regulations require otherwise. None. The "Training Restriction" in Section 17 of the Service Specific Terms applies: Google does not train underlying models on Customer Data. You retain your rights but grant Google a broad license to use and modify the content. Full transfer: Output is defined as "Customer Data" and belongs solely to the customer. Liability is disclaimed; services are provided "as is" and the user assumes the full risk of errors and infringements of third-party rights. A two-layer indemnity system (covering both training data and generated output). Vertex AI offers data residency controls and retention minimization options (e.g., disabling caching, 'zero data retention'). The data training policy varies depending on whether the customer uses Google's public models or Vertex AI services under an enterprise agreement; check the specific clauses (Training Restriction) in the Service Terms and the customer agreement. Grounding with Google Search enforces 30 days of data retention; Zero Data Retention requires disabling Abuse Monitoring. https://cloud.google.com/vertex-ai/docs/generative-ai/overview https://cloud.google.com/vertex-ai/docs/generative-ai/data-residency https://cloud.google.com/vertex-ai/docs/generative-ai/customer-data-retention https://cloud.google.com/terms/service-terms https://cloud.google.com/terms/data-processing-addendum
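A minimal sketch of the region-pinning control mentioned above, using the Vertex AI Python SDK; the project ID, region, and model name are placeholders:

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Sketch only: pinning the processing location at init time is the primary
# data-residency control; requests are then served from the chosen region,
# subject to the service's documented commitments.
vertexai.init(project="my-gcp-project", location="europe-west4")  # placeholders

model = GenerativeModel("gemini-1.5-pro")  # placeholder model name
response = model.generate_content("Draft a one-line data-retention notice.")
print(response.text)
```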
26
OVHcloud AI Endpoints AI infrastructure Gravelines (France) and Beauharnois (Canada). Same as the free plan, or optionally Private Cloud (France/Germany/UK/Canada) or On-Premises (OPCP). Full compliance via the standard DPA; ISO 27001, 27017, 27018, 27701; HDS from 28/01/2026. All of the above plus SecNumCloud-ready (full AI qualification by the end of 2026); an HDS Addendum option. Absolutely none: the Zero Retention policy prohibits storing data for any purpose other than processing the request (inference). Absolutely none, unless the customer purchases AI Training to build their own models (full isolation). The customer retains ownership of the content/results it submits and generates, subject to the terms of service and any specific agreements. The client retains rights to the results generated from its input; corporate agreements may include tailored intellectual property clauses. OVHcloud's Terms of Service limit liability for service results/errors; users are responsible for verifying results before using them. SLA credits (10-25%), up to 30% of the invoice amount; liability capped at 6-12 months of contract value. - Zero retention is declared for the default SaaS AI Endpoints in OVHcloud product materials; separation and additional guarantees are provided for enterprise/private deployments. - Certifications and compliance: OVHcloud publicly lists ISO/IEC 27001, 27017, 27018, and 27701 compliance and SOC 1/2/3 reports; SecNumCloud and HDS are available for selected offers. - Licenses/IP and liability: general terms and conditions are set out in the Terms of Service; specific IP and liability provisions depend on the plan and the enterprise agreement. https://www.ovhcloud.com/en/public-cloud/ai-endpoints/ https://www.ovhcloud.com/en/terms-of-service/ https://www.ovhcloud.com/en/about-us/data-sovereignty/ https://us.ovhcloud.com/legal/service-specific-terms/ https://us.ovhcloud.com/legal/data-processing-agreement/ https://www.ovhcloud.com/en/personal-data-protection/legal-privacy-security/
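A minimal sketch of calling one of these zero-retention endpoints, assuming the OpenAI-compatible API that AI Endpoints exposes; the base URL, token, and model name below are placeholders to be taken from the AI Endpoints catalogue, not verified values:

```python
from openai import OpenAI

# Sketch only: standard chat call against an OpenAI-compatible endpoint.
# All identifiers below are placeholders, not verified OVHcloud values.
client = OpenAI(
    base_url="https://<endpoint-from-catalogue>/v1",
    api_key="<ai-endpoints-access-token>",
)
chat = client.chat.completions.create(
    model="<model-name-from-catalogue>",
    messages=[{"role": "user", "content": "Summarize the zero-retention policy."}],
)
print(chat.choices[0].message.content)
```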
27
Microsoft Azure AI AI infrastructure The region selected by the user when creating the resource, with guaranteed residency in the EU Data Boundary (Phase 2) for EEA customers. Over 60 regions with full EU Data Boundary (Phase 3) support and Advanced Data Residency (ADR) options. Azure provides tools and commitments to enable GDPR compliance. Full compliance through the Microsoft Online Services DPA; Microsoft acts as a processor. Full DPA compliance, support for SOC 2 Type 2 audits, and the ability to sign BAAs (e.g., HIPAA). Microsoft does not use customer data from Azure services (including Azure AI/Azure OpenAI services) to train public models or improve global models without explicit customer consent. Customer data is logically isolated and is not used to improve global models or other customers' models. The customer retains rights to the input and receives rights to the output as "Customer Data" in accordance with the Product Terms. Full transfer of rights; Customer Copyright Commitment (CCC) protection provides a defense against third-party IP claims. Standard disclaimers, warranty limitations, and liability limits described in the terms apply (service provided "AS IS"). Liability is limited to the amount paid for the service within 12 months; negotiable under an Enterprise Agreement (EA). Azure offers the widest selection of regions (60+), EU Data Boundary options for select products, and Advanced Data Residency (ADR) capabilities; ADR is an add-on for customers with specific data localization requirements (more commonly described in the context of Microsoft 365, but Azure has its own data residency options). AI services (e.g., Azure OpenAI, Foundry) have additional data processing policies and separate terms (e.g., "do not use for training" for customer data, the DPA, and the Product Terms). Abuse monitoring includes 30-day data retention, which enterprise customers can disable (Modified Abuse Monitoring). https://azure.microsoft.com/en-us/support/legal/ https://www.microsoft.com/en-us/licensing/how-to-buy/microsoft-customer-agreement https://www.microsoft.com/licensing/docs/view/microsoft-products-and-services-data-protection-addendum-dpa https://www.microsoft.com/licensing/terms/product/PrivacyandSecurityTerms/all
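A minimal sketch of calling an Azure OpenAI deployment, where the resource's region (and thus its EU Data Boundary scope) was fixed when the resource was created; the endpoint, key, API version, and deployment name are placeholders:

```python
from openai import AzureOpenAI

# Sketch only: requests go to the regional endpoint of the resource that
# was created in the chosen region; all identifiers below are placeholders.
client = AzureOpenAI(
    azure_endpoint="https://<my-resource>.openai.azure.com",
    api_key="<azure-openai-key>",
    api_version="2024-06-01",
)
chat = client.chat.completions.create(
    model="<my-deployment-name>",  # the deployment name, not the base model ID
    messages=[{"role": "user", "content": "Which region serves this request?"}],
)
print(chat.choices[0].message.content)
```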
28
Replicate AI infrastructure Mainly US (AWS/Google Cloud); after 11/2025, data can pass through any Cloudflare edge node worldwide, without any residency guarantee. Ability to enforce EU residency (e.g., Frankfurt, Germany) on Cloudflare Workers AI infrastructure and dedicated AWS instances. Compliance by declaration only; no public DPA form for consumers; transfers to the US are based on SCCs. Full contractual compliance by signing a dedicated DPA (Data Processing Addendum) with auditable technical and organizational measures (TOMs). The user grants Replicate a non-exclusive, royalty-free license to use, store, and process Customer Data to the extent necessary to provide the Services, including model training and the creation of so-called Customer Derivative Models. Default: yes (Section 5.2 gives Replicate the right to use the data for training and derivative purposes). Rights to the output transfer to the user unless third-party model licensing restrictions apply. Specific models (e.g., Flux Dev) are licensed for commercial use only when used on the platform. Full transfer of copyright, with no license back to Replicate for purposes other than technical support. Services and output are provided "AS IS," without warranty. Liability is limited to the lower of 6 months' worth of payments or $100; no guarantee of results (as is). Replicate assumes no responsibility for model errors or the consequences of using output. Limitations are as in the general Terms and Conditions (warranty exclusions, liability limited to the amounts paid in 6 months or USD 100). Replicate was acquired by Cloudflare in November 2025 and is being integrated with Cloudflare solutions (including Workers AI / edge inference). The billing model includes both pay-as-you-go (usage-based billing) and prepaid/payment-in-arrears models described in the Terms. Note the Additional Terms for third-party models (e.g., Stability AI, Ideogram, Black Forest Labs Flux), which restrict distribution, derivatives, and certain special uses. A specific revenue threshold ($1 million) applies to Stability AI 3.5 models. https://replicate.com/terms https://replicate.com/privacy
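A minimal sketch of a call through the official Replicate Python client, to which the Customer Data license described above applies; the model identifier is a public catalogue example and a placeholder:

```python
import replicate  # reads REPLICATE_API_TOKEN from the environment

# Sketch only: everything passed as input falls under the Customer Data
# licence in the Terms, including potential use for training/derivative
# purposes unless otherwise agreed.
output = replicate.run(
    "meta/meta-llama-3-8b-instruct",  # placeholder public model
    input={"prompt": "One-sentence summary of data-retention risks."},
)
print("".join(output))  # language models return an iterator of text chunks
```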
29
Together AI AI infrastructure Defaults to the United States; the platform uses a distributed network of over 25 data centers, with the ability to select EU regions for specific API services. Dedicated data centers in the UK, France, Spain, Portugal, Italy, and Iceland; a VPC option on AWS/GCP/Azure. Basic compliance ensured by the Privacy Policy and Standard Contractual Clauses (SCCs) when transferring data outside the EEA. Full contractual compliance through a DPA, EU data residency options, and Sovereign AI deployments within the customer's infrastructure. No model training on user data without explicit consent (opt-in); a Zero Data Retention (ZDR) feature is available. A complete ban on using data to train base models, guaranteed by provisions in the DPA and environment isolation in the VPC. You own all rights to the content you enter (Input) and the results you generate (Output) in accordance with Section 7.2 of the ToS. The Terms of Service (Section 7.2) state that the customer owns the rights to Your Content and Output; additional ownership provisions may be included in the Order Form/Addendum. Liability is limited to 12 months of total fees; the service is provided "as is" with no guarantee of results. Limited to 12 months of fees; an SLA (99.9%) and indemnity of up to $1,000,000 are available under negotiated terms. Together offers configurable Zero Data Retention (ZDR) settings; data residency and dedicated deployments are available for enterprise customers (VPC/custom deployments); the company declares a security program compliant with industry standards. The website mentions SOC 2 and data residency capabilities in Europe (UK, Spain, France). The platform is SOC 2 Type II certified; Together Turbo optimization is claimed to provide up to 75% faster inference than PyTorch. https://www.together.ai/terms-of-service https://www.together.ai/privacy https://www.together.ai/enterprise
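A minimal sketch using the official Together Python SDK; ZDR and training opt-in are account/endpoint settings rather than request flags, so the call itself is a standard chat completion. The API key and model ID are placeholders:

```python
from together import Together  # official Together Python SDK

# Sketch only: standard chat completion; retention and training behaviour
# are governed by account-level settings and the applicable terms, not by
# anything in this request.
client = Together(api_key="<together-api-key>")  # placeholder key
resp = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",  # placeholder model ID
    messages=[{"role": "user", "content": "Is this request retained?"}],
)
print(resp.choices[0].message.content)
```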