WithPCI.com

AI Acceptable Use Policy Template

Document Information

Company Name: [Company Name]
Effective Date: [Date]
Version: [Version Number, e.g., 1.0]
Policy Owner: [CISO/IT Director or Chief Compliance Officer]
Document Classification: Confidential / Internal Use Only
Parent Policy: Information Security Policy

Purpose

This policy establishes guidelines and acceptable-use parameters for utilizing Artificial Intelligence (AI) and Large Language Model (LLM) technologies within [Company Name]'s operations. The purpose is to enable responsible innovation and efficiency gains while mitigating risks to data confidentiality, security, accuracy, intellectual property, ethics, and regulatory compliance, including specific requirements under PCI DSS v4.0.1.


Scope

This policy applies to all employees, contractors, consultants, temporary staff, and any other individuals or processes utilizing AI/LLM tools while performing work for or on behalf of [Company Name]. This includes the use of publicly available AI tools (e.g., ChatGPT, Bard, Midjourney), company-provided AI tools, AI features embedded within approved software, and internally developed AI applications. It covers all interactions with AI/LLMs, including data input, prompt engineering, and the use of generated output, especially concerning sensitive company data such as Cardholder Data (CHD), Personally Identifiable Information (PII), Intellectual Property (IP), and confidential business information.


Roles and Responsibilities

Executive Management: Provide oversight for AI strategy and risk management; Establish acceptable risk tolerance for AI use; Ensure alignment with business objectives and ethical principles.
CISO / IT Director / Compliance Officer: Own and approve this policy; Oversee the governance framework for AI use; Ensure compliance with relevant regulations and security standards; Manage the AI tool approval process.
Information Security Team: Assess security risks of AI tools; Define security requirements for approved tools; Monitor usage for policy violations and security incidents; Provide guidance on secure AI interaction.
Legal Counsel: Advise on legal risks, compliance obligations (privacy laws, IP rights), and contractual terms related to AI tools; Review AI tool terms of service.
IT Operations: Manage deployment and access controls for company-approved AI tools; Assist in monitoring AI usage on company networks/devices.
AI Governance Committee (optional, or assigned to an existing committee): Review and approve requests for using new AI tools or use cases; Maintain the list of approved AI tools; Evaluate risks associated with specific AI applications.
Business Unit Leaders / Department Heads: Identify potential AI use cases within their teams; Ensure team members are trained on this policy; Request approval for new AI tools/uses required by their teams.
All Users (Employees, Contractors, etc.): Comply with this policy; Use AI tools responsibly and ethically; Protect sensitive company data during AI interactions; Complete required training; Report concerns or misuse of AI tools.

Policy Requirements

1. General Principles of AI Use

  • Accountability: Users are responsible for their use of AI tools and the outputs generated while performing company duties.
  • Compliance: All use of AI must comply with applicable laws, regulations (including data privacy laws like GDPR, CCPA), contractual obligations, and all other company policies.
  • Transparency: Be transparent about the use of AI in interactions or outputs where appropriate and required (e.g., customer service chatbots must identify themselves as AI).
  • Ethical Use: AI tools must not be used for unethical purposes, including generating discriminatory content, spreading misinformation, infringing on rights, or engaging in malicious activities.

2. Data Handling & Confidentiality in AI Interactions

  • Prohibited Data Input: Strictly prohibit entering, uploading, or pasting any of the following sensitive data types into public or non-approved AI/LLM tools:
    • Cardholder Data (CHD) - including Primary Account Number (PAN), expiration date, cardholder name.
    • Sensitive Authentication Data (SAD) - including CVV2, track data, PINs (Note: SAD must never be stored post-authorization per PCI DSS).
    • Personally Identifiable Information (PII) of customers, employees, or partners (e.g., names combined with SSN, driver's license, health info, financial account numbers).
    • Protected Health Information (PHI).
    • Non-public financial data of the company or its clients.
    • Proprietary source code, algorithms, or trade secrets.
    • Confidential business strategies, M&A information, legal documents under privilege.
    • Internal security configurations, vulnerabilities, or incident details.
    • Any data classified as "Restricted" or "Confidential" under the Data Classification scheme, unless using an explicitly approved internal tool designed and secured for that purpose.
  • Secure Channels: Interactions with approved internal AI tools that may process potentially sensitive internal data must occur over secure, encrypted channels and within secured company environments.
  • Anonymization/Generalization: When seeking assistance from AI, generalize queries and remove specific sensitive details whenever possible, even for internal or less sensitive data types.
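As an illustrative aid only (not a substitute for approved DLP tooling), the prohibition on pasting PAN data can be enforced in internal integrations with a pre-submission filter. The sketch below is a hypothetical Python helper; the function name, regex thresholds, and placeholder text are assumptions for illustration. It flags 13-19 digit runs that pass the Luhn checksum and redacts them before a prompt leaves the company boundary.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum distinguishes plausible PANs from random digit runs."""
    total, double = 0, False
    for ch in reversed(digits):
        n = int(ch)
        if double:
            n *= 2
            if n > 9:
                n -= 9
        total += n
        double = not double
    return total % 10 == 0

# Candidate PANs: 13-19 digits, optionally separated by spaces or hyphens.
PAN_CANDIDATE = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def redact_pans(text: str) -> str:
    """Replace Luhn-valid digit sequences with a placeholder before the
    text is submitted to any AI tool."""
    def _sub(match):
        digits = re.sub(r"[ -]", "", match.group(0))
        return "[REDACTED-PAN]" if luhn_valid(digits) else match.group(0)
    return PAN_CANDIDATE.sub(_sub, text)
```

In a real deployment this logic would sit in a proxy or gateway in front of approved AI endpoints and would need to cover PII patterns as well; a regex filter alone does not satisfy any PCI DSS requirement.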

3. Approved AI Tools and Use Cases

  • Approval Required: The use of any AI/LLM tool (including browser extensions, embedded features, standalone applications) for company purposes, beyond general web searches on public data, requires formal approval. Approval may be granted for specific tools, specific use cases, or both.
  • Vetting Process: Requests for new AI tools or significant new use cases must be submitted to the [AI Governance Committee or designated approval body, e.g., Information Security]. The vetting process will include:
    • Security assessment (data handling practices, encryption, access controls, vendor security posture).
    • Legal/Compliance review (terms of service, data privacy implications, IP rights).
    • Risk assessment (potential for data leakage, inaccurate output, bias).
    • Alignment with business need and ethical principles.
  • Approved List: Maintain and communicate a list of approved AI tools and potentially restricted or explicitly prohibited tools. Use of tools not on the approved list for company work is forbidden without specific exception.
  • PCI DSS Consideration: AI tools intended to process, store, transmit, or analyze CHD, or those used for security functions within the CDE (e.g., AI-driven threat detection), require rigorous assessment and may necessitate validation via the Customized Approach under PCI DSS v4.x. These tools must meet all relevant PCI DSS requirements.

4. Output Usage and Verification

  • Verification Responsibility: Users are solely responsible for verifying the accuracy, appropriateness, and security of any output generated by AI tools before using it for company purposes. AI outputs may contain errors, biases, or outdated information.
  • Critical Decisions: Do not rely solely on AI output for critical decisions, financial reporting, legal advice, medical assessments, or actions with significant consequences. AI should be used as a support tool, with human oversight and judgment applied.
  • Confidentiality of Output: Treat AI-generated output containing potentially sensitive derivative information with the same level of confidentiality as the input data classification would require.
  • Plagiarism and Originality: Ensure that AI-generated content used externally is appropriately attributed (if required by terms or best practice) and does not constitute plagiarism. Review outputs for originality where necessary.

5. Intellectual Property (IP) and Ownership

  • Input Data: Do not input third-party copyrighted material or confidential information belonging to others into AI tools without proper authorization.
  • Output Ownership: Understand the terms of service of any AI tool regarding the ownership of generated content. Content generated using company resources or related to company business is generally considered company property, subject to the tool's licensing terms.
  • Infringement Risk: Be aware that AI-generated content may inadvertently infringe on existing copyrights or patents. Use generated content responsibly and seek legal guidance if using it in high-risk contexts (e.g., product design, core marketing materials).

6. Security of AI Interactions

  • Secure Access: Access company-approved or internal AI tools using assigned unique credentials and MFA where required.
  • Secure Environments: Interact with AI tools only from company-managed devices or approved BYOD devices that meet company security standards. Avoid using public or untrusted computers or networks for sensitive AI interactions.
  • Prompt Injection / Malicious Use: Be aware of risks like prompt injection attacks. Do not attempt to bypass security filters or use AI tools to generate malicious code, exploit vulnerabilities, or conduct harmful activities.

7. Monitoring, Auditing, and Data Retention

  • Monitoring: [Company Name] reserves the right to monitor the use of AI tools on company networks and devices for security, compliance, and policy adherence purposes. This may include logging prompts entered into company-managed AI tools.
  • Data Retention: Avoid storing sensitive AI prompts or outputs locally or in cloud storage unless necessary for a documented business purpose and permitted by the Data Retention Policy. Securely delete or anonymize interaction logs according to data retention schedules. Public AI tool chat histories should generally be disabled or regularly cleared.
  • Auditing: Usage logs and compliance with this policy may be subject to periodic internal or external audits.
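Where interaction logs from company-managed AI tools are retained, the deletion schedule described above can be automated. The following sketch is illustrative: the 90-day figure, the `*.log` naming convention, and the function name are assumptions, and the actual retention period must come from your Data Retention Policy.

```python
import time
from pathlib import Path

RETENTION_DAYS = 90  # assumption: substitute the period from your Data Retention Policy

def purge_old_logs(log_dir: str, retention_days: int = RETENTION_DAYS) -> list:
    """Delete *.log files whose last-modified time exceeds the retention
    window, returning the names of removed files for the audit trail."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for path in Path(log_dir).glob("*.log"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return sorted(removed)
```

Returning the list of deleted files supports the auditing requirement above: the purge job's output can itself be logged as evidence that the schedule is being followed.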

8. Training and Awareness

  • All employees and relevant contractors must complete mandatory training on this AI Acceptable Use Policy upon hire/onboarding and at least annually thereafter.
  • Training shall cover:
    • Risks associated with AI use (data leakage, inaccuracy, bias, security, IP).
    • Policy requirements, especially regarding sensitive data input.
    • Approved tools and use cases.
    • User responsibilities for verification and secure usage.
    • Reporting procedures for concerns or misuse.

Enforcement

  • Personnel found to have violated this policy may be subject to disciplinary action, up to and including termination of employment or contract, in accordance with established HR policies and contractual agreements. Violations may also incur civil or criminal penalties depending on the nature of the infraction.
  • Access to AI tools or company systems may be revoked due to policy violations.
  • Any vendor, consultant, or contractor found to have violated this policy may be subject to sanctions up to and including removal of access rights, termination of contract(s), and related civil or criminal penalties.
  • Exceptions to this policy require a documented business justification, formal risk assessment, implementation of compensating controls (if applicable), and written approval from the CISO/IT Director and potentially Legal/Executive Management. Exceptions must be time-bound and reviewed regularly.

Revision History

Version Date Author Change Details
1.0 [Date] [Author Name] Initial policy release
[Ver #] [Date] [Author Name] [Summary of changes]

Approval

Name: [Exec Name]  Title: [Executive Title, e.g., CEO]  Signature: ________________  Date: [Date]
Name: [CISO Name]  Title: [CISO/IT Director Title]  Signature: ________________  Date: [Date]

Appendix A: Examples of Acceptable and Unacceptable AI Use

Acceptable: Brainstorming marketing slogans using an approved public LLM.
Unacceptable: Pasting customer email content containing PII into ChatGPT to draft a reply.

Acceptable: Summarizing publicly available research papers.
Unacceptable: Uploading a confidential company financial report to an online AI analysis tool.

Acceptable: Generating code snippets for non-sensitive utility functions (review required).
Unacceptable: Inputting proprietary source code into a public AI code generation tool for refactoring.

Acceptable: Drafting generic internal communication templates.
Unacceptable: Using an AI tool to generate performance reviews containing specific employee feedback data.

Acceptable: Asking general questions about programming concepts.
Unacceptable: Asking an LLM "What are the security flaws in [Company Name]'s firewall configuration?"

Acceptable: Using company-approved internal AI for analyzing anonymized sales data.
Unacceptable: Inputting Cardholder Data (PAN, CVV2) into any AI tool, public or internal, unless explicitly approved and secured for PCI DSS compliance (e.g., tokenized input to a validated system).

Acceptable: Generating presentation outlines based on public information.
Unacceptable: Creating deepfake images or voice clones of colleagues or customers without consent.

Appendix B: AI Tool Approval Request - Key Information

Requests for using new AI tools should include:

  • Tool Name & Vendor:
  • Intended Use Case(s): Specific tasks and business justification.
  • Data Involved: Types of data expected to be input or potentially exposed.
  • Public or Private Tool? Is it a generally available web service or a private/enterprise instance?
  • Terms of Service/Privacy Policy Link:
  • Known Security Features: (e.g., encryption, access controls, data retention policies of the tool).
  • Potential Risks Identified: (e.g., data privacy, accuracy, IP concerns).
  • Requesting Department/User:
  • Proposed Controls/Usage Guidelines: How will risks be mitigated?

Appendix C: PCI DSS 4.0.1 Considerations for AI Use

  • Data Input: Never input PAN or SAD into unapproved AI tools. AI tools used within the CDE or processing CHD must meet all applicable PCI DSS requirements (Reqs 3, 4, 6, 8, 10, 11, 12).
  • Secure Configuration: If hosting AI tools internally that interact with the CDE, they must be hardened, patched, monitored, and access-controlled per PCI DSS (Reqs 2, 6, 8, 10).
  • Vendor Due Diligence: If using a third-party AI platform that could impact CDE security, the TPSP Management Policy applies, including assessing their PCI DSS compliance (Req 12.8, Req 12.9).
  • AI for Security Functions: If AI is used for security controls required by PCI DSS (e.g., AI-driven IDS, FIM analysis, log monitoring), its effectiveness must be validated, potentially via the Customized Approach under PCI DSS v4.x (Reqs 5, 10, 11). The AI system itself becomes in-scope.
  • Risk Assessment: The use of AI, especially near sensitive data environments, must be included in the annual PCI DSS risk assessment (Req 12.2).
  • Training: Security awareness training must cover risks associated with AI use, especially regarding sensitive data handling (Req 12.6).
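If an approved internal tool must surface card references at all, display masking keeps the full PAN out of prompts and outputs. The sketch below applies the traditional first-six/last-four display mask as an illustration; PCI DSS v4.x refines what may be displayed (e.g., BIN plus last four), so treat this as a teaching example, not a compliance guarantee, and note that the function name is an assumption.

```python
def mask_pan(pan: str) -> str:
    """Mask a PAN for display: keep the first six and last four digits,
    replacing the middle digits with asterisks."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    if not 13 <= len(digits) <= 19:
        raise ValueError("input is not a plausible PAN length")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]
```

Masking is a display control only; stored PAN still requires rendering unreadable per Req 3 (e.g., truncation, tokenization, or strong cryptography).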
