# AI SECURITY AUDIT CHECKLIST

- Name of Organisation: CryptoVault Nigeria Limited
- Address: 24 Bishop Aboyade Cole Street, Victoria Island, Lagos, Nigeria
- RC No: 148552
- Sector: Cryptocurrency Exchange & Digital Asset Management

## AUDIT PARAMETERS

### Domain 1: AI Governance and Accountability

| Audit Area | Audit Question | References to standards and frameworks | Response |
|---|---|---|---|
| Organizational Structure | Is there a formally approved AI Governance Policy that defines the organization's risk appetite, ethical principles, and accountability structure for AI systems? | ISO/IEC 42001:2023 (Clauses 5.2, 5.3 - Policy and roles); NIST AI RMF (Govern function). | No formal policy yet. A draft is in progress, with approval expected by end of April 2026. |
| Roles & Responsibilities | Are roles, responsibilities, and authorities for AI system ownership, risk management, and compliance clearly defined, documented, and communicated across the organization? | ISO/IEC 27002:2022 (5.2 - Information security roles and responsibilities); SOC 2 (Control Environment). | Partially defined, informally. No formal RACI matrix; teams know their roles, but these are not documented. |
| Ethical Principles | Has the organization established and documented a set of ethical principles for AI use, and are these principles integrated into the AI system design and review process? | NIST AI RMF (Govern function, societal and ethical risks); ISO/IEC 42001:2023 (6.1.4 - AI system impact assessment). | No documented ethical principles; not yet considered in the design/review process. |

### Domain 2: AI Risk Management and Oversight

| Audit Area | Audit Question | References to standards and frameworks | Response |
|---|---|---|---|
| Risk Assessment | Is a formal, documented AI-specific risk assessment performed for all new and significantly modified AI systems, covering technical, ethical, legal, and operational risks? | ISO/IEC 23894:2023 (AI risk management principles); NIST AI RMF (Map and Measure functions). | No formal AI risk assessments. Risks are discussed in product meetings but not documented. |
| Risk Treatment | Are identified AI risks treated with appropriate controls, and is the effectiveness of these controls periodically reviewed and documented? | ISO/IEC 27001:2022 (6.1.3 - Information security risk treatment and Statement of Applicability); NIST SP 800-37 (Risk Management Framework). | No formal risk treatment. Some ad-hoc controls exist, but their effectiveness is not reviewed. |
| Impact Assessment | Does the organization conduct an AI System Impact Assessment (AIA) to evaluate potential negative impacts on individuals, groups, and fundamental rights before deployment? | NDP Act (Section 28 - Data Protection Impact Assessment, extended for AI); ISO/IEC 42001:2023 (6.1.4 - AI system impact assessment). | Not conducted. The team is unaware of NDP Act requirements for AI impact assessments. |

### Domain 3: Model Lifecycle Security and Change Management

| Audit Area | Audit Question | References to standards and frameworks | Response |
|---|---|---|---|
| Secure MDLC | Are security requirements (e.g., threat modeling, secure coding practices) integrated into each phase of the Model Development Lifecycle (MDLC), from design to deployment? | NIST SP 800-218 (Secure Software Development Framework); ISO/IEC 27002:2022 (8.28 - Secure coding). | Not formally integrated. The security team is consulted ad hoc, for major releases only. |
| Model Versioning | Is a robust model versioning and artifact management system in place to ensure traceability, reproducibility, and integrity of all model components (code, data, configuration)? | ISO/IEC 42001:2023 (8.3.3 - Traceability of AI systems); SOC 2 (System Operations). | Yes, MLflow is used for versioning, but integrity controls such as checksums are not implemented (see the sketch after this table). |
| Change Management | Is there a formal change management process for model updates, retraining, and deployment that includes security review, testing, and rollback capabilities? | ISO/IEC 27002:2022 (8.32 - Change management); NIST SP 800-53 (CM-3 - Configuration Change Control). | Informal process: changes are approved by the lead data scientist, security review is not mandatory, and rollback is possible only manually. |
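
The Model Versioning response notes that MLflow is in place but artifact integrity is not checked. A minimal sketch of what a checksum control could look like follows; the directory layout, helper names, and the `models/txn_monitoring/v3` path are hypothetical, and the manifest would normally be logged alongside the MLflow run and re-verified before serving.

```python
# Hypothetical sketch: SHA-256 manifest for model artifacts so tampering
# between training and deployment is detectable. Paths are illustrative.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large weight files need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(artifact_dir: Path) -> dict:
    """Map every artifact file (weights, config, code) to its hash."""
    return {str(p.relative_to(artifact_dir)): sha256_of(p)
            for p in sorted(artifact_dir.rglob("*")) if p.is_file()}

def verify_manifest(artifact_dir: Path, manifest: dict) -> bool:
    """Deployment-side gate: True only if nothing was added, removed, or changed."""
    return build_manifest(artifact_dir) == manifest

if __name__ == "__main__":
    model_dir = Path("models/txn_monitoring/v3")  # hypothetical path
    manifest = build_manifest(model_dir)
    # Store the manifest *outside* the hashed directory (or log it to MLflow,
    # e.g. via mlflow.log_dict) so it does not invalidate itself.
    Path("models/txn_monitoring/v3.manifest.json").write_text(
        json.dumps(manifest, indent=2))
    assert verify_manifest(model_dir, manifest)
```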

### Domain 4: Training, Validation, and Inference Data Security

| Audit Area | Audit Question | References to standards and frameworks | Response |
|---|---|---|---|
| Data Provenance | Is the provenance (source, collection method, licensing) of all training and validation data documented, and are controls in place to prevent the use of unauthorized or poisoned data? | ISO/IEC 42001:2023 (8.3.2 - Data for AI systems); OWASP Top 10 for LLMs (LLM03 - Training Data Poisoning). | Partial documentation. Internally sourced KYC data is documented; some external sentiment datasets have unclear licensing. |
| Data Privacy | Are appropriate privacy-enhancing technologies (PETs) or anonymization techniques applied to sensitive data before it is used for model training, and is access strictly controlled? | NDP Act (principles of data minimisation and pseudonymisation); ISO/IEC 27002:2022 (8.10 - Data masking). | Anonymization applied inconsistently; KYC images are stored raw. Access controls are strong, but the data itself is not masked. |
| Inference Data | Are input and output data at the inference stage validated for integrity and sanitized to prevent injection attacks or data leakage? | OWASP Top 10 for LLMs (LLM01 - Prompt Injection; LLM02 - Insecure Output Handling); NIST SP 800-53 (SI-10 - Information Input Validation). | Basic input validation implemented. No specific sanitization for prompt injection risks (see the sketch after this table). |
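
The Inference Data row flags missing prompt-injection sanitization. Below is a minimal sketch of inference-time screening; the length cap and deny-list patterns are illustrative assumptions, and pattern filters are only one layer of mitigation for OWASP LLM01/LLM02, not a complete defence.

```python
# Minimal sketch of inference-time input screening for a sentiment LLM.
# Pattern filters are a partial mitigation only; they should complement,
# not replace, output handling and privilege controls.
import re

MAX_INPUT_CHARS = 2000  # assumption: cap chosen for this illustration

# Hypothetical deny-list of common injection phrasings; a real deployment
# would treat matches as one signal among several, not a guarantee.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"(system|developer) prompt", re.I),
    re.compile(r"\bBEGIN (PROMPT|INSTRUCTION)\b", re.I),
]

def screen_input(user_text: str) -> tuple[bool, str]:
    """Return (accepted, reason). Rejects oversized or suspicious inputs."""
    if len(user_text) > MAX_INPUT_CHARS:
        return False, "input exceeds length limit"
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(user_text):
            return False, f"matched suspicious pattern: {pattern.pattern}"
    return True, "ok"

def sanitize_output(model_text: str) -> str:
    """Strip markup before output reaches downstream renderers (LLM02)."""
    return re.sub(r"<[^>]+>", "", model_text)
```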

### Domain 5: Model Integrity, Robustness, and Adversarial Resistance

| Audit Area | Audit Question | References to standards and frameworks | Response |
|---|---|---|---|
| Adversarial Testing | Are AI systems subjected to dedicated adversarial robustness testing (e.g., evasion, poisoning, model inversion attacks) before deployment, and are mitigation strategies documented? | OWASP Top 10 for LLMs (LLM04 - Model Denial of Service; LLM10 - Model Theft); NIST AI RMF (Measure function - robustness). | Never performed. The team is unaware of adversarial testing techniques (a starting-point sketch follows this table). |
| Model Explainability | Are model explainability (XAI) techniques implemented and validated to ensure that model decisions can be understood, debugged, and audited for bias or security flaws? | ISO/IEC 42001:2023 (8.3.4 - Transparency and explainability); NIST AI RMF (Measure function - interpretability). | Not implemented. Models are treated as black boxes; specific decisions cannot be explained. |
| Bias and Fairness | Are systematic bias and fairness assessments conducted on the model and its training data, and are documented efforts made to mitigate unfair outcomes? | NIST AI RMF (Measure function - fairness); emerging AI-specific regulations. | No formal assessments. Models are assumed to be fair without validation. |
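
As a starting point for the Adversarial Testing gap, here is a self-contained FGSM-style evasion sketch using only NumPy against a toy logistic-regression stand-in; the data and model are synthetic assumptions, not the production systems. A real assessment would use a maintained toolkit such as the Adversarial Robustness Toolbox against the actual models.

```python
# FGSM-style evasion test on a toy logistic-regression stand-in model.
# Shows the shape of adversarial testing: perturb inputs along the loss
# gradient and measure how far accuracy drops.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for transaction features.
X = rng.normal(size=(500, 8))
w_true = rng.normal(size=8)
y = (X @ w_true + rng.normal(scale=0.5, size=500) > 0).astype(float)

# "Trained" model: reuse the generating weights as a stand-in classifier.
w, b = w_true, 0.0

def predict(X):
    return (X @ w + b > 0).astype(float)

def fgsm(X, y, eps):
    """Gradient of logistic loss w.r.t. inputs is (sigmoid(z) - y) * w."""
    z = X @ w + b
    grad = (1 / (1 + np.exp(-z)) - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad)

clean_acc = (predict(X) == y).mean()
adv_acc = (predict(fgsm(X, y, eps=0.3)) == y).mean()
print(f"clean accuracy {clean_acc:.2f} -> adversarial accuracy {adv_acc:.2f}")
```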

### Domain 6: Access Control, Identity, and Privilege Management

| Audit Area | Audit Question | References to standards and frameworks | Response |
|---|---|---|---|
| Least Privilege | Is the principle of least privilege strictly enforced for all human and service accounts accessing AI training environments, model repositories, and production endpoints? | ISO/IEC 27002:2022 (5.15 - Access control); NIST SP 800-53 (AC-6 - Least Privilege). | Mostly. RBAC is implemented in AWS and quarterly access reviews are conducted, but some developer accounts have excess permissions (see the sketch after this table). |
| Service Account Security | Are service accounts and API keys used by AI pipelines (e.g., for data access, model deployment) managed securely, including rotation, vaulting, and non-use of default credentials? | ISO/IEC 27002:2022 (5.18 - Access rights); SOC 2 (Security - Logical Access Controls). | Yes. AWS Secrets Manager is used, keys are rotated every 90 days, and default credentials are disabled. |
| Authentication | Is multi-factor authentication (MFA) enforced for all administrative and privileged access to AI development and production environments? | ISO/IEC 27002:2022 (5.17 - Authentication information); NIST SP 800-63B (Digital Identity Guidelines). | Yes. MFA is mandatory for all AWS access, enforced via the IdP. |
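
For the excess-permissions finding in the Least Privilege row, a small boto3 sketch that flags IAM users holding AdministratorAccess or lacking a registered MFA device. It assumes audit-read credentials; since MFA is enforced at the IdP here, per-user IAM MFA devices may legitimately be absent, so treat that check as informational.

```python
# Minimal boto3 sketch: flag IAM users that hold AdministratorAccess or
# have no MFA device, as evidence for the least-privilege and MFA checks.
import boto3

iam = boto3.client("iam")
ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

def audit_users():
    findings = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            attached = iam.list_attached_user_policies(
                UserName=name)["AttachedPolicies"]
            if any(p["PolicyArn"] == ADMIN_POLICY_ARN for p in attached):
                findings.append((name, "holds AdministratorAccess"))
            if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
                findings.append((name, "no IAM MFA device registered"))
    return findings

if __name__ == "__main__":
    for user, issue in audit_users():
        print(f"{user}: {issue}")
```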

### Domain 7: Infrastructure, Cloud, and MLOps Security

| Audit Area | Audit Question | References to standards and frameworks | Response |
|---|---|---|---|
| Network Segmentation | Are AI development, training, and production environments logically and physically separated (e.g., via VPCs, subnets, firewalls) from the corporate network and each other? | ISO/IEC 27002:2022 (8.22 - Segregation of networks); NIST SP 800-53 (SC-7 - Boundary Protection). | Yes. Dev, training, and prod run in separate AWS VPCs with strict firewall rules and no direct corporate network access. |
| Configuration Hardening | Are all compute resources (e.g., VMs, containers, Kubernetes clusters) used for AI workloads hardened according to established security baselines (e.g., CIS benchmarks)? | ISO/IEC 27002:2022 (8.9 - Configuration management); NIST SP 800-70 (Security Configuration Checklists). | Partially. EC2 instances follow CIS benchmarks, but EKS clusters are not fully hardened. Vulnerability scans run monthly. |
| MLOps Pipeline Security | Is the MLOps CI/CD pipeline secured against tampering, ensuring that only authorized, scanned, and tested code/models can be deployed to production? | OWASP Top 10 for LLMs (LLM05 - Supply Chain Vulnerabilities); NIST SP 800-218 (Secure Software Development Framework). | Basic controls in place: SAST/DAST implemented, but model scanning is not yet integrated and artifacts are not signed (see the sketch after this table). |
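
For the unsigned-artifacts gap in the MLOps Pipeline Security row, a minimal Ed25519 signing sketch using the `cryptography` package. File names are hypothetical, and in practice the private key would live in a KMS/HSM rather than in process memory, with verification gating deployment.

```python
# Minimal sketch: sign model artifacts in the pipeline so deployment can
# verify provenance before serving.
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

def sign_artifact(key: Ed25519PrivateKey, artifact: Path) -> bytes:
    """Sign the raw artifact bytes; store the signature next to the artifact."""
    signature = key.sign(artifact.read_bytes())
    artifact.with_suffix(artifact.suffix + ".sig").write_bytes(signature)
    return signature

def verify_artifact(pub: Ed25519PublicKey, artifact: Path) -> bool:
    """Deployment-side gate: refuse to serve models that fail verification."""
    sig = artifact.with_suffix(artifact.suffix + ".sig").read_bytes()
    try:
        pub.verify(sig, artifact.read_bytes())
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()  # illustration only; use a KMS/HSM
    model = Path("model.onnx")          # hypothetical artifact
    sign_artifact(key, model)
    assert verify_artifact(key.public_key(), model)
```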

### Domain 8: Monitoring, Logging, and Incident Response

| Audit Area | Audit Question | References to standards and frameworks | Response |
|---|---|---|---|
| Model Monitoring | Are AI models continuously monitored in production for performance drift, data drift, concept drift, and security anomalies (e.g., sudden changes in input patterns)? | ISO/IEC 42001:2023 (8.3.5 - Monitoring and measuring of AI system performance); NIST AI RMF (Manage function). | Partially. The Transaction Monitoring AI is monitored for drift with alerts configured; the KYC and Sentiment models are not monitored (see the sketch after this table). |
| Security Logging | Are comprehensive security logs (including input/output data, access attempts, and administrative actions) generated, protected from tampering, and retained according to policy? | ISO/IEC 27002:2022 (8.15 - Logging); SOC 2 (Security - Monitoring Activities). | Yes. All logs are sent to the SIEM with 12-month retention, but log integrity controls are not implemented. |
| Incident Response | Does the incident response plan include specific playbooks and procedures for handling AI-specific incidents, such as model poisoning, adversarial attacks, or bias-related failures? | ISO/IEC 27002:2022 (5.26 - Response to information security incidents); NIST SP 800-61 (Computer Security Incident Handling Guide). | No. A general IR plan exists, but there are no AI-specific playbooks, and AI incident scenarios have never been tested. |
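
The Model Monitoring row notes the KYC and Sentiment models are unmonitored. A minimal per-feature data-drift check using SciPy's two-sample Kolmogorov-Smirnov test is sketched below; the feature names, window sizes, and alert threshold are illustrative assumptions.

```python
# Minimal data-drift check: compare a live feature window against the
# training reference with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

P_VALUE_ALERT = 0.01  # assumption: alert threshold for this sketch

def drift_report(reference: np.ndarray, live: np.ndarray, names: list):
    """Per-feature KS test; columns are features, rows are observations."""
    alerts = []
    for i, name in enumerate(names):
        result = ks_2samp(reference[:, i], live[:, i])
        if result.pvalue < P_VALUE_ALERT:
            alerts.append((name, result.statistic, result.pvalue))
    return alerts

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.normal(size=(5000, 2))
    live = np.column_stack([rng.normal(size=1000),
                            rng.normal(loc=0.5, size=1000)])  # drifted feature
    for name, stat, p in drift_report(ref, live, ["amount", "velocity"]):
        print(f"drift alert on {name}: KS={stat:.3f}, p={p:.2e}")
```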

### Domain 9: Third Party, Open Source, and Supply Chain Risk

| Audit Area | Audit Question | References to standards and frameworks | Response |
|---|---|---|---|
| Third-Party Models | Is a due diligence process performed on all third-party or pre-trained models (e.g., LLMs, foundation models) to assess their security, provenance, and compliance with organizational standards? | ISO/IEC 27002:2022 (5.21 - Managing information security in the ICT supply chain); NIST AI RMF (Govern function - supply chain). | No. The sentiment analysis LLM was imported from Hugging Face with no security review performed. |
| Open Source Security | Are automated tools used to scan open-source libraries and dependencies for known vulnerabilities (CVEs) and license compliance before they are integrated into the AI system code base? | OWASP Top 10 for LLMs (LLM05 - Supply Chain Vulnerabilities); NIST SP 800-161 (Supply Chain Risk Management). | Not yet. A tool has been identified but not implemented; the current process relies on manual review (see the sketch after this table). |
| Data Provider Risk | Are contractual and technical controls in place to ensure that external data providers maintain the security and integrity of the data they supply for AI training? | ISO/IEC 27002:2022 (5.23 - Information security for use of cloud services); NDP Act (data processor requirements). | No formal agreements. External sentiment data was purchased via an online transaction with no security clauses. |
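
For the Open Source Security row, one concrete option is PyPA's `pip-audit`. The sketch below wraps its CLI from a CI step and fails the build when known CVEs are reported; the JSON shape assumed here matches recent pip-audit releases but should be verified against the installed version.

```python
# Minimal sketch: run pip-audit from CI and block the build on known CVEs.
# Wrapping the CLI keeps this in the pipeline's Python tooling.
import json
import subprocess
import sys

def audit_requirements(req_file: str = "requirements.txt") -> list:
    """Return the vulnerable packages reported by pip-audit."""
    proc = subprocess.run(
        ["pip-audit", "-r", req_file, "--format", "json"],
        capture_output=True, text=True,
    )
    # pip-audit exits non-zero when vulnerabilities are found; the JSON
    # report is still written to stdout in that case.
    report = json.loads(proc.stdout)
    return [dep for dep in report.get("dependencies", []) if dep.get("vulns")]

if __name__ == "__main__":
    vulnerable = audit_requirements()
    for dep in vulnerable:
        ids = ", ".join(v["id"] for v in dep["vulns"])
        print(f"{dep['name']} {dep['version']}: {ids}")
    sys.exit(1 if vulnerable else 0)  # non-zero exit blocks the CI stage
```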

### Domain 10: Regulatory Compliance, Ethics, and Responsible AI

| Audit Area | Audit Question | References to standards and frameworks | Response |
|---|---|---|---|
| Regulatory Mapping | Is there a current inventory of all applicable AI-specific regulations (e.g., the EU AI Act and national or sectoral rules) and a documented mapping of these requirements to internal controls? | NDP Act (lawfulness, fairness, and transparency); emerging AI-specific regulations (e.g., EU AI Act requirements for high-risk systems). | Yes. SEC, NDP Act, and NFIU requirements are mapped; the EU AI Act is not applicable. Controls are partially mapped. |
| Responsible AI | Are processes in place to continuously monitor AI systems for unintended consequences, societal impacts, and deviations from responsible AI principles post-deployment? | NIST AI RMF (Govern and Map functions - societal impact); ISO/IEC 42001:2023 (6.1.4 - AI system impact assessment). | No formal processes. No responsible AI principles defined; ad-hoc reviews only. |
| Audit Trail | Is a complete and immutable audit trail maintained for all significant decisions and actions related to the AI system (e.g., model selection, parameter tuning, data filtering)? | ISO/IEC 42001:2023 (8.3.3 - Traceability of AI systems); SOC 2 (Control Activities). | Partial. MLflow tracks experiments and an audit trail exists, but it is not immutable and some decisions are undocumented (see the sketch after this table). |
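
For the Audit Trail row, immutability can be approximated with a hash-chained log, where each entry commits to the hash of the previous one so any edit breaks the chain. The sketch below is a minimal in-memory illustration (the actors and actions are invented); a production system would persist entries and anchor the chain head externally.

```python
# Minimal sketch of a tamper-evident (hash-chained) audit trail for model
# lifecycle decisions.
import hashlib
import json
import time

def _hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, actor: str, action: str, detail: str) -> dict:
        entry = {
            "ts": time.time(),
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev": _hash(self.entries[-1]) if self.entries else "GENESIS",
        }
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; False means an entry was altered or removed."""
        prev = "GENESIS"
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            prev = _hash(entry)
        return True

if __name__ == "__main__":
    trail = AuditTrail()
    trail.record("jane.doe", "model_selection", "chose xgboost for txn model")
    trail.record("jane.doe", "param_tuning", "max_depth 6 -> 8 after CV")
    assert trail.verify()
    trail.entries[0]["detail"] = "edited later"  # tampering...
    assert not trail.verify()                    # ...is detected
```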