AI Risk Assessment
NOTE: PROTOTYPE / WORKING DRAFT
| Task | Type | Biggest area(s) of risk | Theme | Ethics (yes or no go?) | Does it have a potential positive impact? | Challenges & constraints (specifically with regard to equity, transparency, bias, and accountability) | Possible tool / AI approach [NEEDS MORE RESEARCH, NOT A RECOMMENDATION] | Safety criteria checklist | Guardrails and mitigation strategies for your use case |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Write fundraising proposal | Analyse | Hallucination, information loss, additional time needed for checking | Fundraising |  |  | Proposals exhibiting demographic biases and lacking compelling narratives for diverse audiences. Boring copy. | OpenAI GPT-3, Claude.AI | [x] Rigorously validate accuracy and tone. Implement human review before submitting. | Implement rigorous human review of AI-written proposals to validate accuracy and tone before submission. |
| Summarise videos or audio | Analyse | Bias, data security, data quality, additional work | Operations |  |  | Summaries distorting original meaning across diverse speakers/contexts. | Descript, Otter.ai, Teams transcription | [x] Mitigate bias and privacy risks. Ensure high-quality inputs. | Do not input private information. Use quality filters for input media (human in the loop). Account for context/nuance. |
| Summarise research findings and reports (analyse patterns in large sets of reports and identify trends) | Analyse | Bias, data security, data quality | Research |  |  | Summaries reflecting demographic biases and inaccuracies from low-quality inputs. | MonkeyLearn, Aylien, OpenAI GPT-3, Claude.AI, Amazon Comprehend, IBM Watson | [x] Mitigate bias risks. Secure source data. Ensure high-quality inputs. | Implement bias testing. Follow ethical data practices. Use quality filters for input sources. |
| Summarise meeting notes | Analyse | Bias, data security, data quality, additional work | Operations |  |  | Summaries reflecting demographic biases and inaccuracies from low-quality inputs. | Otter.ai, Microsoft OneNote | [x] Mitigate bias and privacy risks. Ensure high-quality inputs. | Implement privacy filters. Use quality filters for input media. Account for context/nuance. |
| Summarise impact reports | Analyse | Bias, data security, data quality | Evaluation |  |  | Summaries reflecting demographic biases and inaccuracies from low-quality inputs. | SummarizeBot, TextRazor | [x] Mitigate bias risks. Secure source data. Ensure high-quality inputs. | Implement bias testing. Follow ethical data practices. Account for language/cultural nuances. |
| Summarise available funding and resources | Analyse | Bias, data security, data quality | Fundraising |  |  | Summaries reflecting demographic biases and inaccuracies from low-quality inputs. | FundingBox, GrantStation, OpenAI GPT-3 | [x] Mitigate bias risks. Secure source data. Ensure high-quality inputs. | Implement bias testing. Follow ethical data practices. Use quality filters for input sources. |
| Strategic plans for program implementation based on needs | Analyse | Hallucination, information loss | Programs |  |  | Program plans neglecting needs of diverse stakeholders across contexts. | IBM Watson Studio, DataRobot, OpenAI GPT-3, Claude.AI | [x] Regularly validate plans with human oversight. Ensure accuracy in strategies. | Establish processes for domain experts to review and validate program strategies generated by AI. |
| Sentiment analysis | Analyse | Hallucination, security, overreliance | All |  |  | Biases in training data leading to inaccurate sentiment analysis across demographics. | VADER Sentiment Analysis, TextBlob, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language, Amazon Comprehend, Hugging Face Transformers | [x] Periodically check sentiment analysis for accuracy. Incorporate human insights for better results. | Periodically validate sentiment analysis results against human judgments and adjust models accordingly (a spot-check sketch follows the table). |
| Semantic search, searchable knowledge | Automate | Hallucinations, semantic search issues, security | Operations |  |  | Search results exhibiting demographic biases or lack of inclusivity across contexts. | Elasticsearch, Solr, OpenAI GPT-3, Claude.AI, Microsoft Azure Cognitive Search, IBM Watson | [x] Monitor for hallucinations and semantic drift. Validate against authoritative sources. | Implement processes to identify and mitigate hallucinations and semantic drift over time. Use reliable knowledge sources. |
| Review computer code to find bugs or mistakes | Automate | IP, overreliance, displacement of work | Technology |  |  | Failure to identify embedded biases, ethical or fairness issues in code during reviews. | CodeClimate, DeepCode, OpenAI GPT-3, Claude.AI, Amazon CodeGuru | [x] Regularly review code for bugs/mistakes. Protect proprietary information. | Implement secure code-review processes. Hide or avoid using proprietary code in training data. |
| Review candidates' fit with job description | Automate | Bias, data security | Human resources |  |  | Biased, non-equitable hiring practices. Candidate data privacy violations. | HireVue, Traitify, Textio, Jobscan, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language, Amazon Comprehend | [x] Mitigate bias risks. Secure candidate personal data. | Implement proactive bias testing (a four-fifths-rule sketch follows the table). Follow best practices for ethical candidate screening and data privacy. |
| Responding to audit requests | Automate | Hallucinations | Operations |  |  | Potential lack of accountability and inaccuracies in audit responses. | IBM Watson Discovery, Kira Systems, OpenAI GPT-3, Claude.AI | [x] Validate responses against authoritative sources. Maintain auditable logs. | Validate AI-generated audit responses against authoritative sources before finalising. Log all AI-generated content for auditing. |
| Rapid research synthesis | Detect | Hallucinations, security, semantic search issues | Research |  |  | Research syntheses exhibiting demographic biases or omitting key perspectives. | Iris.ai, Covidence, OpenAI GPT-3, Claude.AI | [x] Validate against authoritative sources. Check for hallucinations and omissions. | Validate synthesised research against authoritative sources. Identify potential hallucinations or omissions. |
| Proposal notes | Detect | Data leakage, additional work | Fundraising |  |  | Opaque use of insights from proposal notes raising accountability concerns. | Scrivener, Bear, OpenAI GPT-3, Claude.AI | [x] Safeguard proposal notes to prevent unauthorised access. Regularly update security measures. | Restrict access to proposal notes. Stay updated on evolving security best practices and vulnerabilities. |
| Project evaluation | Generate | Hallucination, security, overreliance | Evaluation |  |  | Potential biases in evaluation processes and decision-making rationale lacking transparency. | Evalato, SurveyMonkey | [x] Regularly validate project evaluations. Include reasons behind decisions for transparency. | Analyse data and "score" using different weights to determine confidence levels for retaining the program. |
| Personalising the service-delivery experience with assistants | Generate | Reputational, security, ethics, bias and toxic language | Programs |  |  | Services exhibiting biases, toxic traits or privacy violations impacting some user groups. | Salesforce, Zendesk, OpenAI GPT-3, Claude.AI | [x] Mitigate risks of bias, toxic language, security/privacy violations. Regularly review outputs. | Implement proactive monitoring for problematic content. Gather user feedback to identify potential issues. |
| Personalised learning and guidelines | Generate | Hallucination, information loss, bias | All |  |  | Biased, culturally irrelevant or inaccessible learning content for some audiences. | Knewton, Smart Sparrow, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | [x] Regularly assess and update learning content. Incorporate human feedback for personalisation. | Implement processes to capture user feedback and update learning materials iteratively with human oversight. |
| Organise notes | Generate | Security, hallucinations | Operations |  |  | Illogical organisation. Potential leakage of personal/sensitive information in notes. | Evernote, Notion, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | [x] Validate organisation of notes. Secure notes containing sensitive information. | Use AI to generate note-organisation suggestions, with human verification and refinement of groupings and themes. Secure notes with personal/sensitive data. |
| Note taking (e.g. for video calls) | Generate | Privacy, additional work | Operations |  |  | Potential privacy violations through sharing of personal meeting information. | Otter.ai, Microsoft OneNote, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | [x] Remind users not to share personal information. Enable automated tools to censor sensitive data. | Display disclaimers about sharing personal information. Use redaction AI to identify and remove sensitive data automatically in controlled models (a minimal redaction sketch follows the table). |
| Modelling of climate events, economic crash or inflation, forced migration, conflict, etc. | Generate | Hallucination, information loss, semantic search issues | Programs |  |  | Models neglecting impacts on diverse stakeholder groups or lacking transparency. | ClimateNet, Simudyne, TensorFlow, PyTorch, scikit-learn | [x] Regularly update models with latest data. Use diverse datasets for accurate predictions. | Establish data pipelines to incorporate new information sources. Expand demographic/geographic data coverage. |
| Mis- and disinformation detection | Generate | Hallucination, semantic search issues | Technology |  |  | Demographic, political or cultural biases in defining and detecting misinformation. Unclear definitions. | Factmata, Open Sources, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language, Amazon Comprehend | [x] Regularly update filters and blocks for misinformation. Incorporate user feedback for improvements. | Establish processes to continuously update misinformation-detection models based on new data sources and user feedback. |
| Matching Sustainable Development Goals to outcomes | Generate | Hallucinations | Grants |  |  | Cultural biases leading to mismatches between outcomes and SDGs across contexts. | SDG Compass, SDG Tracker, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | [x] Validate matches for accuracy. Use multilingual training data. | Expand training data to include inputs from non-English languages. Implement human validation of matches. |
| Make documents plain English | Generate | Semantic issues | All |  |  | Plain-language materials inaccessible or lacking resonance for some audiences. | Grammarly, Hemingway Editor, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | [x] Preserve critical meaning. Validate simplified language. Implement human review. | Use AI suggestions as a starting point, with human editing to simplify language while preserving critical meaning. |
| Layout designs for visual data | Generate | Hallucination, information loss, overreliance, carbon cost compared with other ways of addressing this task | Data |  |  | Misleading, inaccurate or biased visual data representations. | Canva, Adobe Spark, TensorFlow, PyTorch, scikit-learn | [x] Validate layout accuracy. Mitigate risks of bias or misleading representations. | Prefer human designers over AI for critical visual data representations to avoid potential biases or misleading outputs. Implement bias testing. |
| Labelling unlabelled data | Generate | Data leakage, worker exploitation | Data |  |  | Unfair compensation and representation gaps for labellers across demographics/geographies. | Labelbox, Prodigy, Google Cloud AI Platform, Amazon SageMaker | [x] Ensure fair compensation and ethical working standards for data labelling. Regularly review working conditions. | Work with AI providers that use reputable data-labelling partners that treat workers fairly. Audit their practices periodically. |
| Job descriptions | Generate | Data leakage, bias | Human resources |  |  | Demographic biases and lack of inclusive language in job descriptions. | Textio, Jobscan, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language, Amazon Comprehend | [x] Secure job descriptions to prevent unauthorised access. Regularly review and update privacy measures. | Restrict access to job-description files. Audit handling of candidate personal data periodically. |
| Internal assistants | Generate | Hallucinations, overcorrection, bias and toxic language | Programs |  |  | Potential hallucinations, biases, toxic outputs and data-security risks from assistants. | ChatGPT, Microsoft Teams, OpenAI GPT-3, Claude.AI | [x] Mitigate hallucinations, bias, toxic language. Monitor outputs closely with human oversight. | Establish clear guidelines and filtering. Maintain tight control over access to sensitive information. |
| Internal assistant on Wildnet | Generate | Hallucinations, overcorrection, bias and toxic language | Communications |  |  | Potential hallucinations, biases, toxic outputs and data-security risks from assistants. | Custom AI integration based on specific needs and requirements (e.g. Llama 2) | [x] Mitigate hallucinations, bias, toxic language. Monitor outputs closely. | Define strict guidelines and filters. Tightly control access to sensitive data. |
| Insights and advice in specific situations relating to mitigation activities | Generate | Hallucination, information loss | All |  |  | Insights exhibiting demographic biases; lack of transparency on assumptions/limitations. | IBM Watson, OpenAI GPT-3, Claude.AI | [x] Continuously check insights for accuracy. Incorporate human feedback to avoid misinformation. | Validate insights against authoritative sources. Gather user feedback to identify potential misinformation. |
| Impact & performance | Generate | Data leakage, worker exploitation | Human resources |  |  | Lack of transparency around use of personal data. Privacy violations. | ImpactMapper, Salesforce | [x] Obtain consent for data collection. Ensure secure storage of impact and performance data. | Implement strict access controls and encryption for storing and sharing impact and performance data. |
| Image tagging | Generate | Semantic search issues, security | Data |  |  | Underrepresentation or biases in training data leading to inaccurate tagging across demographics. | Amazon Rekognition, Clarifai, Google Cloud Vision AI, TensorFlow | [x] Validate tags for accuracy and potential biases. Implement human review process. | Implement human review to validate tags and identify potential semantic issues (a review-queue sketch follows the table). Use diverse training data to reduce biases. Ensure feedback loops. |
| Identify and organise themes within qualitative data | Generate | Security, hallucinations | Evaluation |  |  | Omission of key perspectives across diverse contexts in identified themes. | NVivo, Dedoose, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language, Amazon Comprehend | [x] Validate identified themes. Secure source data containing personal information. | Validate AI-identified themes against human judgment. Provide clear security controls for any sensitive qualitative data. |
| Generating data taxonomies | Generate | Data leakage, semantic issues, creation of additional work | Data |  |  | Taxonomies that don't reflect diversity of contexts and perspectives. Opaque updates. | PoolParty, Smartlogic, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | [x] Protect data taxonomies to prevent unauthorised access. Regularly review and update access controls. | Implement role-based access controls for data taxonomies. Regularly review logs to monitor access. |
| Fraud detection | Generate | Data leakage, model exploitation, prompt injection (jailbreaking) | Technology |  |  | Potential demographic biases in fraud detection. Privacy violations through use of personal data. | Featurespace, SAS Fraud Analytics, TensorFlow, PyTorch, scikit-learn, Amazon Fraud Detector | [x] Implement automated flags for personal information. Align with country-specific privacy laws. | Use anonymisation and redaction techniques to protect personal data (a pseudonymisation sketch follows the table). Consult legal experts on privacy requirements. |
| Formatting documents | Generate | Overreliance, carbon | Communications |  |  | Formatting issues impacting accessibility and inclusiveness for all users. | Microsoft Word, Google Docs, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | [x] Implement human review for important documents. Check for proper formatting. | Use AI formatting assistance as a drafting aid, with human reviewers finalising important documentation. |
| Find errors or trends within documents | Manage | Hallucination, information loss | All |  |  | Error analyses reflecting demographic biases or context gaps across groups. | Grammarly, ProWritingAid, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | [x] Implement error-identification tools. Regularly review and update content for accuracy. | Deploy AI document-QA tools. Establish human editorial processes. |
| Edit images & videos | Manage | IP, ethics, hallucinations, carbon | Communications |  |  | Potential intellectual property infringement. Unethical visual representations of demographics. | Adobe Creative Cloud, Lumen5, Google Cloud Vision AI, Amazon Rekognition | [x] Prevent infringement of copyrights. Validate for ethical concerns. Implement human review. | Implement robust controls to prevent infringement of copyrighted visual content. Validate edits for potential ethical concerns with human reviewers. |
| Due diligence on potential corporate partners | Manage | Hallucinations, semantic search issues | Corporates |  |  | Lack of impartiality, transparency or comprehensiveness in due-diligence processes. | DiligenceVault, Intelligize, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language, Amazon Comprehend | [x] Mitigate hallucination risks. Cross-validate summarised information. | Verify summarised information against authoritative sources. Identify potential fabrications or omissions. |
| Draft program updates and delivery plans | Manage | Hallucination, information loss | Programs |  |  | Plans and updates that lack representation of diverse contexts and user needs. | Notion, Trello, OpenAI GPT-3, Claude.AI | [x] Regularly check accuracy, involve humans for important decisions, and learn from user feedback. Ensure diverse datasets to avoid biases. | Implement human-in-the-loop review. Expand training data sources to reduce demographic and geographic biases. |
| Draft donor reports | Manage | Data leakage | Grants |  |  | Potential donor privacy violations. Reports inaccessible to diverse audiences. | Donorbox, Kindful, OpenAI GPT-3, Claude.AI | [x] Safeguard confidential information. Obtain explicit consent for data collection. Check for language nuances in reports. | Implement access controls and encryption for confidential donor data. Consult donor-communications experts during report drafting. |
| Design presentations, storyboards, etc. | Predict | Hallucination, information loss, overreliance, carbon cost compared with other ways of addressing this task | Communications |  |  | Inaccessible or inappropriate presentation designs for diverse audiences. | Canva, Prezi, OpenAI GPT-3, Claude.AI | [x] Validate outputs for accuracy, biases, and appropriateness. Involve human designers. | Involve human designers to review AI-generated designs and provide creative direction. Implement bias testing. |
| Database wrangling, cleaning, and quality checks | Review | Data leakage, IP infringement, transparency, creation of additional work | Data |  |  | Lack of transparency around data use. Inequitable training-data collection. | Trifacta, OpenRefine, scikit-learn, TensorFlow | [x] Avoid using production data for future training. Use a hosted LLM for added security. | Maintain strict separation between production and non-production data environments. Only use cleaned data for model training. |
| Creating templates | Review | Overreliance, carbon | Operations |  |  | Demographic biases embedded in original templates before customisation. | TemplateMonster, Canva, OpenAI GPT-3, Claude.AI | [x] Treat templates as starting points. Involve human review and customisation. | Minimise overreliance on AI. Use templates as starting points for human editing rather than final outputs. |
| Creating talking points | Review | Overreliance, carbon, hallucinations, bias | Communications |  |  | Potential biases, misinformation or offensive content in talking points. | Respona, Speechpad, OpenAI GPT-3, Claude.AI | [x] Vet for potential biases, misinformation, and inappropriate content. | Vet AI-generated talking points for potential biases, hallucinations, and sensitive content before external use, as part of the testing process. |
| Creating checklists or task lists | Review | Overreliance, carbon | Operations |  |  | Potential blind spots or biases in auto-generated checklists across contexts. | Todoist, Trello, OpenAI GPT-3, Claude.AI | [x] Validate outputs. Treat as drafts requiring human review before operational use. | Treat AI-generated checklists as drafts to validate and augment with human oversight before operational use. |
| Create visual communication materials | Summarise | Data leakage, ethics, semantic search, hallucinations | Communications |  |  | Visuals lacking representation or containing offensive/insensitive content for some groups. | DALL-E, Midjourney, Adobe Creative Cloud, Canva, OpenAI GPT-3, Claude.AI, TensorFlow | [x] Protect visuals from unauthorised access. Implement encryption for secure sharing. | Apply digital rights management controls. Use end-to-end encryption when sharing visuals externally. |
| Create fundraising marketing documents | Summarise | Data leakage | Fundraising |  |  | Lack of inclusive language/visuals. Non-transparent use of data in fundraising. | Mailchimp, HubSpot, OpenAI GPT-3, Claude.AI | [x] Protect sensitive information with strong security measures. Only share information with proper consent. Regularly review data-sharing protocols. | Redact and encrypt sensitive data. Obtain explicit consent before sharing any personal or proprietary information. Audit data sharing periodically. |
| Create communication materials (docs, images, videos, social media content, etc.) | Summarise | Hallucination, information loss | Communications |  |  | Content exhibiting demographic biases or lacking resonance with diverse audiences. | DALL-E, Midjourney, Canva, Slack, OpenAI GPT-3, Claude.AI | [x] Regularly verify accuracy. Use clear, unbiased language. Monitor user feedback. | Establish editorial review processes. Analyse materials for potential biases. Incorporate audience feedback iteratively. |
| Contract drafts | Summarise | Data leakage, privacy | Legal |  |  | Ambiguous, unfair or inequitable contract terms across stakeholders. | DocuSign, PandaDoc, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | [x] Ensure contracts protect privacy. Obtain clear consent. Regularly review and update templates. | Consult legal experts. Implement robust consent-management processes. Periodically audit contract language. |
| Computer code | Summarise | IP infringement, hallucinations, semantic search issues, carbon (if generated code is less efficient) | Technology |  |  | Embedded biases, fairness issues or vulnerabilities in code lacking auditability. | Visual Studio Code, PyCharm, OpenAI GPT-3, Claude.AI, Amazon CodeGuru | [x] Clearly define data flows and secure systems. Prevent unauthorised internal access. Regularly update security. | Implement secure coding practices, access controls and monitoring. Perform regular pen-testing and security audits. |
| Metadata documents | Summarise | Data leakage | Data |  |  | Lack of transparency around updates to metadata. Inequitable data access. | DataWrangler, Jupyter Notebooks, OpenAI GPT-3, Claude.AI | [x] Protect codebooks and metadata with strong security measures. Regularly review access protocols. | Use encryption, access controls, and audit logs to secure codebooks and metadata. Regularly review who has access. |
| Classify information | Summarise | Semantic search, additional work | All |  |  | Classification errors and embedded biases violating equity and accountability. | MonkeyLearn, Aylien, Google Cloud Natural Language, Amazon Comprehend | [x] Validate classifications, especially for high-stakes decisions. Secure sensitive information. | Involve human oversight to validate AI-driven information classification, especially for critical decisions. Secure access to sensitive information. |
| Automatic report tagging | Summarise | Semantic search issues, security | Grants |  |  | Inequitable tagging that doesn't align with established standards and transparency. | Zoho Analytics, Tableau, Google Cloud Natural Language, Amazon Comprehend | [x] Validate tagging accuracy. Prefer proven frameworks over experimental AI approaches. | Use traditional proven frameworks rather than leading-edge AI for critical report tagging. Implement human validation. |
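
Illustrative guardrail sketches (referenced from the table above). These are minimal, hedged starting points in Python, not vetted implementations.

The sentiment analysis row calls for periodically validating results against human judgments. A minimal spot-check sketch, assuming the open-source VADER library listed in the tools column plus a small hand-labelled sample; the sample texts and the 80% agreement threshold are illustrative assumptions, not recommendations:

```python
# Spot-check VADER sentiment labels against a small human-labelled sample.
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

_ANALYSER = SentimentIntensityAnalyzer()

# Hypothetical hand-labelled examples: (text, human label).
HUMAN_LABELLED = [
    ("Thank you so much, this programme changed my life!", "positive"),
    ("The service was slow and nobody answered my emails.", "negative"),
    ("Brilliant volunteers, really welcoming.", "positive"),
    ("I felt ignored and talked down to.", "negative"),
]

def model_label(text: str) -> str:
    """Map VADER's compound score (-1..1) to a coarse label."""
    compound = _ANALYSER.polarity_scores(text)["compound"]
    return "positive" if compound >= 0 else "negative"

def agreement_rate(samples) -> float:
    """Fraction of samples where the model matches the human label."""
    return sum(model_label(t) == label for t, label in samples) / len(samples)

if __name__ == "__main__":
    rate = agreement_rate(HUMAN_LABELLED)
    print(f"Model/human agreement: {rate:.0%}")
    if rate < 0.8:  # illustrative threshold
        print("Agreement below threshold - review the model or the labels.")
```

In practice the labelled sample should be drawn from the organisation's own content, covering the demographics the challenges column flags.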
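The candidate-screening row asks for proactive bias testing. One simple starting point is the four-fifths (disparate impact) rule used in US employment guidance: a group's selection rate should be at least 80% of the highest group's rate. A sketch, assuming screening outcomes are already logged per demographic group; the group names and outcomes below are made up:

```python
# Four-fifths (disparate impact) check on AI-assisted screening outcomes.
from collections import defaultdict

# Hypothetical screening log: (demographic_group, passed_screen).
OUTCOMES = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def selection_rates(outcomes):
    """Per-group share of candidates that passed the screen."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, selected in outcomes:
        total[group] += 1
        passed[group] += int(selected)
    return {g: passed[g] / total[g] for g in total}

def four_fifths_violations(outcomes, ratio=0.8):
    """Groups whose selection rate falls below `ratio` x the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < ratio * best}

if __name__ == "__main__":
    for group, rate in four_fifths_violations(OUTCOMES).items():
        print(f"Review needed: {group} selection rate {rate:.0%}")
```

A flagged group is a signal for human investigation, not an automatic verdict; the four-fifths rule is a coarse screen, not a full fairness audit.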
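Several rows (note taking, fraud detection, donor reports) recommend redacting sensitive data before text reaches an AI service. A deliberately naive regex sketch: the two patterns below catch only obvious emails and phone numbers, and a real deployment would need broader coverage or a dedicated PII-detection tool:

```python
# Naive PII redaction pass to run over transcripts before they are sent
# to a summarisation or note-taking model.
import re

# Illustrative patterns only: names, addresses, and IDs are NOT caught.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched PII with a typed placeholder, e.g. [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    sample = "Contact Priya at priya@example.org or +44 20 7946 0958."
    print(redact(sample))
    # -> "Contact Priya at [EMAIL] or [PHONE]."
```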
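The image-tagging row pairs automated tags with human review. A common pattern is to auto-accept only high-confidence tags and queue the rest for a person; the tag data and the 0.9 cut-off below are illustrative:

```python
# Route low-confidence image tags to a human review queue instead of
# publishing them automatically.
from typing import NamedTuple

class Tag(NamedTuple):
    image_id: str
    label: str
    confidence: float  # model-reported confidence in [0, 1]

def triage(tags, auto_accept_at=0.9):
    """Split tags into auto-accepted and human-review buckets."""
    accepted = [t for t in tags if t.confidence >= auto_accept_at]
    review = [t for t in tags if t.confidence < auto_accept_at]
    return accepted, review

if __name__ == "__main__":
    tags = [
        Tag("img-001", "community garden", 0.97),
        Tag("img-001", "protest", 0.41),   # ambiguous: send to a human
        Tag("img-002", "classroom", 0.88),
    ]
    accepted, review = triage(tags)
    print(f"Auto-accepted: {[t.label for t in accepted]}")
    print(f"Needs human review: {[t.label for t in review]}")
```

Reviewer corrections can then feed the "feedback loops" the guardrail mentions.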
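The fraud-detection row mentions anonymisation techniques. One option is keyed pseudonymisation: replacing direct identifiers with stable HMAC digests so records can still be joined without exposing the raw values. A sketch with deliberately simplified key handling; in real use the key would come from a secrets manager, never from source code:

```python
# Keyed pseudonymisation of direct identifiers before fraud analysis.
# HMAC gives stable pseudonyms that cannot be recomputed without the key.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"  # illustrative only

def pseudonymise(value: str) -> str:
    """Return a stable, irreversible pseudonym for an identifier."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

if __name__ == "__main__":
    record = {"donor_id": "D-1029", "email": "sam@example.org", "amount": 250}
    safe = {
        "donor_id": pseudonymise(record["donor_id"]),
        "email": pseudonymise(record["email"]),
        "amount": record["amount"],  # non-identifying fields pass through
    }
    print(safe)
```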