AI Risk Assessment
NOTE: PROTOTYPE / WORKING DRAFT

Task | Type | Biggest area(s) of risk | Theme | Ethics (go or no-go?) | Does it have a potential positive impact? | Challenges & constraints (with regard to equity, transparency, bias, and accountability) | Possible tool / AI approach [NEEDS MORE RESEARCH, NOT A RECOMMENDATION] | Safety criteria checklist | Guardrails and mitigation strategies for your use case
---|---|---|---|---|---|---|---|---|---
Write fundraising proposal | Analyse | Hallucination, information loss, additional verification time needed | Fundraising | | | Proposals exhibiting demographic biases and lacking compelling narratives for diverse audiences. Boring copy. | OpenAI GPT-3, Claude.AI | - [x] Rigorously validate accuracy and tone. Implement human review before submitting. | Implement rigorous human review of AI-written proposals to validate accuracy and tone before submitting.
Summarise videos or audio | Analyse | Bias, data security, data quality, additional work | Operations | | | Summaries distorting original meaning across diverse speakers/contexts. | Descript, Otter.ai, Teams transcription | - [x] Mitigate bias and privacy risks. Ensure high quality inputs. | Do not input private information. Use quality filters for input media (human in the loop). Account for context/nuance.
Summarise research findings and reports (analyse patterns in large sets of reports and identify trends) | Analyse | Bias, data security, data quality | Research | | | Summaries reflecting demographic biases and inaccuracies from low quality inputs. | MonkeyLearn, Aylien, OpenAI GPT-3, Claude.AI, Amazon Comprehend, IBM Watson | - [x] Mitigate bias risks. Secure source data. Ensure high quality inputs. | Implement bias testing. Follow ethical data practices. Use quality filters for input sources.
Summarise meeting notes | Analyse | Bias, data security, data quality, additional work | Operations | | | Summaries reflecting demographic biases and inaccuracies from low quality inputs. | Otter.ai, Microsoft OneNote | - [x] Mitigate bias and privacy risks. Ensure high quality inputs. | Implement privacy filters. Use quality filters for input media. Account for context/nuance.
Summarise impact reports | Analyse | Bias, data security, data quality | Evaluation | | | Summaries reflecting demographic biases and inaccuracies from low quality inputs. | SummarizeBot, TextRazor | - [x] Mitigate bias risks. Secure source data. Ensure high quality inputs. | Implement bias testing. Follow ethical data practices. Account for language/cultural nuances.
Summarise available funding and resources | Analyse | Bias, data security, data quality | Fundraising | | | Summaries reflecting demographic biases and inaccuracies from low quality inputs. | FundingBox, GrantStation, OpenAI GPT-3 | - [x] Mitigate bias risks. Secure source data. Ensure high quality inputs. | Implement bias testing. Follow ethical data practices. Use quality filters for input sources.
Strategic plans for program implementation based on needs | Analyse | Hallucination, information loss | Programs | | | Program plans neglecting needs of diverse stakeholders across contexts. | IBM Watson Studio, DataRobot, OpenAI GPT-3, Claude.AI | - [x] Regularly validate plans with human oversight. Ensure accuracy in strategies. | Establish processes for domain experts to review and validate program strategies generated by AI.
Sentiment analysis | Analyse | Hallucination, security, overreliance | All | | | Biases in training data leading to inaccurate sentiment analysis across demographics. | VADER Sentiment Analysis, TextBlob, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language, Amazon Comprehend, Hugging Face Transformers | - [x] Periodically check sentiment analysis for accuracy. Incorporate human insights for better results. | Periodically validate sentiment analysis results against human judgments and adjust models accordingly (see the validation sketch after the table).
Semantic search, searchable knowledge | Automate | Hallucinations, semantic search issues, security | Operations | | | Search results exhibiting demographic biases or lack of inclusivity across contexts. | Elasticsearch, Solr, OpenAI GPT-3, Claude.AI, Microsoft Azure Cognitive Search, IBM Watson | - [x] Monitor for hallucinations and semantic drift. Validate against authoritative sources. | Implement processes to identify and mitigate hallucinations and semantic drift over time. Use reliable knowledge sources.
Review computer code to find bugs or mistakes | Automate | IP, overreliance, displacement of work | Technology | | | Failure to identify embedded biases, ethical or fairness issues in code during reviews. | CodeClimate, DeepCode, OpenAI GPT-3, Claude.AI, Amazon CodeGuru | - [x] Regularly review code for bugs/mistakes. Protect proprietary information. | Implement secure code review processes. Hide or avoid using proprietary code in training data.
Review candidates' fit with job descriptions | Automate | Bias, data security | Human resources | | | Biased, non-equitable hiring practices. Candidate data privacy violations. | HireVue, Traitify, Textio, Jobscan, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language, Amazon Comprehend | - [x] Mitigate bias risks. Secure candidate personal data. | Implement proactive bias testing (see the four-fifths-rule sketch after the table). Follow best practices for ethical candidate screening and data privacy.
Responding to audit requests | Automate | Hallucinations | Operations | | | Potential lack of accountability and inaccuracies in audit responses. | IBM Watson Discovery, Kira Systems, OpenAI GPT-3, Claude.AI | - [x] Validate responses against authoritative sources. Maintain auditable logs. | Validate AI-generated audit responses against authoritative sources before finalising. Log all AI-generated content for auditing (see the logging sketch after the table).
Rapid research synthesis | Detect | Hallucinations, security, semantic issues | Research | | | Research syntheses exhibiting demographic biases or omitting key perspectives. | Iris.ai, Covidence, OpenAI GPT-3, Claude.AI | - [x] Validate against authoritative sources. Check for hallucinations and omissions. | Validate synthesised research against authoritative sources. Identify potential hallucinations or omissions.
Proposal notes | Detect | Data leakage, additional work | Fundraising | | | Opaque use of insights from proposal notes raising accountability concerns. | Scrivener, Bear, OpenAI GPT-3, Claude.AI | - [x] Safeguard proposal notes to prevent unauthorised access. Regularly update security measures. | Restrict access to proposal notes. Stay updated on evolving security best practices and vulnerabilities.
Project evaluation | Generate | Hallucination, security, overreliance | Evaluation | | | Potential biases in evaluation processes and decision-making rationale lacking transparency. | Evalato, SurveyMonkey | - [x] Regularly validate project evaluations. Include reasons behind decisions for transparency. | Analyse data and "score" using different weights to determine confidence levels for retaining the program.
Personalising the service delivery experience with assistants | Generate | Reputational, security, ethics, bias and toxic language | Programs | | | Services exhibiting biases, toxic traits or privacy violations impacting some user groups. | Salesforce, Zendesk, OpenAI GPT-3, Claude.AI | - [x] Mitigate risks of bias, toxic language, security/privacy violations. Regularly review outputs. | Implement proactive monitoring for problematic content. Gather user feedback to identify potential issues.
Personalised learning and guidelines | Generate | Hallucination, information loss, bias | All | | | Biased, culturally irrelevant or inaccessible learning content for some audiences. | Knewton, Smart Sparrow, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | - [x] Regularly assess and update learning content. Incorporate human feedback for personalisation. | Implement processes to capture user feedback and update learning materials iteratively with human oversight.
Organise notes | Generate | Security, hallucinations | Operations | | | Illogical organisation. Potential leakage of personal/sensitive information in notes. | Evernote, Notion, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | - [x] Validate organisation of notes. Secure notes containing sensitive information. | Use AI to generate note organisation suggestions, with human verification and refinement of groupings and themes. Secure notes with personal/sensitive data.
Note-taking (e.g. for video calls) | Generate | Privacy, additional work | Operations | | | Potential privacy violations through sharing of personal meeting information. | Otter.ai, Microsoft OneNote, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | - [x] Remind users not to share personal information. Enable automated tools to censor sensitive data. | Display disclaimers about sharing personal info. Use redaction AI to identify and remove sensitive data automatically in controlled models (see the redaction sketch after the table).
Modelling of climate events, economic crash or inflation, forced migration, conflict, etc. | Generate | Hallucination, information loss, semantic search issues | Programs | | | Models neglecting impacts on diverse stakeholder groups or lacking transparency. | ClimateNet, Simudyne, TensorFlow, PyTorch, Scikit-learn | - [x] Regularly update models with latest data. Use diverse datasets for accurate predictions. | Establish data pipelines to incorporate new information sources. Expand demographic/geographic data coverage.
Mis- and disinformation detection | Generate | Hallucination, semantic search | Technology | | | Demographic, political or cultural biases in defining and detecting misinformation. Unclear definitions. | Factmata, Open Sources, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language, Amazon Comprehend | - [x] Regularly update filters and blocks for misinformation. Incorporate user feedback for improvements. | Establish processes to continuously update misinformation detection models based on new data sources and user feedback.
Matching Sustainable Development Goals to outcomes | Generate | Hallucinations | Grants | | | Cultural biases leading to mismatches between outcomes and SDGs across contexts. | SDG Compass, SDG Tracker, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | - [x] Validate matches for accuracy. Use multilingual training data. | Expand training data to include inputs from non-English languages. Implement human validation of matches.
Make documents plain English | Generate | Semantic issues | All | | | Plain language materials inaccessible or lacking resonance for some audiences. | Grammarly, Hemingway Editor, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | - [x] Preserve critical meaning. Validate simplified language. Implement human review. | Use AI suggestions as a starting point, with human editing to simplify language while preserving critical meaning.
Layout designs for visual data | Generate | Hallucination, information loss, overreliance, carbon footprint compared to alternative approaches | Data | | | Misleading, inaccurate or biased visual data representations. | Canva, Adobe Spark, TensorFlow, PyTorch, Scikit-learn | - [x] Validate layout accuracy. Mitigate risks of bias or misleading representations. | Prefer human designers over AI for critical visual data representations to avoid potential biases or misleading outputs. Implement bias testing.
Labelling unlabelled data | Generate | Data leakage, worker exploitation | Data | | | Unfair compensation and representation gaps for labellers across demographics/geographies. | Labelbox, Prodigy, Google Cloud AI Platform, Amazon SageMaker | - [x] Ensure fair compensation and ethical working standards for data labelling. Regularly review working conditions. | Work with AI providers that use reputable data labelling partners who treat workers fairly. Audit their practices periodically.
Job descriptions | Generate | Data leakage, bias | Human resources | | | Demographic biases and lack of inclusive language in job descriptions. | Textio, Jobscan, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language, Amazon Comprehend | - [x] Secure job descriptions to prevent unauthorised access. Regularly review and update privacy measures. | Restrict access to job description files. Audit handling of candidate personal data periodically.
Internal assistants | Generate | Hallucinations, overcorrection, bias and toxic language | Programs | | | Potential hallucinations, biases, toxic outputs and data security risks from assistants. | ChatGPT, Microsoft Teams, OpenAI GPT-3, Claude.AI | - [x] Mitigate hallucinations, bias, toxic language. Monitor outputs closely with human oversight. | Establish clear guidelines and filtering. Maintain tight control over sensitive information access.
Internal assistant on Wildnet | Generate | Hallucinations, overcorrection, bias and toxic language | Communications | | | Potential hallucinations, biases, toxic outputs and data security risks from assistants. | Custom AI integration based on specific needs and requirements (Llama 2) | - [x] Mitigate hallucinations, bias, toxic language. Monitor outputs closely. | Define strict guidelines and filters. Tightly control access to sensitive data.
Insights and advice in specific situations relating to mitigation activities | Generate | Hallucination, information loss | All | | | Insights exhibiting demographic biases, lack of transparency on assumptions/limitations. | IBM Watson, OpenAI GPT-3, Claude.AI | - [x] Continuously check insights for accuracy. Incorporate human feedback to avoid misinformation. | Validate insights against authoritative sources. Gather user feedback to identify potential misinformation.
Impact & performance | Generate | Data leakage, worker exploitation | Human resources | | | Lack of transparency around use of personal data. Privacy violations. | ImpactMapper, Salesforce | - [x] Obtain consent for data collection. Ensure secure storage of impact and performance data. | Implement strict access controls and encryption for storing and sharing impact and performance data.
Image tagging | Generate | Semantic search issues, security | Data | | | Underrepresentation or biases in training data leading to inaccurate tagging across demographics. | Amazon Rekognition, Clarifai, Google Cloud Vision AI, TensorFlow | - [x] Validate tags for accuracy and potential biases. Implement human review process. | Implement human review to validate tags and identify potential semantic issues. Use diverse training data to reduce biases. Ensure feedback loops.
Identify and organise themes within qualitative data | Generate | Security, hallucinations | Evaluation | | | Omission of key perspectives across diverse contexts in identified themes. | NVivo, Dedoose, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language, Amazon Comprehend | - [x] Validate identified themes. Secure source data containing personal information. | Validate AI-identified themes against human judgment. Provide clear security controls for any sensitive qualitative data.
Generating data taxonomies | Generate | Data leakage, semantic issues, creation of additional work | Data | | | Taxonomies that don't reflect diversity of contexts and perspectives. Opaque updates. | PoolParty, Smartlogic, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | - [x] Protect data taxonomies to prevent unauthorised access. Regularly review and update access controls. | Implement role-based access controls for data taxonomies. Regularly review logs to monitor access.
Fraud detection | Generate | Data leakage, model exploitation, prompt injection (aka jailbreaking) | Technology | | | Potential demographic biases in fraud detection. Privacy violations through use of personal data. | Featurespace, SAS Fraud Analytics, TensorFlow, PyTorch, Scikit-learn, Amazon Fraud Detector | - [x] Implement automated flags for personal information. Align with country-specific privacy laws. | Use anonymisation and redaction techniques to protect personal data. Consult legal experts on privacy requirements.
Formatting documents | Generate | Overreliance, carbon | Communications | | | Formatting issues impacting accessibility and inclusiveness for all users. | Microsoft Word, Google Docs, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | - [x] Implement human review for important documents. Check for proper formatting. | Use AI formatting assistance as a drafting aid, with human reviewers finalising important documentation.
Find errors or trends within documents | Manage | Hallucination, information loss | All | | | Error analyses reflecting demographic biases or context gaps across groups. | Grammarly, ProWritingAid, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | - [x] Implement error ID tools. Regularly review and update content for accuracy. | Deploy AI document QA tools. Establish human editorial processes.
Edit images & videos | Manage | IP, ethics, hallucinations, carbon | Communications | | | Potential intellectual property infringement. Unethical visual representations of demographics. | Adobe Creative Cloud, Lumen5, Google Cloud Vision AI, Amazon Rekognition | - [x] Prevent infringement of copyrights. Validate for ethical concerns. Implement human review. | Implement robust controls to prevent infringement of copyrighted visual content. Validate edits for potential ethical concerns with human reviewers.
Due diligence on potential corporate partners | Manage | Hallucinations, semantic search issues | Corporates | | | Lack of impartiality, transparency or comprehensiveness in due diligence processes. | DiligenceVault, Intelligize, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language, Amazon Comprehend | - [x] Mitigate hallucination risks. Cross-validate summarised info. | Verify summarised info against authoritative sources. Identify potential fabrications or omissions.
Draft program updates and delivery plans | Manage | Hallucination, information loss | Programs | | | Plans and updates that lack representation of diverse contexts and user needs. | Notion, Trello, OpenAI GPT-3, Claude.AI | - [x] Regularly check accuracy, involve humans for important decisions, and learn from user feedback. Ensure diverse datasets to avoid biases. | Implement human-in-the-loop review. Expand training data sources to reduce demographic and geographic biases.
Draft donor reports | Manage | Data leakage | Grants | | | Potential donor privacy violations. Reports inaccessible to diverse audiences. | Donorbox, Kindful, OpenAI GPT-3, Claude.AI | - [x] Safeguard confidential information. Obtain explicit consent for data collection. Check for language nuances in reports. | Implement access controls and encryption for confidential donor data. Consult donor communications experts during report drafting.
Design presentations, storyboards, etc. | Predict | Hallucination, information loss, overreliance, carbon footprint compared to alternative approaches | Communications | | | Inaccessible or inappropriate presentation designs for diverse audiences. | Canva, Prezi, OpenAI GPT-3, Claude.AI | - [x] Validate outputs for accuracy, biases, and appropriateness. Involve human designers. | Involve human designers to review AI-generated designs and provide creative direction. Implement bias testing.
Database wrangling, cleaning, and quality checks | Review | Data leakage, intellectual property infringement, transparency, creation of additional work | Data | | | Lack of transparency around data use. Inequitable training data collection. | Trifacta, OpenRefine, Scikit-learn, TensorFlow | - [x] Avoid using production data for future training. Use a hosted LLM for added security. | Maintain strict separation between production and non-production data environments. Only use cleaned data for model training.
Creating templates | Review | Overreliance, carbon | Operations | | | Demographic biases embedded in original templates before customisation. | TemplateMonster, Canva, OpenAI GPT-3, Claude.AI | - [x] Treat templates as starting points. Involve human review and customisation. | Minimise overreliance on AI. Use templates as starting points for human editing rather than final outputs.
Creating talking points | Review | Overreliance, carbon, hallucinations, bias | Communications | | | Potential biases, misinformation or offensive content in talking points. | Respona, Speechpad, OpenAI GPT-3, Claude.AI | - [x] Vet for potential biases, misinformation, and inappropriate content. | Vet AI-generated talking points for potential biases, hallucinations, and sensitive content before external use, as part of the testing process.
Creating checklists or task lists | Review | Overreliance, carbon | Operations | | | Potential blind spots or biases in auto-generated checklists across contexts. | Todoist, Trello, OpenAI GPT-3, Claude.AI | - [x] Validate outputs. Treat as drafts requiring human review before operational use. | Treat AI-generated checklists as drafts to validate and augment with human oversight before operational use.
Create visual communication materials | Summarise | Data leakage, ethics, semantic search, hallucinations | Communications | | | Visuals lacking representation or containing offensive/insensitive content for some groups. | DALL-E, Midjourney, Adobe Creative Cloud, Canva, OpenAI GPT-3, Claude.AI, TensorFlow | - [x] Protect visuals from unauthorised access. Implement encryption for secure sharing. | Apply digital rights management controls. Use end-to-end encryption when sharing visuals externally.
Create fundraising marketing documents | Summarise | Data leakage | Fundraising | | | Lack of inclusive language/visuals. Non-transparent use of data in fundraising. | Mailchimp, HubSpot, OpenAI GPT-3, Claude.AI | - [x] Protect sensitive information with strong security measures. Only share information with proper consent. Regularly review data sharing protocols. | Redact and encrypt sensitive data. Obtain explicit consent before sharing any personal or proprietary information. Audit data sharing periodically.
Create communication materials (docs, images, videos, social media content, etc.) | Summarise | Hallucination, information loss | Communications | | | Content exhibiting demographic biases or lacking resonance with diverse audiences. | DALL-E, Midjourney, Canva, Slack, OpenAI GPT-3, Claude.AI | - [x] Regularly verify accuracy. Use clear, unbiased language. Monitor user feedback. | Establish editorial review processes. Analyse materials for potential biases. Incorporate audience feedback iteratively.
Contract drafts | Summarise | Data leakage, privacy | Legal | | | Ambiguous, unfair or inequitable contract terms across stakeholders. | DocuSign, PandaDoc, OpenAI GPT-3, Claude.AI, Google Cloud Natural Language | - [x] Ensure contracts protect privacy. Obtain clear consent. Regularly review and update templates. | Consult legal experts. Implement robust consent management processes. Periodically audit contract language.
Computer code | Summarise | IP infringement, hallucinations, semantic search issues, carbon (if generated code is less efficient) | Technology | | | Embedded biases, fairness issues or vulnerabilities in code lacking auditability. | Visual Studio Code, PyCharm, OpenAI GPT-3, Claude.AI, Amazon CodeGuru | - [x] Clearly define data flows and secure systems. Prevent unauthorised internal access. Regularly update security. | Implement secure coding practices, access controls and monitoring. Perform regular pen-testing and security audits.
Metadata documents | Summarise | Data leakage | Data | | | Lack of transparency around updates to metadata. Inequitable data access. | DataWrangler, Jupyter Notebooks, OpenAI GPT-3, Claude.AI | - [x] Protect codebooks and metadata with strong security measures. Regularly review access protocols. | Use encryption, access controls, and audit logs to secure codebooks and metadata. Regularly review who has access.
Classify information | Summarise | Semantic search, additional work | All | | | Classification errors and embedded biases violating equity and accountability. | MonkeyLearn, Aylien, Google Cloud Natural Language, Amazon Comprehend | - [x] Validate classifications, especially for high-stakes decisions. Secure sensitive information. | Involve human oversight to validate AI-driven information classification, especially for critical decisions. Secure access to sensitive information.
Automatic report tagging | Summarise | Semantic search issues, security | Grants | | | Inequitable tagging that doesn't align with established standards and transparency. | Zoho Analytics, Tableau, Google Cloud Natural Language, Amazon Comprehend | - [x] Validate tagging accuracy. Prefer proven frameworks over experimental AI approaches. | Use traditional proven frameworks rather than leading-edge AI for critical report tagging. Implement human validation.
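
The sentiment analysis row's guardrail, validating model output against human judgments, can be made concrete with a small spot-check script. A minimal sketch using the open-source VADER analyser; the `human_labels` sample is hypothetical illustrative data, and the 80% escalation threshold is an assumption to adapt to your own context.

```python
# Spot-check VADER sentiment labels against a small hand-labelled sample.
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Hypothetical audit sample: (text, human_label). Replace with your own data.
human_labels = [
    ("Thank you, the workshop was genuinely helpful.", "positive"),
    ("The application portal kept crashing all week.", "negative"),
    ("The meeting has been moved to Thursday.", "neutral"),
]

def vader_label(compound: float) -> str:
    # Conventional VADER thresholds: >= 0.05 positive, <= -0.05 negative.
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

analyzer = SentimentIntensityAnalyzer()
matches = [
    vader_label(analyzer.polarity_scores(text)["compound"]) == label
    for text, label in human_labels
]
rate = sum(matches) / len(matches)
print(f"Human-model agreement: {rate:.0%} on {len(matches)} samples")
if rate < 0.8:  # assumed threshold -- agree one with your team
    print("Agreement below threshold: escalate for review and model adjustment.")
```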
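
For the candidate-screening row, "implement proactive bias testing" could start with the four-fifths (80%) rule from US employment-selection guidance: flag any group whose selection rate falls below 80% of the highest group's rate. A minimal sketch; the outcome data and group names are hypothetical.

```python
# Four-fifths (80%) rule check over screening outcomes: flag any group whose
# selection rate falls below 80% of the highest group's rate.
from collections import defaultdict

# Hypothetical outcomes: (demographic_group, advanced_to_interview).
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, advanced in outcomes:
    totals[group] += 1
    selected[group] += advanced

rates = {group: selected[group] / totals[group] for group in totals}
highest = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / highest
    status = "FLAG for review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({status})")
```

This is a screening aid, not a verdict: a flagged ratio is a prompt for human investigation of the selection process, not proof of bias on its own.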
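
The audit-request row requires that all AI-generated content be logged. A minimal append-only sketch; the `audit_log.jsonl` path, field names, and model identifier are assumptions for illustration, not a prescribed schema.

```python
# Append-only JSONL log for AI-generated content, with content hashes so
# later tampering with logged outputs is detectable.
import datetime
import hashlib
import json

LOG_PATH = "audit_log.jsonl"  # assumed path; point this at secured storage

def log_ai_output(task: str, model: str, prompt: str, output: str) -> None:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "task": task,
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "output": output,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage: record a drafted audit response before human review.
log_ai_output(
    task="Responding to audit requests",
    model="example-model",  # assumed identifier
    prompt="Summarise our Q3 spending controls for the auditor.",
    output="Draft response text...",
)
```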
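
The note-taking and fraud-detection rows both call for redacting personal data before text reaches a model. Dedicated redaction tools go much further; as a floor, even a regex pass catches obvious identifiers. A minimal sketch with illustrative, deliberately non-exhaustive patterns.

```python
# Minimal pre-processing pass that masks obvious identifiers before text is
# sent to an external model. Patterns are illustrative, not exhaustive;
# a dedicated PII-detection tool should sit behind this.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Contact Priya at priya@example.org or +44 20 7946 0958."))
# -> Contact Priya at [REDACTED EMAIL] or [REDACTED PHONE].
```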