Key ethical issues addressed
Collection and protection of biometric data
- Speech and video data are biometric data, which is classed as sensitive under both Swiss and EU law.
- Biometric data requires a higher level of protection than ordinary personal data.
- We collect biometric data only with the explicit consent of data subjects, and store it in secure, encrypted, password-protected infrastructure (illustrated in the sketch below).
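Purely as an illustration of encryption at rest, and not a description of the project's actual storage infrastructure, the sketch below encrypts a recording using the Python cryptography library's Fernet interface; the file names and the in-code key are hypothetical placeholders.

```python
from pathlib import Path
from cryptography.fernet import Fernet

# Placeholder key: in practice the key would come from a managed key store,
# never be generated or kept alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical file name for a consented recording.
raw = Path("interview_001.wav").read_bytes()
Path("interview_001.wav.enc").write_bytes(cipher.encrypt(raw))

# Only holders of the key can recover the original data.
assert cipher.decrypt(Path("interview_001.wav.enc").read_bytes()) == raw
```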
Misuse of technologies
- Technologies developed for legitimate purposes can also be put to harmful uses.
- It is difficult to control how technologies are used by end-users.
- We will follow export-control requirements and will not export technologies to undemocratic countries, countries experiencing conflict, or countries with poor human-rights records.
Bias
- Bias in ML technologies arises when training data is skewed for or against a particular group.
- This can lead to discriminatory outcomes for marginalized groups.
- We are evaluating training data, development results, and user feedback for bias, and making adjustments to mitigate its impact as far as practically possible (a minimal illustration of such a check follows this list).
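As a minimal sketch of the kind of check this involves (the data, group labels, and metric here are hypothetical, not the project's actual evaluation pipeline), comparing error rates across groups is one simple way to surface skew:

```python
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Per-group error rate: a basic indicator of skewed model behaviour."""
    errors, counts = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group] += 1
        errors[group] += int(truth != pred)
    return {g: errors[g] / counts[g] for g in counts}

# Hypothetical predictions tagged with a group attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "B", "B", "A", "B", "A", "B"]

rates = error_rates_by_group(y_true, y_pred, groups)   # {'A': 0.0, 'B': 0.75}
gap = max(rates.values()) - min(rates.values())
print(f"error-rate gap between groups: {gap:.2f}")     # a large gap flags a bias issue
```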
Transparency and explainability
- When AI technologies are highly complex, it can be difficult to understand why they reach a particular result.
- If end-users cannot understand how ROXANNE reaches its results, this could negatively affect how criminal investigations progress, or how findings are explained in court.
- We are ensuring that ROXANNE's algorithms are explainable to end-users (a minimal illustration follows this list).
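One minimal, hypothetical illustration of what end-user explainability can look like (not a description of the project's actual algorithms): for a linear scoring model, per-feature contributions show why a particular result was reached. The feature names and weights below are invented for the example.

```python
# Hypothetical weights and feature values for a link-scoring decision.
weights = {"voice_similarity": 2.1, "call_frequency": 0.8, "shared_contacts": 1.4}
features = {"voice_similarity": 0.9, "call_frequency": 0.2, "shared_contacts": 0.6}

# Per-feature contribution to the overall score, readable by a non-expert.
contributions = {name: weights[name] * features[name] for name in weights}
print(f"link score: {sum(contributions.values()):.2f}")
for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {value:+.2f}")
```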
Human-centric decision-making
- Advanced technologies can cause automation bias, where people trust machine outputs more than their own judgement.
- In law enforcement, this can lead to a loss of critical thinking and police intuition.
- We have developed a decision-support tool with which end-users can demonstrate that they have considered important legal, ethical, and societal issues before taking a decision (sketched below).
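Purely as a hypothetical sketch of how such a gate could work (the checklist items and workflow are illustrative, not those of the tool itself), the snippet below refuses to record a decision until every consideration has been explicitly confirmed:

```python
# Illustrative checklist; the real tool's content and workflow will differ.
CHECKLIST = (
    "Legal basis for the measure confirmed",
    "Proportionality and necessity considered",
    "Potential impact on non-suspects assessed",
)

class DecisionRecord:
    def __init__(self, decision: str):
        self.decision = decision
        self.confirmed = set()

    def confirm(self, item: str) -> None:
        if item not in CHECKLIST:
            raise ValueError(f"unknown checklist item: {item}")
        self.confirmed.add(item)

    def finalise(self) -> str:
        missing = [item for item in CHECKLIST if item not in self.confirmed]
        if missing:
            raise RuntimeError(f"decision blocked; unconfirmed items: {missing}")
        return f"decision '{self.decision}' recorded with all considerations confirmed"

record = DecisionRecord("request further network analysis")
for item in CHECKLIST:
    record.confirm(item)   # in a real tool, confirmation would come from the user
print(record.finalise())
```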