Investor Statement Sign-on: Support of Robust EU Artificial Intelligence (AI) Regulation

Please sign on to this investor statement (full text below) regarding the European Commission’s proposed legislation governing artificial intelligence (AI). 

As AI continues to find its way into our daily lives, it brings, as with all technological innovation, the potential both to advance and to harm society. The United Nations' human rights chief in 2021 called on member states to place a moratorium on the sale and use of artificial intelligence systems until the "negative, even catastrophic" risks they pose can be addressed.

Companies need to respect human rights throughout their operations and value chains as outlined in the UN Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises. 

Investors recognize the need for rights-respecting regulation to enable responsible business conduct. While we welcome the European Commission’s proposed Artificial Intelligence Act (AI Act), we also call on the EU regulatory bodies to consider and incorporate changes to the proposed regulation to ensure that the rights of all members of society are protected and that the regulation does not limit civic freedoms and democratic processes.



INVESTOR STATEMENT IN SUPPORT OF DIGITAL RIGHTS REGULATIONS
- EUROPEAN UNION ARTIFICIAL INTELLIGENCE ACT

Companies need to respect human rights throughout their operations and value chains as outlined in the UN Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises.  Investors recognize the need for rights-respecting regulation to enable responsible business conduct and welcome the European Commission’s proposal for a regulatory framework for artificial intelligence. The proposed Artificial Intelligence Act (AI Act) is intended to regulate the development and use of AI systems and aims to promote the uptake of ‘trustworthy AI’ whilst protecting the rights of people affected by AI systems.

While the development and use of AI has advanced, and continues to have great potential to advance, human rights and sustainable development, AI, and in particular the lack of transparency in AI systems, can cause and contribute to actual and potential harms such as invasion of privacy and discrimination. The lack of trustworthy AI can result in AI providers and AI users facing reputational, financial and business risks and losses, and exposes society at large to significant risk. Investors want to be able to make rights-respecting investment decisions in companies that responsibly design, provide, deploy and/or use AI systems within their business operations and value-chain relationships. Investors support proposed regulation like the AI Act, which is poised to incentivize and enable responsible development and use of AI that empowers users, communities, and society, rather than dividing and discriminating against them.

We, the undersigned XX investors representing over US$XX (Euro €YY) in assets under management and advisement, call on the European Parliament, the European Commission, the Council of the European Union, and EU Member States to ensure the AI Act protects the rights of all people and does not limit or jeopardize civic freedoms and democratic processes. We urge the consideration and incorporation of the following recommendations to the AI Act: 

Adopt Meaningful Human Rights Impact Assessment Requirements for Developing & Deploying AI Systems
Human rights impact assessments, as part of human rights due diligence processes, are a critical and widely accepted means of ensuring the responsible design of products and services and the conduct of rights-respecting business operations and decision-making, addressing and preventing adverse impacts on relevant stakeholders and rightsholders. They enable businesses, in the case of AI providers, to develop and design safer products and services, and in the case of AI users, to prevent and mitigate harms that may occur from the deployment and use of such products and services. Incorporating human rights impact assessment across the business or the product and service life cycle will result in more sustainable financial returns and minimize exposure to potential liability. Companies will focus on long-term value creation that benefits all relevant stakeholders of the business, including employees, users, communities, and society. The AI Act should include the following:
Ongoing human rights impact assessments to be undertaken by businesses, both AI providers and AI users, at all stages of the product and service cycle, from design to deployment and end-use, taking into account potential contexts for such use or misuse and resultant unintended harms, to ensure the ongoing protection of and accountability to stakeholders and rights holders in the value chain.
A common methodology for a human rights impact assessment process that has specific criteria relevant to AI systems, to be developed with the involvement of the proposed European Artificial Intelligence Board and the EU Fundamental Rights Agency, including consultation with external stakeholders and rightsholders.
Meaningful engagement with rights holders and civil society, including human rights defenders (HRDs), that is intersectional and sensitive to all groups of society (whether based on gender and gender identity, ethnicity, disability, age, sexual orientation, health, religious practices, etc.), as such engagement is critical to effectively identifying and responding to actual and potential harmful impacts.
A requirement that human rights impact assessments be made publicly accessible in the proposed EU database for stand-alone high-risk AI systems within a reasonable time after being conducted and completed by the AI provider and/or AI user, whether in the public or private sector.

Expand the publicly viewable database requirements to AI users to ensure meaningful transparency
The AI Act already proposes an important transparency measure by mandating that providers of high-risk AI systems register their systems in a publicly viewable database. However, in the original draft of the AI Act, this obligation is limited to AI providers, meaning that the public will be able to see only which high-risk systems are on the market in the EU, but not where they are being used. Following the recommendations of civil society, and in order to truly create an ecosystem of trust, the obligation to register in the database should be expanded to AI users as well, meaning that entities deploying high-risk AI systems should also register their use of such systems, along with the results of the human rights impact assessment discussed above.

Mandate Stakeholder & Rightsholder Participation
An accessible and effective mechanism for stakeholder engagement in the implementation and enforcement of the AI Act is critical. We support recommendations from civil society and the latest common position of the Council of the EU to establish an advisory group of external stakeholders and civil society organizations to the European Artificial Intelligence Board, serving as a ‘bridge’ between the Board and broader civil society and other stakeholders and thereby operationalizing meaningful stakeholder engagement. This advisory group would streamline multi-stakeholder engagement within the Board, including by allowing quicker feedback routes to the Board regarding the application and implementation of the AI Act, and could assist in outreach to affected communities, especially marginalized groups.

Prohibitions on AI systems posing Unacceptable Risks
The list of ‘prohibited AI practices’ currently provided in the proposed draft AI Act (Article 5) should be extended to cover all AI systems that pose an unacceptable risk of violating human rights including:
A full prohibition on remote biometric identification (e.g. facial recognition cameras (FRCs)) in publicly accessible spaces, applying to all AI providers and AI users and not just law enforcement, covering both ‘real-time’ live uses (e.g. when FRCs are used in supermarkets or public spaces to monitor for lists of suspects) and ‘post’ retrospective uses, as remote biometric identification can weaponize historical footage against people (e.g. where FRC footage is retroactively analyzed to uncover the identity of a journalist’s source);
The use of ‘predictive policing’, i.e. AI systems used by law enforcement and criminal justice authorities to make predictions, profiles, or risk assessments for the purpose of predicting crimes;
The use of AI-based individual risk assessment and profiling systems in the migration context. This would include predictive analytics and AI polygraphs used for the purpose of prohibiting, curtailing or managing migration;
The use of emotion recognition systems that claim to infer people’s emotions, including the use of AI polygraphs;
The use of biometric categorization systems to track, categorize, and judge people in publicly accessible spaces; or to categorize people based on protected characteristics (for example, ethnic origin, race, disability, sexual orientation) in any circumstances. 

Implement Safeguards for AI systems for National Security Purposes
Rules and safeguards in the AI Act are relevant to, and should apply to, AI systems deployed or used for military, defence, and/or national security purposes. Blanket exemptions from the AI Act for national security must be scrutinized to ensure that national security policy cannot override the rule of law and fundamental rights.
“Security” technology has been known to target protestors (e.g., via biometric recognition), to have a chilling effect on the exercise of people’s rights, and to result in the silencing of dissenting and opposition voices (e.g., through removal of “terrorist” content on the internet). Technology designed for the security and military arena has also been re-deployed for other public and civil uses (e.g., use of surveillance tools to enforce pandemic rules) without assessment of its adverse and harmful impacts, even unintended ones.

Remedy and Accountability
The proposed AI Act should ensure accountability for harms that businesses cause or contribute to, and should enable and support the provision of adequate and effective remedy. Depending on their connection to a harm, businesses should provide for, cooperate in, or use leverage to ensure remediation of adverse impacts of AI systems, products and services in their global value chains and within their operations. The AI Act should include the following:
A right to an effective remedy for those whose rights under the AI Act have been infringed as a result of the putting into service of an AI system; and
The creation of a mechanism for individuals and public interest organizations to lodge a complaint with national supervisory authorities over a breach of the AI Act or over AI systems that undermine fundamental rights or the public interest.

Artificial intelligence is a fast-moving domain, and the AI Act must have clear mechanisms and processes to keep pace with technological development. We trust that the legislators will use this unique opportunity to improve the proposed AI Act in order to make it truly meaningful and impactful in respecting and protecting the rights of users and society.





Name of Primary Contact *
First and last names
E-mail of Primary Contact *
Name of Institution/Organization *
Country where institution is based *
Total assets under management and/or advisement in USD (to be cited in aggregate only) *