
AI ethics and risk of catastrophes

By

Veer Dosi

Research Scholar

Bhopal, MP, India

Abstract

It is imperative to research AI ethics and to prepare to build safer and more ethical AI. AI ethics is a set of rules that maintain an AI's discernment of right and wrong. Since data lies at the core of AI, it is imperative that we structure algorithms correctly: artificial intelligence can amplify biases and moral judgments to a new extent, so it is essential that we keep AI well within a moral framework.

Introduction

Artificial intelligence (AI) ethics is a field that has come to the foreground as a result of increasing concern about the safety of and risks associated with AI. It is currently a nascent field, but research into it is growing, and it forms part of the larger field of digital ethics, which takes into account concerns related to big data, data harvesting, and newer technologies like blockchain.

The principal aim of this research is to provide a high-level overview of the problems related to AI ethics and, consequently, not only their effects but also the existential risk that could arise as a result. The first section introduces the extent of the issue, the level of priority it should be granted, and what exactly we should be concerned about. The second section discusses the causes of the problem and the possible repercussions that might follow. Section 3 presents some possible solutions, along with some possible criticisms of the paper itself. We conclude with a more general discussion of the AI-focused policies that might need to be designed to tackle the problem effectively.

There are increasing numbers of high-profile cases of harm that have resulted either from the misuse of technology (e.g., facial recognition surveillance, nonconsensual mass data collection, psychometric voter manipulation) or from inherent flaws in a technology's design or training (e.g., bias in recidivism prediction, medical misdiagnosis).

As more direct research has been done in this field (from 2018 onwards; see, e.g., https://80000hours.org/problem-profiles/artificial-intelligence/), we are seeing that although AI may provide substantial benefits, it also increases risk: it could be used to develop dangerous new technology, could worsen geopolitical conflicts, and could help totalitarian regimes stay in power. This could eventually lead to greater risks, such as annihilation-level events or catastrophes that affect a large percentage of the population.

Drawing on a conception of ethics that encompasses broader social and political themes, we read digital ethics as covering the psychological, social (including environmental), and political impact of emerging digital technologies. The psychological refers to the likes of agency (moral self-determination), cognitive shifts, and selfhood; the social refers to identity, belonging, and communities, as well as environmental issues; and the political refers to the legal/jurisdictional, democratic (including accountability), and economic realms.

It is therefore imperative to invest heavily in research in this field, especially on the associated ethics, as this may be one of the most pressing problems humans face in the coming years, for reasons I elaborate upon in Section 1.

Section 1:

What has happened and what should we be concerned about?

There are many ethical risks associated with artificial intelligence (AI). One of the most significant is that AI could be used to harm or exploit humans. For example, if AI systems are used to control self-driving cars, there is a risk that the vehicles could be programmed to deliberately cause accidents. Another risk is that AI could be used to create "superintelligent" machines that could eventually out-compete and enslave humans. Similarly, if general AI systems are used to monitor people's online activity, they could be used to track people's movements and activities, and even control what they see and hear.

One of the most significant risks is that AI could exacerbate inequality. For example, if these systems are used to automate jobs, they could lead to large-scale unemployment and increased inequality. Additionally, they could be used to manipulate public opinion and interfere in elections: AI systems could be used to create fake news stories or to target ads at specific groups of people.

As artificial intelligence (AI) becomes more advanced, there are increasing concerns about the potential risks associated with its development and implementation. These risks can be divided into three categories: technical, economic, and social.

Technical risks are associated with the development and implementation of AI technology itself. For example, there is a risk that AI could be used to create powerful new weapons capable of harming humans or even destroying entire cities, or new offensive cyber capabilities that could be used to attack and disable critical infrastructure.

Economic risks are associated with the way AI can be used to automate tasks and processes. For example, there is a risk that AI could be used to automate jobs, leading to large-scale unemployment, or to manipulate financial markets, leading to economic instability.

Social risks are associated with the way AI interacts with humans. For example, there is a risk that AI could be used to create powerful new forms of advertising that manipulate people's emotions, or new forms of social media that spread false information and hatred.

The risks associated with rapid advances in artificial intelligence (AI) are mainly related to the potential for unforeseen, and possibly dangerous, consequences of the increasingly complex and autonomous decision-making capabilities of such systems. As the technology continues to develop, these risks are likely to become more pronounced and harder to manage. Another significant risk is the potential for AI to be used for malicious purposes, such as creating false information or carrying out cyber attacks; if AI systems are not properly secured, they could be exploited by criminals or terrorists. Finally, the rapid pace of AI development could result in a "singularity" event, in which artificial intelligence surpasses human intelligence, leading to unforeseen and potentially catastrophic consequences that would be difficult to manage or prevent.

There is a lack of clear understanding of the difference between artificial intelligence (AI) and machine learning (ML), and of the potential consequences of their combined use. In general, AI is a computer program designed to solve problems that would be difficult or impossible for a human to solve. However, AI is not without risks, and its development and use could lead to disasters if not managed carefully. One of the most serious potential dangers is that AI could be used to create machines that can independently create further AI. This could lead to AI-based disasters that are difficult or impossible to prevent or stop; for example, if AI is used to create self-replicating robots, those robots could create further copies of themselves, leading to a runaway AI arms race. Another potential danger is that AI could be used to create machines that can harm humans, such as robots that can kill people or manipulate people into doing their bidding. If this kind of AI were to become widespread, it could create significant safety and security risks. It is important to note that AI is still in its early stages of development, and there is still much to be learned about its risks; however, we should be aware of these dangers so that we can manage them responsibly.

Who is affected and how are they affected?

There have been major, rapid advances in AI from 2019 onwards, and this has exacerbated the risk to an entirely different extent. Modern AI technologies involve the use of machine learning (ML) systems that improve on their own as data is fed into them. ML systems right now can perform only a fraction of the tasks that humans can do, and they are highly specialized: good at one particular thing, such as playing a single game or generating a particular kind of text.

This has already produced many visible effects. For example, models have exhibited biases when asked to generate a particular image, and it is often impossible to pinpoint exactly why a bias arose, because the data sources are very large and the machine learning systems are too complex to begin with. Developing anything new therefore involves not just making sure the system works, but also confronting new ethical challenges.
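A first, very partial step toward locating such bias is auditing what the training data actually contains. The sketch below simply tallies how often each demographic tag appears among records with a given label in a hypothetical annotated dataset (the records, labels, and tags here are invented purely for illustration); heavy skew in these counts is one candidate explanation for skewed model outputs.

```python
from collections import Counter

# Hypothetical metadata for a training set; the labels and tags
# are made up purely for illustration.
records = [
    {"label": "doctor", "tag": "male"},
    {"label": "doctor", "tag": "male"},
    {"label": "doctor", "tag": "female"},
    {"label": "nurse",  "tag": "female"},
    {"label": "nurse",  "tag": "female"},
]

def composition(records, label):
    """Tally demographic tags among records carrying the given label."""
    return Counter(r["tag"] for r in records if r["label"] == label)

print(composition(records, "doctor"))  # Counter({'male': 2, 'female': 1})
```

An audit like this cannot prove a model is biased, but it makes one possible source of bias, the composition of the data itself, inspectable rather than hidden inside a complex system.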

It will also become increasingly important that AI algorithms be robust and vigilant against manipulation. AI should be trained on diverse datasets that take into consideration all the ways humans might try to game the system, as sketched below.
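One simple way to make "robust against manipulation" measurable is to check how often small input perturbations flip a model's prediction. The following is a minimal sketch under stated assumptions: `predict` is a hypothetical stand-in for a real model, and uniform random noise stands in for genuinely adversarial inputs (which are usually far more effective).

```python
import numpy as np

def perturbation_flip_rate(predict, x, epsilon=0.01, trials=100, seed=0):
    """Estimate how often small random perturbations of input x
    change the model's predicted label. `predict` maps a feature
    vector to a discrete label."""
    rng = np.random.default_rng(seed)
    baseline = predict(x)
    flips = 0
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=x.shape)
        if predict(x + noise) != baseline:  # prediction changed
            flips += 1
    return flips / trials

# Hypothetical usage with a toy threshold "model":
predict = lambda v: int(v.sum() > 0)
x = np.array([0.2, -0.1, 0.05])
print(f"flip rate: {perturbation_flip_rate(predict, x):.2f}")
```

A high flip rate on tiny perturbations is a warning sign that the system could be gamed; real robustness testing would use targeted adversarial methods rather than random noise.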

Another effect is that AI has worsened inequality, making some actors more vulnerable and others too powerful. For example, big data companies that harvest user data for ads and research hold considerable power, while users themselves are left exposed from a data-security standpoint.

One example of unethical artificial intelligence is a chatbot that pretends to be a human but is programmed to spam users with ads. Another example is a surveillance AI that tracks people's movements without their consent.

But one of the major risks posed by AI is that of an existential catastrophe. The evidence for this is currently weak, but as many have pointed out, it may be a real risk we need to take seriously at some point to ensure a future. This may sound too dark given the current stage of AI's advancement, but with the rise of artificial general intelligence, the associated policy, ethics, and activism may be something we need to think about in the near future.

All of these are only a part of the ethical issues that need to be kept in mind, not only while building the application itself but also through the effective oversight strategies we elaborate on in Section 3.

Section 2:

Repercussions:

A growing number of AI experts think there is a considerable probability of AI outcomes leading to severe consequences, i.e., on the scale of mass extinction. As the power of AI grows, so does the risk associated with its increased usage.

As gauged from three surveys cited by 80,000 Hours, AI researchers estimated around a 15% chance of AI outcomes being very good and around a 5% chance of them being very bad. Beyond this, DeepMind and OpenAI, two of the leading AI research organizations, have teams working on technical safety issues and ethics for AI.

Also related to this is power-seeking AI. The following are the risks I would like to discuss:

  1. The possibility of us building an AI system that, though misaligned, is still deployed.
  2. Advanced planning systems that allow AI to take power out of human hands.

An intelligent planning AI will want to improve its ability to effect change in order to reach its goals. People might be incentivized to deploy such systems sooner than is safe, and in this scenario might overlook key concerns not only of security but also of ethics, and might ignore warning signs.

Apart from this, AI could be used to develop powerful new technology that, if it does not destroy other systems outright, could at least cripple them, handing great power to hackers.

Section 3:

Possible Solutions

One approach is to ensure that AI systems are transparent and accountable: it should be possible for people to understand how AI systems make decisions. Additionally, AI systems should be subject to regulation; for example, there could be rules on how AI systems may be used or on what types of data they may access. Another approach is to encourage diversity in the development of AI systems, which should be built by teams that include people from different backgrounds and with different perspectives. Ultimately, whether or not to use an AI system is a complex ethical question. There are risks associated with AI, but there are also potential benefits, and the decision should be made on a case-by-case basis, taking into account the specific risks and benefits of each situation.

To minimize the risks associated with AI development, governments, corporations, and individual users should take measures such as the following:

1. Establish clear and transparent guidelines for how AI systems should be used.

2. Make sure AI systems are properly secured against cyber attacks.

3. Monitor and review the effects of AI on society and the environment.

4. Educate the public about the risks and benefits of AI technology.

5. Create a regulatory framework that is flexible and able to respond to the rapid pace of AI development.

The best way to prevent an AI-related catastrophe is to research and understand the risks associated with AI technology. It is also important to develop policies and regulations that can mitigate these risks. For example, policies could be developed that restrict the use of AI technology in certain ways, such as banning the use of AI technology to create weapons. Regulations could also be put in place that would require companies to disclose how they are using AI technology.
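To make the disclosure idea more concrete, here is a minimal sketch of what a machine-readable, "model card"-style disclosure record might look like. The class, field names, and example values are illustrative assumptions of mine, not any existing standard or regulation.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIUsageDisclosure:
    """Illustrative record a company might publish about an AI system.
    All field names here are hypothetical, not drawn from a standard."""
    system_name: str
    purpose: str                # what the system is used for
    data_sources: list          # where the training data came from
    automated_decisions: bool   # does it act without human review?
    known_limitations: list = field(default_factory=list)

disclosure = AIUsageDisclosure(
    system_name="loan-screening-v2",
    purpose="Rank loan applications for manual review",
    data_sources=["internal application records, 2015-2021"],
    automated_decisions=False,
    known_limitations=["not validated on applicants under 21"],
)
print(json.dumps(asdict(disclosure), indent=2))
```

The point of such a record is less its exact schema than that regulators and the public get a consistent, comparable account of how each deployed system is actually being used.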

Another solution would be to incentivize the identification of potential ethical risks, much as bug bounties reward finding flaws in software. Some general principles that could be used in AI ethics include:

  1. Respect for people and the natural world: AI should be designed to respect the dignity and autonomy of people and the natural world, and should not be harmful or destructive.
  2. Responsibility: AI should be responsible for its actions and should take into account their consequences.
  3. Fairness: AI should be fair and equitable, and should treat people and other entities fairly (one simplified way to measure this is sketched below).
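As one concrete, and deliberately narrow, reading of the fairness principle, the sketch below computes the demographic parity difference: the gap in positive-outcome rates between two groups. The predictions and group labels are made-up illustration data, and this is only one of many fairness metrics, which can conflict with each other in practice.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between groups 0 and 1.
    A value near 0 suggests the model treats the groups similarly
    on this one (deliberately narrow) fairness criterion."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()   # positive rate in group 0
    rate_1 = y_pred[group == 1].mean()   # positive rate in group 1
    return abs(rate_0 - rate_1)

# Hypothetical predictions (1 = approved) and group membership:
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5 in this toy case
```

A large gap does not by itself prove wrongdoing, but it is exactly the kind of auditable signal a review board or regulator could require developers to report.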

Researchers and developers of AI must take many ethical considerations into account when designing and deploying AI systems, including the issues raised by the development and use of autonomous weapons, the potential for AI to cause social and environmental harm, and the potential for AI to undermine human dignity. The ethical issues surrounding AI are complex and still largely unresolved. There is no single right or wrong answer to the question of how best to address these concerns, and no single approach is universally accepted; instead, there is a range of possible solutions that may be appropriate in different situations.

One approach is to adopt a principle of harm reduction, which suggests that AI should be designed to minimize the potential for harm, even if that means sacrificing some of its benefits. Other approaches adopt principles of human autonomy or beneficence, which suggest that AI should be designed to benefit humans and society as a whole, again even at some cost to what the technology could otherwise achieve.

There are two broad approaches to tackling this:

  1. Technical and tactical AI ethics and safety research.
  2. AI policymaking and governance.

These would include working on practical implementations of making AI systems safe and, in turn, building cooperative AI. Apart from this, I would also advocate for more research into the intricacies of neural networks, making them more responsible, accountable, and transparent.

We need to be able to prevent malicious use of AI, both by AI itself and by humans, and to work on the reliability and reproducibility of ML systems, i.e., their ability to produce similar outcomes on the same data while operating within the framework in which they are deployed. Apart from this, accessibility of, and participation in, data collection need to be ensured.
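One small, practical piece of the reproducibility point is controlling randomness. The sketch below fixes the random seed and verifies that two training runs on the same data yield identical weights; the toy logistic-regression loop is an assumption of mine, standing in for whatever model a team actually deploys.

```python
import numpy as np

def train_toy_model(X, y, seed, steps=200, lr=0.1):
    """Toy logistic-regression training loop; the seed controls the
    random weight initialization, the only source of randomness here."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))    # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)    # gradient descent step
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = (X[:, 0] > 0).astype(float)

w1 = train_toy_model(X, y, seed=42)
w2 = train_toy_model(X, y, seed=42)
print("reproducible:", np.allclose(w1, w2))  # True: same seed, same result
```

Real systems have many more sources of nondeterminism (data shuffling, parallel hardware, library versions), but pinning and recording seeds like this is the minimum needed before anyone can audit a reported result.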

The transparency of the system needs to be maintained, which entails explainability (being able to sufficiently describe how the system reached a particular conclusion) and communication between different parts of the AI.
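One common, model-agnostic way to make the explainability requirement concrete is permutation importance: shuffle one feature's values and measure how much accuracy drops. The sketch below is a minimal, library-free version; the `predict` function and data are made up for illustration, and real audits would use richer tools than this.

```python
import numpy as np

def permutation_importance(predict, X, y, seed=0):
    """For each feature, shuffle its column and measure the drop in
    accuracy; larger drops mean the model leans on that feature more."""
    rng = np.random.default_rng(seed)
    base_acc = np.mean(predict(X) == y)
    drops = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])          # destroy feature j's signal
        drops.append(base_acc - np.mean(predict(X_perm) == y))
    return drops

# Toy "model" that only looks at feature 0:
predict = lambda M: (M[:, 0] > 0).astype(int)
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = predict(X)
print(permutation_importance(predict, X, y))  # feature 0 dominates
```

Outputs like this give a reviewer a first answer to "what is the system actually relying on?", which is the core of the explainability demand above.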

Ethical AI needs to be considered a branch in itself, one inherently concerned with how AI will affect humans, thus allowing us to maintain the accountability of AI and to prevent mass destruction as a worst case. The key here can be creating an AI governance board that takes into account all aspects of the responsible collection and use of data, and thus the moral rules to be followed by the system.

Criticisms

  1. AI will never be so advanced that we won't be able to change it or pull the plug on it.
  2. We could simply sandbox a potentially dangerous AI.
  3. This problem would be too difficult to solve.
  4. There is no concrete evidence for the necessity of AI ethics.

Conclusion

Thus, in its extreme form, AI poses an existential threat to humans. If AI exceeds human intelligence, it could design and create machines even more intelligent than itself, ultimately outpacing humanity's ability to keep up and survive. In this scenario, humans would no longer be the dominant species on Earth, and our future would be uncertain. All of this may sound far-fetched, but as I have argued, this is an important space in which we need to do research and thereby develop better procedures: making AI more moral in itself, creating policies in favor of this, and working as activists in this space.

Bibliography

  1. Preventing an AI-related catastrophe (https://80000hours.org/problem-profiles/artificial-intelligence/)
  2. AI ethics by IBM (https://www.ibm.com/cloud/learn/ai-ethics)
  3. A practical guide to building ethical AI (https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai)
  4. A high-level overview of AI ethics (https://www.sciencedirect.com/science/article/pii/S2666389921001574)
  5. Ethics in AI: research papers and articles (https://blog.salesforceairesearch.com/ethics-in-ai-research-papers-and-articles/)

Acknowledgment

Atul Dosi