AI and Society: Ethical Guidelines for Coexistence - Essay Sample

Paper Type:  Essay
Pages:  7
Wordcount:  1666 Words
Date:  2023-03-13

Introduction

The arrival of artificial intelligence and other technologies has compelled human society to come up with rules and guidelines that will ensure a cohesive coexistence between the two. Artificial intelligence has been used in various ways, such as facial recognition and autonomous vehicles. Artificial intelligence is used to program these vehicles, which in turn helps them make significant decisions. However, some of these decisions conflict with the ethics of society. This paper discusses the major ethical issues involved in using artificial intelligence in autonomous cars.


The biggest ethical question of the current generation is how artificial intelligence should be programmed to behave during an accident. In short, the primary concern is who should be saved during a crash. This dilemma stems from various theories, laws, and belief systems, among others. Autonomous vehicles are designed to decide automatically who to save during an accident. In 2016, an online survey called the Moral Machine was conducted to gather people's opinions on artificial intelligence in autonomous cars. Participants were asked to choose who should be saved from a long list of people and animals. Their decisions on who should be saved were shaped by several major factors.

The consequence of the action creates a dilemma in deciding who to save. Both the life of the driver and those of the pedestrians are important. Saving the driver may mean that the car has to kill many pedestrians (Petrazycki & Trevino, 2017). According to utilitarian theory, an action is considered right or wrong depending on its consequences: an action that brings more benefit to the larger group is considered morally right, while one that brings more harm is deemed immoral.
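The utilitarian calculus described above can be illustrated with a short, purely hypothetical Python sketch. The function name and harm scores are invented for this essay and do not describe any real vehicle software:

```python
# Hypothetical illustration of a utilitarian decision rule.
# All names and numbers are invented; no real autonomous-vehicle
# system is claimed to work this way.

def utilitarian_choice(harm_if_swerve: int, harm_if_continue: int) -> str:
    """Pick whichever action minimizes total harm."""
    if harm_if_swerve < harm_if_continue:
        return "swerve"
    return "continue"

# One driver harmed by swerving vs. five pedestrians harmed by continuing:
print(utilitarian_choice(harm_if_swerve=1, harm_if_continue=5))  # swerve
```

The sketch makes the theory's core assumption explicit: outcomes must be reducible to comparable numbers, which is precisely what the cultural and religious objections below dispute.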

On the one hand, the action will cause great suffering and pain to the majority: the pedestrians killed may be parents of young children or family breadwinners. On the other hand, the driver is also important because he is a human being, and his death will likewise cause pain to his family. A utilitarian would therefore argue that the programmer should save the lives of the pedestrians, which in turn raises questions about respect for every person's right to life, as enshrined in the bill of rights.

Cultural context poses a dilemma for the programmer. According to Nowak (2018), various cultures hold different opinions regarding death and other pressing issues. Developed regions such as North America and Europe tend toward an individualist view, which weighs a person's worthiness according to their achievements in life (Petrazycki & Trevino, 2017). Influential people are regarded as more important than the rest of society.

On the other hand, collectivist cultures tend to prioritize society's needs over those of individuals. Since autonomous cars are expected to be used all over the world, the question arises of how the programmer should strike a balance between the two cultures. A collectivist will view the programmer's decision to save the life of the driver as morally wrong and the decision to save the pedestrians as ethically right. An individualist, by contrast, will see saving the majority as immoral and saving the driver as right.

Artificial intelligence also faces dilemmas in unexpected situations. Since the car is not a moral agent, it is impossible for it to make the right decision at all times. For example, the vehicle may be programmed always to prioritize the lives of children (Petrazycki & Trevino, 2017). However, there are scenarios in which such a decision is hard to make. The car may face a situation in which veering to the left or right to avoid hitting a child crossing in front of it would mean killing other children on the roadside, while driving straight would make the car hit the child crossing the street. The vehicle then has to choose which child is more important. Such a scenario makes one wonder how well autonomous vehicles are prepared to handle these many dilemmas.
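The failure mode described above can be sketched in a few lines of hypothetical Python: a fixed rule such as "harm the fewest children" simply returns a tie when every available action harms a child equally, and the rule contains nothing to break that tie. The scenario, function names, and numbers are all invented for illustration:

```python
# Hypothetical sketch of why a fixed "always save children" rule
# cannot resolve every situation. In this invented scenario, every
# action harms exactly one child, so the rule yields a tie it was
# never designed to break.

def children_harmed(action: str) -> int:
    # Invented outcomes: swerving left or right hits a child on the
    # roadside; continuing straight hits the child crossing the street.
    outcomes = {"left": 1, "right": 1, "straight": 1}
    return outcomes[action]

def save_children_rule(actions):
    """Return every action that harms the fewest children."""
    fewest = min(children_harmed(a) for a in actions)
    return [a for a in actions if children_harmed(a) == fewest]

# All three actions tie, so the rule gives no single answer:
print(save_children_rule(["left", "right", "straight"]))
```

The point of the sketch is not the arithmetic but the residue: once the rule's single criterion is exhausted, the decision falls back to whatever the programmer chose in advance, which is exactly the moral burden the essay describes.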

The belief system poses a question about the decision of who to kill. According to divine command theory, the morality of an act is judged according to the commandments given by God. One of the commandments forbids killing. Moreover, the lives of all individuals are in God's hands; only He can decide who lives or dies. In programming the car to choose, the programmer would be taking the place of God. Given the large share of Christians in the world's population (Nowak, 2018), making the car decide whom to kill runs against the morality of much of society.

On the other hand, existentialist theory suggests that the decision about what is right or wrong lies with the individual, meaning that an individual's own decision is regarded as moral. This presents the programmer with the dilemma of deciding whom to obey: obeying God's law would mean leaving the situation unaddressed, while disobeying God would mean going against societal norms.

The place of a constitution in making the decision is still unclear. According to the bill of rights, every person has a right to life. Currently, Germany is the only country that has laid down rules regulating the operation of autonomous vehicles, so deciding who to save may be seen as breaking the law (Nowak, 2018). This issue puts the programmer and the driver in a dilemma, for in the end they have to make one decision or the other. Since there are no other guidelines governing the operation of self-driving cars, the programmer and the broader society still wonder whether to abide by the requirements of the bill of rights or to decide based on their own judgement. The situation reveals the need for clear guidelines on such issues before the vehicles can be allowed onto the market.

The ability of artificial intelligence cars to interpret signs designed for human use poses a dilemma. Autonomous vehicles will operate on roads with signs designed for human drivers, and these signs will affect the operation of the car. Various scenarios illustrate this dilemma. Traffic lights require a human driver to stop or move at certain times, yet in many cases human drivers do not obey these rules and instead use the road according to their needs and the situation at hand. A self-driving car, on the other hand, is made to obey the rules exactly as written and has no latitude to adapt them to the situation (Nowak, 2018). The artificial intelligence car may face a dilemma when the traffic lights indicate that it is time to stop while human drivers keep moving. If it drives when it is supposed to stop, it breaks the law; if it stops, it may cause congestion or be struck by the human-driven cars behind it.
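The conflict between literal rule-following and the behavior of surrounding human traffic can be made concrete with another invented Python sketch. The function names and the notion of a "conflict" flag are assumptions made for this illustration only:

```python
# Hypothetical sketch of the traffic-light conflict: a strict
# rule-follower stops on red no matter what, while human drivers
# around it may keep moving. Names are invented for this essay.

def strict_av_action(light: str) -> str:
    """A self-driving car that obeys the signal literally."""
    return "stop" if light == "red" else "go"

def is_conflict(light: str, humans_still_moving: bool) -> bool:
    # Conflict: the AV halts on red while human traffic keeps flowing,
    # risking a rear-end collision with the cars behind it.
    return strict_av_action(light) == "stop" and humans_still_moving

print(is_conflict("red", humans_still_moving=True))    # True
print(is_conflict("green", humans_still_moving=True))  # False
```

The sketch shows that the dilemma is not in the rule itself, which is trivially simple, but in the mismatch between the rule and an environment that does not follow it.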

The principle of double effect also contributes to the dilemma. It holds that it is permissible to indirectly cause harm if doing so benefits the majority (Friedman, 2000). The principle of non-maleficence, by contrast, states that a person should not inflict pain on an individual regardless of the benefits it will bring. The two principles present the programmer with a hard decision. On the one hand, the programmer may argue that programming the autonomous car to kill someone does not inflict pain directly on anyone, because the programmer is not the one doing the actual killing. On the other hand, the principle of non-maleficence holds that inflicting pain is morally wrong whether an individual does it directly or indirectly.

The intention of the programmer is also considered. After a deep evaluation of the situation, a programmer may decide to program the vehicle so that it kills the pedestrians and saves the driver (Friedman, 2000). The question that arises is the programmer's real intention. On one side, the programmer will feel guilt over the deaths his program causes; on the other, society will not understand how killing pedestrians in favor of the driver can reflect a good intention. The guilt will be even greater because a collectivist society will see him as selfish. Though the programmer may have acted on sound judgment, he has no evidence to show society that his decision was made in good faith.

Conclusion

In conclusion, information technology has brought many ethical issues that need to be addressed. The ethical use of artificial intelligence in helping autonomous cars make decisions is widely discussed. The cars are programmed to make major decisions that deeply affect the human family, and using artificial intelligence to decide who lives or dies is a major concern. Christians question the place of God in giving and taking away life, holding that the decision about whether an individual lives or dies should be left in God's hands. The ability of an artificial intelligence car to make the right decision on roads dominated by human-driven vehicles is also questionable: autonomous cars are left uncertain about whom to obey, since the technology's rules conflict with human decision-making.

References

Nowak, P. (2018). The ethical dilemmas of self-driving cars. The Globe and Mail. Retrieved December 5, 2019, from https://www.theglobeandmail.com/globe-drive/culture/technology/the-ethical-dilemmas-of-self-drivingcars/article37803470/

Friedman, M. (2000). Autonomy, social disruption and women.

Petrazycki, L., & Trevino, A. J. (2017). Law and morality. Routledge.

Lin, P., Abney, K., & Jenkins, R. (Eds.). (2017). Robot ethics 2.0: from autonomous cars to artificial intelligence. Oxford University Press.
