Ethical dilemmas in advanced robotics and artificial intelligence once seemed a distant matter, left to the realm of science fiction movies. With that reality now in sight, however, it is time for these ethics to be explored seriously. If we accept that machines will one day make ethical decisions, including life-and-death decisions, the best course is to design algorithms with as clear a moral compass as possible, forming the "morality engine" for these intelligences. Building such a morality engine will require collecting and synthesizing the moral views of the world's populace into an amalgam of moral code that can then be written into the core algorithms of autonomous vehicles, enabling them to compute the ethical decisions they make. Self-driving cars would thus make life-and-death decisions according to moral limits, parameters, and tolerances that a majority of people can live with. I propose that, in navigating the minefield of moral artificial intelligence, these morality algorithms should largely be a set of parameters representing people's collective altruistic sentiments, balanced by the option of some form of driver control in life-threatening situations.
Car buyers, the primary stakeholders where driverless cars are concerned, face a number of conundrums. Few would buy an autonomous vehicle whose algorithms sacrifice the car and its occupants in the face of an impending accident, for the simple reason that no one wants to lose their life. Like the algorithms these cars would run, self-preservation is one of the deepest reflexes imbued in humans. The problem is this: in pursuing altruism, is one prepared to lose one's own life? A good number of respondents in one study stated that they would neither buy such cars nor support any mandate to buy them (Kaplan). The solution seems to lie in two parts. First, scientists and carmakers should aggressively and continuously improve morality algorithms so that autonomous vehicles can make the best choices in impossible situations. In part, this means perfecting algorithms that make calculations and predictions from environmental data to determine how the lives of pedestrians might be saved while also sparing the car's passengers. An algorithm could, for instance, let the vehicle's computer weigh speed, obstacles, and the car's own protective build to determine where and how to crash, in a situation where the only alternative would have been to swerve and kill the passengers alone.
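To make the idea of a parameterized "morality engine" concrete, the weighing described above can be sketched as a scoring function over candidate maneuvers. This is purely an illustrative toy: the `Maneuver` class, the harm estimates, and the weight parameters are hypothetical inventions, not any real vehicle's logic.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """One candidate action the vehicle could take in an emergency."""
    name: str
    expected_pedestrian_harm: float  # estimated probability of a pedestrian fatality
    expected_passenger_harm: float   # estimated probability of a passenger fatality

def least_harm(maneuvers, passenger_weight=1.0, pedestrian_weight=1.0):
    """Return the maneuver with the lowest weighted expected harm.

    The weights stand in for the moral parameters discussed above: a
    purely altruistic tuning raises pedestrian_weight, while a
    self-preserving tuning raises passenger_weight.
    """
    def cost(m):
        return (pedestrian_weight * m.expected_pedestrian_harm
                + passenger_weight * m.expected_passenger_harm)
    return min(maneuvers, key=cost)

# Hypothetical emergency: three possible responses with estimated harms.
options = [
    Maneuver("brake straight", 0.6, 0.1),
    Maneuver("swerve left into barrier", 0.0, 0.4),
    Maneuver("swerve right onto sidewalk", 0.9, 0.0),
]
best = least_harm(options)  # with equal weights, the altruistic swerve wins
```

Changing the weights changes the verdict: heavily favoring passengers makes the same function pick the maneuver that endangers pedestrians, which is exactly why the essay argues these parameters must reflect a broad public consensus rather than any one buyer's preference.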
Second, car buyers may demand that onboard autonomous vehicle computers be capable of predicting a potentially dangerous situation and offering the driver manual control if they desire it. This is the thin line along which drivers would be comfortable holding their destiny in their own hands. Such an option, of course, opens up a host of problems. What classification of potential danger warrants giving drivers manual control, and against which altruistic criteria should such control be withheld? These and similar questions need to be pondered and analyzed carefully. Most people can accept that computers are, by far, capable of faster, more accurate, and more timely decisions than a human being. Yet drivers are conditioned to dismiss a machine's ability to make the most rational and ethical choice in an extreme situation. Beyond the vanity that keeps humans from accepting that machines can outperform them, the central issue is whether the choice to take another human being's life should be placed in the domain of a machine at all. Artificial intelligences, however intelligent, are still machines, and it is a tough sell that a robot or computer could be responsible for one's demise.
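The handover question above, which dangers justify returning control, can likewise be sketched as a simple decision rule. The function name, threshold, and reaction-time figure below are all assumptions for illustration; real handover policies would rest on empirical human-factors data.

```python
def should_offer_manual_control(collision_risk, time_to_impact_s,
                                risk_threshold=0.3, min_reaction_time_s=1.5):
    """Decide whether to offer the driver manual control.

    Hand over only when the computed collision risk crosses a threshold
    AND enough time remains for a human to react. Otherwise the machine
    keeps control: a handover the driver cannot act on only adds danger.
    """
    return (collision_risk >= risk_threshold
            and time_to_impact_s >= min_reaction_time_s)
```

Even this toy rule makes the essay's dilemma visible: in the most extreme cases, where time to impact is shortest, the rule refuses handover, meaning the machine, not the human, decides precisely when the stakes are highest.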
Car manufacturers should also build cars to the best possible standards, providing a relative guarantee of structural integrity that assures passenger safety in the event of an accident. What many people fail to recognize about I, Robot is that its cars were very structurally sound, unlike many of today's vehicles. Currently, decisions about a car's outer shell depend on many factors: aerodynamic quality, lightness (and hence speed), the cost of metals, and so on. In a future where autonomous vehicles are plentiful, some of these factors, such as lightness and aerodynamic ability, can be set aside, since they appeal mainly to the vanity of speed and to fuel-consumption considerations (assuming non-renewable fuels are no longer a factor). Manufacturers would then have one major consideration: the cost of production, driven largely by the cost of raw materials, mainly metals. They must be held to the strictest ethics in designing cars capable of withstanding a reasonable degree of impact while incorporating the best available safety features.
Government regulation is risky to contemplate because, once set, it becomes hard to revise, but it is nevertheless an inescapable reality. A strong push should therefore be made toward basic regulation: the setting of minimum safety standards for car manufacturers. Free-market forces can then motivate manufacturers to compete and design better cars that not only adhere to regulatory statutes but continuously improve autonomous vehicles. Where official regulation fixes both production safety standards and algorithm settings, manufacturers will likely lobby to keep them at a certain level in a bid to hold down costs. The customer loses in such a scenario, being unable to influence policy yet unable to abstain from buying cars altogether. Another reason to call for minimal government involvement is the familiar problem of bureaucracy and incompetence; governments are too often complicit, largely through negligence and ineptitude, in harms against their own citizenry. Government regulation should therefore mostly exist in an oversight capacity once minimum standards have been established, allowing market forces to propel innovation and improvement.
Despite this, governments will more than likely push to regulate the autonomous vehicle market heavily. The most conspicuous argument will be the enforcement of safety regulations, on the long-standing reasoning that private corporations do not have the best interests of the citizenry at heart. The pursuit of other altruistic goals, such as reducing road accidents, will also feature as a key objective. But it is not far-fetched to imagine that governments will want to regulate a potentially booming autonomous vehicle market largely for revenue, and for other reasons besides, even, rather sinisterly, for mass surveillance purposes.
In light of all these points, one can arrive at the "best" policy on autonomous vehicles by applying certain criteria: 1) AI ethical parameters should rest on a morality engine that reflects the ethical and moral peaks of all humans; 2) altruism, and particularly the "trolley problem" in its various complexities, should be a key element of AI algorithms, subject to extenuating conditions determined through studies, focus groups, and so on; 3) autonomous vehicles should be rolled out only once these systems are certified to make the best of impossible choices in the most extreme cases; 4) drivers should push car manufacturers for a high degree of protection in safety features, structural integrity, and body materials; and 5) government regulatory policy should be flexible enough to demand and ensure continuous improvement and innovation in advanced robotics as it pertains to autonomous vehicles.
Kaplan, Sarah. "What If Your Self-Driving Car Decides One Death Is Better Than Two - and That One Is You?" The Washington Post, 28 Oct. 2015, www.washingtonpost.com/news/morning-mix/wp/2015/10/28/what-if-your-self-driving-car-decides-one-death-is-better-than-two-and-that-one-is-you/?utm_term=.99082060ac25.