Introduction
AI affects people's lives both directly and indirectly through its many applications, which range from speech translators, bots, and messaging tools to driver-assistance systems. These applications help people carry out their duties faster and more efficiently, while private companies and government agencies use them to coordinate and execute complex tasks. Despite these benefits, however, AI poses various challenges that can harm users or prevent them from accomplishing their planned activities. This paper explores these challenges and their connection to company accountability, then describes and analyzes possible solutions.
The Challenges Posed by AI
One of the main challenges posed by AI is data security. Harkut and Kasat argue that AI capabilities such as decision-making and machine learning often rely on large sets of sensitive and classified data, which exposes AI applications to identity theft and data breaches (par. 10). Companies intent on maximizing profits tend to exploit AI applications that are interconnected worldwide, yet these applications form complex AI networks that the companies cannot monitor or control. Another key challenge is algorithmic bias. AI-based tools make decisions because they are trained on data using algorithms. If biased data, for instance data skewed along ethnic, gender, or racial lines, is fed into these tools, they will likely make prejudiced and unethical decisions, and the bias will be accentuated as more biased data continues to be fed into the AI systems (Harkut and Kasat par. 11).
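To make this mechanism concrete, the following minimal Python sketch (entirely hypothetical; the data, variable names, and numbers are invented for illustration and are not drawn from the cited sources) shows how a model trained on historically biased labels reproduces that bias in its decisions:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)        # protected attribute: 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)          # ability is distributed identically in both groups
# Historical labels: equally skilled members of group B were hired less often.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n) > 0.5).astype(int)

# The model is fitted to the biased labels and therefore learns the bias.
features = np.column_stack([skill, group])
model = LogisticRegression().fit(features, hired)
pred = model.predict(features)
print("predicted hiring rate, group A:", round(float(pred[group == 0].mean()), 2))
print("predicted hiring rate, group B:", round(float(pred[group == 1].mean()), 2))  # markedly lower

Because the model simply learns the patterns present in the labels it is given, it carries group B's historical disadvantage forward into new decisions, which is exactly the feedback loop described above.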
These challenges are likely to derail company accountability. Company accountability means that people affected by a company's activities can hold that company responsible for them. The challenges above tie into accountability because the gap between the developers or beneficiaries of AI-based systems and the people most vulnerable to AI's negative consequences, such as data breaches and identity theft, continues to widen. Power imbalances between companies and their clients or employees, together with inadequate managerial frameworks within technology firms, create fears about discrimination, bias, and who should be held accountable for the harms caused by AI applications (Whittaker et al. 7).
Two Possible Solutions
One solution proposed by Crawford et al. is the enactment of biometric privacy laws to control data access for both private and public entities (8). Fingerprints and DNA are examples of biometric data exposed to unsafe AI applications. Privacy laws such as the Biometric Information Privacy Act of Illinois allow people to sue a private actor for any unauthorized collection and use of biometric information for profiling, tracking, or surveillance. Another solution is that studies of AI bias should cover more than technical fixes (Crawford et al. 6). Such studies should encompass not only statistical parity but also the wider politics and impacts of AI, placing greater emphasis on social questions such as disability and critical race issues, how classifications and differences are constructed, and their ultimate effects.
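For readers unfamiliar with the term, statistical parity is commonly measured as the difference in positive decision rates between demographic groups. The short sketch below (using invented predictions and group labels, not data from the cited reports) illustrates the check:

import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # a model's yes/no decisions
group       = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # protected attribute of each case

parity_gap = predictions[group == 0].mean() - predictions[group == 1].mean()
print("statistical parity difference:", round(float(parity_gap), 2))
# A gap near zero satisfies statistical parity, yet it says nothing about the wider
# social context that Crawford et al. argue bias research must also examine.

A small parity gap can therefore coexist with serious social harms, which is why the authors insist that purely technical measures are insufficient.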
An Analysis of the Effectiveness of the Above Solutions
The first solution, enacting biometric privacy laws, is effective to a great extent. Such laws grant individuals whose biometric information is collected and used the power to sue the AI developer or the party that collected it if they feel vulnerable to data breaches or identity theft. If, for instance, a government or an employer collects this information to create a unique identifier for each person or employee, it must guarantee the safety of every person's biometric data. Crawford et al. give the example of Estonia, whose national ID system contained a security flaw (41). When the system was launched, its proponents claimed that it was technologically advanced and would respect people's privacy. However, its security weaknesses left the system vulnerable to identity theft and breaches of biometric data, thereby compromising the safety of Estonian citizens.
With a strict privacy law in place, however, the developers of the ID system would be obliged to make it safer, since Estonian citizens could file lawsuits against them if the system were faulty. Consider, similarly, an AI developer contracted by a social media company to build an addictive app that draws users to the company's services and products. If a user believes that the app's algorithm has jeopardized their data, the privacy law allows them to take legal action against the developer and the company's managers, who may ultimately face penalties, including jail terms. Establishing privacy legislation would therefore effectively mitigate the data security risks posed by AI.
The second solution aims to address AI bias. It is also effective because its proposal that AI bias research should focus on social topics such as race, gender disparities, and disability touches the core social classifications that produce bias. On gender disparities, for instance, Crawford et al. cite research on speakers at an AI conference which found that only 18% of the speakers were women, compared with 80% who were men (46). With these findings, AI developers can design a conferencing tool that balances the number of men and women speakers at a conference. The developers can also consider the representation of people from different ethnicities and races, as well as persons with disabilities, when building the tool. The knowledge of social patterns and constructs that developers gain from this solution will thus enable them to mitigate AI bias.
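As a simple illustration of what such a balance-aware tool might do, the sketch below (a hypothetical example; the speaker list, attribute categories, and threshold are invented rather than taken from the sources) flags a proposed conference lineup whose gender representation falls outside a chosen range:

from collections import Counter

# Hypothetical proposed lineup: each entry is a speaker's self-reported gender.
lineup = ["man", "man", "woman", "man", "man", "woman", "man", "man", "man", "woman"]

counts = Counter(lineup)
share_women = counts["woman"] / len(lineup)
if share_women < 0.4:          # illustrative threshold chosen by the organizers
    print(f"Only {share_women:.0%} of speakers are women; consider rebalancing the lineup.")
else:
    print("Lineup meets the representation target.")

A real tool would, of course, need to handle self-identification, non-binary categories, and intersectional representation rather than a single binary attribute.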
Works Cited
Crawford, Kate, et al. AI Now 2019 Report. AI Now Institute, 2019, https://ainowinstitute.org/AI_Now_2019_Report.pdf. Accessed 16 Mar. 2020.
Harkut, Dinesh G., and Kashmira Kasat. "Introductory Chapter: Artificial Intelligence - Challenges and Applications." Artificial Intelligence - Scope and Limitations, IntechOpen, 2019, https://www.intechopen.com/books/artificial-intelligence-scope-and-limitations/introductory-chapter-artificial-intelligence-challenges-and-applications. Accessed 16 Mar. 2020.
Whittaker, Meredith, et al. AI Now Report 2018. AI Now Institute at New York University, 2018, https://ainowinstitute.org/AI_Now_2018_Report.pdf. Accessed 16 Mar. 2020.