Introduction
Facebook has an ethical duty to rescue victims of crime. In the case study, Facebook failed Robert Godwin Sr. because Stephens broadcast his intention to murder before the killing actually took place. Had Facebook reacted faster to Stephens's initial posts, the police might have thwarted the murder of Godwin. It would therefore have been ethically responsible for Facebook to promptly share information about the intended murder with security agencies in the vicinity of Stephens's location.
Ethical responsibility stems from the fact that crime committed with the help of Facebook's applications directly or indirectly affects its more than 1 billion users worldwide. Being ethical entails behavior that upholds human dignity. By watching users suffer helplessly, Facebook fails to appreciate the value of humanity. Such inaction is also contrary to Facebook's stated objective of creating tools that enable people to connect and derive happiness through interaction and community. Moreover, premeditated murder, rape, and sexual assault are heinous crimes that attract the heaviest sentences in criminal justice. By staying aloof regarding crime committed with the help of its tools, Facebook stands to share the blame for that crime. As Benzmiller (2013) argues, a complete response to cybercrime must hold responsible both the perpetrator and the bystander who, through aloofness, contributes to the criminal's power and the victim's isolation.
Proactive and Thorough Response to Offensive Material
Adoption of more stringent rules and regulations governing the use of social networking sites can give the companies more leverage to deal proactively with posted content. Social media networks put considerable emphasis on user privacy and consider freedom of speech integral to their survival. However, evidence from the recent scandal surrounding the activities of Cambridge Analytica shows that privacy rules alone are not adequate to prevent offensive posts. For instance, Facebook's Community Standards give users too much latitude to post offensive content on the site (Isaac & Mele, 2017). Thus, tightening regulations on photo and video sharing can make users more accountable through self-regulation.
Tougher screening of new users can also play an important part in reducing the amount of offensive material on social networking sites. The social networking companies themselves know best which areas can be strengthened to hold users liable for their activities. Notably, enhancing signup requirements so that prospective users must provide information that is as factual as possible would enable internal controls to promptly locate users and institute disciplinary action, thereby reducing offensive posts. The strategy can be reinforced by better training in criminal standards for the human moderators who review site content for suitability for further distribution. As it stands, people can sign up for social networking sites with fake accounts, which they then use to commit crime or post offensive material. The problem is compounded by the practice of reviewing posted material mainly for violations of the organizations' privacy policies rather than against criminal standards.
Posting adverts that warn users about the risk of losing their membership, and about the criminal liability attached to certain posts, has the potential to promote responsible use of social media platforms. The overwhelming majority of ads broadcast on social networking sites relate to commercial opportunities and individual privacy; there is limited focus on the safety of the sites in the context of user responsibility and criminal liability. For these reasons, a strategy that informs users of their responsibilities can be more proactive not only in deterring offensive posts but also in encouraging the reporting of offensive content.
Safeguards That Can Be Used to Prevent Broadcasting of Violent Content
Investment in artificial intelligence (AI) techniques that screen video content in real time can help address the problem of violent broadcasts on social networking sites. Currently, Facebook, for example, relies on alerts from users to flag the broadcasting of objectionable material on its site (Isaac & Mele, 2017). The ineffectiveness of this third-party review process explains why it took 1 hour and 45 minutes to pull down the live broadcast of the shooting of Godwin. AI tools would enable live-streaming features to detect graphic videos and impose censorship in real time. For instance, AI models can be developed to detect pain and blood as a way of judging whether a scene is suitable for live broadcast.
Tightening the rules on when Facebook Live (or the equivalent live tools of other social networks) may be used could be an effective way of countering the broadcasting of graphic and violent material on social media. For instance, requiring people to reveal their location and the event or reason for a live broadcast may help alert internal monitors to an impending live stream. After receiving the alert, internal monitors can assess the stream for a few seconds to ascertain whether the details given match what is actually happening before it is allowed to go live. Essentially, this strategy aims to make monitoring teams aware of all streaming broadcasts so that they can act before viewers see them.
Ethics Officer or Ethics Committee at Facebook
A search of ethical practices and positions at Facebook reveals that the firm has neither an ethics officer nor an ethics committee. It is a well-considered position that Facebook needs at least one of the two. In many high-tech organizations, leaders consult various categories of people when developing new products: business analysts, lawyers, engineers, sociologists, and philosophers advise company leaders on the impact of certain decisions on consumers. The weakness of relying on such a group is that its members tend to focus on the potential liability of decisions to the company, and on profitability, rather than on the interests of consumers. For instance, why did Volkswagen cheat on emissions while projecting itself as a leader in environmental sustainability? The answer lies in an overemphasis on profits rather than ethical standards. The case may not be different at Facebook.
Facebook further needs an ethics officer because evidence suggests internal issues with adherence to ethical standards. For one, the activities of Cambridge Analytica, in which data on more than 50 million Americans was obtained without their consent, are evidence of the need for a chief ethics officer. Additionally, the Russian trolls that spread anger during the 2016 presidential campaign are another illustration of the low ethical standards observed at the company, at least in the context of protecting customer data. The extent of the problem was evidenced by the resignation of Alex Stamos, who cited a lack of interest among the company's top leaders in enforcing high ethical standards (Perlroth, Frenkel, & Shane, 2018). Such overwhelming evidence suggests that Facebook urgently needs an officer to lead the company's overall ethics policy.
How Facebook Can Encourage Ethical Behaviors
Facebook needs to focus more on what matters to the company: the interests of its more than 1.5 billion users. This would ensure that its ethics policy is customer-centric. A customer-focused approach means that issues of customer data are given top priority, fostering users' trust that their privacy is protected. In return, users would reciprocate by assisting Facebook in promoting responsible behavior.
Effective communication can also promote ethical use of social media. This may include ads that educate users on the consequences of posting offensive material. Communication can be further enhanced through regular exchanges between internal moderators and users; for example, it should not take many user alerts about irresponsible use of social media for internal moderators to respond or investigate.
References
Benzmiller, H. (2013). The cyber-Samaritans: Exploring criminal liability for the "innocent" bystanders of cyberbullying. Northwestern University Law Review, 107(2), 927-962.
Isaac, M., & Mele, C. (2017). A murder posted on Facebook prompts outrage and questions over responsibility. The New York Times [New York].
Newcomb, A. (2017, April 17). Facebook has a unique challenge in policing depraved videos. Retrieved from https://www.nbcnews.com/tech/social-media/cleveland-shooting-highlights-facebook-s-responsibility-policing-depraved-videos-n747306
Perlroth, N., Frenkel, S., & Shane, S. (2018). Facebook exit hints at dissent on handling of Russian trolls. The New York Times [New York].