Blinn College AI Social Media Content Moderation Case Study


Description

The paper will be a minimum of 10 pages, double-spaced, in 12-point Times New Roman type. The cover sheet and citations are not counted toward the 10 pages; the Title page and Citation/Reference page likewise do not count. Citations/references can be found using library services through the internet. Typically, you can use keyword searches to access appropriate materials, or you can ask a reference librarian at the university or public library. The paper should support an argument about the ethical/moral issue. You will be required to provide a minimum of 3 academically acceptable references for your sources.

Unformatted Attachment Preview

ETHICS IN TECH PRACTICE: Case Study: AI Social Media Content Moderation

©2018. This document is part of a project, Ethics in Technology Practice, made possible by a grant from Omidyar Network's Tech and Society Solutions Lab and developed by the Markkula Center for Applied Ethics. It is made available under a Creative Commons license (CC BY-NC-ND 3.0) for noncommercial use with attribution and no derivatives. References to this material must include the following citation: Vallor, Shannon, Brian Green, and Irina Raicu (2018). Ethics in Technology Practice. The Markkula Center for Applied Ethics at Santa Clara University. https://www.scu.edu/ethics

AI Social Media Content Moderation
By Brian Patrick Green

A social media company is having trouble with political actors manipulating the flow of information on its service. Specifically, certain governments are producing tens or hundreds of thousands of fake accounts to promote government propaganda, thereby attempting to swamp any news which does not fit the government's perspective. The social media platform is attempting to respond by using machine learning to determine which accounts are fake, based on their activity patterns, but the adversary governments are responding with their own machine learning to better hide those patterns and impersonate real users. For every batch of fake accounts deactivated, just as many seem to pop up again.

Furthermore, the machine learning algorithms are imperfect, and balancing false positives with false negatives can lead to deactivating real people's accounts, leading to anger, frustration, and bad publicity for the company. On the other hand, scaling back to avoid false positives leads to more fake accounts slipping through.

What should the company do? What ethical questions should they consider? How might the questions below inspire perspectives on this problem?

Discussion questions:

1. What unique ethical concerns does this effort entail?
2. Who are the stakeholders involved? Who should be consulted about the project's goals and development?
3. What additional facts might be required? What practical steps might you need to take in order to access the information/perspectives needed to manage the ethical landscape of this project?
4. What are some of the ethical issues that any designers/developers involved in this project need to address?
5. How might this effort be evaluated through the various ethical 'lenses' described in the "Conceptual Frameworks" document?
6. In this project, what moral values are potentially conflicting with each other? Is there any way for the disagreeing sides to reconcile, or does success for one necessarily mean failure for the other?
7. As a project team, how might you go about sorting through these ethical issues and addressing them? Which of the ethical issues you have identified would you prioritize, and why?
8. Who would be the appropriate persons on a team to take those steps? At what level, and by what methods, should decisions be made about how to manage the ethical issues raised by this project?

Published by the Markkula Center for Applied Ethics under a Creative Commons license (CC BY-NC-ND 3.0)

Explanation & Answer



Moderation of Social Media Content through AI Technologies
Name
Institutional Affiliation


Introduction
In an article by Brian Patrick Green, he notes that social media companies face a challenge from political actors who manipulate the flow of information on their services. Evidently, during voting periods, some government entities produce thousands of pseudo or fake accounts to promote propaganda, and such accounts swamp any truthful communication that criticizes the state's perspective. The companies respond by using machine learning and artificial intelligence to identify and deactivate the fake accounts. However, the governments behind these operations retaliate by developing their own AI systems that mimic real users (Dang et al., 2018). Thus, when one batch of pseudo accounts is deactivated, many others pop up again. Moreover, the machine learning algorithms are not perfect, and balancing false positives against false negatives can lead to the deactivation of real accounts, which in turn causes frustration, anger, and an undesirable image for the company. This paper looks at how AI moderation of social media raises pressing ethical concerns.
The Unique Ethical Concerns
Two main concerns occur when dealing with AI moderation of social media. First, users place higher expectations on AI performance than on human moderation, so the most critical concern facing AI moderation is the public's perception. These systems analyze data at far higher volumes than human moderators and in most cases outperform them (Parks, 2019). However, despite this high efficiency rate, many users are quick to point to the few cases where a system fails. A simple illustration: when an AI weighing false negatives against false positives misclassifies a real account, it may delete a genuine user's profile. Public criticism of the process then begins, even though human moderators could have made the same error, or worse. Thus, it is important for companies to align public perception with what AI moderation can realistically achieve. It is unwise for a company to raise the bar so high that, when the system does err, the public backlash becomes unbearable.
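
The trade-off between false positives and false negatives can be made concrete with a small simulation. The sketch below is illustrative only: it assumes a hypothetical classifier that assigns each account a "fake-likelihood" score, and all numbers are synthetic rather than drawn from any real platform.

    # Illustrative trade-off between false positives (real accounts wrongly
    # banned) and false negatives (fake accounts missed) as the decision
    # threshold moves. All scores here are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "fake-likelihood" scores: real accounts tend to score
    # low and fake accounts high, but the two distributions overlap.
    real_scores = rng.normal(loc=0.3, scale=0.15, size=10_000)
    fake_scores = rng.normal(loc=0.7, scale=0.15, size=1_000)

    for threshold in (0.4, 0.5, 0.6, 0.7):
        fpr = np.mean(real_scores >= threshold)  # real users wrongly banned
        fnr = np.mean(fake_scores < threshold)   # fake accounts slipping through
        print(f"threshold={threshold:.1f}  wrongly banned: {fpr:.2%}  missed fakes: {fnr:.2%}")

Raising the threshold protects real users but lets more fake accounts slip through; lowering it does the reverse. No threshold eliminates both errors at once, which is precisely the dilemma the company faces.
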
The second unique ethical concern is that measuring the performance of moderation is quite a challenge. For many artificial intelligence systems, it is possible to evaluate techniques by applying a benchmark problem or dataset, which allows quantitative comparison between architectures and algorithms. However, two problems arise with this approach.

First, benchmark problems and datasets are built by carefully selecting real-world representatives, such as traits drawn from previously verified real or fake profiles. These sets are then fed to a computer for comparison, and learning proceeds. Nonetheless, the world changes at a swift pace, and after some time the datasets may no longer be accurate because behavior keeps evolving. Second, a phenomenon termed 'over-fitting' might occur: a model becomes over-optimized for the collected data but copes poorly with new, previously unseen data. This is a common problem in many AI-based settings, and companies work hard to reduce it (Hartman, 2019). Malicious actors can exploit this gap: a model over-fitted to past patterns is easier to evade with novel behavior, so benchmark results may exaggerate the AI's performance and pose a severe threat to real-world effectiveness.
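
A minimal sketch of the over-fitting effect, using a synthetic curve-fitting problem rather than real moderation data: the more flexible model matches its "benchmark" training points more closely yet typically does worse on previously unseen data.

    # Over-fitting illustrated with synthetic data: the high-degree model
    # achieves a lower error on its training set but a higher error on
    # unseen data than the simpler model. A real moderation classifier
    # shows the same pattern with account-activity features.
    import numpy as np

    rng = np.random.default_rng(1)

    def noisy_signal(x):
        return np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.shape)

    x_train = rng.uniform(0, 1, size=15)   # the fixed "benchmark" set
    y_train = noisy_signal(x_train)
    x_new = rng.uniform(0, 1, size=200)    # previously unseen data
    y_new = noisy_signal(x_new)

    for degree in (3, 9):
        coeffs = np.polyfit(x_train, y_train, deg=degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        new_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
        print(f"degree={degree}  train error={train_err:.3f}  unseen error={new_err:.3f}")

The gap between training error and unseen-data error is exactly what a static benchmark hides, and it widens further once adversaries deliberately shift their behavior away from the patterns the benchmark captured.
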
Stakeholders and Objectives
In the discussion of applying AI to the moderation of social media, there are many stakeholders. Each has its own objectives, but most aim to reduce the manipulation and breach of information by malicious entities. There is no fixed limit on who can be a stakeholder, but the three leading role players in this discussion are the platforms, law enforcement, and the users. Beyond these, other conversation movers include newsrooms, advertising agencies, and oversight authorities. Together, these entities create the discussion around AI moderation of social media, and their contributions aid the process of policymaking.
In developing the goals of the project, all of these parties should get a chance to express their concerns. Different groups may have conflicting priorities; for instance, the metrics that newsrooms apply may not align with the parameters suggested by politicians. Hence, in dealing with these issues, all entities must keep sober minds and be able to compromise for the safety of everyone using the sites. Transparency requires that platforms, advertisers, and publishers adopt principles and guidelines that are clear to the user, leaving no room for ambiguity. User education is a sophisticated problem that is not solved efficiently by the popular one-to-three-minute online videos found on numerous sites (Ruckenstein & Turunen, 2019). There is a need for a core curriculum, developed primarily by oversight agencies and the platforms themselves, to promote media literacy and the value of merging AI technology with human moderation.
Additional Facts and Practical Approaches to Accessing Information to Manage the Ethical Landscape
In dealing with ethics, numerous facts bear on accessing the information needed to develop an ethical landscape, and an organization or individual must establish steps for managing the perspectives and moral background of a specific project. First, harmful content must be clearly defined so that the moderation application is useful. The internet is a ubiquitous place, characterized by the diversity of social media, and moderating user-generated content requires considering each kind of content separately; hence, it is crucial to apply a technical architecture that reflects those categories (Li et al., 2018). This characterization forms the basis for gathering information on the ethical landscape.
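
One way to picture such an architecture is the sketch below, in which each category of potentially harmful content is defined explicitly and mapped to its own handling policy. The categories, policies, and keyword rules here are hypothetical placeholders; a production system would use trained classifiers rather than keyword matching.

    # Hypothetical content taxonomy: every category is named explicitly
    # and carries its own handling policy, so the ethical choices are
    # visible in the architecture rather than buried in one opaque model.
    from enum import Enum, auto

    class Category(Enum):
        PROPAGANDA = auto()
        GRAPHIC_CONTENT = auto()
        SEXUAL_CONTENT = auto()
        ACCEPTABLE = auto()

    POLICY = {
        Category.PROPAGANDA: "queue_for_human_review",
        Category.GRAPHIC_CONTENT: "remove_then_review",
        Category.SEXUAL_CONTENT: "age_gate_or_remove",
        Category.ACCEPTABLE: "allow",
    }

    def categorize(post: str) -> Category:
        """Toy stand-in for a trained classifier."""
        text = post.lower()
        if "propaganda" in text:
            return Category.PROPAGANDA
        if "graphic" in text:
            return Category.GRAPHIC_CONTENT
        return Category.ACCEPTABLE

    print(POLICY[categorize("state propaganda post")])  # queue_for_human_review
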

MODERATION OF SOCIAL MEDIA CONTENT

5

The second step is setting aside content that requires an understanding of context. A moderator can decide immediately that moderation is needed when the content plainly depicts sexual activity, graphic material, or propaganda. However, some harmful content requires studying the context of the situation to see whether it violates the site's regulations. To moderate such content, human moderators and AI systems need to work hand-in-hand to analyze it: systems should check the poster's historical background and relate it to the current post before a decision is made.
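
A minimal sketch of this hand-in-hand division of labor, with hypothetical thresholds and field names: the model acts alone only at the confident extremes, and the ambiguous middle band, where context matters most, is escalated to human moderators along with the poster's history.

    # Hybrid moderation routing: confident cases are handled automatically,
    # ambiguous ones go to a human with historical context attached.
    # Thresholds (0.95 / 0.05) are illustrative, not recommendations.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        model_score: float     # classifier confidence that the post is harmful
        prior_violations: int  # part of the poster's historical background

    def route(post: Post) -> str:
        if post.model_score >= 0.95:
            return "auto_remove"   # clear-cut violation
        if post.model_score <= 0.05:
            return "allow"         # clearly benign
        # Middle band: context matters, so a human decides.
        return f"human_review (prior violations: {post.prior_violations})"

    print(route(Post("example", model_score=0.55, prior_violations=2)))
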
Third, check the differences between state and national laws before concluding. Different states and countries have different rules on sexual content, propaganda, and graphic material. The internet, however, is global, and people from diverse cultures interact daily while sharing all kinds of information. Content shared between users might therefore offend, or be illegal in, some nations and states while other jurisdictions have no laws against such posts. For instance, advertising the sale of firearms is illegal in the UK but legal in the US, and Holocaust denial is illegal in Germany but legal in many other countries (Thebault-Spieker et al., 2016). Thus, before accessing the information and coming to a decision for a project, one should look at these differences.
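
In code, such jurisdiction-dependent rules amount to a lookup keyed on both the content type and the viewer's country, as in this toy sketch; the rules table is a hypothetical illustration, not a statement of law.

    # Jurisdiction-aware moderation: the same content type can be legal
    # in one country and illegal in another, so the decision is keyed on
    # both. The entries below mirror the two examples in the text.
    RULES = {
        ("firearm_ad", "US"): True,
        ("firearm_ad", "UK"): False,
        ("holocaust_denial", "US"): True,
        ("holocaust_denial", "DE"): False,
    }

    def is_allowed(content_type: str, country: str) -> bool:
        # Unknown combinations default to True here; a real system would
        # flag them for review instead.
        return RULES.get((content_type, country), True)

    print(is_allowed("firearm_ad", "UK"))  # False
    print(is_allowed("firearm_ad", "US"))  # True
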
Ethical Issues for Designers/Developers in the Project
For a designer, developer, or project manager, many ethical issues arise during a project, especially when dealing with AI moderation of social media. The first major issue is the distinction between narrow and general AI. General AI can quickly adapt to new approaches, becoming more efficient at learning future objectives; its main aim is solving tasks that even its creators never anticipated. Narrow AI, on the other side, only performs specific duties, though that does not mean it cannot learn. General AI can produce better results, but


