The internet – while it provides us with added convenience and countless opportunities for learning, connection and entertainment – is home to a variety of threats, including phishing scams, identity theft and ransomware. Beyond what we typically think of as cyber threats and risks lies digital civility: the ways we treat one another online and the information – or misinformation – we choose to share digitally. In an effort to examine current technology issues and trends, the George Washington University (GW) Law School’s Global Internet Freedom and Human Rights Project hosted a seminar, “A More Perfect Internet: Addressing Digital Incivility, Cyber-Violence and ‘Fake News.’” The event, supported by the Microsoft Technology and Human Rights Center, was held March 16 at GW Law in Washington, D.C., and featured two panel discussions among students and esteemed experts.
In the first discussion, “Promoting Digital Civility and Combating Cyber-Violence,” Jacqueline Beauchere, Microsoft’s chief online safety officer and a former chair of the NCSA Board of Directors, highlighted Microsoft’s recently launched Digital Civility Index, which measures people’s perceived safety online and exposure to risks. Microsoft surveyed teens and adults around the world about the state of digital civility and their perceptions of their exposure to behavioral, reputational, sexual and personal/intrusive risks. The findings were enlightening: 65 percent of adults and youth reported exposure to online risks, and 78 percent reported that their friends and/or family members had fallen victim to an online risk. Top online risks included unwanted contact (43 percent), mean treatment (22 percent), trolling (21 percent), receiving unwanted sexts (21 percent) and online harassment (17 percent). Half of respondents said they were extremely or very worried generally about online life, and 62 percent did not know or were unsure where to get help if they encountered a risk online.
As part of the project, Microsoft launched a Digital Civility Challenge to encourage people to commit to helping make the internet a better and safer place through living the golden rule, respecting differences, thinking before responding to things online and standing up for themselves and others. Take the challenge here. Additionally, NCSA spoke with Beauchere when the survey was released to discuss the findings and Microsoft’s efforts in the digital civility space; read the Q&A in NCSA Executive Director Michael Kaiser’s Huffington Post blog column.
Following Beauchere’s remarks, two GW Law students – Noor Hamadeh and Sarabeth Rangiah – highlighted their work in the GW Law Cyber-Violence Project (CVP), which raises awareness about online abuse and offers both legal and nonlegal resources (for free) to victims of cyber-violence in the Washington, D.C., area. The CVP classifies cyber-violence as “any gender-related abuse, harassment, stalking or other intimidation, communicated online or over the internet, that causes or intends to cause fear, humiliation and/or suffering in an individual” and focuses on three primary types of cyber-violence: cyber-stalking, cyber-harassment and non-consensual pornography. For more information about the CVP and cyber-violence, visit the CVP webpage.
The second panel, “Addressing the ‘Fake News’ Problem: Combating Online Hoaxes and Misinformation,” focused on online hoaxes and ideologically driven misinformation (like the Pizzagate conspiracy theory) and the tools available for addressing these problems. David Lazer, a political science and computer and information science professor at Northeastern University, began by defining fake news as “misinformation with the trappings of ‘legitimately’ produced news, but without the underlying organizational processes [accuracy].” Lazer outlined a recent study he conducted using real people’s tweets leading up to the 2016 election. The findings showed that most fake news on Twitter was shared by a small number of highly active “super-sharers” and that, while the New York Times, the Washington Post and CNN were among the most popular sources for news tweets, fake news ranked among the most shared “news” content in October 2016.
Matt Hindman, a professor in GW’s School of Media and Public Affairs, discussed the role fake news has played in major world events, including World War II propaganda, and emphasized that the most important shift in making fake news more of a problem now is the decline of the “news habit” among a growing chunk of Americans. “Millions of Americans don’t have regularly scheduled, daily appointments with the news,” said Hindman.
Emma Llansó, director of the Center for Democracy & Technology’s Free Expression Project, closed the discussion by exploring some of the key questions in the fake news problem, such as “who gets to decide what is fake news, and what are the consequences of the person deciding?” and “should news organizations have to pay fines for publishing fake news?” Llansó noted that the United States’ limited laws and policies on this issue put more focus on tech companies and what they can do to combat fake news – for example, user empowerment tools can help internet users manage what content they see and flag inappropriate content or misinformation. She also mentioned that algorithms and automated systems could potentially be altered to keep fake news topics from showing up in trending sections and users’ news feeds/timelines.
For more information about the event and other GW Law initiatives on these topics, visit the Global Internet Freedom and Human Rights Project page.