Over the past year, tech giants have come under increasing scrutiny for their roles in spreading viral misinformation that might have helped to decide the 2016 presidential election.
Some of them are now testing the use of "trust indicators" to highlight news sources that meet certain quality and reliability standards. Developed over the past three years by news organization representatives working with the nonpartisan Trust Project, these indicators are aimed at providing readers with more transparency about the news outlets, journalists, financial sponsorship, and methods behind the stories they read, hear, or see.
Among the tech companies that have agreed to use such indicators for their content are Bing, Facebook, Google, and Twitter. The decision is the latest sign these companies are starting to recognize the extent of the problem with misinformation, propaganda, and "fake news" online.
'Harder Than Ever to Tell What's Accurate'
Sally Lehrman, a former writer and editor at the San Francisco Examiner and a journalism instructor at California's Santa Clara University, began talking with news editors in 2014 about the impact that technology was having on the quality of news reporting. Her work led to the launch of the Trust Project, now hosted by the university's Markkula Center for Applied Ethics.
"In today's digitized and socially networked world, it's harder than ever to tell what's accurate reporting, advertising, or even misinformation," Lehrman said in yesterday's announcement from the Trust Center. "An increasingly skeptical public wants to know the expertise, enterprise and ethics behind a news story. The Trust Indicators put tools into people's hands, giving them the means to assess whether news comes from a credible source they can depend on."
In addition to agreeing to use the project's trust indicators, Bing, Facebook, Google, and Twitter are looking into other ideas that can better highlight reliable news reporting.
The Trust Project identifies key trust indicators in eight categories: best practices and standards, author expertise, type of work, citations and references, methods, local sourcing, diverse sourcing, and efforts to seek public feedback.
For example, a news article using the trust indicators might provide links or information about the publisher's mission and funding, the journalist's experience, other sources for background information, and reporting processes.
'Important Contextual Information'
Facebook said yesterday that it has started testing a trust indicator module with a small group of publishers, and plans to expand that use over the next few months. The module allows publishers to upload links through the Brand Asset Library with more information about their owners and ownership structure, as well as their policies on ethics, fact-checking, and corrections. That information will then appear alongside the publisher's News Feed articles.
"We believe that helping people access this important contextual information can help them evaluate if articles are from a publisher they trust, and if the story itself is credible," product manager Andrew Anker wrote in a Facebook announcement yesterday. "This step is part of our larger efforts to combat false news and misinformation on Facebook -- providing people with more context to help them make more informed decisions, advance news literacy and education, and working to reinforce indicators of publisher integrity on our platform."
Google said it will employ a similar approach by allowing news publishers to embed information about trust indicators into the HTML code of articles and Web sites.
"When tech platforms like Google crawl the content, we can easily parse out the information (such as Best Practices, Author Info, Citations & References, Type of Work)," search group product manager Jeff Chang wrote yesterday on the Google blog. "This works like the ClaimReview schema tag we use for fact-checking articles. Once we’ve done that, we can analyze the information and present it directly to the user in our various products."
The next step will be to find ways to display such trust indicators alongside articles that appear on Google Search, Google News, and other products, Chang said.
At the end of last month, senior executives from Google, Facebook, and Twitter appeared before the Senate Judiciary Committee in Washington, D.C., to offer testimony and answer questions about suspicious online activities that might have pushed Russia-sponsored misinformation to an estimated 126 million Americans in the lead-up to the 2016 presidential election.
That represented a dramatic shift from late last year when Facebook co-founder and CEO Mark Zuckerberg dismissed as "crazy" the suggestion that his platform had any influence on the election of Donald Trump as president of the United States.
Speaking at the University of Kansas yesterday about his efforts to talk with people across the U.S. since the election, Zuckerberg acknowledged his platform had more influence than he initially recognized.
"I think it's very clear at this point that the Russians tried to use these tools to sow distrust leading up to the 2016 election and afterwards," he said. "What they did is wrong. And it is our responsibility to do everything we can to prevent them or anyone else from doing this again."