By Molly Callahan, Republished from News@Northeastern
But how does fake news spread? One common theory posits that fake news spreads from person to person on social media, which prevents reputable media organizations from vetting the accuracy of the information before it’s made public.
Now, for the first time, researchers at Northeastern University will be able to test the theory.
Four Northeastern professors are among the first group of researchers who will be given access to closely guarded Facebook data—data that could reveal key information about the way people share fake news on social media.
They’ll have access to three datasets from Facebook. The first dataset will include information from public accounts on Facebook and Instagram (a social media platform owned by Facebook) that will enable the researchers to track the popularity of news items across the two platforms. The second set will include data on political advertisements that ran on Facebook in the U.S., U.K., Brazil, India, Ukraine, Israel, and the EU. And the third set will include information about specific URLs that have been shared by at least 100 unique Facebook users.
The researchers will use this Facebook data to build a map that will trace back fake news posts to their origins.
“It’s very exciting,” says Nick Beauchamp, an assistant professor of political science at Northeastern, who’s leading the project for the university. “Facebook is the 800-pound gorilla, and the opportunity to work with their data in a way that’s ethical and secure is an exciting one.”
Beauchamp is working with an interdisciplinary group of researchers in the fields of political science, economics, and computer science—including Northeastern faculty members David Lazer, Donghee Jo, and Lu Wang, as well as Kenneth Joseph, an assistant professor at State University of New York at Buffalo.
They’re trying to figure out how fake news ends up in Facebook users’ news feeds; the answer, they say, will provide important insight into the fake news phenomenon.
There are generally two ways it happens, Beauchamp says: Either media companies publish fake news stories on their own Facebook accounts, where the stories appear in users’ feeds, or users share fake news stories within their online social circles.
The researchers might discover that fake, misleading, or ideologically extreme news from established media companies is being pushed onto people’s social media feeds—either by media companies themselves or via Facebook’s algorithm—which would suggest that these companies are at the root of the problem, Beauchamp says.
Or they could find that fake news spreads when friends share it on social media, which would provide compelling evidence that reputable media organizations are no longer at the helm of what constitutes news, Beauchamp says.
“We know there’s a problem with fake news,” Beauchamp says. “What we don’t know is whether it’s a problem of the institutions and the moment in which we’re living, or if it’s a problem that evolved from peer-sharing.”
In 2018, Facebook made a major change to its news feed, shifting the focus to prioritize posts from friends rather than media companies. Users now see more of their friends’ posts, and fewer posts from news publishers and businesses.
This means that Beauchamp and his colleagues can analyze how much fake news was being shared before and after Facebook prioritized posts by friends. If fake news spread just as widely after Facebook emphasized posts from friends, it would mean that it’s people, not algorithms, causing the glut of misinformation.
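The before-and-after comparison described here can be sketched in a few lines of code. The snippet below uses made-up share counts and a hypothetical cutoff date around Facebook’s early-2018 news feed change; it simply compares average sharing of fake-news URLs on either side of that cutoff, which is the intuition behind the researchers’ test (their actual analysis of the Facebook URL dataset would be far more sophisticated).

```python
from datetime import date

# Hypothetical records of fake-news URL shares: (date shared, share count).
# These numbers are illustrative only, not real Facebook data.
shares = [
    (date(2017, 6, 1), 1200),
    (date(2017, 11, 15), 900),
    (date(2018, 6, 10), 1100),
    (date(2018, 12, 3), 1000),
]

# Assumed cutoff: Facebook's shift toward prioritizing friends' posts
# was announced in early 2018.
cutoff = date(2018, 1, 12)

before = [count for day, count in shares if day < cutoff]
after = [count for day, count in shares if day >= cutoff]

avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)

print(avg_before, avg_after)
```

If the two averages are similar, that pattern would point toward people sharing with friends, rather than the ranking algorithm, as the driver of fake news spread; a sharp drop after the cutoff would point toward the algorithm and publishers.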