
Did social media actually counter election misinformation?

SOLEDAD: I'm Soledad O'Brien. Welcome to "Matter of Fact." Misinformation and disinformation: these words are often used interchangeably. Misinformation is "false information that is spread, regardless of intent to mislead." That happens every day in our lives; we mishear something or misremember details, then share them with someone. But disinformation is deliberately misleading or biased information, a manipulated narrative or facts, propaganda. So intent is the difference between the two. With social media fueling the spread of disinformation, how can voters know fact from fiction? One step is through fact-checking, which has also been the target of disinformation campaigns. So we set out to get the facts about fact-checkers.

>> I'm Angie Holan. I'm editor-in-chief of PolitiFact. We're fact-checkers who are following the 2020 campaign very closely. We're independent, nonpartisan journalists, so our agenda is to give people information that they can trust.

>> Ask yourself: is this something that is likely to come up at the debate? We look at what the candidates have said, we look at which things are viral online, and we try to make a decision about what people really need to know about today. We ask the speaker for their evidence, we look in fact-checking archives, we talk to experts, we bring documents, and then at the end of that process we decide on a rating. The ratings are True, Mostly True, Half True, Mostly False, False, and Pants on Fire. Every rating has a definition, and three editors come together to vote on each rating. We fact-check a lot of claims off of social media: hoaxes, conspiracy theories, pranks that confuse people. We recently fact-checked whether "The Simpsons" TV show predicted the coronavirus, and we found no, it did not; it was Pants on Fire. Someone had taken an image from "The Simpsons" and written coronavirus into the scene. When people see things wrong on the internet, their best move is to just open another tab and look: Google what they just found and see if there is a fact check on it. There's this old saying that our favorite messenger can be wrong, and our least favorite messenger can be right. People really need to use their critical thinking and lo...
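Holan's description of the rating step boils down to a fixed scale plus a small editor vote. As a rough illustration only (this is not PolitiFact's actual tooling; the Rating enum and decide_rating helper are hypothetical names, and the plurality tie-break is an assumption), that process could be modeled like this:

```python
from collections import Counter
from enum import Enum

# Hypothetical encoding of the six ratings named in the transcript.
class Rating(Enum):
    TRUE = "True"
    MOSTLY_TRUE = "Mostly True"
    HALF_TRUE = "Half True"
    MOSTLY_FALSE = "Mostly False"
    FALSE = "False"
    PANTS_ON_FIRE = "Pants on Fire"

def decide_rating(editor_votes):
    """Pick the rating chosen by a panel of three editors.

    The transcript only says three editors vote; the plurality rule and
    first-seen tie-break here are assumptions for illustration.
    """
    if len(editor_votes) != 3:
        raise ValueError("expected exactly three editor votes")
    winner, _count = Counter(editor_votes).most_common(1)[0]
    return winner

# Example: two editors say False, one says Mostly False -> False.
print(decide_rating([Rating.FALSE, Rating.FALSE, Rating.MOSTLY_FALSE]).value)
```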

Video above - Getting the Facts Straight: Can You Tell the Difference Between What's Real and What's Not?

Ahead of the election, Facebook, Twitter and YouTube promised to clamp down on election misinformation, including unsubstantiated charges of fraud and premature declarations of victory by candidates. And they mostly did just that, though not without a few hiccups.


But overall their measures still didn't really address the problems exposed by the 2020 U.S. presidential contest, critics of the social platforms contend.

"We're seeing exactly what we expected, which is not enough, especially in the case of Facebook," said Shannon McGregor, an assistant professor of journalism and media at the University of North Carolina.


One big test emerged early Wednesday morning as vote-counting continued in battleground states including Wisconsin, Michigan and Pennsylvania. President Donald Trump made a White House appearance before cheering supporters, declaring he would challenge the poll results. He also posted misleading statements about the election on Facebook and Twitter, following months of signaling his unfounded doubts about expanded mail-in voting and his desire for final election results when polls closed on Nov. 3.

So what did tech companies do about it? For the most part, what they said they would, which primarily meant labeling false or misleading election posts in order to point users to reliable information. In Twitter's case, that sometimes meant obscuring the offending posts, forcing readers to click through warnings to see them. For Facebook and YouTube, it mostly meant attaching authoritative information to election-related posts.
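To make the two approaches concrete, here is a minimal sketch, assuming hypothetical Post and ModerationResult types and a made-up moderate() function (none of this reflects any platform's real code), of the difference between attaching context to a post and obscuring it behind a click-through warning:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    claims_victory_early: bool  # declares victory before results are certified
    disputes_vote_count: bool   # makes unverified claims about the counting process

@dataclass
class ModerationResult:
    label: Optional[str] = None  # context attached alongside the post
    obscured: bool = False       # hidden behind a click-through warning

def moderate(post: Post, platform: str) -> ModerationResult:
    """Illustrative only: roughly mirrors the two behaviors described above."""
    result = ModerationResult()
    if post.claims_victory_early or post.disputes_vote_count:
        result.label = "Official election results may not yet be final."
        # Per the article, Twitter sometimes also obscured the post itself,
        # while Facebook and YouTube mostly attached context and left it visible.
        if platform == "twitter" and post.disputes_vote_count:
            result.obscured = True
    return result

print(moderate(Post(claims_victory_early=True, disputes_vote_count=False), "facebook"))
print(moderate(Post(claims_victory_early=False, disputes_vote_count=True), "twitter"))
```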


For instance, Google-owned YouTube showed video of Trump's White House remarks suggesting fraud and premature victories, just as some traditional news channels did. But Google placed an "information panel" beneath the videos noting that election results may not be final and linking to Google's election results page with additional information.

"They're just appending this little label to the president's posts, but they're appending those to any politician talking about the election," said McGregor, who blamed both the tech giants and traditional media outlets for amplifying a falsehood just because the president said it, rather than meeting their responsibility to curb the spread of misinformation about the election results.

"Allowing any false claim to spread can lead more people to accept it once it's there," she said.


Trump wasn't alone in attracting such labels. Republican U.S. Sen. Thom Tillis got a label on Twitter for declaring a premature reelection victory in North Carolina. The same thing happened to a Democratic official claiming that former Vice President Joe Biden had won Wisconsin.

The flurry of Trump claims that began early Wednesday morning continued after the sun rose over Washington. By late morning, Trump was tweeting an unfounded complaint that his early lead in some states seemed to "magically disappear" as the night went on and more ballots were counted.

Twitter quickly slapped that with a warning that said "some or all of the content shared in this Tweet is disputed and might be misleading about an election or other civic process." It was among at least three such warnings Twitter applied to Trump tweets Wednesday, each of which makes it harder for viewers to see the post without first reading the warning; Twitter did the same on a post from another individual that Trump sought to amplify.

Much of the slowdown in the tabulation of results had been widely forecasted for months, because the coronavirus pandemic led many states to make it easier to vote by mail, and millions chose to do so rather than venturing out to cast ballots in person. Mail ballots can take longer to process than ballots cast at polling places.

In a Sept. 3 post, Facebook CEO Mark Zuckerberg said that if a candidate or campaign tries to declare victory before the results are in, the social network would label their post to note that official results are not yet in and direct people to the official results.

But Facebook apparently limited that policy to official candidates and campaigns. Posts by others that declared premature overall victory in specific states were not flagged.

Twitter was a bit more proactive. Based on its "civic integrity policy," implemented last month, Twitter said it would label and reduce the visibility of Tweets containing "false or misleading information about civic processes" in order to provide more context. It labeled Trump's tweets declaring premature victory as well as claims from Trump and others about premature victory in specific states.

The Twitter and Facebook actions were a step in the right direction but not that effective, particularly in Twitter's case, said Jennifer Grygiel, a professor at Syracuse University and social media expert.

That's because tweets from major figures can get almost instant traction, Grygiel said. So even though Twitter labeled Trump's tweets about "being up big" and about votes being cast after polls closed, among others, by the time the label appeared, several minutes after the tweet, the misinformation had already spread. One Wednesday Trump tweet falsely complaining that vote counters were "working hard" to make his lead in the Pennsylvania count "disappear" wasn't labeled for more than 15 minutes, and was not obscured.

"Twitter can't really enforce policies if they don't do it before it happens, in the case of the president," Grygiel said. "When a tweet hits the wire, essentially, it goes public. It already brings this full force of impact of market reaction."

Grygiel suggested that for prominent figures like Trump, Twitter could pre-moderate posts by delaying publication until a human moderator can decide whether it needs a label. That means flagged tweets would publish with a label, making it more difficult to spread unlabeled misinformation, especially during important events like the election.
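As a way to picture Grygiel's suggestion, the sketch below outlines a hypothetical hold-and-review queue for a short list of high-reach accounts. The account list, the needs_label stand-in for human judgment, and the label text are all assumptions for illustration, not a description of Twitter's actual pipeline:

```python
import queue
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    author: str
    text: str

# Hypothetical set of high-reach accounts whose posts are held for review.
PRE_MODERATED_ACCOUNTS = {"@high_profile_example"}

review_queue: "queue.Queue[Draft]" = queue.Queue()

def publish(draft: Draft, label: Optional[str]) -> None:
    suffix = f"  [label: {label}]" if label else ""
    print(f"{draft.author}: {draft.text}{suffix}")

def needs_label(draft: Draft) -> bool:
    # Stand-in for human judgment; a real review would not be a keyword check.
    return "won" in draft.text.lower() or "fraud" in draft.text.lower()

def submit(draft: Draft) -> None:
    """Hold posts from pre-moderated accounts; publish everything else immediately."""
    if draft.author in PRE_MODERATED_ACCOUNTS:
        review_queue.put(draft)  # a moderator decides on a label before it goes out
    else:
        publish(draft, label=None)

def moderator_pass() -> None:
    """Drain the queue: every held post is published, with a label when warranted."""
    while not review_queue.empty():
        draft = review_queue.get()
        label = "This claim about the election is disputed." if needs_label(draft) else None
        publish(draft, label)

submit(Draft("@high_profile_example", "We won big, and the fraud is obvious!"))
submit(Draft("@ordinary_user", "Long lines at my polling place today."))
moderator_pass()
```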

This is less of an issue on Facebook or YouTube, where people are less likely to interact with posts in real time. YouTube could become more of an issue over the next few days, Grygiel suggested, if Trump's false claims are adopted by YouTubers who are analyzing the election.

"Generally, platforms have policies in place that are an attempt to do something, but at the end of the day it proved to be pretty ineffective," Grygiel said. "The president felt empowered to make claims."