Linked by Thom Holwerda on Thu 10th Nov 2016 23:13 UTC
Internet & Networking

With the US presidential elections just behind us, there's been a lot of talk about the role platforms like Facebook and Twitter play in our modern discourse. Last week, it was revealed that teens in Macedonia earn thousands of dollars each month by posting patently false stories about the elections on Facebook and getting them to go viral. With Facebook being a major source of news for a lot of people, such false stories can certainly impact people's voting behaviour.

In a statement to TechCrunch, Facebook responded to the criticism that the company isn't doing enough to stop this kind of thing. The statement in full reads:

We take misinformation on Facebook very seriously. We value authentic communication, and hear consistently from those who use Facebook that they prefer not to see misinformation. In Newsfeed we use various signals based on community feedback to determine which posts are likely to contain inaccurate information, and reduce their distribution. In Trending we look at a variety of signals to help make sure the topics being shown are reflective of real-world events, and take additional steps to prevent false or misleading content from appearing. Despite these efforts we understand there's so much more we need to do, and that is why it's important that we keep improving our ability to detect misinformation. We're committed to continuing to work on this issue and improve the experiences on our platform.

This is an incredibly complex issue.

First, Facebook is a private entity, and has no legal obligation to be the arbiter of truth, save for complying with court orders during, say, a defamation or libel lawsuit by a wronged party. If someone posts a false story that Clinton kicked a puppy or that Trump punched a kitten, but none of the parties involved take any steps, Facebook is under no obligation - other than perhaps one of morality - to remove or prevent such stories from being posted.

Second, what, exactly, is truth? While it's easy to say that "the earth is flat" is false and misinformation, how do you classify stories of rape and sexual assault allegations levelled at a president-elect - and everything in between? What if you shove your false stories in a book, build a fancy building, slap a tax exempt status on it, and call it a religion? There's countless "legitimate" ways in which people sell lies and nonsense to literally billions of people, and we deem that completely normal and acceptable. Where do you draw the line, and more importantly, who draws that line?

Third, how, exactly, do we propose handling these kinds of bans? Spreading news stories online is incredibly easy, and I doubt even Facebook itself could truly 'stop' a story from spreading on its platform. Is Facebook supposed to pass every post and comment through its own Department of Truth?

Fourth, isn't spreading information - even false information - a basic human need that you can't suppress? Each and every one of us spreads misinformation at one or more points in our lives - we gossip, we think we saw something, we misinterpret someone's actions, you name it. Sure, platforms like Facebook can potentially amplify said misinformation uncontrollably, but do we really want to put a blanket moratorium on "misinformation", seeing as how difficult it is to define the term?

We are only now coming to grips with the realities of social media elections, but as a politics nerd, I'd be remiss if I didn't raise my hand and remind you of an eerily similar situation the US political world found itself in after the 26 September 1960 debate between sitting vice president Nixon and a relatively unknown senator from Massachusetts, John F. Kennedy.

It was the first televised debate in US history. While people who listened to the debate on the radio declared Nixon the winner, people who watched the debate on television declared Kennedy the winner. While Nixon appeared sickly and sweaty, Kennedy looked fresh, calm, and confident. The visual impact was massive, and it changed the course of the elections. Televised debates are completely normal now, and every presidential candidate needs to be prepared for them - but up until 1960, it wasn't a factor at all.

Social media will be no different. Four years from now, when Tulsi Gabbard heads the Democratic ticket (you heard it here first - mark my words) versus incumbent Trump, both candidates will have a far better grasp on social media and how to use them than Clinton and Trump did this year.

Electoral Law needs to catch up
by Adurbe on Fri 11th Nov 2016 10:48 UTC

In the UK, on the day of polling it is ILLEGAL to campaign. This is why you can't have people canvassing you at the polling station, nor can you be given flyers that could influence your vote (one way or the other).

Facebook and Twitter appear to be exempt from this, and that is where I think the danger lies.

Take, for example, the Brexit vote here in the UK. Both sides were actively and vocally campaigning on social media. Is this influence acceptable?

Should the same rules apply to a candidate's or supporter's social media as they do in the "real world"? Or is social media an expression of personal views only, so the rules don't apply?


by stormcrow

The UK isn't culturally analogous to the US, especially when it comes to elections. We aren't even on the same page when it comes to legal protections for speech. In fact, our Bill of Rights was created expressly because the English protections for the concepts it enshrines are so weak.

In the US, what you're suggesting is not only unconstitutional; any legal attempt to prevent political discussion and proselytizing on voting day would cause demonstrations in the streets, and we'd do it anyway.

Facebook and Twitter aren't "exempt" - there's simply no such law saying they can't, and there never will be one that stands up to Constitutional scrutiny in the US.

That said, there are already plenty of laws protecting the sanctity of the voting precinct. Where intimidation has occurred, it has generally been an isolated local matter rather than a widespread national problem - and it's also very difficult to prove. The precincts themselves are generally considered non-public government offices, where the government has a vested interest in the smooth functioning of the office and which are not public forums. Much like entering a courthouse here in the US, only limited freedoms of speech apply in such places.

Generally speaking, we go in, fill in our ballot (or make our selections on a touch screen, depending on what the state uses), drop it in the ballot box, and leave. Most of the talking that goes on inside the precinct is poll workers giving instructions or redirecting people who showed up at the wrong precinct to their proper one. I've never yet had a political discussion in or near a polling place. That's not to say it doesn't happen; it's just generally not a problem. Most of us respect the right of our fellow citizens to vote their conscience, regardless of how acrimonious the rhetoric gets.
