The news is that after 15 years the IMDb is closing down its message boards, but the story is their creation in the first place: a tale of Apache, mod_perl, PostgreSQL, C, and XEmacs, all served up on a BeOS bun in a Bristol-area cafeteria; of missed deadlines, missed opportunities and misplaced innocence given the scale of comments, comment spam and trolling up to that point. Brought to you by Colin M. Strickland, a developer whose CV has long read "you can blame me for the message boards" (and yes, he does go by the initials cms).
Business owners in the town of Buea, the capital of the Southwest Region of Cameroon, say they are struggling to operate following an internet shutdown that began on January 17. Internet users here say that they can no longer communicate or access information, particularly on social media. Many internet cafes, microfinance institutions, and money transfer agencies have had to shut down.
"When things like this happen and they just ban the internet which is the source of my livelihood. I just feel like maybe I made the wrong decision. Maybe I should just leave the country like my friends and never return again. And I personally feel bad that that would be unpatriotic on my part but you know, we have to do what we have to do sometimes. And now I don’t even know if the Internet will be returned. I don't know when it will be returned," said IT entrepreneur, Churchill Mambe.
It's remarkable how important the internet has become, especially in developing countries.
The trouble with being a former typesetter is that every day online is a new adventure in torture. Take the shape of quotation marks. These humble symbols are a dagger in my eye when a straight, or typewriter-style, pair appears in the midst of what is often otherwise typographic beauty. It's a small, infuriating difference: "this" versus “this.”
I'll stop replacing curly quotes with straight quotes on OSNews the day the tech industry gives me back my Dutch quotation marks („Like so”, he said) and adds multilingual support to Google Now and Siri and so on (which right now require a full wipe to change languages, making them useless for hundreds of millions of people who live bilingual lives).
Yes, I can be petty.
Last year I created an account on Twitter to serve as a targeted feed for my hobby content and tweets for like-minded retro-gaming folk, separate from my personal account. On this hobby account I mainly follow retro-gaming and Commodore fans. When you use Twitter in a very targeted way like this, it actually can be extremely useful and enjoyable. In any event, during this time I began to see a healthy amount of discussion around BBSes (bulletin board systems) becoming "a thing" again for retro-computing nerds. And, amazingly, a few popular BBSes were being served off of 8-bit machines.
"8-Bitters" were connecting to them, having virtually "off the grid" discussions and playing games outside the watchful eye of Google and the rest of the internet. I wanted to connect to them, too.
So, yes, in my view, Facebook has a direct responsibility to get rid of fake news, and it cannot simply rely on its audience or others to shoulder the burden. I'm happy to see tools made available to readers that help report such trash, and happy that Facebook is working with third-party fact checkers. But the ultimate responsibility is Facebook’s.
Nobody wants Facebook to tinker with legitimate news and opinion - again, except for hate speech. But getting rid of purely fake news from purely fake sources is an eminently achievable task, especially for a well-funded, tech-savvy, huge media company serving nearly 2 billion people.
I've written about my thoughts on this subject before, but I want to make them clearer by presenting you with an example.
Consider this clip from Fox News' Bill O'Reilly.
Nothing in this clip is true. Everything said in that clip about Amsterdam and The Netherlands is literally - literally literally, not the fake kind of literally - made up. It's all lies. Flat-out, bald-faced lies. This is clearly, unapologetically, fake news.
Yet, I doubt people like Mossberg and others who claim it's as easy as pie for Facebook and Twitter to 'block' fake news would agree with me that Facebook should block this kind of news from their sites. Even though it's nothing but flat-out lies, it would not be considered 'fake news'.
And therein lies the problem with this whole outrage over 'fake news'. No matter how many times people say it's easy to separate real news from fake news, there are going to be so many edge cases tripping up generic algorithms, and it's simply not feasible to have human curation on sites as large in volume as Facebook and Twitter.
Is it really Facebook's job to solve for people's stupidity? In my view, it really isn't. On top of that, I somehow doubt the tech media would be as worked up over this as they are now had Clinton won the election - and all of you know my political leanings well enough by now to understand the value of me saying this.
Let me be clear: I am well aware of the problematic aspects of Facebook's impact; I am particularly worried about the ease with which we sort ourselves into tribes, in part because of the filter bubble effect noted above (that's one of the reasons Why Twitter Must Be Saved). But the solution is not the reimposition of gatekeepers done in by the Internet; whatever fixes this problem must spring from the power of the Internet, and the fact that each of us, if we choose, has access to more information and sources of truth than ever before, and more ways to reach out and understand and persuade those with whom we disagree. Yes, that is more work than demanding Zuckerberg change what people see, but giving up liberty for laziness never works out well in the end.
Absolutely, 100% spot-on.
With the US presidential elections just behind us, there's been a lot of talk about the role platforms like Facebook and Twitter play in our modern discourse. Last week, it was revealed that teens in Macedonia earn thousands of dollars each month by posting patently false stories about the elections on Facebook and getting them to go viral. With Facebook being a major source of news for a lot of people, such false stories can certainly impact people's voting behaviour.
In a statement to TechCrunch, Facebook responded to the criticism that the company isn't doing enough to stop this kind of thing. The statement in full reads:
We take misinformation on Facebook very seriously. We value authentic communication, and hear consistently from those who use Facebook that they prefer not to see misinformation. In Newsfeed we use various signals based on community feedback to determine which posts are likely to contain inaccurate information, and reduce their distribution. In Trending we look at a variety of signals to help make sure the topics being shown are reflective of real-world events, and take additional steps to prevent false or misleading content from appearing. Despite these efforts we understand there's so much more we need to do, and that is why it's important that we keep improving our ability to detect misinformation. We're committed to continuing to work on this issue and improve the experiences on our platform.
This is an incredibly complex issue.
First, Facebook is a private entity, and has no legal obligation to be the arbiter of truth, save for complying with court orders during, say, a defamation or libel lawsuit by a wronged party. If someone posts a false story that Clinton kicked a puppy or that Trump punched a kitten, but none of the parties involved take any steps, Facebook is under no obligation - other than perhaps one of morality - to remove or prevent such stories from being posted.
Second, what, exactly, is truth? While it's easy to say that "the earth is flat" is false and misinformation, how do you classify stories of rape and sexual assault allegations levelled at a president-elect - and everything in between? What if you shove your false stories in a book, build a fancy building, slap a tax exempt status on it, and call it a religion? There's countless "legitimate" ways in which people sell lies and nonsense to literally billions of people, and we deem that completely normal and acceptable. Where do you draw the line, and more importantly, who draws that line?
Third, how, exactly, do we propose handling these kinds of bans? Spreading news stories online is incredibly easy, and I doubt even Facebook itself could truly 'stop' a story from spreading on its platform. Is Facebook supposed to pass every post and comment through its own Department of Truth?
Fourth, isn't spreading information - even false information - a basic human need that you can't suppress? Each and every one of us spreads misinformation at one or more points in our lives - we gossip, we think we saw something, we misinterpreted someone's actions, you name it. Sure, platforms like Facebook can potentially amplify said misinformation uncontrollably, but do we really want to put a blanket moratorium on "misinformation", seeing as how difficult it is to define the term?
We are only now coming to grips with the realities of social media elections, but as a politics nerd, I'd be remiss if I didn't raise my hand and remind you of an eerily similar situation the US political world found itself in after the 26 September 1960 debate between sitting vice president Nixon and a relatively unknown senator from Massachusetts, John F. Kennedy.
It was the first televised debate in US history. While people who listened to the debate on the radio declared Nixon the winner, people who watched the debate on television declared Kennedy the winner. While Nixon appeared sickly and sweaty, Kennedy looked fresh, calm, and confident. The visual impact was massive, and it changed the course of the elections. Televised debates are completely normal now, and every presidential candidate needs to be prepared for them - but up until 1960, it wasn't a factor at all.
Social media will be no different. Four years from now, when Tulsi Gabbard heads the Democratic ticket (you heard it here first - mark my words) versus incumbent Trump, both candidates will have a far better grasp on social media and how to use them than Clinton and Trump did this year.
If you've ever been to mainland China, chances are you're familiar with the Great Firewall, the country's all-encompassing internet censorship apparatus. You know the despair of not being able to open Facebook, the pain of going mute on Twitter. But with a good VPN, you can magic many of these inconveniences away - at least temporarily.
For software developers based in China, however, it's not that simple. You're not just censored from certain websites. Basic building blocks that you use for product development are suddenly beyond your reach. With software services and libraries spread across the globe, China's internet sovereignty can be a real pain in the ass.
Something I've never really put much thought into.
First up, a bit of clarification. By general purpose OS I'm referring to what most people use for server workloads today - be it RHEL or variants like CentOS or Fedora, or Debian and derivatives like Ubuntu. We'll include Arch, the various BSD and OpenSolaris flavours, and Windows too. By end I don't literally mean they go away or stop being useful. My hypothesis is that, slowly to begin with and then more quickly, they cease to be the default we reach for when launching new services.
So note that this isn't about desktop workloads, but server workloads.
When it was revealed last week that police used a social media monitoring program to track protestors, it inspired outrage, and major tech companies immediately cut off API access for the tool. But at least one of those companies had prior opportunity to know what the tool, Geofeedia, was capable of. According to three former Geofeedia employees who spoke with The Verge, Facebook itself used the tool for corporate security. Facebook, according to two of the sources, even used Geofeedia to catch an intruder in Mark Zuckerberg's office.
Social media companies like Facebook are weird - and incredibly pervasive. Someone I know - I'm not going to be too specific here - once proudly said they did not want Facebook to know where they live, so they did not fill in that field in their Facebook account. I smiled internally and thought to myself: "Facebook knows you are at a specific address between the hours of 18:00 and 8:30 every workday and throughout the weekend - I'm pretty sure Facebook knows where you live".
Comfort levels with social media and technology companies usually come down to fooling ourselves.
Yahoo Inc last year secretly built a custom software program to search all of its customers' incoming emails for specific information provided by U.S. intelligence officials, according to people familiar with the matter.
The company complied with a classified U.S. government directive, scanning hundreds of millions of Yahoo Mail accounts at the behest of the National Security Agency or FBI, said two former employees and a third person apprised of the events.
Some surveillance experts said this represents the first case to surface of a U.S. Internet company agreeing to a spy agency's demand by searching all arriving messages, as opposed to examining stored messages or scanning a small number of accounts in real time.
Ars Technica contacted various technology companies to ask them if they were ever subjected to the same FBI demands:
A spokeswoman for Microsoft, Kim Kurseman, e-mailed Ars this statement, and also declined further questions: "We have never engaged in the secret scanning of email traffic like what has been reported today about Yahoo."
For its part, Google was the most unequivocal. Spokesman Aaron Stein e-mailed: "We've never received such a request, but if we did, our response would be simple: 'no way.'"
On Saturday, the U.S. government plans to cede control of some of the internet's core systems - namely, the directories that help web browsers and apps know where to find the latest weather, maps and Facebook musings.
The U.S. has been in charge of these systems for more than three decades; plans to transfer control of these functions to a nonprofit oversight organization have been in the works since the late 1990s. Some Republicans in Congress raised late objections over the transfer, which they termed a "giveaway" to the rest of the world. But they failed to block the move in a spending bill to keep the government operating.
Here's a look at the systems in question and what's at stake for internet users.
A number of features or background services communicate with Google servers despite the absence of an associated Google account or compiled-in Google API keys. Furthermore, the normal build process for Chromium involves running Google's own high-level commands that invoke many scripts and utilities, some of which download and use pre-built binaries provided by Google. Even the final build output includes some pre-built binaries. Fortunately, the source code is available for everything.
ungoogled-chromium tries to fix these things.
Already more than a decade old and with roots reaching back half a decade before the World Wide Web itself, the GIF was showing its age. It offered support for a paltry 256 colors. Its animation capabilities were easily rivaled by a flipbook. It was markedly inferior to virtually every file format that had followed it. On top of that, there were the threats of litigation from parent companies and patent-holders which had been looming over GIF users for five long years before the fiery call to action. By Burn All GIFs Day, the GIF was wobbling on the precipice of destruction. Those who knew enough to care deeply about file formats and the future of the web were marching on the gates, armed with PNGs of torches and pitchforks.
And yet, somehow, here we are. Seventeen years later, the GIF not only isn't dead. It rules the web.
Sometimes, things just work - even if it sucks.
In the years that followed, the future seemed obvious. The number of Gopher users expanded at orders of magnitude more than the World Wide Web. Gopher developers held gatherings around the country, called GopherCons, and issued a Gopher T-shirt - worn by MTV veejay Adam Curry when he announced the network's Gopher site. The White House revealed its Gopher site on Good Morning America. In the race to rule the internet, one observer noted, "Gopher seems to have won out."
Well, things turned out a little differently. Sadly, we tend to only remember the victors, not the ones lying in a ditch by the side of the road to victory.
Fast forward to July 15, 2016 (there’s that lab journal again…) when, after receiving an email from Google asking me to indicate how exactly I would like them to use my data to customise adverts around the web, and after thinking for a bit about what kind of machine learning tricks I would be able to pull on you with 12 years of your email, I decided that I really had to make alternative plans for my little email empire.
Somehow FastMail came up and in one of those impulsive LET'S WASTE SOME TIME manoeuvres, I pressed the big red MIGRATE button!
The rest of this post is my mini-review of the FastMail service after almost 3 weeks of intensive use.
I'm pretty sure at least some of you are contemplating a similar migration, away from companies like Google, Microsoft, and Apple, to something else.
Ars Technica talks about dark patterns:
Everyone has been there. So in 2010, London-based UX designer Harry Brignull decided he'd document it. Brignull’s website, darkpatterns.org, offers plenty of examples of deliberately confusing or deceptive user interfaces. These dark patterns trick unsuspecting users into a gamut of actions: setting up recurring payments, purchasing items surreptitiously added to a shopping cart, or spamming all contacts through prechecked forms on Facebook games.
I can't recall ever falling for a dark pattern, but I see these things everywhere - a sure sign that whatever company, website, or whatever, you're dealing with is not worthy of your time.
Twitter has banned one of its most notoriously contentious voices. On Tuesday evening, the microblogging service permanently suspended the account of , a day after he incited his followers to bombard Ghostbusters star Leslie Jones with racist and demeaning tweets.
"People should be able to express diverse opinions and beliefs on Twitter," a company spokesperson said in a statement provided to BuzzFeed News. "But no one deserves to be subjected to targeted abuse online, and our rules prohibit inciting or engaging in the targeted abuse or harassment of others."
With platforms like Twitter and Facebook having become the de facto space where people come to voice their opinion and a central axis in world events - think the attack in Nice, or the failed coup in Turkey, which effectively played out on Twitter and Facebook - a lot of people lose sight of what these platforms really are: glorified, very large, and very popular online forums.
There's no difference between that forum you run for the community of frog statuette collectors you're a part of on the one side, and Twitter on the other. If people on your forum post insulting messages, harass your fellow frog statue collectors, or send in waves of trolls to post racist, hateful, and abusive messages at them, you'd ban them, remove their comments, delete their accounts.
Twitter is no different. Twitter, like your frog statuette collector forum, is a private enterprise, a personal space, where you set the rules regarding what's allowed and what isn't. I do the same here on OSNews. Banning people from your forum, from OSNews, or, indeed, from Twitter, is not a freedom of speech issue. The right to free speech protects you from the government, not from Twitter, forum moderators, or me deleting your hateful comment from OSNews. Or, for that matter, from deleting your perfectly valid and well-argued comment (which I don't do, but you get the point). Platforms like Twitter may have become a popular forum for expression, but they have no more obligation to "protect" the "right to free speech" than you have the obligation to accept people walking into your house and saying hateful things to you or your loved ones.
Twitter and Facebook face huge problems with systematic abuse from trolls, and banning this particularly nasty troll is nothing more than lip service to a famous actress and comedian, and it does nothing to address the core problem the platform faces. Twitter might consider spending less time screwing over third party developers and creating nonsense nobody wants, and focus on the real problems many of their real users have to face every single day.
In 1992 Tim Berners-Lee created three things, giving birth to what we consider the Internet. The HTTP protocol, HTML, and the URL. His goal was to bring 'Hypertext' to life. Hypertext at its simplest is the ability to create documents which link to one another. At the time it was viewed more as a science fiction panacea, to be complemented by Hypermedia, and any other word you could add 'Hyper' in front of.
There was a fervent belief in 1993 that the URL would die, in favor of the 'URN'. The Uniform Resource Name is a permanent reference to a given piece of content which, unlike a URL, will never change or break. Tim Berners-Lee first described the "urgent need" for them as early as 1991.
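To make the distinction concrete, here's a minimal sketch contrasting the two identifier styles. The syntax follows RFC 2141 (`urn:<namespace>:<name>`); the specific ISBN and URL below are standard illustrative examples, not taken from the article, and `is_urn` is a hypothetical helper, not a real library function.

```python
# A URL points to a *location*, so it breaks if the resource moves;
# a URN names the *resource itself*, independent of where it lives.
url = "http://example.com/books/0451450523.html"  # breaks if the page moves
urn = "urn:isbn:0451450523"                       # names the book itself, forever

def is_urn(identifier: str) -> bool:
    """Minimal check for the 'urn:<namespace>:<name>' shape from RFC 2141."""
    parts = identifier.split(":", 2)
    return len(parts) == 3 and parts[0].lower() == "urn" and all(parts[1:])

print(is_urn(urn))  # True
print(is_urn(url))  # False
```

Of course, a name is only permanent if someone maintains the mapping from name to location - which is exactly the resolution infrastructure the URN never widely got.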
Facebook Messenger has started rolling out Secret Conversations, a feature that enables end to end encryption for conversations within Messenger. Secret Conversations is built on Signal Protocol, a modern, open source, strong encryption protocol we developed for asynchronous messaging systems.
Signal Protocol powers our own private messaging app, Signal. The protocol is designed from the ground up to make seamless end-to-end encrypted messaging possible and to make private communication simple. To amplify the impact and scope of private communication, we also collaborate with other popular messaging apps like WhatsApp, Google Allo, and now Facebook Messenger to help integrate Signal Protocol into those products.
These are all good steps forward, trail-blazed by - at least among the big companies - Apple.