Cyber-bullying isn’t just for kids.
Twitter CEO Dick Costolo recently admitted that his company has failed to adequately address abuse.
“We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years,” he wrote in an internal memo. “It’s no secret and the rest of the world talks about it every day. We lose core user after core user by not addressing simple trolling issues that they face every day.”
In a later note, Costolo continued: “We HAVE to be able to tell each other the truth, and the truth that everybody in the world knows is that we have not effectively dealt with this problem even remotely to the degree we should have by now, and that’s on me and nobody else. So now we’re going to fix it, and I’m going to take full responsibility for making sure that the people working night and day on this have the resources they need to address the issue, that there are clear lines of responsibility and accountability, and that we don’t equivocate in our decisions and choices.”
Twitter faced related trouble in late 2014, when the company reported that its staff had received multiple death threats from ISIS.
It is commendable that Twitter is taking corporate responsibility. To address online abuse, the company has partnered with a non-profit to research harassment directed at women, and is also reworking its reporting processes.
But these policies raise important questions about censorship. The line between debate and hate speech can be blurry. Furthermore, the American public must ask whether we want companies, or the government, moderating online speech.
Federal and state laws have attempted to curb cyber-bullying, but many statutes address only minors and are difficult to enforce.
Companies managing their own websites certainly have more power to regulate online harassment, but they are also more prone to making arbitrary, and perhaps dangerous, decisions.