Tag Archives: hate speech

Should Social Media Censor Hate Speech In A Free Society?

Twitter censured, if not censored, President Trump’s controversial tweet threatening to use force to quell the riots protesting the death of George Floyd. Facebook refused to follow its rival’s lead, reigniting the controversy and leading many Facebook employees to stage a walkout; some even quit their coveted jobs in protest.

But can social media companies censor hate speech while also providing an unbiased platform for free speech that they claim to provide?

Some conservatives argue social networking companies support free speech only when the speech aligns with the political views of the company.

Richard Hanania found that of 22 notable accounts suspended by Twitter, 21 had supported President Trump and only one had supported Hillary Clinton in the 2016 election.

Candace Owens, a journalist, retweeted the racist tweets of Sarah Jeong, an editor at the New York Times, but substituted “black” and “Jews” for the word “white”. Owens had her account suspended, while Jeong wasn’t even reprimanded, suggesting that hate speech is held to different standards depending on the group it targets.

Facebook CEO Mark Zuckerberg, at a meeting with lawmakers, admitted that his company’s censoring of a video by Live Action, a pro-life advocacy group, was biased, but argued that there is no widespread bias in its content moderation. Twitter CEO Jack Dorsey likewise argued that although the company’s employees are very left-leaning, this has no influence on content moderation.

A couple of studies, including an internal audit conducted by Facebook, concur with the CEOs and have found no signs of systemic bias against conservatives.

Whether or not hate speech censorship is biased, it would be imprudent to ignore that the subjectivity of what constitutes hate speech leaves open the possibility of viewpoint discrimination and arbitrary censorship.

If a group claiming to be a religious cult engages in organized, indisputably repugnant behavior like child abuse, should it be more protected from criticism (since criticism of religion is typically considered hate speech) than another group that engages in similar behavior but has no religious affiliation?

Did Erika Christakis, the Yale University lecturer who was forced to resign for speaking out against the censoring of Halloween costumes, cross the line between free speech and hate speech?

I don’t discount the harms of hate speech. Hate speech has no place in a civilized society, and social media companies are certainly noble in their intention to provide every netizen a dignified cyberlife.

It is imperative that we reflect as a society on the causes of hate speech and how to address its root cause.

But attempting to censor hate speech is a slippery slope that could eventually turn social media forums, which have become hotspots of free speech and debate, into echo chambers fueled by the hegemony of popular views.

Ashwin Murthy is a software engineer at LinkedIn, a social networking company.


Image Credit: John Tyson, Unsplash

Photo by Bermix Studio on Unsplash

Ro Khanna, Big Tech & the 2020 Elections

Congressman Ro Khanna participated in a telebriefing on “The Role of Silicon Valley in the 2020 Elections” on Tuesday, November 12, and answered questions from diverse ethnic media reporters on topics ranging from technology’s role in the 2020 elections and privacy issues to the gig economy.

Vandana Kumar, Publisher, India Currents, moderated a Q&A session that gave the congressman an opportunity to share his perspectives as a key lawmaker representing Silicon Valley.

Ro Khanna (California’s 17th district), sits on the House Armed Services, Budget, Oversight and Reform Committees, and is the first Vice-Chair of the Congressional Progressive Caucus.

He talked at length about the role of giant tech companies and the fight against fake news. Khanna argued that social media companies have a major responsibility to be vigilant and voluntarily police their platforms to prevent hate speech, viral false ads, and election interference; blatantly false speech or disregard for truth is not protected by the First Amendment, Khanna said.

Khanna admitted he was concerned by Mark Zuckerberg’s views on fake news, but stressed that the “Facebooks of the world” shouldn’t be the gatekeepers of blatantly false speech; that role belongs to an independent regulatory agency. Rather than an outright ban, a thoughtful regulatory framework establishing reasonable standards that require political ads to be free of falsehoods would better protect First Amendment traditions, he said.

Khanna is working with Congressman Kevin McCarthy on a bill that would allow social media companies to monitor and remove “bad actors” engaged in election interference.

Though he hopes that these bills will be passed before Election 2020, Khanna claimed that the hostile tone of political discourse and cable news should share the blame for false news. With the upcoming elections, Congress is concerned about security on social media platforms, he said, and tech companies need to do the right thing to avoid a repeat of 2016.

The congressman commented that healthcare is another issue getting attention in Congress, which is trying to lower the cost of prescription drugs, preserve the Affordable Care Act, and lower premiums.

Congressman Ro Khanna

Khanna, who is co-chair of Bernie Sanders‘s 2020 presidential campaign, described the Medicare for All bill he is co-sponsoring with Congresswoman Pramila Jayapal (Washington’s 7th congressional district). The bill would give states the flexibility to use federal funding for Medicare and Medicaid when implementing the single-payer system, with a caveat requiring states to reach 100% coverage within five years. A tax on corporations will pay for the bill, said Khanna, who proposes to cover any shortfall with supplemental federal matching funds.

On big tech protections for privacy and consumer data, Khanna referred to his proposed Internet Bill of Rights, which requires an individual’s consent before their data is collected or transferred, along with the right to know how it’s used. Such reforms can protect privacy and prevent users’ data from being manipulated against their interests, Khanna pointed out, but what’s really needed is well-crafted regulation that keeps pace with technological change.

As the Supreme Court determines the fate of DACA recipients, Khanna expressed his opposition to ending DACA; he thinks Congress should act to offer protections to Dreamers. He also supports AB 5, California’s effort to regulate the gig economy. Gig economy workers should be treated as employees and receive the same benefits and rights; with universal healthcare, contends Khanna, people won’t have to rely on their jobs for medical care.

Khanna agreed that affordable housing remains a challenge, though he acknowledged “constructive” private sector funding from Apple and Google towards affordable housing. He emphasized that low-income housing needs additional federal investment and affordable-building tax credits to expand. Khanna stressed that what would make a difference are more temporary shelters and services for the homeless, and intervention programs to help with rent and mortgage payments, as exemplified by a successful pilot program in Santa Clara.

The telebriefing, sponsored by India Currents in partnership with Ethnic Media Services, was part of the ‘Conversations with Candidates’ series initiated by India Currents to expand ethnic media news access to elected officials and presidential candidates. The event was attended by reporters from Silicon Valley Innovation Channel – DingDingTV, EPA Today, Phillipinenews, Chinese News, The American Bazaar, California Black Media and India West.

Meera Kymal is a contributing editor to India Currents

Facebook and WhatsApp aren’t just flawed — they’re downright dangerous

Facebook’s woes are spreading globally: first the U.S., then Europe, and now Asia.

A landmark study by researchers at the University of Warwick in the U.K. has conclusively established that Facebook has been fanning the flames of hatred in Germany. The study found that the rich and the poor, the educated and the uneducated, and those living in large cities and those in small towns were equally susceptible to online hate speech against refugees and its incitement to violence, with the incidence of hate crimes correlating directly with per-capita Facebook use.

And during Germany-wide Facebook outages, which resulted from programming or server problems at Facebook, anti-refugee hate crimes practically vanished — within weeks.

As The New York Times explains, Facebook’s algorithms reshape a user’s reality: “These are built around a core mission: promote content that will maximize user engagement. Posts that tap into negative, primal emotions like anger or fear, studies have found, perform best and so proliferate.”

Facebook started out as a benign open social-media platform to bring friends and family together. Increasingly obsessed with making money, and unhindered by regulation or control, it began selling advertising access to its users to anybody who would pay. It focused on gathering all of the data it could about them and keeping them hooked to its platform. More sensational Facebook posts attracted more views, a win-win for Facebook and its hatemongers.

India

In countries such as India, WhatsApp is the dominant form of communication. And sadly, it is causing even greater carnage than Facebook is in Germany; there have already been dozens of deaths.

WhatsApp was created to send text messages between mobile phones. Voice calling, group chat, and end-to-end encryption were features that were bolted on to its platform much later. Facebook acquired WhatsApp in 2014 and started making it as addictive as its web platform — and capturing data from it.

The problem is that WhatsApp was never designed to be a social-media platform. It doesn’t allow even the most basic independent monitoring. For this reason, it has become an uncontrolled platform for spreading fake news and hate speech. It also poses serious privacy concerns due to its roots as a text-messaging tool: because a user’s primary identifier is a mobile number, people are susceptible everywhere and at all times to anonymous harassment by other chat-group members.

On Facebook, when you see a posting, you can, with a click, learn about the person who posted it and judge whether the source is credible. On WhatsApp, with no more than a phone number and possibly a name, there is no way to know the source or intent of a message. Moreover, anyone can contact users and use special tools to track them. Imagine the dangers to children who happen to post messages in WhatsApp groups, where it isn’t apparent who the other members are; or the risks to people being targeted by hate groups.

Facebook faced a severe backlash when it was revealed that it was seeking banking information to boost user engagement in the U.S. In India, it is taking a different tack, adding mobile-payment features to WhatsApp. This will dramatically increase the dangers. Everyone with whom a user has ever transacted will be able to harass them, because they will have the user’s mobile number. People will be tracked in new ways.

Facebook is a flawed product, but its flaws pale in comparison to WhatsApp’s. If these were cars, Facebook would be the one without safety belts — and WhatsApp the one without brakes.

That is why India’s technology minister, Ravi Shankar Prasad, was right to demand, this week, that WhatsApp “find solutions to these challenges which are downright criminal and violation of Indian laws.” The demands he made, however, don’t go far enough.

Prasad asked WhatsApp to operate in India under an Indian corporate entity; to store Indian data in India; to appoint a grievance officer; and to trace the origins of fake messages. The problems with WhatsApp, though, are more fundamental. You can’t have public meeting spaces without any safety and security measures for unsuspecting citizens. WhatsApp’s group-chat feature needs to be disabled until it is completely redesigned with safety and security in mind. This on its own could halt the carnage that is happening across the country.

Lesson from Germany

India — and the rest of the world — also need to take a page from Germany, which last year approved a law against online hate speech, with fines of up to 50 million euros for platforms such as Facebook that fail to delete “criminal” content. The E.U. is considering taking this one step further and requiring content flagged by law enforcement to be removed within an hour.

The issue of where data are being stored may be a red herring. The problem with Facebook isn’t the location of its data storage; it is, rather, the uses the company makes of the data. Facebook requires its users to grant it “a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content” they post to the site. It assumes the right to use family photos and videos — and financial transactions — for marketing purposes and to resell them to anybody.

Every country needs to have laws that explicitly grant their citizens ownership of their own data. Then, if a company wants to use their data, it must tell them what is being collected and how it is being used, and seek permission to use it in exchange for a licensing fee.

The problems arising from faceless corporate pillage can be solved only by enforcing respect for individual rights and legal accountability.

Vivek Wadhwa is a Distinguished Fellow at Harvard Law School and Carnegie Mellon’s School of Engineering at Silicon Valley. He is the author, with Alex Salkever, of “Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain — and How to Fight Back.” Follow him on Twitter @wadhwa.

This article was published with permission from the author.

Hate Crime or Hate Speech: Report Here to Be Heard


India Currents is committed to tackling the uneven documentation of hate crimes and hate speech by partnering with New America Media, a leader in promoting the work of ethnic media organizations.

Since the 2016 election, there has been an alarming increase in reports of hate incidents around the country. Reports range from vandalism and hate-fueled graffiti to physical attacks and shootings.

The reports come amid heightened fear and anxiety within immigrant and minority communities, fueled by the rhetoric of the campaign and by the statements and policies of the current administration. Even amid this fear, it is vital to know what course of action to take when such an incident happens to you or a loved one.

The Documenting Hate Project is spearheaded by the not-for-profit news outlet ProPublica, which has created the form below to allow witnesses or victims to come forward and report their experience. Reports will be verified before entering a national database that will be made available, with privacy restrictions, to newsrooms and civil rights organizations across the country. The form is not a report to law enforcement or any government agency.