Tag Archives: Fake News

Fake News, Aziz Ansari & The Vote

In the run up to the 2016 presidential election, a tweet featuring popular Indian American actor Aziz Ansari urged voters to cast their vote from home. The photoshopped image showed Ansari, star of Master of None and Parks & Recreation, holding a sign that said “Save Time. Avoid the Line. Vote from Home.”

It’s illegal to vote from home or online in a US election, but that, of course, did not deter the Russian hackers behind the ad, who used Twitter and Facebook to spread misinformation about the 2016 election.

Did some people tweet in their vote? Twitter did not say — not even when a Congressional committee eventually opened an investigation into foreign interference in the 2016 presidential election, as fake news surged unchecked on social media platforms, Twitter and Facebook included.

Four years later, we are in another contentious election cycle. And the fake news machinery rolls on, brazenly manipulating a divided electorate with tales that range from the silly to the more serious.

In a post that went viral, President Trump recently retweeted a link titled “Twitter Shuts Down Entire Network To Slow Spread Of Negative Biden News” from the Babylon Bee, a site that openly admits to running “Fake News you can trust” – the tagline on its Twitter page. Sometimes the truth isn’t obvious even when it stares you in the face!

Absurd news stories from the conservative Babylon Bee and its left-leaning counterpart, The Onion, often get significant clicks and shares for their satirical takes on current events. But they are distinct from the fringe ‘countermedia’ outlets that produce stories far more insidious and dangerous to democracy.

What’s different about the current crop of fake news protagonists is that they’re not just distant, foreign ‘troll factories’ igniting discontent among voters in the US. A University of Colorado study of Facebook and Twitter users in America reports that people at the ideological extremes in this country are likely to push misleading stories into the mainstream via social media.

Fake news instigators are unleashing a wave of misleading ads and false news to sow unrest among voters. But what’s more concerning is that bad actors are weaponizing social media, with far more dangerous consequences.

Axios reported that at least 11 Congressional nominees have expressed support for QAnon, a conspiracy theory cult that has propagated bizarre stories through its Reddit and other social media accounts – like the claim that the coronavirus was created by the ‘deep state’, and the notorious ‘Pizzagate’, which ended with an armed vigilante storming a neighborhood pizzeria.

This election season, purveyors of fake news are adopting devious tactics to spread misinformation and disinformation to interfere with the election, intimidate voters and suppress the vote.

Speakers at an October 16 Ethnic Media Services briefing shared their perspectives on the intent behind messaging that’s being fabricated to confuse and disenfranchise voters.

Cameron Hickey, Jacqueline Mason and Jacobo Licona

“It doesn’t have to be false to be a problem,” said Cameron Hickey, Program Director of Algorithmic Transparency at the National Conference on Citizenship (NCOC). In fact, the fear mongering in conspiracy theories is designed to make recipients scared, angry, or self-righteous, and to provoke changes in behavior – like the aforementioned gunman in the ‘Pizzagate’ incident.

With regard to the upcoming election, said Hickey, the most ‘concerning’ thing is talk of an impending ‘civil war’ appearing in messaging from both sides of the political spectrum. Warnings to voters to be prepared for armed conflict if the election results don’t go their way are “seeding the ground for potential violence,” he warned.

Information about mail-in and absentee ballots, or about when, where, and how people can vote, is embedded in messaging that may be (intentionally or unintentionally) misleading. A classic example of this, said Hickey, is the claim that “Republicans can vote on Tuesdays and Democrats vote on Wednesdays.”

Jacqueline Mason, senior investigative researcher at First Draft, shared a picture of Kamala Harris, the Democratic VP nominee, that went viral on social media. The photoshopped image showed Harris against images of black men she had allegedly imprisoned beyond their release dates, though upon closer inspection, the background appears to be composed of repeated images of the same six men.

What does this discordance say about our culture with its reliance on digital echo chambers and crumbling trust in mainstream media and government?

“We are no longer having conversations about the issues or the identities of the politicians running for office but exaggerating narrow bands of their perspective and amplifying them in ways that distort reality,” said Hickey.

Not only is it becoming harder to distinguish between what’s true and what isn’t in the false narratives being peddled on social media, but it appears that civil discourse, along with a responsibility to the truth, is also slipping from our collective grasp.


Meera Kymal is a contributing editor at India Currents


Ro Khanna, Big Tech & the 2020 Elections

Congressman Ro Khanna participated in a telebriefing on “The Role of Silicon Valley in the 2020 Elections” on Tuesday, November 12, and answered questions from diverse ethnic media reporters on topics ranging from technology’s role in the 2020 elections and privacy issues, to the gig economy.

Vandana Kumar, Publisher, India Currents, moderated a Q&A session that gave the congressman an opportunity to share his perspectives as a key lawmaker representing Silicon Valley.

Ro Khanna (California’s 17th district) sits on the House Armed Services, Budget, and Oversight and Reform Committees, and is the first Vice-Chair of the Congressional Progressive Caucus.

He talked at length about the role of giant tech companies and the fight against fake news. Khanna argued that social media companies have a major responsibility to be vigilant and voluntarily police their platforms to prevent hate speech, viral false ads, and election interference; blatant false speech or disregard for truth is not protected by the first amendment, Khanna said.

Khanna admitted he was concerned by Mark Zuckerberg’s views on fake news, but stressed that the “Facebooks of the world” aren’t the gatekeepers of blatantly false speech; that role belongs to an independent regulatory agency. Rather than an outright ban, a thoughtful regulatory framework to establish reasonable standards that require political ads to remove falsity, would better protect first-amendment traditions, he said.

Khanna is working with Congressman Kevin McCarthy on a bill that would allow social media companies to monitor for and remove “bad actors” engaged in election interference.

Though he hopes that these bills will be passed before the 2020 election, Khanna claimed that the hostile tone of political discourse and cable news should share the blame for false news. With the upcoming elections, Congress is concerned about security on social media platforms, he said, and tech companies need to do the right thing to avoid a repeat of 2016.

The congressman commented that healthcare is another issue getting attention in Congress, which is trying to lower the cost of prescription drugs, preserve the Affordable Care Act, and lower premiums.

Congressman Ro Khanna

Khanna, who is co-chair of Bernie Sanders’s 2020 presidential campaign, described the Medicare for All bill he is co-sponsoring with Congresswoman Pramila Jayapal (Washington’s 7th congressional district). The bill would give states the flexibility to use federal funding for Medicare and Medicaid when implementing the single-payer system, and includes a caveat requiring states to reach 100% coverage in five years. A tax on corporations would pay for the bill, said Khanna, who proposes to cover any shortfall with supplemental federal matching funds.

On big tech’s role in protecting privacy and consumer data, Khanna referred to his proposed Internet Bill of Rights, which requires an individual’s consent before their data is collected or transferred, and grants the right to know how it’s used. Such reforms can protect users’ data from being manipulated against their interests and protect privacy, Khanna pointed out, but what’s really needed is well-crafted regulation that keeps pace with technological change.

As the Supreme Court determines the fate of DACA recipients, Khanna expressed his opposition to ending DACA; he thinks Congress should act to offer protections to Dreamers. He also supports AB 5, California’s effort to regulate the gig economy. Gig economy workers should be treated as employees and get the same benefits and rights; with universal healthcare, contends Khanna, people won’t have to rely on their jobs for medical care.

Khanna agreed that affordable housing remains a challenge, though he acknowledged “constructive” private sector funding from Apple and Google towards affordable housing. He emphasized that low income housing needs additional federal investment and affordable building tax credits to expand. Khanna stressed that what would make a difference are more temporary shelters and services for the homeless, and intervention programs to help with rent and mortgage payments, as exemplified by a successful pilot program in Santa Clara.

The telebriefing, sponsored by India Currents in partnership with Ethnic Media Services, was part of the ‘Conversations with Candidates’ series initiated by India Currents to expand ethnic media news access to elected officials and presidential candidates. The event was attended by reporters from Silicon Valley Innovation Channel – DingDingTV, EPA Today, Phillipinenews, Chinese News, The American Bazaar, California Black Media and India West.

Meera Kymal is a contributing editor to India Currents

Force Facebook to Shut Down WhatsApp

Defects in the design of Facebook’s WhatsApp platform may have led to as many as two dozen people losing their lives in India. With its communications encrypted end-to-end, there is no way for anyone to moderate posts; so WhatsApp has become “an unfiltered platform for fake news and religious hatred,” according to a Washington Post report.

WhatsApp is not used as broadly in the U.S. as in countries such as India, where it has become the dominant mode of mobile communication. But imagine Facebook or Twitter without any filters or moderation — the Wild Wild West they were becoming during the heyday of Cambridge Analytica. Now imagine millions of people who have never been online before becoming dependent on and trusting everything they read there. That gives you a sense of what kind of damage the messaging platform can do in India and other countries.

Earlier this month, India’s Ministry of Electronics and Information Technology sent out a stern warning to WhatsApp, asking it to immediately stop the spread of “irresponsible and explosive messages filled with rumours and provocation.” The Ministry said the platform “cannot evade accountability and responsibility specially when good technological inventions are abused by some miscreants who resort to provocative messages which lead to spread of violence.”

WhatsApp’s response, according to The Wire, was to offer minor enhancements, public education campaigns, and “a new project to work with leading academic experts in India to learn more about the spread of misinformation, which will help inform additional product improvements going forward.” The platform defended its need to encrypt messages and argued that “many people (nearly 25 percent in India) are not in a group” — in other words, only 75 percent of the population is affected!

One of the minor enhancements WhatsApp offered was to put the word “Forwarded” at the top of such messages. But this gives no information about the source of the original message, and even highly educated users could be misled into thinking a source is credible when it isn’t.

WhatsApp owner Facebook is using the same tactics it used when the United Nations found it had played “a determining role” in the genocide against Rohingya refugees in Myanmar: pleading ignorance, offering sympathy and small concessions, and claiming it was unable to do anything about it.

Here is the real issue: Facebook’s business model relies on people’s dependence on its platforms for practically all of their communications and news consumption, setting itself up as their most important provider of factual information — yet it takes no responsibility for the accuracy of that information.

Facebook’s marketing strategy begins with creating an addiction to its platform using a technique that former Google ethicist Tristan Harris has been highlighting: intermittent variable rewards. Casinos use this technique to keep us pouring money into slot machines; Facebook and WhatsApp use it to keep us checking news feeds and messages.

When Facebook added news feeds to its social-media platform, its intentions were to become a primary source of information. It began by curating news stories to suit our interests and presenting them in a feed that we would see on occasion. Then it required us to go through this newsfeed in order to get to anything else. Once it had us trained to accept this, Facebook started monetizing the newsfeed by selling targeted ads to anyone who would buy them.

It was bad enough that, after its acquisition by Facebook, WhatsApp began providing the parent company with all kinds of information about its users so that Facebook could track and target them. But in order to make WhatsApp as addictive as Facebook’s social-media platform, Facebook added chat and news features to it — something it was not designed to accommodate. WhatsApp started off as a private, secure messaging platform; it wasn’t designed to be a news source or a public forum.

WhatsApp’s group-messaging feature is particularly problematic because users can remain anonymous, identified only by a mobile number. A motivated user can create or join unlimited numbers of groups and share hate-filled messages and fake news. What’s worse is that message encryption prevents law-enforcement officials and even WhatsApp itself from viewing what is being said. No consideration was given in the design of the product to the supervision and moderation necessary in public forums.

Facebook needs to be held liable for the deaths that WhatsApp has already caused and be required to take its product off the market until its design flaws are fixed. It isn’t making its defective products available only to sophisticated users who know what they have signed up for; it is targeting people who are first-time technology users, ignorant about the ways of the tech world.

Only by facing penalties and being forced to do a product recall will Facebook be motivated to correct WhatsApp’s defects. The technology industry always finds a way of solving problems when profits are at stake.

Vivek Wadhwa is a Distinguished Fellow at Harvard Law School and Carnegie Mellon’s School of Engineering at Silicon Valley. This piece is partly derived from his new book, “Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain — and How to Fight Back”. This has been reprinted with his permission.