The Ghost of AI
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This single-sentence, Armageddon-style statement was released last week by the non-profit Center for AI Safety. It was signed by the who’s who of AI, including OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and AI’s “godfather” Geoffrey Hinton, who recently quit Google to warn the world about the clear and present danger of unchecked AI development.
Ever since Microsoft-funded OpenAI unleashed its Large Language Model (LLM) chatbot ChatGPT on us late last year, clarion calls for both the adoption and the regulation of AI models have gained unprecedented momentum. While AI startups have become the new gold rush after crypto’s spectacular collapse, the creators of AI themselves are crying out to be disciplined.
At a recent Senate hearing on AI, industry leaders, including OpenAI’s Altman (gung ho and wary at the same time), called on the government to act immediately and set up a regulatory body to create guardrails along the lines of the EU’s AI law, or to require nutrition-style labels with full disclosure of the datasets used to train AI. Our failure to check the ills of social media left lasting scars, and the government is determined to keep up this time around. Federal Trade Commission chair Lina Khan has warned of AI turbocharging scams and frauds.
Artificial intelligence seems to be a juggernaut we cannot stop (if we don’t build it, every other country will); it needs to be put on a leash.
The Good vs Evil of AI
Our love-hate relationship with generative AI swings between sheer enthusiasm and mass extinction: we will either solve nuclear fusion and climate change or be eliminated by machines. AI is expected to revolutionize education (Khan Academy’s Sal Khan is a fan), increase productivity and unleash creativity exponentially. It will also threaten our economies and security with huge job losses, plagiarism, deep fakes, misinformation, bias, and manipulation.
In short, artificial intelligence will touch every critical aspect of our lives — the economy, healthcare, the arts, marketing, education, and journalism, to name a few.
“I see this as a kind of tsunami that is coming for a large chunk of the US economy. And we are on the beach at the forefront. And we will be the first to be swept away by this thing,” said Michael Nuñez, editorial director at the San Francisco-based tech news website VentureBeat. He was speaking at an animated panel on AI in journalism organized by the San Francisco Press Club.

Nuñez is an AI aficionado and is already deploying AI in his newsroom. While the onus of fact-checking the news will always lie with journalists, he expects much of the discomfort with using AI for news (should the reader know you are using AI?) to disappear in under a year. “By the end of this year, I think generative AI will touch every part of the process of news, from ideation to headline generation to story editing,” he predicted. “I think we’re only a year or two away from the first masterpiece that is created using artificial intelligence. That could be a Pulitzer Prize-winning report, that could be a piece of music, it will likely be bold. I think that this stuff is extremely powerful for creators.”
Also on the panel was Chris Matyszczyk, advertising expert and ZDNet columnist, who is no fan of the rapid progression of AI. He told me that AI chatbots benefit no one but the technology companies like Google and Microsoft that build them. “If writing AI prompts is your talent, then you might as well give up,” he said. He also warned that the fallible human mind is prone to believe that the machine is a human being sharing facts.
Matyszczyk doesn’t see a clear way to combat the AI genie. “I think it’s gone too far.”
Cautious optimism in newsrooms
Most news organizations and their reporters, however, are testing the AI waters for efficiencies like data analysis and standard-format writing, straddling the fine line between outright adoption and outright denial. Even though newsroom layoffs attributed to the adoption of AI have started making headlines, journalists on the panel emphasized the need for more on-the-ground reporting to battle the barrage of AI-generated news.
“We in community media straddle multiple roles in the debates over AI,” S. Mitra Kalita, journalist and CEO and co-founder of the ethnic media network URL Media, said in an email to India Currents. She was not on the AI and journalism panel and was responding to a question about the effects of AI on ethnic media. “On the one hand, we need to make sure our newsrooms are not left behind and can transition to new tools, efficiencies, and ways of working. On the other, we must serve and center our communities, many of whom have already been displaced due to automation and could stand to lose again,” she said.
AI mirrors our bias
Ingrained bias and discrimination were a recurring theme at the Ethnic Media Services briefing on AI on May 5.
Chris Dede, Senior Research Fellow at the Harvard Graduate School of Education and Associate Director of Research for the National AI Institute for Adult Learning and Online Education, said many of the causes of AI bias, such as poor datasets or biased algorithms, can be fixed with work, but the root cause lies elsewhere. “At the end of the day, AI is like a parrot. But it’s also like a mirror. It’s a mirror that we hold up to the Internet, and it reflects back what it sees about our society. And a biased society will always produce a biased image in the mirror. So we have to not only fix AI, we have to fix ourselves,” he said.
Experts on the EMS panel consistently reminded their audience that AI is only a machine running on algorithms, “just software, a program.”
Sean McGregor, founder of the AI Incident Database, a non-profit documenting incidents of harm caused by AI, said that the reach of AI is extensive, affecting even societies that are not developing the technology, which makes the system more prone to dangerous biases. “We’re making these systems that are very brittle… And this is where we desperately need to… look across languages, cultures, and geographies to collaborate and ensure that AI benefits all. We’re all out there experiencing the effects of AI systems and without bringing that experience back into one place we’re in trouble,” he said.
What humans can do and AI can’t
Arguing for an overhaul in our education system (“There is no workplace anywhere in the world…where people earn a living by factoring”), Chris Dede highlighted the need to focus on the “human side of the equation” and upskill to do what AI cannot do. “I’m hearing people say now I don’t need to learn any other language other than my primary language because we’ve got, you know, machine translation…Nothing could be further from the truth,” Dede said.
“A language is not just a different set of sounds and symbols for expressing the same thing. Languages are ways of thinking. And if you know two languages, you have two ways of thinking, two different perspectives, perspectives that are shaped by the context and the culture that created the language. And if you’re translating, you are analyzing something and using both languages in the process, you have a very valuable additional perspective,” he said.
So how does the common person approach AI?
Doubt Everything
Hector Palacios, an AI research scientist at ServiceNow Research, said at the EMS briefing: “Doubt everything you read or see, doubt that it is coming from a human or somebody’s trying to get in your head.”
It’s Not Magic
“These (AI) are engineered systems produced by people,” said Sean McGregor of the AI Incident Database. “Peel back the mystery, play with the technology, understand its boundaries of performance. And don’t fail to pay attention. The world is going to be changing very quickly.”
Do Better Than AI
As convincing as they may sound, AI chatbots do not comprehend what they write. “Don’t think that an AI actually understands what it’s telling you in the way that even the most ignorant human being would understand what they’re telling you,” said Harvard research fellow Chris Dede. However, he warned, “Do understand when you enter the workplace, that if you can’t do things better than an AI can do them, you’re not going to have a job.”
Humans are resilient
On a more optimistic note, when asked after the journalism panel whether aspiring reporters should rethink how they prepare for an AI-influenced career, AFP technology correspondent Julie Jammot said, “I don’t think journalism students need to change how they learn about the profession fundamentally. Journalists will still have to use their skills to get out there, investigate and report. That won’t change.” She added, “I derive hope from the story of Socrates and how he thought writing would weaken our memory. It did, but we survived.”