At India Currents, we’re exploring how tools like ChatGPT, Gemini, and other GenAI platforms can support our storytelling, workflow, and community connection. We believe AI is a powerful tool that small, nonprofit media outlets like ours can leverage to maximize our impact.
We recognize that much is unclear about how AI will continue to shape our industry, and our AI usage policies will have to adapt as we learn more about the benefits and consequences of AI workflows in news.
We intend to maintain a high ethical standard by using AI responsibly and transparently as we adapt to this technology. Our editorial content is our strength; we offer a unique blend of original reporting and articles contributed by our diverse community members.
This policy helps us use AI responsibly, creatively, and transparently—without losing sight of what makes our work human.
Our Guiding Principles:
Everything we publish will live up to our standards of verification.
Humans First
AI is here to help, not replace. Every piece of AI-generated content is reviewed, edited, and approved by a real person.
Transparency with You
If AI plays a big role in shaping something we publish—like an article, caption, or report—we’ll let you know.
Editorial Integrity is Core
AI won’t replace original reporting, community knowledge, or editorial judgment. We use it to make our workflows efficient.
No Hallucinations
AI-generated information is rigorously fact-checked. We do not publish AI outputs that include false information, fabricated quotes, or unverifiable claims.
Respect for Source Material
If we use AI to help summarize or reframe press releases, we never pass them off as original reporting, and we will always tell you when we do so. When we adapt external material, we aim to add value, context, and clarity, always with proper credit to the original source.
Privacy and Security
We take data security and editorial integrity seriously. Whether we’re using AI tools or publishing stories, we’re mindful of protecting sensitive information—our own, our sources’, and our audience’s. Our relationship with readers is built on trust, so we never input confidential data into public AI platforms.
As technology advances and opportunities to customize content for users arise, we will be explicit about how your data is collected and how it is used to personalize your experience, and we will protect it in accordance with our organization's privacy policy.
Exploration
With the previous principles as our foundation, we embrace exploration and experimentation. We're investing in newsroom training, internal and external, so every staff member is knowledgeable about generative AI tools.
How We’re Using AI:
We’re already using AI tools in a few practical ways that help offload some tasks so we have more time for impactful journalism:
Across the Team:
We have developed a few automations in which GenAI is one step in the process:
- Writing first drafts of press release recaps
- Summarizing in-depth, informational, original pieces into takeaways
- Writing first drafts of social media copy in a consistent voice
Audience service:
We're also using it as a companion as we experiment across platforms and look for ways to connect better with you. Our work in AI will be guided by what is useful to you as we serve you in the following ways:
- Transcribing interviews
- Analyzing reader preferences, predicting trends, optimizing content strategies, and streamlining our fundraising campaigns
- Experimenting with a CustomGPT to generate abstract, neutral, tone-sensitive illustrations for featured images in select categories of stories
- Using certain other image-generation tools that may have AI workflows integrated into them
- Brainstorming ideas for Instagram and TikTok
- Using tools like NotebookLM, Gemini, and ChatGPT as research, analysis, and strategy assistants
- Searching for and assembling data
- Using tools like Canva, Adobe, Yoast, Grammarly, and Everlit to enhance the presentation of original content
It’s always a starting point, not the final say.
What We Don’t Use AI For
- We do not use AI when it could mislead or cause harm.
- We do not input any sensitive, internal, or unpublished information into public AI platforms, period. Even if a platform claims strong privacy protections, we treat all editorial drafts, source materials, and team communications as off-limits for AI input unless we have verified that the tool meets our security standards.
- We do not publish content that hasn’t been reviewed by a human. All articles are reviewed by human reporters, writers, and contributors, and are edited and vetted by our editorial team.
- We do not use AI to create fake quotes or mimic cultural identities.
- We do not create images that are insensitive or that misrepresent the context of a story.
What We've Learned So Far:
- AI is a tool, not a shortcut. It helps speed things up but still needs human judgment.
- Context and nuance matter. Especially when it comes to identity, community, and culture.
- It’s okay to experiment. We share our learnings with the team so we all grow together.
- When in doubt, say so. If AI helped shape our final output, we will be upfront about it.
- We should not use generative AI as if it were a search engine like Google.
- It's fine to ask a publicly available large language model to research a topic, but you'll want to independently verify every fact it returns. So be wary.
- Do not assume its responses are factually correct
- Do not input private or sensitive data. We stick to AI tools and settings that don't store or reuse our data. If you're not sure whether a tool is safe, ask before using it, and never input confidential information into public tools.
If you’re one of our contributors:
- Include a statement about whether and how AI was involved with your submissions.
- We’re building an internal review rubric for evaluating content that will include questions about transparency.
- We audit your submissions to check for AI-generated errors or patterns of inaccuracy.
- We're regularly training the team to recognize red flags in AI-generated content.
How Often Will We Update This?
We’ll revisit this policy every few months—or sooner if something big changes. Our goal is to keep it practical and evolving with how we actually work.
To our readers:
Would you like to learn more about GenAI in general? Would you like to express your views on GenAI or our usage of it? Would you like to know exactly which tools and processes we use? Let us know: prachi@indiacurrents.com