
Media 2070 in Conversation with TIME on AI Narrative Harm and Media Reparations

The feature highlights Media 2070’s leadership in addressing growing concerns around generative AI, narrative harm, and the need for accountability frameworks in media and technology.



By: Christelle Smith & Dayana Melendez, Publicist


Media repair includes technology that doesn't cause harm, according to Media 2070. In a recent interview with TIME magazine tech correspondent Andrew Chow, Senior Director Anshantia "Tia" Oso explored the many forms of harm AI is causing across the nation. As the expansion of AI continues to outpace platforms' ability to regulate it, the coverage examines how rapidly evolving generative technologies are creating new forms of narrative harm, particularly affecting Black women, while little is being done to stop it.


While AI is framed as a tool for progress and efficiency, the TIME feature reveals a more complex reality. Hyper-realistic AI-generated content is reproducing racial stereotypes and misinformation, with synthetic depictions of Black women often appearing in exaggerated, sexualized, or entirely fabricated scenarios. At the same time, Black women are disproportionately impacted by layoffs and misrepresentation.



One of the most alarming trends highlighted in the interview with TIME's Andrew R. Chow is the spread of AI-generated medical disinformation targeting Black audiences directly.

"There are several accounts, especially on TikTok, that will depict what is supposed to be a Black doctor or sometimes a Black nurse or other type of health professional," said Anshantia "Tia" Oso, senior director of Media 2070. "They'll [share] information about health and wellness, telling people that there are certain vaccines that they shouldn't be getting, sometimes selling supplements, even claiming to cure cancer."



The fabrication goes even further. AI-generated personas — from fake Black grandmothers sharing folk wisdom to AI soul singers appearing on Spotify charts — are proliferating across platforms, raising urgent questions about digital identity and cultural exploitation.

"That's absolutely a form of digital blackface," Oso said. "It's not lost on me that some of the first artists we're seeing on Spotify charts are Black women that are supposed to be R&B singers and soul singers. There are individuals that want to profit from Black culture and Black cultural influence that are not Black."


These overlapping harms raise urgent questions about who truly benefits from AI expansion and who subsidizes it. The result is a growing need for media institutions to determine how they will respond.


Headshot of Media 2070 Senior Director, Anshantia Oso

“Who will take accountability?” said Media 2070 Senior Director Anshantia Oso. “AI is not simply reflecting bias. It is manufacturing it at the expense of Black women.”


Oso also pointed to a critical and often overlooked gap: the technology is spreading far faster than people's ability to understand it.


"We have this colloquialism in the English language about seeing is believing — now seeing literally isn't believing," she said. "And media literacy is not a part of most public school curriculums."

The interview also revealed a striking double standard in how major tech platforms operate globally. While companies like Instagram and TikTok claim they cannot regulate AI-generated content in the United States, in countries like Norway those same companies comply with strict disclosure requirements.



AI expansion also raises broader questions about who benefits from technological innovation and who bears its costs. Data centers powering AI systems are frequently located near Black communities, contributing to pollution and environmental strain, while synthetic content exploits Black likeness without consent.


The placement is particularly timely as TIME has increasingly focused on the cultural and political implications of artificial intelligence. The publication recently recognized AI Agents as its “Person of the Year,” bringing these conversations to a wider mainstream audience. Featuring Media 2070’s perspective helps introduce Media Reparations language into that conversation.


TIME magazine release of "Person of the Year" 2026

The placement emerged from strategic outreach by MMM, as the team adapted a proposed op-ed into a reporter pitch focused on AI and narrative harm. By connecting Media Reparations to ongoing reporting on generative AI and its societal impact, the approach positioned Media 2070 as a thought leader offering standards and accountability frameworks as AI reshapes journalism and public perception.


Looking Forward


As generative AI continues reshaping media systems, conversations about media ethics, narrative equity, and structural reform are becoming increasingly urgent. Media 2070’s work continues to push for structural solutions that ensure media systems serve communities rather than harm them.



"We're not just against the bad things," Oso said. "What are the ways that AI could actually be used to improve people's lives? We're not seeing that right now."


For Media 2070, this moment is exactly why the work exists: by 2070, the story of how we responded to this technology will already have been written. The goal is to make sure Black communities have a hand in writing it.


