Who owns the AI tools journalists use? A new study exposes a dangerous transparency gap

“To maintain the integrity of journalism in the age of AI, it is essential to understand who has a stake in these companies,” says author Sydney Martin
The offices of Scale AI, a government-funded AI cluster based in Montreal, Canada. REUTERS/Evan Buhler

13th December 2024

More than two-thirds of the 100 companies behind the AI tools most commonly used and recommended by journalists are insufficiently transparent about their ownership, finances and other critical data, according to a new report published by the Media and Journalism Research Center.

The study explores the private ownership of AI tools regularly used by journalists for fact-checking, content generation and news-gathering, and the influence that ownership may have on how those tools work. It found that full financial transparency was particularly scarce: just 24 of the 100 companies covered shared information about their revenues and only 43 disclosed their total funding.

“In order to maintain the integrity of journalism in the age of AI, it is essential to understand who has a stake in these AI tool companies and how AI is being used by the media. This will ensure the protection of consumers, democracy, and truth,” says the report’s executive summary.

The report, authored by Sydney Martin and edited by Marius Dragomir, studied 100 AI tools used by newsrooms around the world, from Dataminr and Notion to Grammarly and Jasper AI.

Martin evaluated each against 12 primary and 5 secondary criteria to better understand ownership and how transparent these companies are about investors, finances, location and more. The criteria included the location of the company’s headquarters, the total funding raised and information about the lead investor or investors. Each company’s official website and the business databases CrunchBase and PrivCo were used to establish what data is shared publicly.

The numbers in context

I spoke recently to Martin about the rationale behind the report. She said she was interested in the transparency gap perpetuated by news media using AI: “When looking at AI use in the media, I found it interesting that a lot of news outlets acknowledge AI and that they’re using it, but don’t tell us what exactly they are using.”

According to the study, just 33% of the AI tool companies assessed show sufficient transparency, fulfilling 9 of the 12 criteria established by the report. Where tools lack data on finances, ownership and other basic information, the influence of stakeholders or investors on the development of that tool is difficult to ascertain, the report says.

Sydney Martin, author of the report.

Reflecting on the research, Martin said there were examples of companies, typically larger ones such as Anthropic, the maker of Claude, that did, encouragingly, pay a lot of attention to transparency. Several had transparency policies and privacy sections clearly visible in their websites’ navigation.

“Part of me wonders if it’s a performance to meet social expectations or if they are actually worried about [transparency],” said Martin. “I wouldn’t say that it’s completely out of their minds, but I also wouldn’t say that they are doing it with the best intentions.” 

The study features a case study on Get Clarity, a deepfake detection tool whose website shares details of a democracy advisory board that is not currently listed on CrunchBase or PrivCo. The study rated Get Clarity as not transparent. Its CEO previously served with a branch of the Israeli military, and the study examines how these ownership ties might affect the tool’s use for verifying content from the Israel-Hamas conflict.

Most tools studied – 74 out of 100 – shared an identifiable location for their owner’s operations. Of these, 47 are headquartered in North America, including 43 in the US. Only 19 are based in Europe, 5 in Asia and 3 in the Middle East. According to the report, which includes a full database of the companies, just three of the European companies were considered adequately transparent.

How AI can obscure the Global South

Martin’s study supports previous reporting on a Global North-South AI divide, as the overwhelming majority of owners were headquartered in Europe or the US. Just five of the companies with identifiable locations were based in the Global South, and none was identified as based in Latin America. If most of the data used to build and train these AI tools comes from Western sources, this has clear implications for the Global South journalists using them, Martin said.

“The way in which AI tools work is often unsuitable culturally or in developmental terms to the Global South and it can obscure some of the truth of the experiences in the Global South,” she said.

While the study suggests that AI tools for article writing, editing, translation, and summarisation are most popular with journalists, using those tools to report on the Global South may be inadvisable, says Martin: “The tool is not necessarily going to search for sources that have been buried under the more popular westernised or Northern sources.” 

Does ownership influence AI models?

Most companies featured in the research could be classed as small businesses or start-ups. The study suggests some interesting patterns in investment and ownership, with just 48 of the tool owners having publicly listed investors. Start-up accelerator Y Combinator was named as an investor in eight of these companies, and Google invests in four.

Understanding who funds the development of AI tools used by journalists and newsrooms is important: if humans can introduce biases into AI tools during development, any potential influence from those who pay the programmers and developers should be questioned too, said Martin.

“People who make the algorithms can control these AI tools,” she said. “But the people who pay those coders, to a certain extent, also control the truth and control the public’s perception of reality,” a point especially pertinent when using AI for fact-checking and research.

The centre suggests this study is the first step in a larger initiative “aimed at fostering transparency and decision-making regarding the use of AI tools in media and journalism.” It also recommends a longitudinal study of the 100 companies identified to better understand growth, investment and ownership trends in the AI industry. 

“I’d like AI companies to be willing – without having to be asked ideally – to be transparent,” said Martin. “Tell us where your headquarters are, who your investors are, who’s accountable. Making that information available is important and very simple.”

Newsrooms could benefit from an internal culture of transparency with regard to AI, suggests Martin, for example by being clear about where the AI they use comes from and how it was developed. For journalists, she urges the application of “human judgment” before and during AI use: “Ask where the information is coming from and whether it is going to influence your journalism. What are the potential biases the AI could proliferate?”
