On October 22nd, Hemaiah Center for Human Rights had the privilege of attending The
Trust Conference 2024, a global gathering of tech and legal professionals and human
rights activists in London. This year’s discussions highlighted the legal and societal
implications of integrating Artificial Intelligence (AI) into media, governance, and
business, with a focus on the evolving challenges of misinformation, AI regulation, and the
defence of press freedom. For law firms, these issues underscore the need for robust
legal frameworks to protect media freedom and ensure ethical AI adoption. For
Hemaiah Center, the focus on collaborative efforts to protect the rights of journalists and
legal activists was particularly impactful, as it reinforced the Center’s approach of
building partnerships across borders to protect such individuals in areas of
conflict, particularly in the Arab world.

Key Sessions and Takeaways
Session 1 – The supreme election year: combating disinformation to defend
democracy
One of the most urgent topics discussed at the conference was the increasing fears
regarding the role of AI in spreading misinformation, particularly in the context of
elections. Countries representing nearly half the world’s population will hold elections in
2024, making it a critical time for global democracy.
The panel discussing this topic featured Jeff Allen, Co-founder and Chief Research
Officer at the Integrity Institute, Ritu Kapur, CEO of Quint Digital Limited, and Claire
Leibowicz, Head of AI and Media Integrity at Partnership on AI. With a special focus on
India, the US, and the evolving role of AI in misinformation, the panel illuminated both
the challenges and potential solutions in safeguarding democratic processes.
AI in Indian Elections: A Misleading Focus
Ritu Kapur’s insight on AI in Indian elections provided an important reality check.
Although there has been a media frenzy in 2024 about AI’s role in spreading
misinformation, Kapur argued that in India the real perpetrators are legacy media
outlets and politicians themselves. She holds these entities responsible as the primary
conveyors of hate and misinformation, and contends that distribution mechanisms, such as
WhatsApp, which lacks fact-checking capabilities, pose the greatest risk. This raises
complex legal questions for firms working with governments or media, as AI governance
intersects with issues such as press freedom and data privacy.

Further, Kapur highlighted the Indian government’s increasing exertion of control over
the flow of information, passing laws that allow authorities to take down content deemed
defamatory or false without transparency. Despite efforts by platforms like Meta to
collaborate with fact-checkers, the political climate hinders meaningful progress.
Contrary to fears, AI has been harnessed for positive election uses in India. AI tools
have been used to translate political speeches into multiple regional languages,
reaching a broader audience and creating more inclusive campaigns. Overall, Kapur
maintains that AI-generated content during elections has often been perceived as
humorous in India, and that serious AI-driven disinformation has yet to be observed.
The US Election Landscape: The Role of Integrity Teams and AI’s Growing Influence
In the US, panellists Claire Leibowicz and Jeff Allen painted a contrasting picture of how
AI is becoming a double-edged sword in democratic processes. Allen pointed out that
one of the key shifts since the 2020 election is the diminished role of “integrity teams”
within major tech companies. These teams, which were designed to reduce the spread
of misinformation, are now sidelined as companies deprioritise their efforts due to a lack
of cooperation from government organisations. For example, Meta’s efforts to
minimise the negative impacts of its products, which began in 2015, have seen a
notable decline.
A disturbing trend Leibowicz pointed out is the rise of AI-generated content, such as
fake celebrity endorsements and manipulated videos. Leibowicz gave the example of a
deepfake video of global pop star Taylor Swift endorsing the Donald Trump campaign,
which in turn prompted Swift to publicly endorse Kamala Harris for fear of
association with Trump’s controversial policies. Leibowicz went on to explain a further
harm of AI-generated video: politicians increasingly invoke “plausible
deniability” to dismiss legitimate footage as AI-generated, sowing further confusion
among voters. This trend underscores the growing speed and ease of spreading false
information, making it even harder for fact-checkers and platforms to keep pace.
Social Media’s Responsibility: Ethics, Distribution, and User Education

A common thread throughout the panel discussion was the fine line between using AI
for fact-checking and ensuring it does not erode public trust in all information. Leibowicz
and Allen warned against over-reliance on AI labels, which could foster a culture of
extreme scepticism. Leibowicz emphasised that platforms need to go beyond simply
identifying AI-generated content; instead, they should ensure that users are
well-informed about the accuracy of the information they consume. Leibowicz’s
organisation has developed a set of values to guide social media companies in handling
media more ethically, calling for transparency in the origin of information. Meanwhile,
Allen highlighted that platforms should focus on amplifying content from trusted, credible
sources that pass rigorous media literacy checks.
India’s approach to social media regulation sparked further debate. Kapur expressed
concerns about freedom of expression, particularly for journalists who have seen their
content demonetised or removed without explanation under arbitrary defamation
laws, and called on social media platforms to provide more protection. Additionally,
Kapur pointed out that India’s community-driven fact-checking efforts on X (formerly
known as Twitter) have met with mixed results, sometimes confusing users, as
politically motivated users can misuse X’s verification processes to appear credible.
Session 2 – Neurorights: Protecting brain data in the age of advanced AI

A brief but pressing topic discussed was the emerging field of neurorights: the
protection of brain data as AI and EEG technologies advance. Avi Asher-Schapiro, US
Tech Correspondent at the Thomson Reuters Foundation, warned that brainwave data
collected through EEG devices could soon be exploited commercially, posing significant
privacy risks. This raises legal concerns for firms advising on data privacy
and AI regulation in emerging industries like neurotechnology.
Asher-Schapiro raised ethical concerns around the use of brainwave data, giving an
alarming example from Dubai, where police have allegedly forced suspects to wear
EEG headsets while being shown crime scene evidence, potentially infringing on their rights.
Further concerns were raised about corporations’ potential misuse of such data, such as
selling it to third parties or law enforcement without consumers’ consent. For example,
23andMe, a DNA ancestry-testing company, has been accused of sharing consumer
data with health insurance companies for profit without users’ explicit
consent. The Neurorights Foundation is working to create regulatory frameworks to
safeguard brain data privacy before these technologies become more widespread.
However, this process has been hindered by companies like Meta and Apple, which are
lobbying to weaken regulatory definitions to enable their product developments.
Session 3 – Safeguarding information: Accessing facts in an AI-driven
world

In an era where AI is rapidly transforming how we access and consume information, its
role in shaping that access was the focal point of a panel featuring
Graham Brookie, Vice President and Senior Director of DFRLab at the Atlantic Council,
Courtney Radsch, Director of the Center for Journalism and Liberty at the Open Markets
Institute, Ginny Badanes, General Manager of Democracy Forward at Microsoft, and
Rasmus Nielsen, Professor in Communication and Senior Research Associate at the
University of Copenhagen and the Reuters Institute for the Study of Journalism.
Speakers discussed how, while technologies such as ChatGPT can provide efficient
summaries and research support, they also risk amplifying biased information or, in
extreme cases, regurgitating state-sponsored propaganda. Thus, ethical standards
must be enforced.
The Rise of AI Chatbots and Access to Information
Badanes noted that AI chatbots like ChatGPT initially struggled to provide real-time data
or recent updates. However, modern chatbots now aggregate information from multiple
sources, thus offering more accurate information and making fact-finding more efficient.
Nielsen presented findings from recent research, noting that while AI chatbots are often
helpful—particularly in contexts like UK election information—there are limitations.
According to Nielsen, about 80% of election-related data provided by AI chatbots is
accurate, as they utilise trusted sources such as the Electoral Commission to synthesise
and present data more efficiently than traditional news platforms. While this may seem
high, the remaining margin of error is concerning, particularly when bots are used as
primary news sources. In many cases, news organisations have blocked AI from
accessing their content, affecting the breadth and depth of information chatbots offer.
This growing tension underscores the challenge of ensuring that AI tools deliver
trustworthy information while respecting the rights of news organisations. Nielsen also
warned that increased reliance on AI could exert pressure on the news industry,
pushing organisations to use AI to cut costs, automate article production, and potentially
tailor content to specific audiences in ways that could distort facts, particularly if
AI-generated content becomes widely accepted without rigorous fact-checking.
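As context for the paragraph above: the blocking Nielsen refers to is commonly implemented through a publisher’s robots.txt file, which well-behaved AI crawlers are expected to honour. The sketch below, using hypothetical rules and a hypothetical URL rather than any real news organisation’s configuration, shows how such directives can be evaluated with Python’s standard library.

    # Minimal sketch of the robots.txt mechanism many publishers use to block
    # AI crawlers. The rules and URL below are hypothetical.
    from urllib.robotparser import RobotFileParser

    rules = [
        "User-agent: GPTBot",   # OpenAI's published crawler user agent
        "Disallow: /",          # block it from the whole site
        "",
        "User-agent: *",        # all other crawlers
        "Allow: /",
    ]

    rp = RobotFileParser()
    rp.parse(rules)

    # The AI crawler is blocked; an ordinary crawler is not.
    for agent in ("GPTBot", "SomeNewsIndexer"):
        ok = rp.can_fetch(agent, "https://news.example/politics/story")
        print(agent, "->", "allowed" if ok else "blocked")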

AI’s Impact on News Media and the Spread of Misinformation
While acknowledging the positive uses of AI, the panel noted that this
convenience raises questions about accuracy and bias. Radsch highlighted a recent
NewsGuard study that found AI chatbots reflect years of information manipulation,
including Russian propaganda. In some cases, chatbots have presented Russian
disinformation as fact nearly 30% of the time on specific topics. This alarming trend
reveals the vulnerability of AI systems to ideological biases and the urgent need for
stricter controls on how AI processes and filters data. Nielsen noted that this threat puts
significant pressure on AI developers and regulators to ensure that these technologies
do not undermine the integrity of journalism or the democratic process.
Brookie echoed this sentiment while emphasising that it is still too early to fully
understand how AI will influence information consumption. Brookie also pointed out that AI
development has been accelerated by competition, referencing Google’s recent
controversial launch of its chatbot despite internal warnings about its readiness. This
rush raises questions about the accuracy and safety of the information AI systems
provide.

Combating AI Bias and Ensuring Transparency
As the conversation turned to regulation, the panellists agreed that governments and
companies need to take proactive steps to create a safe and trustworthy AI ecosystem.
To ensure that disinformation campaigns do not manipulate AI platforms, Badanes
suggested that companies focus on ensuring AI pulls from the most accurate and
up-to-date sources rather than training AI systems to avoid answering specific
questions. However, if the potential harm of providing inaccurate information is too
great, Badanes argued that it might be better for the AI to avoid answering the question
until the technology is safe enough to do so.
Agreeing with earlier speaker Claire Leibowicz, Brookie emphasised the need for
greater transparency. Brookie called for AI platforms to clearly disclose the sources they
use, allowing users to decide whether or not to rely on a given AI model based on its
sources. However, Nielsen cautioned that placing all the responsibility on users is
problematic; instead, companies must ensure that AI systems are designed to prevent
the dissemination of false or biased information in the first place.
Overall, it became clear that while AI offers exciting possibilities for improving access to
information, significant challenges remain. Governments, tech companies, and media
organisations must collaborate to create systems that ensure the accuracy and integrity
of AI-generated information to safeguard the public from misinformation.
Session 4 – The changing information landscape: Producing news in an
AI-driven world
In a world where AI is increasingly shaping the dissemination of information, journalism
faces both opportunities and existential threats. In another session, leading figures in
journalism explored how AI tools are transforming the field: Richard Gingras, Vice
President of News at Google; Glenda Gloria, Executive Editor at Rappler; Jane Barrett,
Head of Reuters AI Strategy at Reuters; Vivian Schiller, Executive Director of Aspen
Digital at the Aspen Institute; and Roman Aleksandrovich Anin, Founder of iStories.
The Role of AI in Journalism
Speakers from media giants like Reuters, Rappler, and investigative outlets emphasised
the increasing role of AI in newsrooms. Barrett provided a framework for understanding
how AI is being integrated into Reuters’ journalistic processes, noting that journalists
have largely welcomed AI, as it handles the more tedious aspects of reporting, allowing
them to focus on creativity and deep dives into stories.
Another positive use of AI was described by Roman Anin, a Russian investigative journalist,
who demonstrated how AI has transformed the ability to gather data. Anin’s team used
AI to analyse vast quantities of Russian social media posts, identify military personnel,
and calculate the human costs of the war. They also used AI for translation, a cheaper
and faster alternative to human translators, a practice his organisation discloses to its audience.
Further, Gloria shared how Rappler’s AI-powered chatbot, RAI, draws only from
Rappler’s own database, avoiding external, potentially unreliable sources. Rappler thus
offers its users a different form of access to information without risking the
accuracy of what is generated. Rappler also maintains transparency and
journalistic integrity through its published AI guidelines, which focus on responsible use
and ensure that AI will not replace human journalists.
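To make the design concrete, here is a minimal sketch of the closed-corpus pattern Gloria describes, in which a bot answers only from a vetted in-house archive and declines anything it cannot ground there. The archive contents and naive keyword matching below are hypothetical illustrations, not Rappler’s actual RAI implementation.

    # Illustrative sketch of a closed-corpus chatbot: answers come only from
    # a vetted in-house archive, never from the open web.
    # Hypothetical archive and matching logic; not Rappler's actual RAI.
    ARCHIVE = {
        "election results": "Verified summary from the newsroom's election desk.",
        "typhoon updates": "Latest vetted report from the weather desk.",
    }

    def answer(query: str) -> str:
        q = query.lower()
        for topic, article in ARCHIVE.items():
            if topic in q:
                return article
        # No fallback to external sources: decline rather than risk accuracy.
        return "No vetted reporting in our archive covers this query."

    print(answer("What are the election results?"))
    print(answer("Who won the football match?"))  # outside the corpus: declined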
Risks and Challenges of AI in News Production
While AI can aid in streamlining journalistic processes, there are concerns about
reliance on AI-generated news. Schiller highlighted the inherent risks of AI, pointing out
that while AI can enhance journalistic endeavours, it is still merely a tool. Gingras, from
Google, cautioned that AI’s impact on content distribution could also inadvertently
amplify fear-based news. This occurs when topics generating the most clicks are
overrepresented, causing unnecessary panic about issues that might not warrant such
heavy coverage. Gingras advocated for a more thoughtful use of AI to prioritise
authentic, reliable sources over those designed merely to generate clicks.
Overall, the panel agreed that AI should be used to increase efficiency rather than
replace journalists.
Session 5 – Under siege: The cost of defending press freedom
The session presented by Carolina Henriquez-Schmitz, Director of TrustLaw at Thomson
Reuters Foundation, was of particular importance to Hemaiah Center as it covers one of
our core aims: protecting freedom of speech and the right to legal representation.
Henriquez-Schmitz’s research presented a concerning trend around the world, revealing
how governments are increasingly targeting lawyers who defend journalists and human
rights activists.
Highlighting three main threats, Henriquez-Schmitz illustrated how governments use
legal means to harass and undermine journalists and their defenders.
● Threat 1 – Interference with lawyers’ ability to defend their clients through the
seizure of computers and confidential case files. Additionally, cyber attacks have
increasingly been used against lawyers to access confidential client information.
● Threat 2 – Arbitrary disbarment and misuse of professional mechanisms to
bar lawyers from practising law, thus causing them financial harm. A
prominent example is Russia’s Foreign Agents Law, which is used to repress
lawyers and has been emulated by other oppressive regimes, such as Hungary.
● Threat 3 – Lawyers have been arrested, charged, and detained in criminal
cases. For example, Ivan Yuryevich Pavlov was imprisoned for three months
and forced into exile from Russia due to his representation of prominent
opposition figure Alexei Navalny’s Anti-Corruption Foundation.
From Yemen to Guatemala, Henriquez-Schmitz’s report illustrated how lawyers
representing journalists face persecution, arbitrary disbarment, and imprisonment,
which are tactics designed to silence dissent and are being adopted by authoritarian
regimes worldwide. Such tactics have worryingly increased in the past five years.
Session 6 – Defending the defenders: Upholding justice in the face of
lawfare
Continuing the theme of protecting journalists and their defenders, this session reiterated
Hemaiah Center’s mission to foster collaboration and build regional and global
partnerships, specifically regarding our current focus on protecting journalists and media
activists in areas of conflict. Discussions expanded on this with insights from Rebecca Vincent, Director of
Campaigns at Reporters Without Borders, and Jose Zamora, Chief Communications
Officer at EXILE, who underscored the alarming rise in legal warfare (“lawfare”)
targeting journalists and their defenders. Additionally, the panel brought together legal
experts such as Caoilfhionn Gallagher KC, a Barrister from Doughty Street Chambers,
and Ginna Anderson, Associate Director at the American Bar Association’s Center for
Human Rights, to explore the heightened risks faced by lawyers representing
journalists.
The weaponisation of law to silence critical voices is a growing threat, and speakers
emphasised the urgent need for international legal frameworks to protect journalists and
lawyers who defend them. Zamora spoke movingly about his father’s 800-day detention
in Guatemala, where his legal team was systematically dismantled—ten lawyers faced
imprisonment or exile. The elder Zamora is a journalist whose work has criticised
successive governments.
The global crackdown on free media is affecting the safety of journalists and their legal
defenders. Vincent highlighted the dangers lawyers face when trying to protect
journalists, noting that she herself was detained in Kuwait and has been subjected to
cyberattacks and threats to her family.
The panel called for collaboration between the legal, humanitarian, and journalist
sectors to defend freedom of speech and the right to legal representation.
Session 7 – From tipping point to turning point: Identifying support for
journalists in exile
The final session addressed the rising need to support journalists forced into exile.
Journalists from countries like El Salvador, Belarus, and Afghanistan shared
their experiences working from exile, reflecting on how restrictive regimes continue to
suppress free speech and media independence. Lotfullah Najafizada and others discussed
how, although regimes like the Taliban make it increasingly difficult for domestic
journalists to operate, exile has given them a measure of optimism. Najafizada stated,
“They have access, and I have freedom,” emphasising the importance of collaborating
with journalists inside the country to relay information they cannot publish themselves.
Laura Aguirre from El Salvador and Natalia Belikova from Belarus emphasised that
exile does not mark the end of a journalist’s career. The panellists provided practical
advice for journalists facing exile, including the importance of financial planning and
leveraging international networks for support. Despite the severe challenges, they
underscored the resilience of journalists worldwide in the face of authoritarian
repression.
Panellists:
● Lotfullah Najafizada, Founder and CEO, Amu TV
● Laura Aguirre, Strategic Director of Alharaca and Director of Development at
Sembramedia
● Natalia Belikova, Head of International Cooperation, Press Club Belarus
Conclusion
In conclusion, the conference illuminated the complex issues surrounding AI, press, and
legal freedoms. Attending The Trust Conference reaffirmed Hemaiah Center’s belief in
the power of collaboration and knowledge-sharing to address complex human rights
issues. Key trends that emerged from the conference will guide the Center’s future
initiatives: the urgent need for global cooperation to bolster legal and international
support for journalists and those who defend them, and the need for better regulations
and ethical guidelines to ensure that AI and digital tools enhance, rather than
undermine, democratic processes and access to information.
To learn more about how Hemaiah Center is working to promote human rights and
social justice or to explore potential collaborations, please visit our website at
hemaiahcenter.org. Together, we can create a more just and equitable world for all.