

Center for Research Computing
Senior Associate Director of the Center for Research Computing; Professor of the Practice
The New York Times
June 26, 2025
Researchers at the University of Notre Dame found last year that inauthentic accounts generated by A.I. tools could readily evade detection on eight major social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X and Meta’s three platforms, Facebook, Instagram and Threads.
Airman Magazine
November 6, 2024
Paul Brenner, who is also a computing and data science professor at the University of Notre Dame, adds, “In both my military and academic work, I’ve seen how good data management transforms decision-making. Airmen need to be as proficient with data as they are with any weapon in their arsenal.”
ABC57
Video
October 30, 2024
"As AI continues to grow in its capability and complexity, we have to recognize it will be impossible to discern what it was created from," Notre Dame Center for Research Computing Director Paul Brenner said.
ABC57
Video
October 17, 2024
ABC57 welcomed Paul Brenner, a director in the Center for Research Computing at the University of Notre Dame, to discuss whether social media platforms are doing enough to stop harmful A.I. bots. Brenner, senior author of the new research study, explains that researchers at the University of Notre Dame analyzed the A.I. bot policies and mechanisms of LinkedIn, Mastodon, Reddit, TikTok, X (Twitter), and Meta platforms, including Facebook, Instagram, and Threads.
Business Insider (India)
October 16, 2024
"Despite what their policy says or the technical bot mechanisms they have, it was very easy to get a bot up and working on X. They aren't effectively enforcing their policies," said Paul Brenner, a Director at Notre Dame.
NewsGram
October 15, 2024
New research from the University of Notre Dame analyzed the AI bot policies and mechanisms of eight social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X (formerly known as Twitter) and Meta platforms Facebook, Instagram and Threads. Then researchers attempted to launch bots to test bot policy enforcement processes.
Futurity
October 15, 2024
“As computer scientists, we know how these bots are created, how they get plugged in, and how malicious they can be, but we hoped the social media platforms would block or shut the bots down and it wouldn’t really be a problem,” says Paul Brenner, a faculty member and director in the Center for Research Computing at the University of Notre Dame and senior author of the study.
New Scientist
September 2, 2024
It has become much easier “to customise these large language models for specific audiences with specific messages”, says Paul Brenner at the University of Notre Dame in Indiana.
Tech Times
February 28, 2024
Paul Brenner, a faculty member at Notre Dame and the study's senior author, highlighted the significant challenge users face in discerning between human- and AI-generated content.
Tech Xplore
February 27, 2024
"They knew they were interacting with both humans and AI bots and were tasked to identify each bot's true nature, and less than half of their predictions were right," said Paul Brenner, a faculty member and director in the Center for Research Computing at Notre Dame and senior author of the study.
Futurity
February 27, 2024
Researchers at the University of Notre Dame conducted a study using AI bots based on large language models—a type of AI developed for language understanding and text generation—and asked human and AI bot participants to engage in political discourse on a customized and self-hosted instance of Mastodon, a social networking platform.