Paul Brenner

Senior Associate Director of the Center for Research Computing; Professor of the Practice

Center for Research Computing

Office
803 Flanner Hall
Notre Dame, IN 46556
Email
pbrenne1@nd.edu

  • Cybersecurity
  • High performance and cloud computing cyberinfrastructure
  • Research computing for scientific discovery
  • AI and agentic bot (chatbot) system safety
  • Cyberinfrastructure for national defense

Brenner in the News

Researchers at the University of Notre Dame found last year that inauthentic accounts generated by A.I. tools could readily evade detection on eight major social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X and Meta’s three platforms, Facebook, Instagram and Threads.

Airman Magazine

Paul Brenner, who is also a computing and data science professor at the University of Notre Dame, adds, “In both my military and academic work, I’ve seen how good data management transforms decision-making. Airmen need to be as proficient with data as they are with any weapon in their arsenal.”  

Video

"As AI continues to grow in its capability and complexity, we have to recognize it will be impossible to discern what was created by AI and what was not," Notre Dame Center for Research Computing director Paul Brenner said.

Video

ABC57 welcomed Paul Brenner, a director in the Center for Research Computing at the University of Notre Dame, to discuss whether social media platforms are doing enough to stop harmful A.I. bots. Brenner, author of the new research study, explains that the University of Notre Dame analyzed the A.I. bot policies and mechanisms of LinkedIn, Mastodon, Reddit, TikTok, X (Twitter), and Meta platforms, including Facebook, Instagram, and Threads.

Business Insider (India)

"Despite what their policy says or the technical bot mechanisms they have, it was very easy to get a bot up and working on X. They aren't effectively enforcing their policies," said Paul Brenner, a Director at Notre Dame.

NewsGram

New research from the University of Notre Dame analyzed the AI bot policies and mechanisms of eight social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X (formerly known as Twitter) and Meta platforms Facebook, Instagram and Threads. Then researchers attempted to launch bots to test bot policy enforcement processes.

Futurity

“As computer scientists, we know how these bots are created, how they get plugged in, and how malicious they can be, but we hoped the social media platforms would block or shut the bots down and it wouldn’t really be a problem,” says Paul Brenner, a faculty member and director in the Center for Research Computing at the University of Notre Dame and senior author of the study.

The New Scientist

It has become much easier “to customise these large language models for specific audiences with specific messages”, says Paul Brenner at the University of Notre Dame in Indiana.

Tech Times

Paul Brenner, a faculty member at Notre Dame and the study's senior author, highlighted users' significant challenge in discerning between human and AI-generated content.

Tech Xplore

"They knew they were interacting with both humans and AI bots and were tasked to identify each bot's true nature, and less than half of their predictions were right," said Paul Brenner, a faculty member and director in the Center for Research Computing at Notre Dame and senior author of the study.

Futurity

Researchers at the University of Notre Dame conducted a study using AI bots based on large language models—a type of AI developed for language understanding and text generation—and asked human and AI bot participants to engage in political discourse on a customized and self-hosted instance of Mastodon, a social networking platform.