November Newsletter

Director’s Note:

It is with great excitement that we release our first Newsletter on the occasion of the first anniversary of the Academic Alliance for AI Policy. Our mission, as outlined in our manifesto and captured in our Charter, is to facilitate policy research and build capacity among academics, reach out to local and national policymakers, and engage with media and the public on AI-related issues. The launch of the Newsletter is an important step in pursuing this mission.


To that end, the Newsletter will include updates on AI research and policy, as well as highlights of key events in and around AAAIP. Each issue will also celebrate the impactful work of our members. In this inaugural issue, we proudly spotlight Daniel S. Schiff and Kaylyn Jackson Schiff for their collaborative efforts in developing the AI Governance and Regulatory Archive (AGORA).


The success of the Newsletter, like all our other efforts, will depend on the active contribution of our members. Please share your research, news, and events with us on an ongoing basis, and let us know if you would like your work to be featured in future editions.

- Hamid R. Ekbia

On The Topic: New AI Developments

Fabricated Scientific Papers on Google Scholar:

A recent study investigates how fabricated scientific papers, generated by AI models like GPT, are being indexed on platforms such as Google Scholar, posing significant risks to the credibility of academic research. It highlights key features of these fake papers, such as fabricated references, and examines how they can spread misinformation within the academic community. The researchers also suggest proactive strategies to identify and prevent such fabricated content and safeguard the integrity of scientific literature.

LEARN MORE

Can Large Language Models (LLMs) Generate Novel Research Ideas?

Recent advances in large language models have sparked curiosity about their ability to contribute autonomously to scientific discovery. In a large-scale study conducted by researchers from Stanford, over 100 NLP experts reviewed 49 research ideas produced under each of three conditions: human-generated, AI-generated, and AI-generated with human ranking. The study found that ideas generated by LLMs were rated as more novel than human-generated ones, especially after refinement by human experts.

LEARN MORE

Mind the Gap: Military Risks from Commercial AI Models

A new report warns that commercial AI foundation models, designed with dual-use capabilities, pose risks when repurposed for military intelligence, surveillance, target acquisition, and reconnaissance (ISTAR). The paper argues that current policies focus too narrowly on preventing AI-enabled chemical, biological, radiological, and nuclear threats while overlooking real-world ISTAR dangers, which include the misuse of personal data in military operations and expanded attack vectors for adversaries. To mitigate these risks effectively, the report calls for insulating military AI from commercial models, increasing data security, and addressing gaps in current governance measures.

LEARN MORE

Member Spotlight: Daniel S. Schiff & Kaylyn Jackson Schiff

At AAAIP, we celebrate and share the impactful work of our members. This month, we proudly spotlight Daniel and Kaylyn Jackson Schiff for their collaborative efforts in developing the AI Governance and Regulatory Archive (AGORA).

Daniel S. Schiff: Assistant Professor of Technology Policy in Purdue University’s Department of Political Science and Co-Director of the Governance and Responsible AI Lab.

Kaylyn Jackson Schiff: Assistant Professor in the Department of Political Science at Purdue University and Co-Director of the Governance and Responsible AI Lab.

AGORA

AGORA is an exploration and analysis tool for AI-relevant laws, regulations, standards, and other governance documents from the United States and around the world. Drawing on original Emerging Technology Observatory data, AGORA includes summaries, document text, thematic tags, and filters to help you quickly discover and analyze key developments in AI governance. Its easy-to-use interface offers plain-English summaries, detailed metadata, and full text for hundreds of AI-focused laws and policies, with new data added continuously. It also provides several resources for academic researchers exploring global trends in emerging technology.

READ MORE ON AGORA

TRY AGORA YOURSELF

News: AI

EU AI Act: Early preparation could give businesses competitive edge.

ARTICLE

AI Governance Gap: 95% of firms haven’t implemented AI frameworks.

ARTICLE

The Digital Deception Landscape: Academic Insights on Misinformation and Deepfakes

The Academic Alliance for AI Policy hosted our first webinar panel discussion last week! We sincerely thank all the scholars and students who joined us for this exciting new beginning. The panelists engaged in expert discussions that critically analyzed the evolving landscape of misinformation, focusing on the role of deepfakes and AI-driven content. Did you miss the panel? A video recording is provided below. And don’t forget to join our mailing list to stay in the loop about future events!

Get Involved: Share Your Academic Events!

We aim to promote the collaboration and impact of our AAAIP members. To have your academic events featured in our monthly newsletter or shared through our mailing list, please complete the form below. Let’s showcase the exciting work happening at your institution or organization!

Please contact Lynnell Cabezas at aaaip@syr.edu.

Not a member and want to receive the monthly newsletter in your inbox? Sign up for our mailing list below!
