Distinguished Professor Margaret Burnett to deliver opening keynote at Intelligent User Interfaces '24 Conference in March
Margaret Burnett’s research was among the earliest work in human-centric aspects of AI systems. Together with her students and collaborators, she co-founded the area of end-user software engineering for both traditional software and AI, and has also done seminal work in the subarea of Explainable AI (XAI). She is currently working on how to improve the inclusivity of human-AI user experiences. Burnett is a University Distinguished Professor in the School of EECS at Oregon State University. She holds 4 patents; has received 10 best paper awards/honorable mentions and 5 Long-Term Impact awards (one from IUI); and has received multiple mentoring, service, and research awards. She is an ACM Fellow, and was elected to ACM's CHI Academy in 2016 as one of the "principal leaders of the field" of HCI. In 2023, she became a member of the Steering Committee of the Academic Alliance on AI Policy (AAAIP).
In the keynote presentation, Dr. Burnett will discuss how to enable diverse individuals to assess an AI agent’s “goodness” according to their own needs. The conference will take place March 18-21 in Greenville, South Carolina. View the talk abstract below.
“Mission: To enable diverse mere mortals to assess an AI agent’s ‘goodness’ for their own needs”
As AI agents become more and more prevalent in everyday technology, more and more individuals -- from every walk of life, at every level of education, across the entire socioeconomic spectrum, of every gender, race, ethnicity and age -- will need to make decisions about which agent(s) to use, and to what extent using them is the best path forward. The "mission" this talk explores is how we can enable such diverse individuals to make such decisions in ways that make their lives better instead of worse. For example, should I use an agent to enable me to be a remote caregiver for my grandmother, or should I move in with her? Should I buy semi-self-driving car X, or semi-self-driving car Y, or stay entirely manual? Will using one of these systems cost someone's life? Will it so destroy someone's privacy that their life becomes filled with fear and harassment? Will my child become less intelligent over time if I give her access to LLM-powered "homework helpers"? In this talk, I don't show how to answer any of these questions. But I show a few paths forward that may point the way(s) toward answering them, and at least one path on how not to answer them.