Keynote: John Laird

Learning Fast and Slow: Levels of Learning in General Autonomous Intelligent Agents

Bio: John E. Laird is the John L. Tishman Professor of Engineering at the University of Michigan, where he has been since 1986. He received his Ph.D. in Computer Science from Carnegie Mellon University in 1983 working with Allen Newell. From 1984 to 1986, he was a member of research staff at Xerox Palo Alto Research Center. He is one of the original developers of the Soar architecture and leads its continued evolution. He is a founder of Soar Technology, Inc., and the Center for Integrated Cognition. He is a Fellow of AAAI, AAAS, ACM, and the Cognitive Science Society. In 2018, he was awarded the Herbert A. Simon Prize for Advances in Cognitive Systems.

Abstract: General autonomous intelligent agents with ongoing existence face many challenges when it comes to learning. On the one hand, they must continually react to their environment, focusing their computational resources and using their available knowledge to make the best decision for the current situation. On the other hand, they need to learn everything they can from their experience, building up their knowledge so that they are prepared to make decisions in the future. We posit two distinct levels of learning in general autonomous intelligent agents. Level 1 (L1) consists of architectural learning mechanisms that are innate, automatic, effortless, and outside of the agent’s control. Level 2 (L2) consists of deliberate learning strategies, controlled by the agent’s knowledge, whose purpose is to create experiences for the L1 mechanisms to learn from.
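To make the L1/L2 distinction concrete, here is a minimal, hypothetical Python sketch (not the Soar implementation, and not part of the talk) of an agent in which an innate L1 mechanism automatically stores the outcome of every decision, while a deliberate L2 strategy exists only to generate experiences for that L1 mechanism to learn from. The class and method names are invented for illustration.

```python
class Agent:
    """Toy illustration of two levels of learning; not the Soar architecture."""

    def __init__(self):
        self.knowledge = {}  # long-term knowledge accumulated by L1 learning

    # --- Level 1: architectural, automatic, outside the agent's control ---
    def _l1_learn(self, situation, action):
        # Fires after every decision; the agent cannot choose to skip it.
        self.knowledge.setdefault(situation, action)

    def decide(self, situation, instructed_action=None):
        # Deliberate reasoning (or an instructor's hint) selects an action...
        action = (instructed_action
                  or self.knowledge.get(situation)
                  or self._search(situation))
        # ...and the innate L1 mechanism automatically learns from the experience.
        self._l1_learn(situation, action)
        return action

    def _search(self, situation):
        # Stand-in for deliberate problem solving when no knowledge applies.
        return f"default-action-for-{situation}"

    # --- Level 2: deliberate, knowledge-driven learning strategies ---
    def l2_rehearse(self, demonstration):
        # A deliberate strategy: replay an instructor's demonstration purely
        # to create experiences that the automatic L1 mechanism learns from.
        for situation, action in demonstration:
            self.decide(situation, instructed_action=action)


agent = Agent()
agent.l2_rehearse([("tower-of-hanoi:start", "move-smallest-disk")])
assert agent.decide("tower-of-hanoi:start") == "move-smallest-disk"
```

The point of the sketch is only that the L2 strategy never writes knowledge directly; it arranges experiences, and all storage happens through the automatic L1 mechanism.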

We describe these levels and provide examples from our research in interactive task learning (ITL), where an agent learns a novel task through natural interaction with an instructor. ITL is challenging because it requires a tight integration of many of the cognitive capabilities embodied in human-level intelligence: multiple types of reasoning, problem solving, and learning; multiple forms of knowledge representation; natural language interaction; dialog management; and interaction with an external environment – all in real time. Moreover, any successful approach must be general – the agent cannot be pre-engineered with the knowledge for a given task; everything about a task must be learned or transferred from other tasks.

Our agent builds on our research with the Soar cognitive architecture, using a combination of innate L1 mechanisms and deliberate L2 strategies to learn ~60 puzzles and games, as well as mobile robotic tasks. It is embodied in a tabletop robot, a small mobile robot, and a Fetch robot. This research is supported by ONR and AFOSR.

Kristian Hammond

Humanizing the Machine with Language: How the Future Gets Written

Bio: Kristian Hammond is Chief Scientist at Narrative Science, a company focused on automated narrative generation from data. He is also a professor of Computer Science and Journalism at Northwestern University and a researcher in the areas of human-machine interaction, context-driven information systems, and artificial intelligence. In 1999, he founded Northwestern University's Intelligent Information Laboratory (InfoLab), where his team creates technology that bridges the gap between people and the information they need. From 2000 to 2001, he also enjoyed a run as the weekly technology correspondent for WTTW's Chicago Tomorrow.

Abstract:

Artificial Intelligence (AI) is transforming the world in ways that no other set of technologies ever has. Technologies for machine learning, text analysis, recommendation, and natural language processing are being applied to a wide variety of problems, and yet most of us still struggle to understand what their results mean, or even the numbers behind them.

The numbers alone simply do not provide us with what we really need: information and insight. The data and the algorithms are only the first step in finding the insights we want and making them useful to the decision makers who need them.

In this talk, I will present an approach to connecting humans with the intelligent systems that serve them using the tool that is most natural to us: language. We will look at how both Natural Language Processing (to get at a user’s information needs) and Natural Language Generation (to support the machine’s ability to communicate) can play the crucial role of bridging the gap between the world of raw data and our need for understandable insights. We will dive into examples from legal reasoning, business, education, and everyday life to show how the power of language can provide us all with the insights that are still trapped in the wealth of data we now control.

Kelsey Huebner

AI at Microsoft

Bio: Kelsey grew up in southeast Iowa and attended the University of Iowa from 2008 to 2012, earning a Bachelor’s degree in Informatics and Studio Arts with a minor in Computer Science. She moved from Iowa to Seattle to pursue her development career and joined Microsoft in July 2014. She has since worked in Xbox, Developer Experiences, the Chief Technology Officer’s partnership team, and Commercial Software Engineering. She has been a software engineer and a trusted technical advisor for Microsoft’s top media and entertainment customers, developing applications, AI solutions, and infrastructure in the cloud.

Abstract:

In this talk, I will share my journey at Microsoft, broad lessons learned in Artificial Intelligence, and my experiences with the metaverse. Together, we will explore the possibilities of language processing with OpenAI, what Responsible AI is, and how we develop and use data-driven experiences. In addition, I will touch on the metaverse, my experiences with the HoloLens, and how gaming is paving the way there.

Casey DeRoo

Leveraging Outlier Identification Algorithms to Find “Strange” Astronomical Sources

Bio: Casey DeRoo studies the UV/X-ray sky and the instrumentation needed to make effective astronomical measurements at these wavelengths. His research focuses on (1) the fabrication, characterization, and use of diffraction gratings in spectrometers; (2) thin, high-performance mirrors for high-energy telescopes; and (3) identifying unusual sources in the X-ray sky for follow-up study. Prof. DeRoo graduated from the University of Iowa in 2016, served as an Astrophysicist at the Smithsonian Astrophysical Observatory, and joined the faculty of the University of Iowa in 2018. He was named a Nancy Grace Roman Technology Fellow by NASA’s Astrophysics Division in 2021.

Abstract: 

Astronomy is waking up to a data problem: the size of astronomical datasets dwarfs the analysis capacity of professional astronomers. With the advent of the James Webb Space Telescope, designed to probe the cosmos in its infancy, and the Vera C. Rubin Observatory, built to make a 10-year movie of half the sky, this problem is slated to grow in the coming decade.

We are studying the use of unsupervised random forests (RFs) to sift through these rich datasets for objects that are “outliers” – those that don’t fit our standard models of astronomical sources. This technique, already employed for spectra in the Sloan Digital Sky Survey, has been generalized and applied to the Chandra Source Catalog v2.0, a catalog of sources detected by the Chandra X-ray Observatory, which observes the extreme conditions of the X-ray sky. We discuss our work on minimizing bias and on understanding how our choice of algorithm “metaparameters” impacts our results, and we offer a peek at the strange astronomical sources we’ve found so far.
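For readers unfamiliar with the unsupervised random forest approach, the sketch below shows one common formulation: train a forest to separate the real objects from column-shuffled synthetic data, treat leaf co-occurrence as a similarity measure, and flag low-similarity objects as outliers. It is a minimal illustration using scikit-learn and NumPy, not the group’s actual pipeline; the function name and default settings are invented for this example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def unsupervised_rf_outlier_scores(X, n_trees=500, random_state=0):
    """Score each row of X by how unusual it is, via an unsupervised random forest."""
    rng = np.random.default_rng(random_state)
    n, d = X.shape

    # Synthetic data: shuffle each feature independently, destroying correlations
    # between features while preserving each feature's marginal distribution.
    X_syn = np.column_stack([rng.permutation(X[:, j]) for j in range(d)])

    # Train a forest to tell real objects (label 1) from synthetic ones (label 0).
    X_all = np.vstack([X, X_syn])
    y_all = np.concatenate([np.ones(n), np.zeros(n)])
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=random_state)
    forest.fit(X_all, y_all)

    # Leaf index of every real object in every tree: shape (n, n_trees).
    leaves = forest.apply(X)

    # Pairwise proximity = fraction of trees in which two objects share a leaf.
    # (O(n^2) memory: fine for a sketch, not for a full survey catalog.)
    prox = np.zeros((n, n))
    for t in range(leaves.shape[1]):
        prox += leaves[:, t][:, None] == leaves[:, t][None, :]
    prox /= leaves.shape[1]

    # Outliers are objects with low average similarity to everything else.
    np.fill_diagonal(prox, 0.0)
    mean_prox = prox.sum(axis=1) / (n - 1)
    return 1.0 - mean_prox  # higher score = "stranger" object

# Usage: rank catalog sources given an (n_sources, n_features) array of properties.
# scores = unsupervised_rf_outlier_scores(features)
# strangest = np.argsort(scores)[::-1][:20]
```

The choice of n_trees and similar settings is one example of the algorithm “metaparameters” whose impact on the results the talk discusses.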
