Donald Bren School of Information and Computer Sciences
University of California, Irvine
6210 Donald Bren Hall
Irvine, CA 92697-3425
The Hasso Plattner Institute (HPI), dedicated to pioneering research into information technology, announced on Feb. 26, 2020, the opening of its newest research school, the HPI Research Center in Machine Learning and Data Science at UCI. The goal is to promote research and educational activities in these two fields through collaboration between the two leading institutions.
HPI at UCI will fund three-year fellowships for 15 Ph.D. students in the Donald Bren School of Information and Computer Sciences (ICS), starting with five students in spring 2020 and adding five students each winter over the next two years. The fellows will be jointly supervised by eight ICS professors while being closely integrated into HPI’s research activities.
In particular, the fellowships will fund research in the following three broad areas, all aimed at making artificial intelligence more adaptive, safe and human-centered:
Contemporary machine learning systems typically require very large amounts of human-annotated data to train on, are largely unable to take advantage of expert knowledge (e.g., in medicine) or direct human instruction (e.g., in robotics), and have difficulty adapting to new environments without human intervention and design. We will develop theories and methods that support AI systems that can begin working with relatively little or no human-annotated data, and then adapt and evolve in an online manner to real-world environments. Our research will lead to AI systems that are much more intelligent and efficient about how they use data compared to the purely data-driven ML algorithms that are prevalent today. These systems will employ techniques such as reinforcement learning and active exploration to adapt to their surroundings in a more human-like manner, and incorporate modeling frameworks like probabilistic programming that allow experts to directly and flexibly specify how an AI agent should behave and adapt.
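To make the idea of exploration-based, online adaptation concrete, here is a minimal sketch of an epsilon-greedy bandit agent, a simple reinforcement-learning technique in which the agent improves its estimates from its own interactions rather than from human-annotated data. The function name, reward probabilities, and parameter defaults are illustrative assumptions, not details of the project:

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Learn arm values online via epsilon-greedy exploration.

    true_means: hypothetical reward probabilities per arm (unknown to the agent).
    Returns the agent's running value estimate for each arm.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms       # pulls per arm
    values = [0.0] * n_arms     # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)              # explore: try a random arm
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit best estimate
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]    # incremental mean update
    return values
```

With no labels at all, the agent's estimates converge toward the true (hidden) reward rates, and it learns to prefer the best arm purely through interaction.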
Although machine learning systems are increasingly being used in life-critical applications such as medical decision-making and autonomous driving, these systems remain relatively brittle when encountering situations and data beyond what a model was trained on. In effect, current machine learning systems “do not know what they do not know”: they have difficulty calibrating their predictions and assessing their own limitations. Our research under this theme will produce new theories and methods that make AI and machine learning systems more “self-aware” and robust when encountering situations beyond the model’s expertise, leading in turn to new types of hybrid systems in which AI algorithms can hand off decisions to humans in a flexible and robust manner.
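The calibration gap described above, where a model's stated confidence does not match its actual accuracy, can be quantified with expected calibration error (ECE), a standard metric. The following sketch is illustrative; the function name and binning scheme are assumptions, not project code:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Gap between a model's stated confidence and its actual accuracy.

    confidences: predicted probability of the chosen class, per example.
    correct: 1/0 indicating whether each prediction was right.
    A well-calibrated model has ECE near 0.
    """
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)   # which confidence bucket
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)    # what the model claimed
        accuracy = sum(ok for _, ok in b) / len(b)  # what actually happened
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece
```

A model that says “90% confident” but is right only half the time would score a large ECE, which is exactly the kind of self-assessment failure that can trigger a hand-off to a human decision-maker.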
Current AI and machine learning systems tend to be “black box” in nature, supporting only limited forms of human interaction, particularly when it comes to the transparency of their predictions and decisions. Under this theme we will investigate how machine learning systems can better work with human users and within society as a whole. We will develop new AI methods and systems that can effectively explain their decisions, communicate their reasoning, and build trust with human users. In addition, we will develop new AI methods that are fairer than current systems, assessing and correcting implicit biases (e.g., due to gender or race) in data-driven models. This research promises a new generation of AI systems that are far more human-aware and society-aware than current approaches and, by implication, far more effective and broadly useful to society as a whole.
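One common way to assess the kind of implicit bias mentioned above is a demographic parity check: comparing a model's positive-decision rate across groups. This is a minimal sketch of one such fairness metric (the function name and labels are illustrative assumptions, and real fairness auditing uses many complementary metrics):

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups.

    predictions: 1/0 model decisions per example.
    groups: group label per example (e.g., a demographic attribute).
    A gap near 0 means the model issues positive decisions at
    similar rates for every group, on this one metric.
    """
    counts = {}
    for pred, g in zip(predictions, groups):
        total, positives = counts.get(g, (0, 0))
        counts[g] = (total + 1, positives + pred)
    rates = {g: positives / total for g, (total, positives) in counts.items()}
    return max(rates.values()) - min(rates.values())
```

For example, a model that approves every applicant in one group and none in another has a gap of 1.0, while equal approval rates give a gap of 0.0; such diagnostics are a starting point for correcting data-driven biases.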