A goal of AI is to create machines that perceive, reason, and understand like humans do. This is hard and many years off. However, it is increasingly possible to imagine how AI might make the world accessible regardless of how one perceives, reasons, or understands. In this talk, I'll introduce a set of grand but practical challenges in accessibility for AI that have the potential to impact millions of people. To illustrate these challenges, I'll describe ongoing research spanning a breadth of domains, including visual assistance for people who are blind, dyslexia detection and intervention, adaptive sports, real-time audio captioning, and augmented and assistive communication.
Jeffrey P. Bigham is an Associate Professor and PhD Director in the Human-Computer Interaction and Language Technologies Institutes in the School of Computer Science at Carnegie Mellon University. He is currently spending time at Apple, where he is starting a new Machine Learning + Accessibility Research group. Dr. Bigham's research combines crowdsourcing and machine learning to make novel deployable interactive systems, and ultimately to solve hard problems in computer science. Many of these systems are designed with a deep understanding of the needs of people with disabilities so as to be useful in their everyday lives. He received his B.S.E. degree in Computer Science from Princeton University in 2003, and his Ph.D. in Computer Science and Engineering from the University of Washington in 2009. He has received the Alfred P. Sloan Foundation Fellowship, the MIT Technology Review Top 35 Innovators Under 35 Award, and the National Science Foundation CAREER Award.