Prof. Gaurav S. Sukhatme
USC Robotic Embedded Systems Laboratory
Title: Interactive Perception: Recent Progress and Open Questions
Abstract: This talk will survey some recent results in interactive perception. We will sketch a taxonomy for the field and discuss why, although the importance of feedback has been clear for more than 50 years, advances in perception and action in robotics have often proceeded independently of each other. We will also attempt to list some open questions and challenges for the field of interactive perception.
Bio: Gaurav S. Sukhatme is the Fletcher Jones Professor of Computer Science (joint appointment in Electrical Engineering) at the University of Southern California (USC). He serves as the Executive Vice Dean at the USC Viterbi School of Engineering. He received his undergraduate education at IIT Bombay in Computer Science and Engineering, and M.S. and Ph.D. degrees in Computer Science from USC. He is the co-director of the USC Robotics Research Laboratory and the director of the USC Robotic Embedded Systems Laboratory, which he founded in 2000. His research interests are in multi-robot systems and sensor/actuator networks. He has published extensively in these and related areas. He is a fellow of the AAAI and the IEEE, and a recipient of the NSF CAREER Award and the Okawa Foundation Research Award.
Prof. Todd Murphey
Neuroscience and Robotics Laboratory, Mechanical Engineering
Title: Adaptive Exploration for Active Perception
Abstract: How robots should move to obtain data and how robots should process data are intimately connected. Moreover, choosing motions depends on the physics of the sensors, the physics of motion, and representations of uncertainty. I will talk about how information measures can be incorporated into control synthesis for nonlinear systems using ergodicity, a concept from physics and information theory related to coverage. Ergodic control can be used to improve information, and is particularly relevant in the presence of spatial uncertainties such as visual occlusion. Moreover, ergodic trajectories can be used to establish measurement models that define features one wishes to use in the future. Examples from biology, vision-based tracking, and contact-based sensing will be used throughout the talk.
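To make the idea concrete, here is a minimal sketch of the ergodic metric commonly used in this line of work (a standard Fourier-coefficient formulation, not code from the talk; all function and variable names are illustrative assumptions). It compares a trajectory's time-averaged statistics against a target spatial distribution, mode by mode:

```python
import numpy as np

def fourier_basis(k, x):
    # Cosine basis function on the unit square; k is a pair of mode indices.
    return np.cos(np.pi * k[0] * x[..., 0]) * np.cos(np.pi * k[1] * x[..., 1])

def ergodic_metric(traj, target_coeffs, n_modes=8):
    """Weighted distance between a trajectory's time-averaged statistics
    and a target spatial distribution, both expressed in Fourier modes.
    traj: (T, 2) array of positions in [0, 1]^2.
    target_coeffs: (n_modes, n_modes) coefficients of the target density."""
    metric = 0.0
    for k0 in range(n_modes):
        for k1 in range(n_modes):
            k = np.array([k0, k1])
            c_k = fourier_basis(k, traj).mean()   # trajectory's mode statistic
            lam = (1.0 + k @ k) ** -1.5           # Sobolev-type mode weight
            metric += lam * (c_k - target_coeffs[k0, k1]) ** 2
    return metric
```

Driving this metric toward zero makes the trajectory spend time in each region in proportion to the target distribution's mass there, which is the sense of "coverage" mentioned above.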
Bio: Dr. Todd D. Murphey is a Professor of Mechanical Engineering and Physical Therapy & Human Movement Sciences at Northwestern University. He received his B.S. degree in mathematics from the University of Arizona and the Ph.D. degree in Control and Dynamical Systems from the California Institute of Technology. His laboratory is part of the Neuroscience and Robotics Laboratory, and his research interests include computational methods for mechanics and real-time optimal control, human-in-the-loop control, and information theory in physical systems. Honors include the National Science Foundation CAREER award in 2006, membership in the 2014-2015 DARPA/IDA Defense Science Study Group, and Northwestern’s Charles Deering McCormick Professorship of Teaching Excellence. He is a Senior Editor of the IEEE Transactions on Robotics.
Prof. Volkan Isler
Computer Science and Engineering, University of Minnesota
Title: Robotic Data Gathering in the Wild
Abstract: Mobile robots equipped with various sensors can gather data at unprecedented spatial and temporal scales and resolution. Over the years, our group has developed autonomous surface, ground and aerial robot teams for yield estimation, farm and animal habitat monitoring. I will give an overview of these projects as well as some of the fundamental algorithmic problems we studied along the way.
Bio: Volkan Isler is a professor of computer science and engineering at the University of Minnesota. He is a fellow of the Informatics Institute and the Institute on the Environment, and a 2010-12 McKnight Land-Grant Professor. He is currently an Editor for the IEEE Robotics and Automation Society’s Conference Editorial Board. In the past, he has served as an associate editor for IEEE Transactions on Robotics and IEEE Transactions on Automation Science and Engineering. He was the chair of the RAS Technical Committee on Networked Robots. In 2008, he was awarded the NSF CAREER Award.
Prof. David Hsu
Computer Science, National University of Singapore
Title: Large-Scale Online POMDP Planning in Real Time
Abstract: How can a robot act robustly with imperfect control and noisy sensing? The partially observable Markov decision process (POMDP) provides a principled general approach to decision-making under control and sensing uncertainty. However, solving POMDPs exactly is computationally intractable in the worst case, because the robot must reason about myriad future scenarios over many time steps. I will present our recent work aimed at overcoming the computational challenges and enabling real-time online POMDP planning, through probabilistic sampling, parallelization, and approximation.
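For concreteness, the belief update at the heart of any POMDP planner can be sketched as a discrete Bayes filter (a standard textbook formulation, not the speaker's implementation; the matrices and names here are illustrative):

```python
import numpy as np

def belief_update(belief, action, observation, T, Z):
    """One Bayes-filter step for a discrete POMDP.
    belief: probability vector over states.
    T[a]:   state-transition matrix, rows s -> columns s'.
    Z[a]:   observation-likelihood matrix, rows s' -> columns o."""
    predicted = belief @ T[action]                    # predict through dynamics
    updated = predicted * Z[action][:, observation]   # weight by sensor model
    return updated / updated.sum()                    # renormalize
```

Exact planning must branch over every action and observation at every step, which is the source of the intractability; sampling-based online planners instead simulate many such belief trajectories forward in time.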
Bio: David Hsu is a professor of computer science at the National University of Singapore (NUS) and a member of NUS Graduate School for Integrative Sciences & Engineering. He is an IEEE Fellow. His research spans robotics and AI. In recent years, he has been working on robot planning and learning under uncertainty for human-centered robots. He has chaired or co-chaired several major international robotics conferences, including WAFR 2004 and 2010, RSS 2015, and ICRA 2016.
Prof. Stephan Weiss
Control of Networked Systems Group
Title: Observability-Aware Paths: Optimal System Inputs for Fast State Convergence
Abstract: Essential to any autonomous (mobile) robot is robust and accurate estimation of its current state to answer the question “where am I?”. In particular for mobile systems, the available sensor signals and the device configuration may change during a mission. A multi-sensor suite and online estimation of the device configuration help to mitigate this issue, but at the same time introduce more complexity and, more importantly, unobservable modes in the state estimator, depending on the motion (i.e. input) of the system. In this talk we will discuss a novel way of looking at the system's observability properties that turns binary observability information into a smooth metric of the quality of observability as a function of the system input. We will see that this formulation helps to define best-observable paths for fast and accurate state convergence, also online in real time. The formulation operates on the nonlinear, continuous-time system and can additionally constrain the paths to a feasible, obstacle-free volume.
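One generic way to obtain a smooth, input-dependent observability measure (a common construction sketched here under assumed names, not the specific method of the talk) is the empirical observability Gramian: simulate the measurement sequence from perturbed initial states and examine the spectrum of the resulting sensitivity matrix.

```python
import numpy as np

def empirical_observability_gramian(simulate, x0, eps=1e-4):
    """Approximate the local observability Gramian by perturbing each
    state component and stacking the resulting output differences.
    simulate(x) must return the measurement sequence as a flat array."""
    n = len(x0)
    cols = []
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        # Central difference: sensitivity of the outputs to state i.
        cols.append((simulate(x0 + d) - simulate(x0 - d)) / (2 * eps))
    Y = np.stack(cols, axis=1)   # (num_measurements, num_states)
    return Y.T @ Y               # near-zero eigenvalues = weakly observable modes
```

The smallest eigenvalue of this Gramian is a smooth scalar that decays to zero exactly when a mode becomes unobservable, so it can serve as an objective when optimizing paths for fast state convergence.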
Bio: Stephan Weiss is Full Professor of Robotics and head of the Control of Networked Systems Group at the Alpen-Adria-Universität (AAU) in Klagenfurt, Austria. He received both his MSc in Electrical Engineering and Information Technology in 2008 and his Ph.D. in 2012 from the Eidgenössische Technische Hochschule (ETH) Zurich, Switzerland. His Ph.D. thesis, “Vision Based Navigation for Micro Helicopters”, first enabled GPS-independent navigation of small UAVs using on-board visual-inertial state estimation. His algorithms were the key to enabling the Mars Helicopter Scout proposal and corresponding proof-of-concept technology demonstration at NASA’s Jet Propulsion Laboratory, where he worked from 2012 until 2015 as Research Technologist in the Mobility and Robotic Systems Section and where he lectured at the California Institute of Technology.
Prof. Oliver Brock
Robotics and Biology Laboratory, Technische Universität Berlin
Title: Implications and Opportunities of Linking Action and Perception
Abstract: Tight coupling of action and perception seems to be a good idea… most people will probably agree with that statement. But we should not forget that robotics and computer vision were, for many decades, largely disjoint fields with different problems, objectives, methods, conferences, journals, and largely disjoint communities. Maybe joining parts of these two fields after all this time (and progress) is more disruptive than it would seem at first glance, at least for some aspects of our work? Are we sure that all the wisdom we have accumulated thus far simply transfers to a world in which action and perception are regarded as inextricably linked? From the perspective of control theory this question seems silly: control has always and very naturally combined action and perception. Dual control naturally incorporates exploration and model building. In some way, we could cast all of AI and robotics as dual control. So are we done? In this talk I would like to take the position that we do not exclusively face an algorithmic challenge; we also face a signal challenge. The bottleneck to progress is a lack of understanding of which perceptual regularities we exploit in today’s rich sensor streams. And to identify the “right” signals, we need to understand the combined space of action and perception. Animals seem to operate in this space, but without explicit access to or intuition about it.
Bio: Oliver Brock is the Alexander-von-Humboldt Professor of Robotics in the School of Electrical Engineering and Computer Science at the Technische Universität Berlin in Germany. He received his Diploma in Computer Science in 1993 from the Technische Universität Berlin and his Master’s and Ph.D. in Computer Science from Stanford University in 1994 and 2000, respectively. He also held post-doctoral positions at Rice University and Stanford University. Starting in 2002, he was an Assistant Professor and Associate Professor in the Department of Computer Science at the University of Massachusetts Amherst, before moving back to the Technische Universität Berlin in 2009. The research of Brock’s lab, the Robotics and Biology Laboratory, focuses on mobile manipulation, interactive perception, grasping, manipulation, soft material robotics, interactive machine learning, deep learning, motion generation, and the application of algorithms and concepts from robotics to computational problems in structural molecular biology. He is the president of the Robotics: Science and Systems Foundation.
Prof. Davide Scaramuzza
Robotics and Perception Group, University of Zurich
Title: Active Perception: Are We There Yet? No!
Abstract: I will show that there are still many unsolved problems involving the coupling of perception and control, and I will try to convince you that solving them is crucial to achieving autonomous robotics. My applications will mainly concern vision-controlled quadrotors.
Bio: Davide Scaramuzza (born in 1980, Italian) is Professor of Robotics and Perception at both the Department of Informatics (University of Zurich) and the Institute of Neuroinformatics (University of Zurich and ETH Zurich), where he does research at the intersection of robotics, computer vision, and neuroscience. Specifically, he investigates the use of standard and neuromorphic cameras to enable autonomous, agile navigation of micro drones in search-and-rescue scenarios. He did his PhD in robotics and computer vision at ETH Zurich (with Roland Siegwart) and a postdoc at the University of Pennsylvania (with Vijay Kumar and Kostas Daniilidis). From 2009 to 2012, he led the European project sFly, which introduced the PX4 autopilot and pioneered visual-SLAM-based autonomous navigation of micro drones. For his research contributions in vision-based navigation with standard and neuromorphic cameras, he was awarded the IEEE Robotics and Automation Society Early Career Award, the SNSF-ERC Starting Grant, a Google Research Award, awards from KUKA, Qualcomm, and Intel, the European Young Research Award, the Misha Mahowald Neuromorphic Engineering Award, and several conference paper awards.
He coauthored the book “Introduction to Autonomous Mobile Robots” (published by MIT Press; 10,000 copies sold) and more than 100 papers on robotics and perception published in top-ranked journals (TRO, PAMI, IJCV, IJRR) and conferences (RSS, ICRA, CVPR, ICCV).
In 2015, he co-founded Zurich-Eye, a venture dedicated to the commercialization of visual-inertial navigation solutions for mobile robots, which later became Facebook-Oculus Switzerland and Oculus’ European research hub. He was also a strategic advisor to Dacuda, an ETH spinoff dedicated to inside-out VR solutions, which later became Magic Leap Zurich.