Prof. John McDonald

Funded Investigator, Lero

Professor John McDonald is a Professor in the Department of Computer Science at Maynooth University. His research interests include computer vision, robotics and AI, focusing on the development of spatial perception and intelligence for autonomous mobile robotics. He is currently a Funded Investigator in Lero and a collaborator on the SFI Blended Autonomous Vehicles Spoke, led by Lero.

I work on robotics and robot perception, where I develop algorithms to enable robots to operate autonomously in real-world environments. This covers a wide variety of platforms, ranging from humanoid robots, to drones, to autonomous vehicles. For a robotic system to be autonomous, it needs the ability to sense what is around it, build a model of its surroundings from that data, and finally use that model to make decisions. This is the focus of my research: the development of algorithms that enable robots to build these models and use them to interpret the world around them.

The common thread that runs through all Lero’s research is a focus on software. As robotics has come of age and robotic systems have moved from the research lab into the real world, one of the principal drivers of that transition has been advances in the algorithms they use and the software that controls their operation. As those systems become more prevalent in our everyday lives, the ways in which we develop that software and ensure that it operates as intended have become ever more critical. This presents a new set of challenges, because many of the techniques and settings involved differ from those of, for example, the software you might use on your laptop or smartphone.

Before joining Lero in 2018, I would have seen the centre as focused on this more traditional software; however, this is not the case: the research really is much broader. When I first met Professor Brian Fitzgerald, I was impressed to see that Lero had a strong research programme around software for autonomy. In fact, Lero’s activities include areas such as robotics, AI and computer vision, as well as wider areas such as cyber-physical systems. This has recently been enhanced by a major new programme in autonomy through the SFI Blended Autonomous Vehicles Spoke, which is led by Lero and brings researchers together with industry partners from multiple sectors to solve critical problems.

I think what’s very exciting about Lero is that it brings together such a mix of skills, ranging from people working on formal methods (mathematical techniques for proving the correctness of software) to people like myself who are focused on a very specific domain such as robotics. Lero provides a forum for interaction and collaboration across all these disciplines. This diversity allows you to look across boundaries and see which problems can be addressed only because of the particular configuration and collection of people within Lero, and I believe that is what makes the centre unique. As an SFI research centre, the focus is not just on how we develop and apply software today, but on how it will be developed and applied well into the future. Given the disruption that autonomous systems are bringing to software, involving such a broad set of disciplines is crucial.

"My work sits at the intersection of computer vision, robotics and AI. Computer vision involves building algorithms that can automatically extract meaningful information from images. We see this technology in action when our mobile phone detects the faces in the camera image or when Zoom changes the background in your video stream."

I come to Lero with a particular set of skills and expertise, and one of my roles is to be open and to engage with other researchers, not just to sit in a silo of robotics research. Being part of Lero also provides me with opportunities to engage jointly in different forums and settings, for example, speaking at academic, public and industry events with Lero colleagues from other Irish institutes of technology and universities.

My work sits at the intersection of computer vision, robotics and AI. Computer vision involves building algorithms that can automatically extract meaningful information from images. We see this technology in action when our mobile phone detects the faces in the camera image (the little yellow boxes that pop up around your face), or when Zoom changes the background in your video stream. I’ve worked in a lot of different areas within this field since 1996; however, in 2007 I consolidated my research around robot perception, and in particular a problem known as Simultaneous Localisation and Mapping, or SLAM. SLAM has probably been one of the most studied problems in robot perception since the mid-eighties. The problem we are trying to solve is this: for a robot to work in a real-world environment, it has to be able to build a model of that environment and use the model to interact with the world. In essence, what SLAM is designed to do is build a three-dimensional map of the environment from a moving sensor, whilst simultaneously estimating the motion of the sensor relative to the map.
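
To make the “simultaneous” part concrete, here is a deliberately tiny sketch of the estimation problem at the heart of SLAM: a robot moves along a line past a single landmark, and its trajectory and the landmark’s position are recovered together from noisy measurements. Everything here (the world, the noise levels, the variable names) is invented for illustration; real SLAM systems solve a nonlinear version of the same optimisation in three dimensions with many thousands of landmarks.

```python
import numpy as np

# Toy 1-D SLAM: a robot moves along a line past a single landmark.
# Unknowns: poses x1, x2, x3 and landmark position m (x0 = 0 anchors the map).
# Measurements: noisy odometry u_i ~ x_{i+1} - x_i and noisy ranges z_i ~ m - x_i.
rng = np.random.default_rng(0)
true_x = np.array([0.0, 1.0, 2.0, 3.0])             # ground-truth poses
true_m = 5.0                                        # ground-truth landmark

odom = np.diff(true_x) + rng.normal(0, 0.1, 3)      # u0, u1, u2
ranges = (true_m - true_x) + rng.normal(0, 0.1, 4)  # z0..z3

# Stack each measurement as a linear constraint A @ [x1, x2, x3, m] = b.
A = [[ 1,  0,  0, 0],   # x1 - x0 = u0   (x0 = 0 is known)
     [-1,  1,  0, 0],   # x2 - x1 = u1
     [ 0, -1,  1, 0],   # x3 - x2 = u2
     [ 0,  0,  0, 1],   # m  - x0 = z0
     [-1,  0,  0, 1],   # m  - x1 = z1
     [ 0, -1,  0, 1],   # m  - x2 = z2
     [ 0,  0, -1, 1]]   # m  - x3 = z3
b = np.concatenate([odom, ranges])

# Least squares recovers the trajectory and the map simultaneously.
est, *_ = np.linalg.lstsq(np.array(A, dtype=float), b, rcond=None)
print("estimated poses x1..x3:", est[:3])
print("estimated landmark m:  ", est[3])
```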

The SLAM problem is seen as a key ingredient of any autonomous mobile robotic system that is going to operate over long periods of time, which is the broader scientific challenge that we are trying to address. If an autonomous vehicle approaches roadworks, for example, it has to be able to adapt in a graceful and safe way to the changed physical environment. It does this by taking in sensor data over time, as it moves through its environment, and augmenting its model of the world with that data, ensuring that the model agrees with reality.
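
One standard mechanism for keeping a map in agreement with reality is the occupancy grid, where each cell of the map accumulates evidence from every new sensor reading. The short sketch below is my illustration rather than project code, and the sensor-model weights are assumed values; it shows how a cell mapped as free can be revised once roadworks appear.

```python
import math

# Log-odds occupancy update: each map cell stores the log-odds that it is
# occupied, and every sensor reading adds evidence for or against.
L_OCC, L_FREE = 0.85, -0.4    # assumed sensor-model increments (illustrative)

def update_cell(log_odds, hit):
    """Fuse one observation of a cell: hit=True means an obstacle was seen."""
    return log_odds + (L_OCC if hit else L_FREE)

def probability(log_odds):
    return 1.0 / (1.0 + math.exp(-log_odds))

cell = -2.0                        # strongly "free" in the old map
for _ in range(6):                 # six consecutive "occupied" readings
    cell = update_cell(cell, hit=True)
print(f"P(occupied) = {probability(cell):.2f}")   # the model now agrees with reality
```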

Within Lero, one of the projects that I lead is jointly funded by Valeo, a world-leading automotive components company with a significant centre in Tuam, where they are focused on vehicle-based vision systems. Through this project I have built a team in Maynooth that is actively collaborating with Valeo to develop next-generation perception systems and algorithms for autonomous vehicles, specifically looking at the problem of long-term autonomy. Here, the challenge is for autonomous vehicles to operate over months or years, where you need to be able to deal with very significant changes in the environment. For example, if your autonomous vehicle captures a model of your driveway on a sunny day in June, you need to make sure that model still works on a dark and rainy November evening. This extreme variation in conditions makes it quite difficult to match sensor data captured at one point in time to data captured at another, and so we're trying to develop algorithms that can do this efficiently and reliably.
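
A common ingredient in this kind of matching is to reduce each image to a compact global descriptor and compare descriptors rather than raw pixels. The sketch below stands in for that idea: random vectors play the role of the learned, condition-invariant descriptors a real system would compute, and the dimensions and noise level are invented for illustration.

```python
import numpy as np

# Place recognition across appearance change: match a query descriptor from a
# rainy November evening against a map of descriptors built on a sunny June day.
rng = np.random.default_rng(1)
june_map = rng.normal(size=(100, 256))            # descriptors of 100 mapped places
june_map /= np.linalg.norm(june_map, axis=1, keepdims=True)

# A November revisit of place 42: the same descriptor, perturbed by the
# (simulated) change in lighting and weather.
query = june_map[42] + 0.1 * rng.normal(size=256)
query /= np.linalg.norm(query)

similarities = june_map @ query                   # cosine similarity to every place
best = int(np.argmax(similarities))
print(f"matched place {best} with similarity {similarities[best]:.2f}")
```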

Another problem we are addressing is something known as multi-session, or collaborative, SLAM, where multiple vehicles share sensor data with each other. For example, if multiple vehicles are approaching a junction and one vehicle’s sensors can see a particular object, such as a cyclist, but another vehicle cannot detect it because of obstructions in the environment, we must consider how the vehicles can share that information effectively to enhance overall safety in these situations.
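
The geometric core of sharing such a detection is simple once both vehicles localise within a common map, which is what multi-session SLAM provides: a cyclist detected in one vehicle’s frame can be re-expressed in the other’s. The poses and coordinates below are invented for illustration.

```python
import numpy as np

# Transfer a detection between vehicles via a shared map frame (2-D sketch).
def pose_matrix(x, y, theta):
    """Homogeneous transform from a vehicle's frame to the map frame."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

T_map_A = pose_matrix(10.0, 5.0, np.pi / 2)   # vehicle A's pose in the map
T_map_B = pose_matrix(20.0, 5.0, np.pi)       # vehicle B's pose in the map

cyclist_in_A = np.array([4.0, -1.0, 1.0])     # A's detection (homogeneous coords)
cyclist_in_map = T_map_A @ cyclist_in_A
cyclist_in_B = np.linalg.inv(T_map_B) @ cyclist_in_map
print("cyclist in map frame:", cyclist_in_map[:2])
print("cyclist in B's frame:", cyclist_in_B[:2])  # B can react despite the occlusion
```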

Artificial Intelligence underpins a lot of what we are doing. Until now, we have been focused on the problem of how autonomous robots can figure out where things are in space using a three-dimensional model. Now, we are moving towards something called Spatial AI, a term coined by Professor Andrew Davison at Imperial College London, which I think frames very well what we are going to see over the next 10 years of robotics research. We now have good solutions for building 3-D models, but developing the ability to understand them and use them intelligently is the next big challenge in creating truly autonomous systems.