Podcast: Robot That Drives Better Than The Average American
In this episode, we discuss research coming out of George Mason University, where the RobotiXX lab has been training robots to drive more like humans - as in adjusting their driving based on road conditions, not texting and driving - to make agile and safe autonomous vehicles a reality.
This podcast is sponsored by Mouser Electronics.
EPISODE NOTES
(3:47) - Robot Drives Better Than The Average American
This episode was brought to you by Mouser, our favorite place to get electronics parts for any project, whether it be a hobby at home or a prototype for work. Click HERE to learn more about how our roads are evolving to be better suited for autonomous vehicles and how that might be a bad thing for human drivers.
Become a founding reader of our newsletter: read.thenextbyte.com
Transcript
What's up, friends? Today we're talking all about a team from George Mason University, which by the way is Farbod's and my alma mater. So, we're super proud. But this team is working on making robots that can drive better than humans can. And the special twist there is they're looking at how robots can and can't drive well over different terrains, which is obviously a different, interesting approach to autonomous driving. I think it's super interesting. So, let's jump into it.
I'm Daniel, and I'm Farbod. And this is the NextByte Podcast. Every week, we explore interesting and impactful tech and engineering content from Wevolver.com and deliver it to you in bite sized episodes that are easy to understand, regardless of your background.
Daniel: What's up peeps? Like we said, today we're talking about a robot that can drive better than your average American.
Farbod: Well, this is a special episode, right? Because finally after three years, we're doing an episode that's coming out of our alma mater, George Mason University.
Daniel: Yeah, we're proud patriots over here.
Farbod: Yeah.
Daniel: And let's just say, when Farbod and I sent this article to each other over text message, we were like, we absolutely have to do it because it's for Mason. But also, we absolutely have to do it because it's interesting.
Farbod: It's very cool.
Daniel: And that's the compromise we promised you guys we'd never make: we're not gonna compromise on telling you what's interesting and impactful. And we're happy that we finally found something that we think is super interesting, super impactful, and also just so happens to be coming out of the college where we met.
Farbod: Yeah. Yeah.
Daniel: Let me say before we jump into today's topic though. I want to do a quick plug for our sponsor.
Farbod: Yes.
Daniel: Mouser Electronics. And as I kind of alluded to at the beginning of the episode, what we're talking about today is trying to develop robotics that can drive better than humans, and drive well with humans in the loop and without humans in the loop. But an important part of understanding how and why robots are gonna get to the point where we can truly trust them to drive is understanding the relationship they have with the environment around them. That's a key point in the article we're talking about today, but it's also something Mouser has done a great job with. Being at the forefront of the electronics industry as a supplier and as a distributor, they understand what's going on in the tech realm and they write awesome technical resources about it. The one we've included today is directly related to today's topic. They're talking about how the landscape needs to look in terms of roads if we're going to make autonomous vehicles feasible on the roads. And they talk a lot about the push and pull between doing things that make the roads safer for autonomous vehicles but a little more unsafe for the human drivers that are still on the road, and how regulators might toe the line between the two to make sure that the roads are safe for people using autonomous vehicles and for people not using them. I imagine a lot of our regulators will follow the path that the author from Mouser walks us along, which is understanding the pushes and pulls, or the checks and balances, in this weird realm where we try to take technology from the lab out into reality.
Farbod: Yeah, you know, most autonomous vehicles depend on so many spatial cues to determine their next actions. So, it makes sense that like, you know, a human can probably tell the difference between this sign being a little bit messed up and what it actually means, whereas an autonomous vehicle probably can't. But I think where the implications get interesting is where you start making decisions that have a negative impact on the human drivers. And that's a perfect segue, by the way, for today's topic, because we're talking about how robots can become more adaptable and more like human drivers so that they can make those human-like decisions.
Daniel: And I think that's something that'll probably help make autonomous driving become a more...
Farbod: A thing, like a real thing.
Daniel: A more near reality, I'd say, as opposed to a distant reality. So, let's jump into that.
Farbod: Absolutely.
Daniel: Go ahead.
Farbod: I was gonna say, again, we've shouted out George Mason University. I really wanna point out the professor's name. I feel like we often do, but sometimes we miss it. So, this is from Professor Xuesu Xiao. I hope I didn't mess that up. But he commonly goes by Professor XX.
Daniel: Which is super cool.
Farbod: It's super cool. Professor Dr. X, Professor X, X-Men. And the people that work in his lab are referred to as the X-Men. And the lab is called the RobotiXX, like with two X's.
Daniel: Two X's at the end?
Farbod: Yes, the RobotiXX Lab. Super cool. Just wanted to point that out before we jump into the episode.
Daniel: Yeah, I mean, we constantly do a running poll here on the podcast of how well…
Farbod: Something is named or like acronyms.
Daniel: Yeah, things are named or the acronyms are. Let's just go ahead and…
Farbod: Eight and a half.
Daniel: Yeah, I'm gonna say an 8.9.
Farbod: Okay, you one-upped me, I respect that.
Daniel: For this incredible RobotiXX Lab. Love it. But again, just to go back to the context here, right? We wanna get to a world where we can trust robots to drive. People aren't great at driving, but right now, especially in specific situations where the terrain is changing, humans are a little bit better at adapting to changes in terrain than robots are. So, identifying this as a gap where robots aren't proficient yet, understanding what humans are good at in this aspect of driving, and trying to teach that to robots is kind of the nexus of the research here. They're trying to take what humans are good at in driving, teach robots how to do that, and see if they can do it just as well, if not better, than humans.
Farbod: Well, I think it's also worth talking about why that's the case, at least for most robotic applications. So, in this autonomous vehicle robotic segment, you have classical path planning, which means: hey robot, I want you to go from point one to point two, and it just finds the shortest path there and it goes, right? Whereas if you propose the same thing to a human being that's driving a car, and from point one to point two there are patches with a little bit of gravel, a little bit of ice, they won't just cruise at the same speed for the entirety of it. They'll adjust how they're driving to accommodate the scenario that they're currently in and make sure that they get there efficiently, but also safely. Most robots just don't have that yet. They just have that one path planning, A to Z, how can I get there the fastest?
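For readers who want to see that contrast in code, here's a minimal Python sketch of a classical "full speed everywhere" plan versus a terrain-aware plan that scales speed to the surface. The terrain labels and speed caps are illustrative assumptions, not values from the RobotiXX lab's work.

```python
# Minimal sketch (not the lab's actual planner): contrast a naive plan that
# drives every segment at top speed with one that caps speed per terrain.

MAX_SPEED = 5.0  # m/s, hypothetical top speed of a small ground robot

# Hypothetical speed caps per terrain type (fraction of top speed)
TERRAIN_SPEED_CAP = {
    "pavement": 1.0,
    "grass": 0.7,
    "gravel": 0.5,
    "ice": 0.3,
}

def naive_plan(segments):
    """Classical shortest-path behavior: drive every segment at top speed."""
    return [(terrain, MAX_SPEED) for terrain, _length in segments]

def terrain_aware_plan(segments):
    """Adapt the commanded speed to the terrain of each segment."""
    return [(terrain, MAX_SPEED * TERRAIN_SPEED_CAP.get(terrain, 0.5))
            for terrain, _length in segments]

if __name__ == "__main__":
    route = [("pavement", 20.0), ("gravel", 5.0), ("ice", 3.0)]
    print("naive:        ", naive_plan(route))
    print("terrain-aware:", terrain_aware_plan(route))
```

The point of the sketch is only the difference in behavior: the naive plan ignores the surface entirely, while the terrain-aware one trades a little speed for stability on the rougher segments.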
Daniel: And obviously, what does that result in? Right? If you've got a robot that's driving and doesn't know how to change its driving style when the ground changes from smooth to rocky or smooth to slippery…
Farbod: Instability.
Daniel: It results in instability. It could mean the vehicle spins. It could mean the vehicle rocks around a lot. It could mean something as bad as the vehicle crashing and flipping over. Yeah. So, when you're trying to get to a realm where we can trust robots to drive just as well as we can, and to drive better than we can, which, I think, is the hope for autonomous driving, this is a huge hurdle that we need to be able to cross.
Farbod: Yeah, and I think one of the difficult things there is that there are definitely laws and calculations you can embed that tell someone how to do this the right way, but the article makes a point of saying that a lot of this, the way that drivers learn, is by feel. Like over time, you just kind of, or at least most drivers do, acquire this knowledge of how to drive in challenging scenarios.
Daniel: Not drivers from Maryland.
Farbod: Oh, oh, that's a spicy take right there.
Daniel: Well, I'll just say, when we were workshopping the title for this episode before we were shooting, we eventually landed on why this robot is a better driver than the average American, because Americans aren't great drivers. The leading cause of death, non-natural cause of death, for most Americans is car crashes, which makes sense, which is why we've got such a bad rap outside the US as one of the worst driving countries in the world. But being inside the US, I've got a certain hatred, I don't know why, for Maryland drivers.
Farbod: Because we're Virginians. It's a natural way of things, you know? There are bad drivers, they cross state lines. We get upset during our commute. We naturally have to take it out on someone and it's just Marylanders.
Daniel: It's always the people with the Maryland plates doing the bad things.
Farbod: Always.
Daniel: I've got a Reddit comment to back myself up here. Confirmation bias.
Farbod: Share it.
Daniel: Apparently, Baltimore has the worst drivers in the US, because the average driver in Baltimore gets in a collision every 4.2 years and is 1.5 times as likely as anyone else in the country to get in a crash.
Farbod: That's crazy. Let's clip that. I want that to be shared across the internet and get the attention that we need for this spot.
Daniel: Vindicate ourselves as Virginia drivers, but I digress there, right? We need to get back to the topic, but like you're saying, right? Most human drivers have the ability to adapt and to understand what changes in terrain look like and how they feel, especially let's say like if you're driving a new vehicle for the first time, and you go over gravel versus pavement for the first time, you kind of feel the difference there. And then the next time you go through a similar situation, you're able to adequately, appropriately, adjust the speed and the style with which you drive to the terrain that you're driving over.
Farbod: Yeah. And, you know, we've kind of laid out the problem, but it really becomes an issue when you're driving at the speeds that you would for a car, like, you know, 40, 50, 60 miles an hour. One of the reasons Professor XX and the entire lab are so interested in this is because they're fascinated by robotic applications for first responders, right? Where speed is essential. So, if you're gonna have an autonomous vehicle that can do first responder stuff, it needs to be very fast, agile, and also not tip over. Right? So, we've teased it enough, let's get into it. How do they go about this problem? What kind of things do they do? Again, speed is important, so instead of, I don't know, building a robot from the ground up, they were efficient with their time by just getting something off the shelf. And the robot that they got off the shelf is actually a pretty fast one. In terms of dimensions, and I think speed, it's about one eighth of what you would expect for a normal vehicle that's on the road. So, it was a good scale comparison for how this application that they're eventually gonna develop would work in the real world on a car. The next step was applying some sort of a model that would help the robot make the right decisions. The problem was that there was nothing out there that could give you this kind of feedback of, if you're on gravel adjust by X or Y or Z, if you're on grass adjust by X or Y or Z. So that's where the team started doing what they were doing. And I'm gonna hand the mic over to you.
Daniel: Yeah, well I think you've teed up the problem well here. You've got this out-of-the-box robot. You're ready to start teaching it how to drive better, but you don't have a vast data set yet to tell you what good and bad driving looks like, especially for a robot of this size, with this package, at this speed. So, what they did is they just instrumented the heck out of this robot, right? They added a bunch of cameras and sensors that, they said, enabled the robot to see. So, there were visual sensors to understand what's going on in the terrain around it, and then also to feel the ground. And that's similar to the way that we don't actually feel the ground when we're driving, but we have some level of feedback through the steering wheel and through the inertia of the way that we're driving to feel what the road is doing back to the vehicle, how the road is reacting to the vehicle. That's something we can perceive as part of the vehicle system when we're the driver. So, they added a bunch of inertial measurement sensors. I like to think of them like the canals inside your ear that help you understand balance, and help the driver understand when we're being tilted one way or the other, when the G-forces change in angle or speed. They added a bunch of inertial sensors to this robot to try and understand, again, when you're driving on different terrains at different speeds, with different driving styles, with different levels of aggressiveness, what does it look like and what does it feel like for the robot to be driving over those different terrains. Now that they had the sensors, they were able to go out and try this, right? Go do this in the field, collect a bunch of data, and then use that to train the model like you were saying, so they can teach the robot how to drive better.
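To picture what that instrumentation produces, here's a rough data-logging sketch: pair what the robot sees (a camera frame) with what it feels (an IMU reading) at each timestep, so the recordings can later be used for training. The read_camera and read_imu helpers are hypothetical placeholders for whatever drivers the actual hardware uses, not the lab's code.

```python
import time

def read_camera():
    # Hypothetical placeholder for the real camera driver; returns a dummy frame.
    return b"<frame bytes>"

def read_imu():
    # Hypothetical placeholder for the real IMU driver;
    # returns (angular velocity xyz in rad/s, linear acceleration xyz in m/s^2).
    return (0.0, 0.0, 0.0), (0.0, 0.0, 9.81)

def record_run(duration_s=10.0, rate_hz=50.0):
    """Collect timestamped (frame, imu) pairs for one driving run."""
    samples = []
    period = 1.0 / rate_hz
    start = time.time()
    while time.time() - start < duration_s:
        samples.append({
            "t": time.time() - start,
            "frame": read_camera(),  # what the terrain looks like
            "imu": read_imu(),       # how the terrain feels
        })
        time.sleep(period)
    return samples
```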
Farbod: I was trying to think about how I would explain these inertial measurement sensors to our listeners. And like, chime in if you don't agree. But I think the best example is when you're slipping on ice and you start to feel your car tilt a little bit as you're slipping. Because the inertial measurements are angular velocity and acceleration. So as your car is starting to shift, you feel like, OK, now I should be turning this way to correct for it and maybe pump the brakes a little bit, or do whatever. And now I can take that input and turn it into a meaningful output to self-correct again.
Daniel: And especially if you've never done that before, or you go around a really tight turn for the first time while you're driving, you definitely feel this, almost like a pit in your stomach. You can feel it when your angular acceleration changes. And that's exactly what they're trying to teach the robot to perceive as well, which is: to whatever extent the terrain impacts that, how do we make sure that you're having a smooth and safe ride? And you need to be able to feel whether the ride is smooth and safe to train the robot to do that.
Farbod: Because it's not enough for you to just know that you're on ice or you're on gravel. You have to have that extra layer of feel feedback to drive correctly on ice or on gravel or whatever.
Daniel: Yeah, for sure.
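One simple way to turn that "feel" into a number a robot could act on is an instability score built from angular rate and acceleration, roughly in the spirit of what's described above. The thresholds below are assumptions for illustration, not the team's actual values.

```python
import math

# Illustrative thresholds (assumptions, not from the paper)
GYRO_LIMIT = 1.5    # rad/s of combined roll/pitch rate treated as "too much"
ACCEL_LIMIT = 4.0   # m/s^2 of deviation from gravity treated as a jolt
GRAVITY = 9.81      # m/s^2

def instability_score(gyro_xyz, accel_xyz):
    """Rough scalar: near 0 means a calm ride, above 1 means it feels rough."""
    roll_pitch_rate = math.hypot(gyro_xyz[0], gyro_xyz[1])
    accel_magnitude = math.sqrt(sum(a * a for a in accel_xyz))
    jolt = abs(accel_magnitude - GRAVITY)
    return roll_pitch_rate / GYRO_LIMIT + jolt / ACCEL_LIMIT

def is_unstable(gyro_xyz, accel_xyz):
    """Flag a sample as 'bad driving' when the score crosses 1.0."""
    return instability_score(gyro_xyz, accel_xyz) > 1.0

if __name__ == "__main__":
    print(is_unstable((0.1, 0.05, 0.3), (0.2, 0.1, 9.8)))   # smooth ride -> False
    print(is_unstable((2.5, 1.0, 0.2), (3.0, 1.0, 14.0)))   # slipping/tilting -> True
```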
Farbod: So now that they have this robot and they've equipped it with all these sensors, what's the natural next step? They put it out into the wild and they kept driving it and tipping it over and having it go through all this trial and error.
Daniel: They had a bunch of Maryland drivers get behind the wheel.
Farbod: They actually made a listing that said, hey, if you're a driver in Maryland, we would love to have you as a part of our team to drive this robot.
Daniel: Not really. They made the robots drive around a lot, right? They're trying to use all these new sensors to collect a bunch of data. They let it, like you said, crash into things. They let it flip over. They want to understand what good driving looks like and what bad driving looks like, and collect all the data to be able to classify the two. This provided a deep learning data set for the robot to be able to learn from its own mistakes.
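Conceptually, that labeling step could look something like the sketch below, which reuses the record_run and is_unstable helpers from the earlier sketches. It's an illustration of the general "learn from your own rough rides" idea, not the lab's actual pipeline.

```python
def build_training_set(samples):
    """Label each logged (frame, imu) sample as stable (0) or unstable (1) driving."""
    dataset = []
    for s in samples:
        gyro, accel = s["imu"]
        label = 1 if is_unstable(gyro, accel) else 0  # is_unstable from the earlier sketch
        dataset.append({"frame": s["frame"], "imu": s["imu"], "label": label})
    return dataset
```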
Farbod: Right, and it was the first data set of its kind that could be used to, again, teach it how to drive right. So now that you have this collection of inertial measurements and the context of where these incidents happened, or what those measurements looked like as the robot went through these different areas, they could start embedding that into the path planning algorithm. And that's pretty much exactly what they did. So, what did that result in? Did it actually work?
Daniel: They made this awesome machine learning model, like we said, taking in the visual cues, being able to identify: what type of terrain am I about to cross over? What type of terrain am I on right now? Also, the feel of the road, right? What are the inertial measurements of the car, what's the angular velocity, what are the accelerations of the car, what do they feel like? And then use that to adapt the driving strategy. So, to reduce the speed or to change the route, like you said, driving more intuitively based off of the conditions around it and how that might impact the safety of the ride, using that to modify the route planning and the driving style that the robot might have. And like you said, to be better than the, let's say, traditional route planning of a robot, which will drive as fast as it can from point A to point B to point C, now being able to modify that to control for safety.
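Putting those pieces together, the perceive-and-adapt loop described here might look roughly like the sketch below: classify the terrain from the camera, check how the ride feels from the IMU, and scale the commanded speed. classify_terrain is a hypothetical placeholder standing in for the lab's learned vision model, and the scaling reuses the illustrative TERRAIN_SPEED_CAP and is_unstable helpers from the earlier sketches.

```python
def classify_terrain(frame):
    # Hypothetical placeholder for a learned vision model; always says "gravel" here.
    return "gravel"

def adapt_speed(frame, gyro_xyz, accel_xyz, max_speed=5.0):
    """Pick a commanded speed from what the robot sees and what it feels."""
    terrain = classify_terrain(frame)
    speed = max_speed * TERRAIN_SPEED_CAP.get(terrain, 0.5)  # caps from the earlier sketch
    if is_unstable(gyro_xyz, accel_xyz):  # the ride already feels rough:
        speed *= 0.5                      # back off further
    return speed
```

The design choice this sketch is meant to highlight is the same one the episode describes: vision anticipates the terrain ahead, the inertial "feel" catches what vision misses, and both feed into how fast the robot is willing to go.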
Farbod: Yeah. And the numbers are pretty impressive. So, they noticed, whether the robot was driving fully autonomously or with some human involvement, a 62% reduction in instability. So, you're cutting these instances of the robot falling over or slipping or whatever by more than half. And they only sacrificed an average of 8.6% of speed. So, you're not slowing down a whole lot. You're not, I don't know, crawling from point one to point two. But you're doing it much, much safer. And what's interesting about their platform, this model that they've come up with, is that, like most algorithms, as the unit drives, it will get better because it has more data to learn from. But if you think about this on a larger platform where there are multiple robots driving in different areas in different conditions, they can also start sharing that data with each other and learning better over time. And as the cherry on top, Professor XX and team think that they can actually use this exact same model and apply it to aerial drones and marine drones as well. So, there's turbulence in the air.
Daniel: Yeah.
Farbod: How to accommodate that well and then waves and stuff in the water. Yeah.
Daniel: No, I think that's sweet. And one of the things we kind of skipped over, but I think it's important to mention here, is that the 62% improvement, I think it was 62, right? That 62% instability reduction applied also when you had humans in the loop. So, it's not just that robots driving on their own were so much worse that we were able to make a 62% improvement. They were also able to get a 62% improvement when humans were involved in the route planning and the steering. So again, this is an immediate takeaway, where you can say, like, for my brother, who's a firefighter and has to drive a fire engine as fast as he can to get from the fire station to the site of an emergency: could a system like this be applied for him driving the fire truck and, again, get an outsized return, only reducing the average speed by 8% while getting a 62% reduction in unsafe incidents and instability in the ride? That sounds like a win-win without compromising too much on speed. And you can also do it with humans in the loop. So, I think that makes this more ripe for implementation right away, as opposed to waiting until everything on the road is completely autonomous to be able to apply something like this. This works when humans are a part of the driving scenario as well.
Farbod: I totally agree. And with that, we've kind of done the "so what." The next step that I would like to see from this team, to add to the so what, is testing it on an actual car, or something that could be used for this first responder solution that they have in mind, and then seeing how that data looks in comparison to what they've been able to accomplish with this, like, one-eighth scale robot.
Daniel: Yeah, I agree, right? For me, the logical next step is to work your way up in scale until we get to a position where we're driving real life-size fire trucks around with an algorithm like this and can build a lot of trust in the way that it works.
Farbod: Yeah, Professor XX, if you're listening and the rest of the robotics lab, please give us another reason to cover Mason again and come up with the full-scale version.
Daniel: And also, if you are listening, we would love to come visit and film a bunch of cool content of what you guys are working on.
Farbod: Absolutely. Why don't you give us the TLDR, the juicy bits of the sauce about today's episode.
Daniel: Yeah, I'll wrap it up here. We think this robot can drive better than the average American. Americans are known as some of the worst drivers in the world. The leading cause of non-natural death for Americans is car crashes. As an American, I'm proud to say bad driving exists everywhere, not just in the US, but robots from George Mason University are here to fix bad driving no matter where it comes from. But robots aren't quite ready to replace drivers just yet. They especially get confused when moving from smooth to rough ground, think going from pavement to gravel. This causes slowdowns, and it causes accidents when robots are driving. But here's the fix: scientists from GMU are using cameras and sensors. They're teaching the robot to collect a bunch of data, to see and feel the terrain around it, and then to handle that better through its driving. The cool outcome there: they've made robots that can drive 62% safer while only compromising around 8% of their speed. And they're trying to share these learnings with other robots to make a bunch of robots better drivers in every application, on the road, in the air, in the sea. And that's why we say these robots can drive better than the average American.
Farbod: Yeah. And that we're right. They can't. We have the data. We can prove it.
Daniel: Yeah.
Farbod: I think that that's pretty much it, folks. Thank you so much for listening. And as always, we'll catch you in the next one.
Daniel: Peace.
As always, you can find these and other interesting & impactful engineering articles on Wevolver.com.
To learn more about this show, please visit our shows page. By following the page, you will get automatic updates by email when a new show is published. Be sure to give us a follow and review on Apple podcasts, Spotify, and most of your favorite podcast platforms!
--
The Next Byte: We're two engineers on a mission to simplify complex science & technology, making it easy to understand. In each episode of our show, we dive into world-changing tech (such as AI, robotics, 3D printing, IoT, & much more), all while keeping it entertaining & engaging along the way.