Artificial intelligence technology has seen considerable growth in trucking in recent years, with more and more fleets adopting technology that uses machine learning to simplify or enhance certain aspects of the business. That includes some of the processes happening in the trucks themselves.
Jason and Matt are joined this week by Dr. Stefan Heck, CEO of AI camera company Nauto, who talks about how AI cameras in the cab can benefit fleets and drivers, from reducing crashes to improving overall driver safety.
Heck also explains how Nauto's system is designed to be less intrusive on driver privacy than a traditional driver-facing dash cam.
Contents of this video
00:00 Using artificial intelligence to help human drivers
01:51 Optional video recording
03:34 Collision reduction
08:04 Minimizing driver alerts
10:33 Privacy issues with driver-facing cameras
13:44 How the artificial intelligence works
15:20 Driver coaching
Jason Cannon
This week's 10-44 is brought to you by Chevron Delo 600 ADF ultra low ash diesel engine oil. It's time to kick some ash.
Matt Cole
How AI tech can help predict and prevent crashes.
Jason
Hey, everybody, and welcome to the 10-44, a weekly webisode from the editors here at CCJ. I'm Jason Cannon and my co-host on the other side is Matt Cole. This week, we're going to talk about artificial intelligence technology, which has seen considerable growth in trucking in recent years, with more and more fleets adopting technology that uses machine learning to simplify or even enhance certain aspects of their business.
Matt
That includes some of the processes happening in the trucks themselves. Some companies have been working to develop AI-driven technology that helps drivers remain safe out on the highways. Joining us this week on the 10-44 is Dr. Stefan Heck, CEO of Nauto, who has been working on such technologies for the better part of seven years, with the goal of not replacing the driver, but of supplementing the driver's skills.
Dr. Stefan Heck
I started Nauto about seven years ago now, in 2015, really with a very simple vision: how do you use AI and computer vision to help human drivers detect risks, stay safe, get home safely at night? And we're using some of the same technologies and capabilities that you'd find in an autonomous vehicle, but the philosophy is very different, right? It's not about replacing the driver, because humans have some amazing skills, lots of experience, years and years of history, knowledge, context. So you want to keep that and add the capability of computer vision that never falls asleep and has 360-degree peripheral vision in a way that humans don't. So that's what we're about. Our goal is to give the driver, as they're driving, the right data, the right information, the right feedback so they can take the right action before something bad happens. So it's really important.
Jason
Stefan says that while Nauto's technology operates much like other AI cameras, actually using the Nauto system for video recording is optional for the driver or the fleet. It can be used only to give the driver real-time alerts if video isn't wanted.
Dr. Stefan Heck
A lot of people talk about cameras in this space now, and the image sensor we use for the computer vision is the same kind of image sensor that goes into a camera, but video for us is purely optional. You can run our system with just the real-time warnings and feedback, the advice to the driver, with no video ever being recorded. And that's a distinction that I think is important to make, because it's very blurred out there. A lot of people had dash cameras, and then they've added a little bit of AI magic on the back end to it. So they'll capture video, and then run it through some AI, and do some analysis on it. We're the opposite. We're running everything in real time through the AI in the truck as you're driving, because our goal is to tell you between two and five seconds before something goes wrong that you're running a risk.
And then whether or not you want a video of that event, that's up to the driver or up to the fleet. It can be helpful to have a video to exonerate yourself obviously, and to prove that it wasn't your fault. Because we generally find the vast majority of accidents and collisions that trucks get into are not actually the truck driver's fault. So they can be helpful, but we also understand the concern about you don't want interior recording of you picking your nose, because that's embarrassing and it's not really critical for the safety mission. So for us, what incidents or what kind of events are captured on video is a completely configurable optional choice.
Matt
When it comes to collision reduction, Stefan says technologies like automatic emergency braking reduce collisions by around 20%, while Nauto's system has been shown to reduce collisions by as much as 60%.
Dr. Stefan Heck
Really important in our approach is that you've got to look at the outside, as many safety technologies do: automatic emergency braking, forward collision warning, the obvious ones, right? Don't rear-end the vehicle in front of you. We also look with the same technologies on the inside, because it's well understood that when you are at fault for a collision, it's usually a human error, right? You judged the distance wrong, or the timing wrong, or you didn't see something. Now, the real risks actually are combinations of inside and outside, right? So if there's a pedestrian crossing and you didn't see the pedestrian, you didn't pay attention, because maybe you were sleepy, maybe you were looking down at your phone. Those are the kinds of things where AI can really help, because AI doesn't fall asleep and doesn't get distracted by texting. So it'll see the pedestrian, and it'll see that you didn't see the pedestrian, and that's where we intervene.
And we generally get really positive feedback from drivers. You can see the drivers nodding, or you can see them wave and say, "Hey, thank you." Again, in the fleets that have chosen to capture video, that's where we see that. If there's no video on, you'd never see anything, you'd just hear the feedback. I've talked to several drivers that said, "Look, Nauto saved my life. I wouldn't be here today, because I would've gotten into a major collision and I might not have survived." So our goal is really to give the driver that critical couple seconds of time to hit the brakes, swerve, change something about what they're doing or where the truck's going. And the results are phenomenal. If you look at Insurance Institute for Highway Safety data for automatic emergency braking, the best forward collision warning technology out there today, it's about a 20% reduction in collisions, which is huge, right?
20% starts to get into the realm of the seatbelt and the brake, other major safety innovations. Much bigger than airbags or backup cameras or ABS systems, which were really small, incremental. Also good technologies, I don't mean to knock them, but they're kind of small, incremental reductions in risk. And if you look at what we can do with a truck fleet, we can generally get you a 60% reduction. So it's triple. And again, it's that magic combination of really good drivers who know how to be great. We're basically making sure they don't have blind spots, and we're using them to judge what should be done in the situation, but making sure they're aware of the risk, and it's magic.
The other thing that is really unique about Nauto is how fast it happens. Because all you're doing is really changing driver awareness, you can surface those risks within the first day. And we see drivers change their driving behavior within 72 hours. And you can often reduce a significant portion of the risk in the first month. That's very different from traditional great safety innovations as well. Smith System or video coaching take many, many months to get you safety improvements, right? So this is a bigger effect, and faster, with no privacy intrusion whatsoever.
Jason
Now for drivers in the cab, Nauto is designed to give as few alerts as possible, so it doesn't distract the driver, and to send alerts only when necessary. We're going to hear more about that after a word from 10-44 sponsor, Chevron Lubricants.
Protecting your diesel engine and its aftertreatment system has traditionally been a double-edged sword. The same engine oil that is so essential to protecting your engine's internal parts is also responsible for 90% of the ash that is clogging up your DPF and upping your fuel and maintenance costs. Outdated industry thinking still sees a trade-off between engine and emission system protection, and Chevron was tired of it. So they spent a decade of R&D developing a no-compromise formulation. Chevron Lubricants developed a new ultra-low-ash diesel engine oil that is specifically designed to combat DPF ash clogging. Delo 600 ADF with OMNIMAX technology cuts sulfate ash by a whopping 60%, which reduces the rate of DPF clogging and extends DPF service life by two-and-a-half times.
And just think what you can do with all the MPGs you're going to add from cutting your number of regens. But Delo 600 ADF isn't just about aftertreatment. It provides complete protection, extending drain intervals by preventing oil breakdown. Before, you had to choose between protecting your engine or your aftertreatment system, and now you don't. Delo 600 ADF with OMNIMAX technology: it's time to kick some ash.
Dr. Stefan Heck
Our design goal, first of all, is always to be useful and relevant and tell you something you don't know, right? So beeping at you when you change lanes, or when you're braking because the vehicle in front of you came to a stop, is not useful. It's just annoying and distracting, and eventually you hate it. So our design goal is to intervene as sparsely, as rarely as possible, only when you go, "Oh my God, I didn't see that." That's when, first of all, you find it helpful, but secondly, you learn, because you go, "Oh my god, I missed that." So our goal really is to give you, if you're an experienced driver, a couple interventions per day, many of which are going to be caused by mistakes that others have made, somebody else running a stop sign or cutting in front of you, and moments when there was a danger that you didn't see.
That's really the essence. And then our interventions take two forms. If it's an imminent threat, like you're about to hit somebody or something, it's an all-out alarm, because of the acoustic psychophysics of how your ear works, loud alarms get the fastest response time. The only thing faster than that is if we were actually tapping the brake, because that immediately makes you focus forward. We're going to do that in our integrated programs. We're working with truck makers like Navistar to integrate, and also BrightDrop for light vans. And there, we'll be able to do that kind of full integration. Right now, with auditory feedback, it's an alarm, because that's what makes you pay attention the fastest. And in that situation where there's an imminent collision, literally every 10 milliseconds counts. And so, we're trying to basically get the alert down to 20, 30 milliseconds, and then it does take the driver about 0.7 seconds to react and respond and hit the brakes.
So we need to give you four or five seconds so that you still have enough time to then avoid the collision. Now, if it's a high risk but not yet an imminent collision, say a pedestrian stepped off the sidewalk but isn't actually in your path of travel yet, or the light's turning red and you're looking down at a map or your GPS system, so you didn't see it, then we use voice feedback. We worked a lot with an audio design firm so that it's really clear to hear and really clear in what you need to do. So it will say, "Pull over to use your phone," or "Slow down, leave more following distance," or "Warning, red light ahead," so that it gives you prescriptive guidance on what you should do to reduce that risk.
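The timing budget Heck describes works out to simple arithmetic. A minimal sketch, using the figures he cites (the function name and the 5-second example are illustrative, not Nauto's actual implementation):

```python
# Back-of-the-envelope alert timing budget, using the figures Heck cites.
ALERT_LATENCY_S = 0.03    # system detects the risk and sounds the alarm (~20-30 ms)
DRIVER_REACTION_S = 0.7   # typical time for the driver to react and hit the brakes

def warning_lead_time(time_to_collision_s: float) -> float:
    """Seconds left for an avoidance maneuver after alert and reaction delays."""
    return time_to_collision_s - ALERT_LATENCY_S - DRIVER_REACTION_S

# With a 5-second warning, roughly 4.27 s remain to brake or swerve.
print(round(warning_lead_time(5.0), 2))  # 4.27
```

This is why the system aims to warn four to five seconds out: subtracting alert latency and human reaction time still leaves several seconds for the maneuver itself.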
Matt
It's worth noting, for drivers concerned about Big Brother watching them in the cab, that Stefan says unless the driver or fleet chooses to use video recording, the Nauto system uses its image sensors only for real-time processing.
Dr. Stefan Heck
They are image sensors, like what you'd have in a camera. We use Sony image sensors, so it's like what's in your cell phone, but it's just the real-time processing. The equivalent is what happens in a drone while it's flying. If you use a modern DJI drone, it's got this object detection. It knows when you're about to fly it into a tree or a wall, and it stops and backs off. You may or may not be recording that video, right? You might just be flying for fun, but it'll still use that image sensor to understand it's about to run into something. And then, just like on your drone, whether you want to record is a separate choice. So we have a button where the driver can actually say, "I want to record this," and it'll go back and capture the previous couple seconds and then also what happens next.
So if you get threatened by somebody, or there's road rage, the driver can hit that button and say, "I want that on video." But the rest of the time, it's just processing in real time. It's not recording, but to the driver it does look like a camera, because it is an optical sensor that we're using to see in color, since you need to be able to see red lights and sign colors and so forth. So we use a color image sensor for that. There are some things we do on the design side. As I mentioned already, we don't ever take away control from the driver. I think that's fundamental. The creepiest kind of AI, when you talk about RoboCop or Terminator, is the AI that's gone crazy and does stuff on its own, right? And that's the nightmare scenario. So we don't allow the AI to take control away and steer on your behalf or anything like that.
I think that's key. The second thing is really about privacy. None of us feels uncomfortable that my car's got a radar that tells me how far away I am from the vehicle ahead, and it's using the radar to hit the brakes or slow down if I'm too close. In the same way, we're using the computer vision to say, hey, there's a pedestrian in front of you, hit the brakes. That does not require recording anything, and that's really where the privacy element becomes important. There are competitive camera products that have these drop-in capabilities where your supervisor can watch what you're doing in real time. I find that super creepy. That's George Orwell, right? Worst scenario, you don't even know that they're doing it. There's no indication for you, but they're watching, and they may be watching you during your break. We don't allow that at all.
So our technology makes sure the driver knows when there's an event first, and then either the driver explicitly says, "I want to record this," or they are notified that the fleet has chosen to record certain events, for example actual collisions. That's typically what people want videos of, for that exoneration purpose. And also if the driver's hurt, for example, the fleet then knows, hey, I've got to call an ambulance to get help as quickly as possible. So the most common thing we wind up enabling recording for is actual collisions.
Jason
What AI in these scenarios is doing is basically the same thing a human does, but without the human errors.
Dr. Stefan Heck
AI is a fancy word for really doing the kind of perception that humans do, right? What we're doing is calculating: okay, is there a vehicle or a pedestrian in front of you, or a stop sign coming up? How far away is it? What's the speed of my vehicle, and when am I going to get there? And if the pedestrian's crossing, are they going to get across the road before I've reached that spot? It's pretty simple math. The fancy term AI is really because I can do that with an automated system that perceives and identifies. And it's the same thing on the inside. We're using AI to translate from a visual image of who the person is, and we throw away that visual image and extract where you are looking. Did you look at that stop sign? Did you look at that pedestrian? We also extract your state of alertness, right?
So if I start doing this, yawning a little bit, my eyes may not be closed yet, but I'm definitely starting to fall asleep, right? And then, of course, we intervene and say, pull over to take a break, before you actually get to that state. Because at that point, it's too late, right? If you're driving at 65 miles per hour with your eyes closed, you're immediately in acute danger. So we try to intervene before that. And the yawning, and people shaking their heads like this, these are all signs that they're starting to feel sleepy. They may be aware of it and know they need to take a break, or they may not. And that's where, again, that highlighting for them, hey, you're drowsy and you should take a break, is really important.
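The "pretty simple math" Heck describes, comparing the vehicle's arrival time at a crossing point against the pedestrian's clearing time, can be sketched roughly like this (function name, numbers, and the 1.4 m/s walking speed are illustrative assumptions, not Nauto's code):

```python
def will_paths_conflict(distance_m: float, vehicle_speed_mps: float,
                        ped_remaining_m: float, ped_speed_mps: float) -> bool:
    """True if the vehicle reaches the crossing point before the
    pedestrian has cleared it, i.e. a collision risk worth alerting on."""
    vehicle_eta = distance_m / vehicle_speed_mps        # when the truck gets there
    ped_clear_time = ped_remaining_m / ped_speed_mps    # when the pedestrian is clear
    return vehicle_eta <= ped_clear_time

# Truck 60 m out at 20 m/s arrives in 3 s; a pedestrian with 6 m left
# at 1.4 m/s needs ~4.3 s to clear, so the paths conflict.
print(will_paths_conflict(60, 20, 6, 1.4))  # True
```

A real system would fold in uncertainty, braking distance, and the driver's gaze state, but the core comparison is exactly this kind of arrival-time arithmetic.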
Matt
In addition to the in-cab alerts that drivers get in real time while going down the road, Nauto also offers a driver coaching option that allows fleet managers to review certain events and see where their drivers need extra training.
Dr. Stefan Heck
Coaching is an option on the system, because you can say, "Hey, I want to see near misses," for example, or I want to see the... Usually it's a very small percentage of drivers that fall asleep, 1% or 2% of drivers have drowsiness issues. I want to just watch those drivers and I want to help them. And sometimes they may have a health problem that's causing them to fall asleep, and so you can actually get them help and address that. So that's an option so that you can do that offline coaching. It's all triggered, though, by the real-time events. It's then uploaded and can be used later for coaching. We also, on the fly as you're driving, calculate a continuous risk score from zero to 100, and the drivers can see that. And if it's enabled, the supervisor can see that as well.
That's kind of an aggregate measure. So for the driver, at the end of your trip, for example, you can ask, how'd I do? And it'll tell you, oh, that was an 85, so that's a pretty good trip. If it tells you that was a 20, then you know you probably need to work on that, and you're at risk of not coming home at night if you keep that up. For large fleets, it's very helpful. We can aggregate that across the fleet, so you can actually see, oh, my operation in California is higher risk than my operation in Nebraska. And then, of course, for the total fleet as well, you can set targets and say, I want to be... Usually, once a fleet's in the 85 to 90 range, they're great. They'll have very low risk. But we have a lot of fleets start in the 40, 50 range, and they'll have some drivers down in the 20s. Those are the areas you want to work on, because that's going to be a big part of your risk.
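Nauto's actual scoring model isn't public, but the kind of fleet-level aggregation Heck describes, averaging per-driver scores and flagging outliers for coaching, can be sketched like this (the scores, driver IDs, and the coaching threshold are all made up for illustration):

```python
from statistics import mean

def fleet_risk_summary(driver_scores: dict[str, float], target: float = 85.0):
    """Average per-driver risk scores (0-100, higher is safer) and flag
    drivers far below the fleet target as coaching candidates."""
    avg = mean(driver_scores.values())
    # Hypothetical rule: flag anyone more than 30 points under target.
    flagged = sorted(d for d, s in driver_scores.items() if s < target - 30)
    return round(avg, 1), flagged

# Example fleet: California drivers scoring low, Nebraska drivers scoring high.
scores = {"CA-01": 45, "CA-02": 20, "NE-01": 88, "NE-02": 90}
print(fleet_risk_summary(scores))  # (60.8, ['CA-01', 'CA-02'])
```

Grouping the same scores by region would surface the California-versus-Nebraska comparison Heck mentions; the point is that a single continuous score rolls up cleanly from trip to driver to operation to fleet.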
Jason
That's it for this week's 10-44. You can read more on ccjdigital.com, and as always, you can find the 10-44 each week on CCJ's YouTube channel. And if you've got questions, comments, criticisms or feedback, please hit us up at [email protected], or give us a call at 404-491-1380. Until next week, everybody stay safe.