To start, what is a Cabin Monitoring System? Cabin Monitoring Systems (CMS) is a top-level term primarily covering, but not limited to, camera-based functions within the cabin (interior) of a vehicle. It can be divided into two major sub-categories: Driver Monitoring Systems (DMS) and Occupant Monitoring Systems (OMS). Limited DMS/OMS functionality has been around for some time. Buses and other public transportation systems have used security cameras to monitor passengers as well as operators for years. There are also several products today that simply use cell phone cameras or tablets in commercial trucks to track drivers' and operators' actions, including emotions. Today, OEMs such as Subaru, Mercedes, and Cadillac all offer some form of DMS or OMS features on their production vehicles. So where is CMS headed? Is this just a form of vehicle-based "big brother," or is it something much greater?
To understand the direction and velocity of CMS, we need to first look at the technology changes of the last three years, and then at what those changes can do for both an OEM and the consumer. Technology for CMS is much more than just an updated or better camera, although automotive-grade cameras have come a long way: from 1.2 MP to 8K resolution in less than three years. ISP (Image Signal Processing) is vastly improved, and it's now available on all SoCs (Systems on a Chip) produced by manufacturers who want to sell into the automotive space. Infrared cameras, which three years ago carried a price point far too high for production-vehicle volumes, are now low cost. So what about the rest of the technology that processes and uses the camera images captured?
Streaming images are the starting point. To get streaming video at proper rates and resolutions, several options are now available to the developer, including GigE. This is a vast improvement from just three years ago, when an integrator or OEM had limited choices. If you look at current SoCs on the market or coming up, several are designed to handle eight 4K cameras. Yes, that's right, eight! Additionally, all have their own flavor of ISP available, as well as optimization and performance management tools.
For software, things continue to move along at light speed for Machine Learning (ML) and Deep Learning (DL). Networks have become more optimized, better libraries bring increased speed and efficiency, and new training data sets come on the market regularly. As an example, no longer do developers have to struggle with the well-known training data issues around race and gender. Today, an entry-level engineer can quickly bring up a solution with age, emotion, and gender detection. If they go beyond open source and buy a commercial training set, the accuracy quickly comes up to something reasonable. Face detection, while mature for some time, is now readily available in easy-to-implement solutions. Ask any software engineer who's graduated in the last year if they studied computer vision. If the answer is "yes," they will tell you how a face is represented by roughly 80 landmark points, give or take, depending on the method they used. They can then follow up with the open-source tools and what can be done with them.
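To make that concrete, here is a minimal sketch of facial landmark detection with open-source tools. It assumes dlib and OpenCV are installed and that dlib's publicly available 68-point shape predictor model file has been downloaded; the image path is just an illustration.

```python
# Minimal sketch: detect a face and its 68 landmark points with open-source tools.
# Assumes dlib, OpenCV, and the downloaded shape_predictor_68_face_landmarks.dat model.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()                     # HOG-based face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # 68-point model

frame = cv2.imread("driver.jpg")                                 # single frame for illustration
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    landmarks = predictor(gray, face)
    # Each of the 68 points maps to a facial feature (jaw, brows, eyes, nose, mouth).
    for i in range(68):
        x, y = landmarks.part(i).x, landmarks.part(i).y
        cv2.circle(frame, (x, y), 2, (0, 255, 0), -1)

cv2.imwrite("driver_landmarks.jpg", frame)
```

From those landmark points, eye, mouth, and head-pose features can be derived, which is exactly where basic drowsiness and distraction detection start.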
Both images and other forms of data, once captured and processed, can be temporarily or permanently stored locally in the vehicle, as well as sent to the cloud for additional insights. These actions are now seamless and can be done with vehicle telematics or just a simple WiFi dongle bought from any cellular provider.
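As a rough illustration of that store-and-forward pattern, here is a minimal sketch that buffers processed CMS results locally and uploads them when a link is available. The endpoint URL, database name, and use of the `requests` library are assumptions for illustration; a production vehicle would go through the OEM's telematics stack.

```python
# Minimal sketch: buffer processed CMS results locally, upload when connectivity exists.
# The URL and schema below are illustrative assumptions, not a real service.
import json
import sqlite3
import requests

DB = sqlite3.connect("cms_events.db")
DB.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT, sent INTEGER DEFAULT 0)")

def record_event(event: dict) -> None:
    """Store a processed CMS result (not raw video) locally first."""
    DB.execute("INSERT INTO events (payload) VALUES (?)", (json.dumps(event),))
    DB.commit()

def flush_to_cloud() -> None:
    """Upload any unsent events whenever a telematics or WiFi link is available."""
    rows = DB.execute("SELECT id, payload FROM events WHERE sent = 0").fetchall()
    for row_id, payload in rows:
        try:
            resp = requests.post("https://example.com/cms/events",
                                 json=json.loads(payload), timeout=5)
            resp.raise_for_status()
            DB.execute("UPDATE events SET sent = 1 WHERE id = ?", (row_id,))
            DB.commit()
        except requests.RequestException:
            break  # keep buffering locally until the link comes back
```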
So now that we have discussed low-cost, capable hardware and advanced software tools to create some really great things, what stands in our way?
First and foremost is portability. When a CMS feature is developed, the process typically goes like this: the business side makes a feature selection; the technical team develops the high-level architecture; then both sides fiercely negotiate a hardware selection. After the SoC is selected, the remaining work, which is the bulk of the development cycle, is done against that specific part. Once an OEM or tier 1 commits to a platform, re-use on another platform is nearly impossible. So if an OEM selects Qualcomm for one model generation and Texas Instruments for the following vehicle generation, everything done after SoC selection must be redone. This is both costly and time consuming for everyone involved. OEMs need to demand standardization of tools down to the SoC level if they are going to compete in the market with edge computing functionality and rapidly evolving software.
Next is the development cycle itself. Today, it still takes a reasonable group of engineers a reasonable amount of time to develop a basic feature such as drowsiness detection. Many people don't understand this when a single engineer fresh out of school can grab some open-source software, have a basic model running in a few weeks, and show it running on YouTube. What's not said is that the quick model on YouTube is not accurate enough for anything beyond a narrow demo, is not running on the low-cost SoC an OEM requires, and is not optimized to run alongside other networks or software functions. And that is before it is made automotive grade and safety compliant. However, the concept of rapid development is accurate, and it is what we as an industry should target. The development cycle today is still just too long for embedded solutions when compared to the speed at which software evolves.
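For a sense of what that "quick demo" level of drowsiness detection looks like, here is a minimal sketch of the widely used eye-aspect-ratio approach, assuming 68-point landmarks (as in the earlier sketch) are already available for each frame. The threshold and frame-count values are illustrative, not calibrated, and this is nowhere near automotive grade.

```python
# Minimal sketch: eye-aspect-ratio (EAR) drowsiness check over 68-point landmarks.
# Thresholds below are illustrative assumptions, not calibrated values.
from math import dist

class DrowsinessMonitor:
    EAR_THRESHOLD = 0.25        # eyes considered closed below this ratio (illustrative)
    CLOSED_FRAMES_LIMIT = 48    # roughly 1.6 s of closed eyes at 30 fps (illustrative)

    def __init__(self) -> None:
        self.closed_frames = 0

    @staticmethod
    def eye_aspect_ratio(eye):
        """eye: six (x, y) points for one eye, in 68-point model order."""
        return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

    def update(self, landmarks) -> bool:
        """Call once per frame with all 68 (x, y) points; True means raise a drowsiness alert."""
        ear = (self.eye_aspect_ratio(landmarks[36:42]) +      # left eye: points 36-41
               self.eye_aspect_ratio(landmarks[42:48])) / 2.0  # right eye: points 42-47
        self.closed_frames = self.closed_frames + 1 if ear < self.EAR_THRESHOLD else 0
        return self.closed_frames >= self.CLOSED_FRAMES_LIMIT
```

In a real program the landmark points would come from the live camera loop, and the thresholds would have to be tuned and validated against real driving data before anyone relied on the output.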
Now that the issues of embedded software development have been discussed, what can the consumer expect? Where is CMS headed in the next few years with this low-cost hardware but still-lumpy software development cycles? I'll take a few lines to discuss top-level use-cases.
What is the driver really doing and who cares?
The answer is everyone cares. Until L5 autonomous vehicles arrive at your doorstep in volume, a driver is operating and in control of a one- to two-ton piece of marvelous machinery that can accelerate and stop at blinding speed and can crush other objects, including cars, buildings, and of course people in or outside the vehicle. Since the days brakes were first put on the original horseless carriage, everyone has inherently known that safety is paramount to a vehicle's operation. With DMS, we can:
- Validate driver engagement and qualify for NCAP 2020!
- Monitor driver distraction from cell phone use to eating a burger on the go.
- Monitor driver drowsiness or impairment and create vehicle actions based on driver feedback.
- Validate who is driving. This creates many other use-cases such as limited operations, alerts, secondary monitoring on other devices, or key functionality for UBI (Usage-Based Insurance).
- Monitor emotions that can lead to impairment or road rage.
- Collect age and gender in a way that anonymizes the data at the point of collection (a minimal example is sketched after this list).
- Measure no-contact body temperature, heart rate, and cognitive load. In my other articles I've written about how the vehicle can become a mobile health monitoring station. DMS is the core system to make it happen!
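On the age/gender item above, here is a minimal sketch of what "anonymize at the point of collection" can mean in practice: only coarse, non-identifying attributes ever leave the camera pipeline, never the frame or an identity. The field names and buckets are my own illustrative assumptions.

```python
# Minimal sketch: only coarse attributes are retained; frames and identities are discarded.
# Field names and bucket choices are illustrative assumptions.
from dataclasses import dataclass, asdict
import time

@dataclass
class AnonymousDriverSample:
    age_bucket: str    # e.g. "25-34", never a birth date or face image
    gender: str        # model output label only
    emotion: str       # e.g. "neutral", "angry"
    timestamp: float

def to_record(age_bucket: str, gender: str, emotion: str) -> dict:
    """Build the only payload that is stored or transmitted downstream."""
    return asdict(AnonymousDriverSample(age_bucket, gender, emotion, time.time()))

print(to_record("25-34", "female", "neutral"))
```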
Now, do we really want to know what's going on in the rest of the vehicle cabin? After all, we're right there in the car.
Yes, yes, and YES! How many parents get distracted by their baby in the back seat, adjusting mirrors, looking over their shoulder, and so on? What about pets? And what about an uninvited guest? Yikes! With OMS we can:
- Monitor the second row, or in an SUV the third row as well, using the infotainment display as the monitor for both rows.
- Monitor a child's seat to validate whether the child is breathing properly or is hot/cold, etc.
- Check the cabin prior to entry to see if anyone is inside the vehicle, including hidden areas (a simple version of this check is sketched after this list).
- Tie into the DMS to see if the vehicle is being carjacked and have an alert sent out.
- Check the foot wells for objects left behind and alert the driver.
- Check car seats for a child or pet left behind.
- Check for cell phones left behind.
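For the pre-entry cabin check referenced above, here is a minimal sketch using simple frame differencing with OpenCV to ask "is anything moving in the cabin?" It is purely illustrative: the camera index, thresholds, and frame counts are assumptions, and a production OMS would rely on trained occupant and child-presence detectors rather than raw motion.

```python
# Minimal sketch: motion-based "is anyone in the cabin?" check via frame differencing.
# Camera index and thresholds are illustrative assumptions.
import cv2

MOTION_PIXEL_THRESHOLD = 5000   # number of changed pixels that counts as "motion"

def cabin_is_occupied(camera_index: int = 0, frames_to_check: int = 30) -> bool:
    """Return True if significant motion is seen between consecutive cabin frames."""
    cap = cv2.VideoCapture(camera_index)
    ok, prev = cap.read()
    if not ok:
        cap.release()
        return False
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    occupied = False
    for _ in range(frames_to_check):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)                       # pixel-wise change since last frame
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > MOTION_PIXEL_THRESHOLD:
            occupied = True
            break
        prev = gray
    cap.release()
    return occupied
```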
In summary, with CMS, whether DMS, OMS, or both, the range of use-cases is very broad. The hardware is available, low cost, and ready. The software, while still a little rough, is also ready to go. It's just a matter of committing to an SoC and selecting the features and networks you want to implement. CMS is headed toward a permanent place in the car and will be a required feature in vehicles with autonomous functionality. NCAP 2020 is just the beginning!