3 Modes of Operation

By Greg Bohl

In the near future, the leading connected cars will have 3 modes of operation: autonomous, aided, and manual.  Autonomous is easy to understand; the car drives itself with no human interaction.  The same goes for manual; the driver is in complete control with no or minimal intervention from the car.  But what about aided, more commonly referred to as advanced driver assistance or adaptive (fill-in-the-blank)?  How will the vehicles of tomorrow aid us in driving tasks, and what kind of software will that take?

First, where are cars today?  Well, like many technologies, the road has been bumpy.  Think about past adaptive transmission control systems whose algorithms couldn't cope with multiple drivers.  Still, for every bump there have been many successes, and newer adaptive systems arrive with every new model.  But they share a key issue: the underlying assumption in most, if not all, advanced driver assistance and adaptive systems is that the driver is sophisticated.  Let's be honest; in the U.S., at least, most are not.  Most drivers will not take the time to read the owner's manual or watch a training video.  Further, most drivers can't, or won't, take the time to understand the myriad of buttons and switches that configure, or turn on and off, the features that ultimately control the car they're driving.  So how does the OEM get out of the quandary of providing the fighter-jet features demanded by a consumer who really has only the skills to operate a hot air balloon?

First, the OEMs need to take a lesson from another connected device: our smartphones.  That is, make it easy on the surface, with the ability to make micro-adjustments underneath.  Unless they're producing a serious sports car, the OEMs should get rid of the buttons and switches and replace them with a simple rotary knob.  Turned one way, the car is fully manual; turned the other way, the car uses all of the adaptive/advanced driver assistance features it has to offer.  As you adjust the knob in between, the car figures out which features to layer in, notifying the driver on the infotainment system each time a function is added or turned on.  Want to control the layering sequence?  There's a configuration menu on the infotainment system.  But this is just the UX.
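To make that concrete, here is a minimal sketch of the knob-to-features mapping, assuming a fixed default layering order. Every feature name and the notify callback are hypothetical placeholders, not any OEM's real feature list or API.

```python
# A minimal sketch of the rotary-knob idea: the knob position (0-100%)
# selects how many driver-assistance features to layer in, following an
# ordered sequence the driver can rearrange from the infotainment menu.
# Feature names and the notify callback are hypothetical placeholders.

DEFAULT_LAYERING_SEQUENCE = [
    "forward_collision_warning",
    "lane_departure_warning",
    "adaptive_cruise_control",
    "lane_keep_assist",
    "assisted_steering",
]

def features_for_knob(position_pct, sequence=DEFAULT_LAYERING_SEQUENCE):
    """Return the features active at a knob position: 0% is fully manual,
    100% layers in everything, and positions in between take a prefix of
    the (driver-configurable) layering sequence."""
    if not 0 <= position_pct <= 100:
        raise ValueError("knob position must be between 0 and 100")
    count = round(position_pct / 100 * len(sequence))
    return sequence[:count]

def on_knob_change(old_pct, new_pct, notify):
    """Notify the driver once for each feature that turns on or off."""
    before = set(features_for_knob(old_pct))
    after = set(features_for_knob(new_pct))
    for feature in sorted(after - before):
        notify(f"{feature} is now ON")
    for feature in sorted(before - after):
        notify(f"{feature} is now OFF")

# Turning the knob from 40% to 75% layers in two more features.
on_knob_change(40, 75, notify=print)
```

The prefix design is the point: reordering one list from the infotainment menu is all it takes to change which features arrive first as the knob turns.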

Next, integrate emotion-sensing cameras for the eyes and face.  A machine learning (ML) algorithm can recognize the seven universal emotions produced by the human face, and based on the emotion it detects, the software can layer in the appropriate advanced driver assistance features for a given driver.  Think about it for a moment.  Start with the car's setting at 75% advanced driver assistance.  The car knows who the driver is through its ML driver-identification algorithm.  If it layers in an assisted steering feature but detects continued fear in the driver, that feature is probably a bad idea for that driver, so the car re-sequences the advanced driver assistance features for that specific driver at the 75% setting.  Imagine the driver's relief when, after the fear and panic of using the assisted steering feature, the car turns it off and the infotainment system displays a message stating it has done so and explaining how to turn it back on.  All of this happens without distracting a driver who is already in a tense situation.  The remaining drivers of that specific car at the 75% setting start with the original layering-sequence map until their own patterns become unique, based on their reactions.  This style of approach takes the programming and settings work away from the typical driver and places it within the car itself.  The car then becomes adaptive to the driver, not just the features themselves.  Now aided driving becomes just one more element in the immersive driver's experience.
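Here is a minimal sketch of that fear-driven rollback loop, assuming some emotion classifier maps camera frames to one of the seven universal emotions. The classify_emotion callable, the frame source, the profile store, and both thresholds are hypothetical placeholders, not a real vendor API.

```python
# A minimal sketch of the fear-driven rollback: watch the driver's face
# after a feature turns on, and if fear persists, turn the feature off,
# re-sequence this driver's profile, and explain how to re-enable it.

from collections import deque

FEAR_WINDOW = 30      # recent frames to watch after enabling a feature
FEAR_THRESHOLD = 0.6  # fraction of "fear" frames that triggers a rollback

class DriverProfile:
    """Per-driver layering sequence, adjusted from observed reactions."""

    def __init__(self, driver_id, sequence):
        self.driver_id = driver_id
        self.sequence = list(sequence)

    def demote(self, feature):
        """Push a poorly received feature to the end of the layering order,
        so this driver only sees it again at higher knob settings."""
        if feature in self.sequence:
            self.sequence.remove(feature)
            self.sequence.append(feature)

def monitor_reaction(frames, classify_emotion, profile, feature, infotainment):
    """Return False and roll back `feature` if fear dominates the window."""
    recent = deque(maxlen=FEAR_WINDOW)
    for frame in frames:
        recent.append(classify_emotion(frame) == "fear")
        if len(recent) == FEAR_WINDOW and sum(recent) / FEAR_WINDOW >= FEAR_THRESHOLD:
            profile.demote(feature)  # re-sequence for this driver only
            infotainment(f"{feature} turned off; re-enable it from the settings menu")
            return False
    return True  # driver stayed comfortable; keep the feature in place

# Example with a stub classifier that always reads "fear".
profile = DriverProfile("driver_1", ["adaptive_cruise_control", "assisted_steering"])
monitor_reaction(range(FEAR_WINDOW), lambda frame: "fear", profile,
                 "assisted_steering", infotainment=print)
```

Note that the demotion touches only one driver's profile; everyone else keeps the original layering map, which matches the idea that each driver's sequence diverges only once their own reactions say it should.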