Speed Bumps on the Road to Driverless Cars, AI Doctors, and More

AI promises huge improvements in efficiency and safety for many tasks in our daily lives, but getting the technology to work is only half of the challenge.

April 30, 2024 / 7 min read

Image generated by Bing

In recent years the race has been on for driverless cars. While chatbots have been around for nearly twenty years, with generative AI they are advancing rapidly. We can imagine more sophisticated services, like teaching, or getting medical or legal advice from AI. Science fiction writers imagined such a future even a hundred years ago, and today it feels like we’re on the precipice. But there are three more hurdles, one small and two big, before AI gets truly integrated into the everyday aspects of our lives.

When I was taking AI classes at MIT in the 1990s, my professor said experts estimated we were ten years away from AI taking off, but then noted that we had been “just ten years away” since the 1960s. (Coincidentally, there is a similar pattern in my other field of study, physics, where fusion researchers would always say we’re just 10-20 years from a breakthrough. AI hit a big inflection point with generative AI just a few years before fusion research finally crossed a milestone with a net-energy-gain reaction.)

I’ve been skeptical of the “AI revolution” for this reason. I still think we’re on the first upward slope of the hype cycle, and that while generative AI can do impressive things, the rate of “growth” (meaning sophistication) across subsequent generations of generative AI will start to slow. But let’s assume I’m wrong and that AI will soon sufficiently outperform us in key areas. There are three remaining challenges, only one of which is technical, that must be cleared before widespread adoption in certain areas.

The first is what Tyler Durden in the movie Fight Club called “the illusion of safety.” Most people (around 80%) consider themselves above-average drivers. By definition, only 50% of them can be, so we know people overrate their abilities. Many people won’t want to give up control, especially to a machine. Drivers will be even more reluctant when they realize that while a driver may always prioritize her life, or that of her kids, the car may not. Consider a car that needs to decide whether to drive into a wall, killing the mother and child in the car, or drive into a crowd, killing ten people. Most people would not drive themselves into a wall, but the car may simply calculate that two is less than ten. (Astute readers may recognize this as a modern version of the trolley problem. MIT’s Media Lab ran an interesting experiment involving driverless car cases. The study has ended, but you can see the scenarios and how people responded, or even make your own decisions, at the Moral Machine.)
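To make the “two is less than ten” logic concrete, here is a deliberately naive sketch in Python. The option names and numbers are made up for illustration; no real vehicle uses planning code this simple. It just shows what a policy that only minimizes expected casualties looks like:

    # Toy illustration only: a naive "minimize expected casualties" rule.
    # The options and numbers below are hypothetical.
    def choose_maneuver(options):
        """Return the option with the fewest expected casualties."""
        return min(options, key=lambda opt: opt["expected_casualties"])

    options = [
        {"name": "swerve into wall", "expected_casualties": 2},    # the car's own occupants
        {"name": "continue into crowd", "expected_casualties": 10},
    ]

    print(choose_maneuver(options)["name"])  # prints "swerve into wall"

A rule this simple picks the wall every time, which is exactly the choice most human drivers would never make for themselves, and that gap is where the discomfort lives.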

I suspect the concern will extend beyond cars (and other areas of direct control). On an airplane I have no influence over the decisions made in an emergency. The pilot will choose whether to fly us into a mountain or a crowded field, and I don’t get a vote. Full autopilot would just hand that decision over to a machine, so my influence doesn’t change, but I suspect people may still be less comfortable knowing that a machine, and not a human, is making that decision.

The real question for any new technology is what it’s being compared to. A driverless car crash will be headline news, while a crash with a human driver doesn’t make the papers. If a plane crashes, it’s worldwide news; if the same number of people die from smoking that day, no one notices. Will people compare AI driving to the zero case (no crashes) or to the current baseline (around 40,000 traffic deaths annually in the US)?

Let’s assume people get past this first hurdle, and we accept that our cars, planes, and other services are AI based. This may be because there’s always a human override option (i.e., grabbing the controls), or simply because people generally agree on a moral framework for machines to make such decisions.

This brings us to the second, and biggest, problem: liability. Years ago, I asked the following question of a panel made up of senior people at major rideshare companies and big tech companies working on driverless cars:

A passenger gets into a driverless car from a rideshare service, company A. The car is manufactured by company B, using AI from company C, a map from company D, sensors from company E, and certain car parts from company F. In bad weather, the car makes a decision that causes a crash; for example, it swerves slightly off road onto dubious terrain, blowing out a tire (from company F) and causing the crash. Who is liable?

I got blank stares. No one knew.

With a human driver, one of two entities is liable. Either the driver is liable (and covered by his insurance), or the car had a defect, and liability shifts to the manufacturer. While the car parts may come from different vendors, the car manufacturer (e.g., GM, Toyota) takes ultimate responsibility for all of them. It can do this because auto manufacturers can sufficiently evaluate ball bearings, brake pads, electrical systems, and other car components. I don’t believe they can sufficiently evaluate sensors, maps, and AI (which I don’t believe they are capable of developing in-house, so these must be sourced from third parties).

Who pays? You can say, “oh, the driver’s insurance,” or “the car’s insurance,” or “vendor X,” but that’s just pushing the question back one step. We need to determine which party has responsibility; if no one is clearly responsible (meaning at risk for liability), then no one will invest in ensuring safety.

The rideshare service doesn’t want to pay higher rates if it’s the auto manufacturer’s problem; the auto manufacturer doesn’t want to pay higher rates if it’s the AI vendor’s problem, and so on. Unless fault, liability, and dollars can be correctly assigned, it remains a huge risk. The lawyers and finance folks (at any of those companies) are going to push back and say, “Unless we can limit our risk, we can’t do this.” Of course, that may only happen after a few thousand driverless cars are already on the road, and it won’t be resolved until there have been a few contested accidents. (There’s nothing juries love more than listening to nerds explain highly complex, technical details about why it’s someone else’s fault.)

My guess is this second speed bump is going to delay driverless cars 5-15 years even once the technology itself works. This will be a common theme in technology: it won’t be technological limitations preventing some service, but the legal and liability questions that hold it up.

This brings us to the final speed bump, security. Today we can “hack” people’s brains and get them to buy tacos or believe idiotic conspiracy theories; we can’t convince them to drive off a cliff. Our cars will not be so lucky. We’ve been remotely hacking cars for years (see “Hackers Remotely Kill a Jeep on the Highway—With Me in It,” which has a clickbaity title; it was a planned hack to demonstrate what was possible, but it’s a good introduction to the risks; to my knowledge the proposed legislation it references never passed). Whether an attacker drives a car off a cliff to kill the passengers or shuts down all bridges and tunnels to Manhattan (if you can hack one car, you can hack a few thousand at once with marginal extra effort), there’s a huge risk.

This last speed bump is actually a combination of the first two. Giving control to AI is scary enough. Knowing that at any moment a Russian hacker could take over your car and demand money not to kill you (or even just to let you out of the car) scares people. (This isn’t exactly that scenario, but I can’t resist dropping in a Venture Bros. clip.) It’s also a question of liability: if hackers get into the car’s controls through, say, the in-car entertainment system, who is liable?

Most software we use comes from a single vendor, so that vendor assumes the liability, which is usually limited and capped by the terms of service. Now we’re going to have complex, multi-vendor systems. Making it more complex, these are combined software/hardware systems (and in this case, ones that can maim and kill with a single wrong decision). It’s often the “seams” between components that create vulnerabilities in such systems. Again, when liability isn’t clearly assigned, it will fall between the cracks.

When I give cybersecurity talks, I tell the audience that hacking is as much a legal problem as a technological one. Companies never cared about data breaches because they faced no consequences for them. Once GDPR and CCPA came along, breaches carried an actual cost for the company, and companies got serious about them. Until we assign clear legal liability to someone, no company will care about any of the risks above.

So far, I’ve just covered driverless cars, but the same issues apply in similar ways to other fields. Let’s consider AI doctors. Most people would agree that human doctors should supervise, be it a diagnosis or surgery. Medicine isn’t always an exact science; removing an appendix that’s about to burst doesn’t leave much room for debate, but when someone has multiple medical conditions and is on multiple treatment plans, the course of action isn’t always so clear. What happens if the doctor disagrees with the AI? If the wrong choice is made, is the doctor or the AI vendor liable?

AI may be designed to keep learning. This is especially important if there’s a new disease (paging Disease X) and we need to respond and adapt quickly. But what are the safeguards to prevent your HIPAA-protected information from being used? Even if it doesn’t know your name, you might be the only 87-year-old in Oklahoma with a left-knee replacement from a specific vendor and Kawasaki disease. Suddenly you’re no longer just anonymous data in the pool.
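A minimal sketch of how that re-identification works, in Python with entirely fabricated records and field names: a handful of quasi-identifiers, things an acquaintance or data broker could plausibly already know, can single out one row even though the name was removed.

    # Toy illustration with made-up records: no real data or schema.
    anonymized_records = [
        {"age": 87, "state": "OK", "implant_vendor": "VendorX", "diagnosis": "Kawasaki disease"},
        {"age": 54, "state": "OK", "implant_vendor": "VendorX", "diagnosis": "osteoarthritis"},
        {"age": 87, "state": "TX", "implant_vendor": "VendorY", "diagnosis": "Kawasaki disease"},
    ]

    # Facts an attacker might already know about one specific person.
    known_facts = {"age": 87, "state": "OK", "diagnosis": "Kawasaki disease"}

    matches = [r for r in anonymized_records
               if all(r[k] == v for k, v in known_facts.items())]

    if len(matches) == 1:
        print("Re-identified the 'anonymous' record:", matches[0])

This is why privacy frameworks such as k-anonymity insist that any combination of quasi-identifiers match several records, not just one, before data is treated as de-identified.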

And let’s not forget that AI, and tech in general, already has some bias around race and other protected classes (see “The Best Algorithms Struggle to Recognize Black Faces Equally,” an automatic soap dispenser unable to detect the hand of an African-American patron, or just AI image generation bias). Now, before we throw AI under the bus, remember that healthcare itself has a longstanding tradition of bias. Two wrongs don’t make a right, but two rights . . . send you in the wrong direction; cf. the biased algorithm that overprescribed C-sections for Black/African American and Hispanic/Latino women.

As with cars, many fields will have similar issues with AI adoption. First, there’s a question of control / bias. Implicit in this question is the baseline to which it’s being compared. Second, there’s a question of liability since no system will be perfect. Finally, there’s the issue of cybersecurity, including safety and privacy.

While AI is promising (even if I’m right that it won’t be a full sea change anytime soon, it will still be impactful), it’s not full steam ahead. There are serious issues, often non-technical or not only technical, that need to be addressed before widespread adoption of AI into existing systems. I’m an optimist about the future, but the path isn’t as short or straight as we might hope.

By Mark A. Herschberg