AI will transform society, but history shows that such transformations rarely unfold as expected, and that the costs, left unchecked, can be excessive.
About a month ago Jon Stewart did a segment on AI causing people to lose their jobs. He was against it. Well, actually his words were against it, but the fact is, deep down he’s for it, and you are, too, whether you know it or not.
The very fact that Jon Stewart can go on TV to talk about the current cutting-edge technology, large language models in AI, is because prior technology killed jobs. Lots of jobs. Most jobs. Remember that for most of human history, 80-90% of people were farmers. The few who weren’t worked in core trades like blacksmithing and tailoring. What they didn’t have were TV personalities, TV executives, or even TVs.
If you were born hundreds of years ago, chances are you would have been a farmer, too. You also would probably have died from an infection. But because we needed fewer farmers, thanks to scientific and technological progress, we got doctors and scientists who were able to discover, manufacture, and distribute cures for the plague and other infections. Scientific and technological innovation begets scientific and technological innovation. Generative AI is just the current state of the art (one of many, since we have plenty of other cutting-edge breakthroughs) leading the next cycle.
But this is not to say that everything will go smoothly. Lots of tech CEOs talk about the great positive impacts of AI. Unfortunately, those impacts will take time. Consider the automobile: Carl Benz patented the motorized vehicle in 1886 (in Germany, for the record). Roughly fifteen years later there were only 8,000 cars in the US. That number grew quickly to 500,000 cars by 1910. Still, that’s twenty-five years, and even then, only about one-half of one percent of people in the US had a car. The first stop sign wasn’t installed until 1915 (although traffic lights at major intersections came slightly earlier). The point is, we had years, decades even, before the widespread adoption of cars by people and businesses. This gave us time to figure things out, both formal regulations and societal norms.
Social media, on the other hand, had negligible usage until 2008, when Facebook really started to grow (source). We went from a few million users to a billion in just four short years. Social media has since been shown to cause cyberbullying, self-esteem issues, body image issues, depression, and a host of other mental health problems, not to mention the wide-scale spread of misinformation. We were well past a billion users before there was any data on its risks and negative impact. With cars, by contrast, people could see the risks early and take balanced preventative measures, like putting in stop signs or requiring drivers’ licenses, before scale made the problems worse. That’s not to say we got everything right; it wasn’t until the 1980s, thanks to people like Candy Lightner, founder of Mothers Against Drunk Driving, that we stood up to drunk driving.
Nuclear weapons date back to 1945. For most of the twentieth century only five countries (US, Russia, UK, France, China) had access to the weapon, with India joining the club a little later (source). This weapon posed a huge risk to the world, but those who had the power to use it also had controls in place. Their governments generally understood the potential negative consequences, at least for them, if not the world, and have held back on using them. There’s a fear that North Korea has less mature governance, but the reality is that Kim Jong Un is a rational actor (further backed by rational-actor China) who knows that a nuclear launch would invite nuclear retaliation, which would make his post-war prospects (if he even survives) much worse.
But what if a terrorist cell got one? Kim Jong Un doesn’t want to die or have his country decimated. But what about people who believe that death in battle is better than their current options? They would launch a nuclear missile because the consequence, dying as martyrs, would be a positive outcome to them. This is game theory 101. If getting to the market faster with a car saves you money, even though you occasionally run over other people’s chickens on the road (and suppose there’s no law against it early on), why not do it? There’s upside for you, and the downside is someone else’s.
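To see the incentive flip in miniature, here is a toy expected-value sketch of that game-theory point. The payoff numbers are invented purely for illustration, not drawn from any real analysis; the point is only that the same retaliation threat lands differently depending on what an actor values.

```python
# Toy expected-value sketch of the deterrence argument above.
# All payoff numbers are invented for illustration; they are not data.

def choice(payoff_launch: float, payoff_restraint: float) -> str:
    """A rational actor picks whichever option pays more."""
    return "launch" if payoff_launch > payoff_restraint else "hold back"

# State actor: retaliation destroys everything they value, so launching
# carries a hugely negative payoff relative to the status quo.
print(choice(payoff_launch=-100, payoff_restraint=0))  # hold back

# Martyr-seeking actor: death in battle beats the status quo, so the
# same retaliation that deters the state actor reads as a gain.
print(choice(payoff_launch=10, payoff_restraint=-5))   # launch
```

Deterrence only works when the downside lands on someone who values avoiding it; change the payoffs and the same threat stops deterring.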
This same logic applies to AI. Any given person will use a tool they think creates a more positive outcome for them. If you think the end justifies the means, then sharing fake information on social media is justified, as is creating it with AI. Even if you don’t agree with that, and want to be good and moral in your use of AI and other tools, can you fully appreciate the consequences? Most women I know in New York City need to see any photo I take at a social gathering and approve it before I can post it. They’re not being malicious; they’re just trying to look good online. But when everyone does this, it biases online photos to look better than average (a picture gets approved only when the hair, makeup, dress, lighting, and angle all come together), causing body image issues for teenage girls. No one was trying to harm teenage girls, but that was an unexpected externality. This is one example, but there are plenty more.
AI isn’t exactly a nuclear weapon, but it is a tool that can do harm. Unlike prior technologies that took years or decades to spread, AI adoption is happening much faster. I’d point out that no one reads the safety warnings on products, but we don’t even really have safety warnings for AI because we don’t fully understand it yet. What would have happened if, in 1900, 50% of Americans had been given access to cars (along with the money and fuel infrastructure to use them) over a three-month period? No traffic laws, no licenses, no training, no regulations. How many deaths and other problems would we have had before we figured out the regulations we needed?
Starfleet’s Prime Directive was implemented to protect less mature species from harming themselves with technology more advanced than they were ready for. It’s not clear that the only threat is from external technology.
This isn’t really about AI. This is about any sufficiently impactful technology that grows faster than we can understand it. There are three factors at play. The first is the adoption rate, which is how quickly the technology is applied. The second is the impact radius: (size of impact) x (number of people impacted) x (duration of impact). (Note that impact radius is not a literal physical distance, but one in a conceptual space.)
The third is the learning curve, which is how quickly we can understand the impact. Note that the impact is real whether or not we understand it (where we are on the learning curve). People were suffering body image problems from social media as it became widespread, even if we weren’t yet aware of it.
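To make the definition concrete, here is a minimal sketch of the impact-radius product as defined above. The scores and units are arbitrary placeholders I made up for illustration; this is the article’s framing, not an established metric.

```python
def impact_radius(size_of_impact: float,
                  people_impacted: float,
                  duration_of_impact: float) -> float:
    """Impact radius as defined above: size x people x duration.
    All inputs are arbitrary scores, not physical units."""
    return size_of_impact * people_impacted * duration_of_impact

# Invented scores: a contained pilot vs. a mass consumer rollout.
pilot = impact_radius(size_of_impact=9, people_impacted=7, duration_of_impact=1)
mass_rollout = impact_radius(size_of_impact=3, people_impacted=1e9, duration_of_impact=10)
print(pilot, mass_rollout)  # 63 vs. 3e10: vastly different risk surfaces
```

A severe impact on a handful of informed volunteers multiplies out to a tiny radius; a mild impact on a billion people for a decade multiplies out to an enormous one.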
If the impact radius is small, we can pilot a technology and limit the risks. The space shuttle was a great technology, but it came with risks. We did lose astronauts (and money) with the Challenger and Columbia. While I don’t mean to downplay the loss of life, it was limited, and borne by a small number of people who understood those risks.
When a technology with a large impact radius is adopted faster than we can move up the learning curve, we have excessive risk; we create effects faster than we can understand them, and that’s a recipe for disaster. It’s fine to move fast and break things when it’s in your own home. It’s harder to justify that philosophy when you’re barreling down the street in a 1,200-pound car at 40 mph (typical for a Model T). It’s harder still when half the neighborhood is doing it at once. Unfortunately, AI, like social media, isn’t confined to your own home, where your actions impact only you.
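One way to picture this dynamic is a toy simulation (all rates invented) in which harm tracks adoption, but we can only manage the portion of the impact we have learned to recognize. The gap between the two curves is the risk we are running blind.

```python
# Toy model of adoption outpacing the learning curve. All rates invented.

def blind_risk(adoption_rate: float, learning_rate: float, years: int) -> list[float]:
    adopted, understood, gap = 0.0, 0.0, []
    for _ in range(years):
        adopted = min(1.0, adopted + adoption_rate)         # fraction of society exposed
        understood = min(1.0, understood + learning_rate)   # fraction of impacts we grasp
        gap.append(round(adopted * (1.0 - understood), 2))  # impact created but not yet understood
    return gap

# Car-like rollout: slow adoption, understanding keeps pace; the gap grows slowly.
print(blind_risk(adoption_rate=0.05, learning_rate=0.05, years=10))
# Social-media-like rollout: most of society is exposed before we understand much.
print(blind_risk(adoption_rate=0.25, learning_rate=0.05, years=10))
```

With matched rates the unrecognized impact stays modest; quintuple the adoption rate and most of society is inside the blast radius years before the learning curve catches up.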
When x-rays were first discovered they seemed like science fiction. Shoe stores employed shoe-fitting fluoroscopes to let customers see how well shoes fit by showing an x-ray of their feet inside them. When these came into use in the 1920s, few people understood the impact of radiation on the human body. It wasn’t until many years later that we began to understand the long-term effects; by then countless people had been needlessly overexposed to radiation.
As a point of contrast, telephone adoption grew in the US over the first half of the twentieth century. The phone itself, however, posed no serious risk, so its impact radius was very small. (The only real risk was eavesdropping by neighbors on the early party lines, back when people didn’t understand how they worked.)
Looking back at the industrial revolution, we moved fast and broke things in the name of innovation. We strip-mined land, deforested mountains, and polluted the air and water. Famously, the Cuyahoga River caught fire in 1969; that’s right, a river was on fire. (That’s not even the whole story: the river had caught fire multiple times before, but no one cared.) “Fundamentally this level of environmental degradation was accepted as a sign of success,” wrote David Newton in his book Chemistry of the Environment (p. 6). How much social degradation is acceptable, or even desirable, for progress?
AI is transformative in many ways, but until we fully understand its risks, we need to proceed with caution. The risk of harm by itself isn’t a reason not to use it. We know cars kill many people each year, but we feel the benefits outweigh the risks; that’s because we can measure both. Our understanding of cars and their risks grew in line with their adoption. AI is less well understood. Yes, we need to begin using it to understand those risks, but we don’t need to hand the keys to 500,000 new, untrained drivers overnight. It’s always easier to loosen guardrails than to add them later (especially once wildly profitable companies can capture politics and lobby against such legislation, as we’ve seen with the military-industrial complex, tobacco, oil, and Big Tech).
I’m all for innovation, but we need to proceed with caution. What would have happened if, early in the industrial revolution, we had said “innovate, but we’re going to limit how much damage to the environment that innovation can do”? During war there are limited resources, yet society innovates. We regulate Wall Street; they complain the party is over, then innovate new ways to make record profits. Regulation doesn’t kill innovation, and at times it even inspires it. We let the industrial revolution run untethered and are paying the price today. More recently we watched social media “transform” society, and it wasn’t always for the good. Let’s not repeat the same mistake by letting AI innovate unbounded, only to regulate it later. The genie does not easily fit back in the bottle. We study history so as not to repeat the mistakes of the past; let’s see how we fare on this test.