Despite the fear-mongering, AI isn't going to kill us all, but in the near future it will cause a lot of societal disruption for which we are ill-prepared.
In part one we looked at how AI will cause a huge disruption to the labor market; the size and speed of that disruption are unprecedented in history. Fortunately, the negative impact can be minimized with the right educational support. Unfortunately, that requires society and governments to act.
In part two we look at the impact on society from an information standpoint. Most articles today about the future of AI point to the click-baity sci-fi nightmare scenarios of Hollywood. There have been articles on deep fakes (AI-generated pictures, audio clips, and video clips), but they often don't get into the long-term societal implications, only the short-term scams.
The Roman poet Virgil once wrote, "Fama, malum qua non aliud velocius ullum" ("Rumour, than whom no other evil thing is faster"). The modern reinterpretation, "A lie can travel halfway around the world while the truth is putting on its shoes," stems from Jonathan Swift, who wrote, "Falsehood flies, and the Truth comes limping after it."
We've had false information for years, but generative AI is the game changer here. Using these tools, we can now generate a nearly infinite number of false narratives at essentially zero marginal cost.
Decades of clickbait and lax hate-speech policing by social media companies have shown that media which incites people gets the most attention and shares. Once veracity is no longer a constraint, the sky's the limit for what you can do.
My graduate advisor, cryptographer Ron Rivest (the R in RSA), years ago invented a technique, called chaffing and winnowing, for sending information through an insecure channel without using encryption. In short, one party sends the secret message to the other over a public channel that anyone can view, but does so along with so many other messages that someone listening in can't tell what is real and what is fake. (The messages are broken into smaller pieces, so reconstructing the one true message becomes statistically difficult.) Put another way, if 100 men stand up and shout, "I'm Spartacus," it's a lot of effort to figure out which one is telling the truth.
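The core trick is that the intended receiver shares a secret MAC key with the sender: "wheat" packets carry a valid MAC, "chaff" packets carry garbage, and an eavesdropper without the key can't tell them apart. Here is a minimal sketch using Python's standard-library `hmac`; the key, packet format, and byte-at-a-time splitting are simplifying assumptions (Rivest's actual scheme works at the bit level):

```python
import hmac
import hashlib
import secrets

KEY = b"shared-secret-key"  # hypothetical key known only to sender and receiver

def mac(serial: int, data: bytes) -> bytes:
    """Authenticate a packet's position and contents with the shared key."""
    return hmac.new(KEY, serial.to_bytes(4, "big") + data, hashlib.sha256).digest()

def chaff_stream(message: bytes) -> list:
    """For each real packet (wheat), also emit a decoy (chaff) with a bogus MAC."""
    packets = []
    for i, byte in enumerate(message):
        real = bytes([byte])
        decoy = bytes([byte ^ 0xFF])                          # plausible wrong content
        packets.append((i, real, mac(i, real)))               # wheat: valid MAC
        packets.append((i, decoy, secrets.token_bytes(32)))   # chaff: random, invalid MAC
    return packets

def winnow(packets) -> bytes:
    """Receiver keeps only packets whose MAC verifies, then reassembles by serial."""
    wheat = {}
    for serial, data, tag in packets:
        if hmac.compare_digest(tag, mac(serial, data)):
            wheat[serial] = data
    return b"".join(wheat[i] for i in sorted(wheat))

assert winnow(chaff_stream(b"I'm Spartacus")) == b"I'm Spartacus"
```

Nothing here is encrypted; the receiver simply winnows the wheat from the chaff, while an observer sees two contradictory candidates for every packet.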
Remember that the purpose of generative AI like ChatGPT is to create new content; correctness is not a constraint. You can ask it to create a fun conspiracy theory and then later switch the names to whichever politicians or political views you wish.
In the days of the USSR, the Soviets famously doctored photos to support their false narratives, but that took the time and effort of skilled operatives. Today that technology has been democratized by generative AI, and distribution has been democratized by social media. Consider the scam emails that were recognizable by their bad English; tomorrow's scam emails will read as well as a native speaker's, be customized to the target, and be generated in mere seconds.
What did the president say yesterday? It could be any one of a hundred things all going around the internet. The more outrageous, the more attention it will get and the more it will drown out the mundane truth.
You may be thinking that surely the NY Times and other media would follow journalistic standards of only publishing the truth. I believe they would. Unfortunately, a smaller and smaller percentage of the world gets its news from sources that have such guidelines in place. Consider a recent BBC article on the false information Russia has put out on social media about Ukraine; many of the lies received millions of views. Many people follow social media feeds instead, and @RandomInternetFeedYouTrustForNoSpecificReason is probably going to post whichever version is more likely to get clicks and shares, which may not be the truth.
You’ve probably heard some of this before, but this is honestly the tip of the iceberg. What happens when we turn up the dial a million-fold or more?
Someone can manipulate the market easily: wait until the CEO of a public company is known to be giving a speech for the next hour, or is otherwise hard to reach, and put out a fake video announcing market-moving news. Want to piss off the food processing plant that fired you? Ask an LLM to draft a press release announcing a food recall for their products. Any person with basic internet access can now attack any organization. One truth versus a thousand lies.
On a more personal level, there have already been cases of kidnapping scams that use AI to fake the voice of the victim. Want to get back at someone who upset you? You can deep-fake a voicemail from her boss saying she's fired. She might find out the truth the next day, but not before she leaves a voicemail for her boss telling him what a jerk he is, a non-fake voicemail she can't take back. Send someone a video of their house on fire and say, "I googled the address and think this is your home; you need to hurry home." Anyone on the planet can cause stress and anxiety for anyone else.
(For those who take umbrage at my listing out threats, recognize that professional hackers and disinformation specialists already spend hundreds of hours a year thinking about doing this. They’ve already thought up everything I’m listing and more.)
Some attacks, like the kidnapping scam, can be done for profit, financial or emotional. Ones like tricking people into believing their house is on fire can be done for fun. It might not be your type of fun, but "some men just want to watch the world burn." This isn't just in the DC universe; it's one of the motivations security professionals use when classifying cyber attackers.
We know nation states have tried to sow division in the US and elsewhere. (To be fair, the US was doing this long before the internet, so the US doesn't have the moral high ground with respect to this attack.) Their ability to do so has just increased 1000x. More importantly, it's no longer confined to large nation states like Russia and China. Small nations, non-state actors, and rogue militia groups can, for a modest sum, do the same. You may not have time to do this, but some unemployed, angry, racist white guy does; this is the same guy posting fifty times a day on Twitter and leaving threatening voicemails for OB-GYNs. He just got 1000x more productive.
They don't even need people to fall for it. They just need to create enough noise and friction to add a tax on daily life. Imagine what your life would be like if you had to start questioning everything around you. We walk into a store and make a purchase. What if we had to check: Are the directions to the store real or a scam? Is this a real store or a front? Is that really the product I want or a fake? Suddenly just living your life becomes exhausting. This is what will happen to us online, where a significant number of the emails we get and things we see will require us to repeatedly evaluate their legitimacy. A tsunami of misinformation can do that, whether it's aimed broadly at society or targeted at individuals.
Two hundred years ago we didn't have a health department. If the local tavern made customers sick, the townspeople would figure out the source, and that tavern would go out of business when they stopped patronizing it. The long-term relationships, a byproduct of limited geographic access, led to the importance of honesty and integrity in transactions. As people became more mobile, the balance shifted. Con men could move from town to town, taking the townspeople's money but leaving their own sullied reputation behind. As we have moved to national and even pan-national relationships, we interact, directly and indirectly, with strangers. We connect with, listen to, trust, and transact with electronic accounts and usernames of people we've never met in person; people who can shift names and identities online easily. This affords them the opportunity to convert their brand into your money (or attention and mindshare) through deceit, and then set up shop under a new name once discovered.
eBay's solution is seller reputation. If you tried to open a new account and sell something for a large sum of money, eBay would flag it. Instead, you need to build up a history of valid transactions over time, effectively building your "credit." The internet as a whole has no such system. In theory, follower count might be a signal as people vote with their feet, but bots are easy to generate, and with LLMs it's easier still for those bots to look human by churning out human-like content. Follower count is no longer a valid signal (savvier people have felt it hasn't been for a while).
We unfortunately need to reset how we trust. In cybersecurity we used to have a perimeter, often demarcated by a firewall. Things outside the firewall had to be checked for viruses and other threats, while what was inside the firewall, the corporate servers, was deemed safe. Advanced cyberattacks have caused a shift to a zero-trust approach: now we assume every other system, even a corporate server behind the firewall, may be a threat, because it may have been compromised in the last few minutes. Each request, even one from another internal server, must be checked and validated. There is a cost to this, but faster computing power has helped minimize its effect.
Likewise, we as a society will need to move to more of a zero-trust model. We need to evaluate each piece of information as possibly invalid. That additional tax of validating each piece of information comes at the expense of the ability to think about something else. Unfortunately, Moore's Law applies to silicon chips, not carbon brains. We're going to need AI assistance to help us parse the information we receive, to significantly lower the cost. Until that is in place, we risk getting overwhelmed. Long before the robots turn us into human batteries (or whatever your preferred robot-uprising endgame is), humans themselves, leveraging AI, will bury us under a mountain of misinformation, causing emotional and economic distress writ large.
This will have a direct economic cost from reduced productivity. It will also have a societal cost. We can't have a healthy, respectful political debate if we are living in two different worlds with two different sets of facts. As there will soon be more false information than true information on the internet, we may find our communities shattered into dozens of different camps. To be clear, everyone is still well-intentioned, wanting what is best for the community, but if you believe The Monsters Are Due on Maple Street, you're going to lock your doors and get your guns, while your neighbor, who sees just a run-of-the-mill blackout, will argue for a very different approach. An army can't fight an enemy if it can't agree on where the enemy is. Society can't address problems if it can't agree on what those problems are.
The solution needed here is harder than for the labor market disruption. It begins with better education of students about sources and veracity. Unfortunately, US society has already devolved into groups believing opinions and facts are interchangeable. Even if we can regain that distinction, we need tooling to help reduce the mental tax I described above. Consumer Reports and similar magazines are trusted sources that help us evaluate products. We need trusted sources to evaluate information, and then chains of trust. I might read an article from a reporter I've never heard of because the BBC said it trusts this reporter and puts the article on its website. I trust the BBC, and the BBC trusts her. I might even then be willing to read an article on her social media because the BBC trusts her. We need to make that chain of trust instant and transparent. The combination of trusted authorities (there's no limit to how many there can be) and the ability to chain that trust can restore some usability to the information on the internet. Like noise-canceling headphones in a cacophonous environment, it can allow us to cut through the noise and focus on what we need.
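The chain-of-trust idea above works like certificate chains in cybersecurity: I trust a root, the root endorses someone, and that endorsement propagates. A minimal sketch follows; the names and the endorsement graph are hypothetical illustrations, and a real system would back each link with a cryptographic signature rather than a plain dictionary:

```python
# Roots I trust directly, analogous to root certificate authorities.
trusted_roots = {"BBC"}

# Who vouches for whom. In practice each link would be a signed
# statement ("BBC endorses jane_reporter"), not a bare entry.
endorsements = {
    "BBC": {"jane_reporter"},
    "jane_reporter": {"jane_social_account"},
}

def is_trusted(name: str, max_depth: int = 3) -> bool:
    """Walk outward from the trusted roots through endorsement links,
    up to max_depth hops, checking whether `name` is reachable."""
    frontier = set(trusted_roots)
    seen = set(trusted_roots)
    for _ in range(max_depth):
        frontier = {e for n in frontier for e in endorsements.get(n, ())} - seen
        if name in frontier:
            return True
        seen |= frontier
    return name in trusted_roots

print(is_trusted("jane_social_account"))  # True: BBC -> reporter -> account
print(is_trusted("random_feed"))          # False: no chain back to a root
```

The `max_depth` cap mirrors how real trust chains are kept short: every additional hop dilutes the endorsement, so trust should attenuate rather than propagate forever.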
This is all easier said than done. My previous work in cybersecurity and some of my current projects are starting steps down this path. We still have a long way to go before any of this is in place. It's not the first time technology has created societal disruption, and it won't be the last. As with prior changes, how quickly we can reset how we function directly impacts the cost of the change, and the speed with which we reach the next economic boom.