The Shift to a Zero Trust Society

Zero trust changed cybersecurity by replacing implicit trust with continuous verification. As AI accelerates misinformation and deepfakes, we need a zero trust approach to online trust and information.

March 3, 2026 / 8 min read

Image generated by DALL-E

For decades, cybersecurity was something left to tech people (like me). As AI becomes part of everyday life, cybersecurity must move to center stage. This isn’t the usual “use a password manager” advice, but rather a fundamental shift in how people interact with and trust other people, software, and information.

Zero Trust Explained

To understand why, I’ll need to get a little technical. Historically, companies protected their servers using firewalls. Think of a firewall as a security guard in the lobby of an office. To get past him, you needed to show credentials. He would deny entry to bad actors and let the good ones in. Once you were in, you could walk around the office unencumbered. Likewise, once you were past the firewall, you could access any of the servers behind it.

But attackers got more sophisticated. They found the weakest-link principle applied: if they could break into one server, they could soon access the rest. The physical equivalent is exemplified by the Antwerp diamond heist of 2003. Although Leonardo Notarbartolo wasn't actually a diamond dealer, he was able to rent an office in the building and thereby obtain a safe deposit box in the vault. He could regularly come and go to do reconnaissance, ultimately allowing him to circumvent the other security measures. Security experts, both physical and cyber, needed to improve their defenses.

The result was something called zero trust architecture. The servers behind the firewall don’t inherently trust each other; each request (which can happen multiple times a second) requires individual authentication and authorization. There is no assumed trust based on network location or prior access. To use the office analogy, not only did you have to show your ID to get past the security guard (firewall), but you also needed to tap your ID card for access to every room each time you entered. Each action required re-identification. Zero trust means starting with no trust in anything and establishing trust on a per-action basis, every time.
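The per-request model can be sketched in a few lines of Python. This is purely illustrative; the names (`VALID_TOKENS`, `PERMISSIONS`, `handle_request`) are my own, not from any particular framework, and a real system would use an identity provider and policy engine rather than in-memory dictionaries.

```python
# Illustrative zero trust request handling: every request is authenticated
# and authorized independently; nothing is trusted based on network
# location or prior access.

VALID_TOKENS = {"token-abc": "alice"}           # issued by an identity provider
PERMISSIONS = {"alice": {"reports": {"read"}}}  # per-user, per-resource grants

def handle_request(token, resource, action):
    # 1. Authenticate: who is making this request? (checked on every request)
    user = VALID_TOKENS.get(token)
    if user is None:
        return "401 Unauthorized"
    # 2. Authorize: may this user perform this action on this resource?
    if action not in PERMISSIONS.get(user, {}).get(resource, set()):
        return "403 Forbidden"
    # 3. Serve this single request; no session-wide trust is granted.
    return f"200 OK: {user} may {action} {resource}"
```

Note that a successful call grants nothing beyond that one request: the next request repeats steps 1 and 2 from scratch, which is exactly the "tap your ID card at every door" property.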

Trust in Society

Trust is a funny thing in our world. For millennia trust came from locality. When most people lived in the same small community for their entire lives, trust was born of necessity. If you sold bad goods to your neighbor, soon everyone in the town would know and the people you need to live with and conduct commerce with for the rest of your life wouldn’t trust you. Trust was a social necessity.

The model broke down as we gained mobility. Traveling con men would come into town, prey upon the trusting nature of the townsfolk, and leave before the con was discovered. (For example, a faux music professor might sell the townsfolk instruments and uniforms, collect the money, and skip town while the townsfolk waited for trombones, cornets, and other instruments that would never arrive.) Trust was soon replaced with “trust but verify.”

Trust Issues at Scale

Modern life runs on default trust, with the occasional verification. AI will flip the ratio.

You get hundreds of emails a day, some from friends and colleagues, others from company mailings, and a few from scammers posing as Nigerian princes. You can generally spot the phishing emails and, for the most part, trust the other emails from names you recognize, so you go about your day without thinking too much about it on a per-email basis. It’s only when an email asks for something unusual, like a friend asking you to wire $10,000, that you might expend effort to validate its legitimacy.

Likewise, you trust some, but not all, of what you see in the media. You likely trust news sources like the BBC and The New York Times because you trust that they have journalistic standards. Independent writers like me are not beholden to such standards. As such, we should be more skeptical of what independent writers tell us. (It’s for this reason that I often provide references to original sources when appropriate.) We know there is misinformation out there, ranging from conspiracy theories to doctored images, and try to be on guard.

It takes some mental energy to do that, to pause and validate the information as true or false. This is why we use proxies and chains of trust. Again, we trust major, respected news sources. We may (or may not) trust our government’s information. We trust doctors with medical information since they have been trained and held to standards. We may know some friends are careful in their decision making and put more trust in them, while other friends might be known to misrepresent facts (intentionally or accidentally) and so we use caution when heeding their advice. The key is that the majority of the time we have implicit trust.

Consider your physical safety out in the real world. Living in New York City, I pass hundreds or thousands of people a day. I am generally not worried about them attacking me. I’m not constantly on guard thinking any one of them might stab or shoot me. Sometimes we find ourselves in a less safe situation, perhaps alone in a parking lot at night. In such a case, we are actively on guard. It takes more focus and effort to be in that state. That’s fine for a short duration, but if you had to do it every waking moment, it would be exhausting.

That’s what the next few years will be like. Right now, most of the content you get is trustworthy. Soon it won’t be.

The Asymmetric Cost of Information

To understand why, we need to recognize an asymmetry with false information. Suppose the medical community wants to test the efficacy of a drug. They will set up a double-blind study, run it for a period of time, often months or years, collect the data, write a paper, and then have it peer reviewed. That process takes years and can cost significant amounts of money. Now suppose someone, who does not care about the scientific method or accuracy of information, wants to claim that a drug causes people to get cancer. This huckster doesn’t bother with research or data analysis; he simply starts making false claims. Likewise, professional journalists do research and confirm information. People looking to get traffic, who don’t hold any professional journalistic standards, will put up whatever gets clicks, whether it's accurate or not.

Historically, as long as most people stuck to the facts, the fraudsters were a few voices and easy to ignore. Again, my worrying about my physical safety on the street is the exception, not the daily norm. But what if those illicit voices suddenly got louder?

You probably already suspect that social media is rife with false news.

That fraudster doesn’t need years to write up a paper; he can write his false claims in an afternoon and post them on a website. He can share them all over social media. We’ve already seen this happen. In a 2018 paper in Science, “The spread of true and false news online” (article link), researchers found, “False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth.” But it’s about to get significantly worse.

Writing such an article still takes time. Moreover, the quality issues of those claims are more evident. He doesn’t have data from studies to back them up; it’s usually more hearsay. Consider the famous tweet from the unreliable gossipmonger Nicki Minaj, who wrote “My cousin in Trinidad won’t get the [COVID-19] vaccine cuz his friend got it & became impotent. His testicles became swollen” (source). She’s suggesting, with no evidence provided, that her cousin’s friend actually became impotent and that the vaccine was the cause. The lack of evidence is clear, although many people would lack the scientific capabilities to judge evidence correctly even if it were presented.

AI as a Multiplier

The problem isn’t just that AI can generate false information. It’s that it can generate credible-looking false information at an unprecedented scale.

In a matter of minutes, fraudsters could create a paper claiming the COVID-19 vaccine did just what Minaj said. Instead of an offhand remark like Minaj’s, this would be a fifteen-page paper. It could include completely made-up data like “78% of the 100 subjects experienced impotence.” Yes, someone could write that statement today, but writing fifteen pages of scientific-sounding research, along with charts and graphs, takes time. And most people can’t write in the style of a scientific research paper.

But AI can. You can ask AI to write you such a paper. You can give it existing research papers, the type written by actual scientists, and tell it to write you a similar version, but with different, made-up data. You could even ask AI to create the data for you.

Why does this matter if AI can already lie? You may have come across the term “workslop” (here’s one of many articles about it). This is the term given to AI-generated output that looks valid at first glance but upon closer inspection is inaccurate or downright useless. In other words, it’s now very easy to generate inaccurate information that isn’t immediately, obviously fake. The early AI images of people with six fingers were easy to dismiss as fake; as images get better, this is harder to do. Likewise, generated fake news articles, research reports, and other content are getting harder and harder to recognize as fake without significant effort.

Consider this example from master con artist Donald Trump. He claimed to have documents which transferred control of his businesses to his children to avoid any conflict of interest. He could have just made that claim, but he “backed it up” by showing stacks of documents sitting on a table. Trained journalists observed that the papers were blank; they also noted the audience was mostly made up of sycophantic staffers. But people who don’t go deeper than a tweet saw an image crafted to support the false statement. AI can do this much better. Instead of blank documents, it could conjure up tens of thousands of documents that upon first inspection would seem to back up the false claims.

Right now, the internet is flooded with millions of false statements every year (yes, I’m being conservative). Now each of those statements can come with reams of fictional “evidence” to make it look even more legitimate.

We also see this on a personal level. Phishing emails evolved from generic requests to something known as “spear phishing.” Instead of a generic email sent to tens of thousands of people, the spear phisher does research. For example, he might see the CEO post on social media that he’s at a conference in Phoenix, so the attacker sends a personalized email to the CEO’s assistant Chris (whose name he found on the company website), starting with “Hi Chris, I’m slammed with meetings here in Phoenix, can you send me ten $100 Amazon gift cards, I need them for a dinner event tonight?” That’s a more targeted email and takes a little more mental effort to uncover as fraud. You may have even read about the kidnapping scams where AI is used to generate the voice of the allegedly (but not really) kidnapped victim (see “AI-generated kidnapping scams are coming, FBI warns”).

In short, as AI improves, things will get harder. There will be more scam attempts, and the attempts will require more effort to uncover. AI can also be used to uncover them, but it’s a cat and mouse game, and historically the attackers are usually half a step ahead of the defenders.

As agentic AI (meaning autonomous AI agents) explodes in 2026, we’ll see the problem grow further. Agents need to be identified and authorized. A flaw in those systems will allow agentic attackers to exploit victim agents at a scale and speed unparalleled in the human world.

What can be done?

We went from trusted areas behind the firewall to a zero trust model, requiring authentication and authorization for each and every action. Nothing was true unless you verified it, or someone you verified and trusted verified it.

AI is requiring a similar shift in society’s online interactions. Everything needs to be considered false unless you verify it or one of your trusted sources does. As the cost of misinformation and convincing deception drops to near zero, default trust is a massive vulnerability.

Since most of us can’t verify everything ourselves, we need to rely on trusted sources. That source may be a government (your own or a foreign one), a news outlet, a government agency, an industry watchdog, a religious group, or a community you’re part of. We must also train people so that more of them rely on reputable sources than unreliable ones.

Consider that when it comes to products, many people rely on third parties like Consumer Reports or Rotten Tomatoes. Note that the first is a service managed by experts while the second is a wisdom of the crowds type approach. You may choose to rely on one or both, or neither, perhaps preferring still other third parties for evaluations. The choice is yours.

Buying products and watching movies are not daily activities, so you can invest time in checking; managing what you see in the news, in email, and on social media is. Since you won’t have time to check every post and article against your sources by hand, we’ll need software to automate this. And again, agents will need to do this at an even larger scale. The problem will start to become widespread in 2026, but we may not see solutions for another year or two, unfortunately.

Throughout history we have relied on a chain of human trust. This includes everything from our social and professional networks to recommendations from friends and experts. To date it’s been done informally through speech and human connection. Very soon, we will need software to automate this at scale. This can be done with chains of trust, out-of-band verification, hardware attestation, and/or other forms of provenance. (Disclaimer: I have in the past worked, and may in the future work, at companies which offer such solutions. I have some patents in this area.) The solutions will be needed sooner than most people suspect.
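To make "out-of-band verification" concrete, here is a minimal Python sketch: a publisher signs content with a key shared out-of-band, and a reader verifies the signature before trusting the content. All names and the key are hypothetical; real provenance systems would use public-key signatures and certificate chains rather than a shared secret, but the principle, verify before trusting, is the same.

```python
import hashlib
import hmac

# Assumption: publisher and reader exchanged this secret out-of-band
# (e.g., in person). Anything not signed with it is treated as untrusted.
SHARED_KEY = b"key-exchanged-out-of-band"

def sign(content: bytes) -> str:
    """Publisher side: produce a tag binding the content to the key."""
    return hmac.new(SHARED_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Reader side: accept content only if the tag checks out.
    compare_digest uses a constant-time comparison to resist timing attacks."""
    return hmac.compare_digest(sign(content), signature)

article = b"Study results: ..."
tag = sign(article)          # published alongside the article
# verify(article, tag) is True; verify(b"tampered claim", tag) is False
```

The design choice here is default-deny: an unverified or tampered article fails the check and is treated as false, which is exactly the zero trust posture applied to information.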

By Mark A. Herschberg