Why AI Isn’t (Yet) Ready to Take Your Job

Recent advances in LLMs helped AI make a quantum leap forward in capability, but many tasks in our jobs will be beyond its capabilities for years to come.

September 5, 2023 / 7 min read

I’ve written a few posts this year about AI and what it means for your job.

I stand by those posts. But I also want to offer another, equally valid perspective: AI isn’t yet ready to take everyone’s jobs, or even many people’s jobs.

I work as a CTO / CPO (chief technology officer / chief product officer). I’ve been building software for . . . well, let’s just say, quite a while. I’m going to give an example using software, but this applies to most functional areas. (And you don’t need any knowledge of software to follow along.)

The thing about software, and why it always seems more complicated than it feels like it should be, is the Pareto Principle, also known as the 80/20 Rule. It says that we spend most of our time on a few things and a small amount of our time on many things. 80% of the time you’re on 20% of the roads in your town. These are the ones you use to drive to work, the supermarket, and the gym. 20% of the time you’re on the other 80% of the roads, to run an errand or for some local event. Likewise, 80% of the time you’re using 20% of the features in software. Think about Microsoft Word. You change the font, make bulleted lists, spell check. How often do you use the References or Mailings menu? (Did you even know those were menu options?) What about macros and keyboard customizations? Most of the time we use a few features. (Note: while it’s referred to as the 80/20 rule, it’s not literally 80% and 20% in all cases; it’s more of a common bucket and an uncommon bucket.)

Think about the tasks you perform. How many are “standard” versus non-standard? 80% of the time you’re putting covers on the TPS reports. However, 20% of the time you need to remember to include the appendix to the TPS report. You’ve probably figured out a way to optimize putting the cover on (so you don’t wind up hearing complaints from eight different bosses), but you may not be so efficient at adding the appendix since you don’t do it regularly.

Back in the late 1990s it would take a team of ten to twenty engineers to build an e-commerce website. Ten years after that it took about five. Today you can spin up an e-commerce website with no engineers at all. We’ve seen a 10-20x productivity improvement due to service automation (i.e., I can spin up servers in minutes in the cloud) and third-party libraries and tools (open source and commercial).

You may have heard of low-code and no-code tools. If you’ve ever made a basic website using drag-and-drop tools, you’ve used one. I could have used a low-code platform to create my free learning app Brain Bump. It would have been faster and cheaper to get the first version out. But that system would not have been able to support all the features we have in the app today, or the ones in the next release. There are limits to how much it can do. This is the point.

[Image generated by Bing]

Most software is about taking data in one spot, doing some calculations or transformations, and putting it elsewhere. Imagine a company says to an IT staffer: write some code to grab the inventory from each of ten warehouses and email the COO a report each week. This script could be created in an afternoon. It’s seen as wildly helpful. As time goes on the COO asks for improvements, like showing numbers in red if they’re below a certain threshold, and having the report emailed to more people.

Then one day the report doesn’t go out. It seems one of the warehouses had a problem and couldn’t report numbers, and the whole script stopped. So now the IT staffer needs to make the script more robust in case any warehouse is offline. He does, and now it gets the numbers for the other nine warehouses, but the COO doesn’t know just how far off the total is. Next the IT staffer updates the script to project the likely inventory for the missing warehouse (or rather warehouses; it needs to be robust even if more than one goes offline) based on historical data. Soon after, a request comes in for the reports to be archived and accessible; after all, these reports are now so important to the business. And of course, this system needs to be made redundant (run on two servers) in case one crashes. This simple script created in a single afternoon is now getting very complex. Most of the time, it’s just pulling numbers and sending an email. That’s the 80% case. The missing warehouse data, the archived reports, and the redundancy are the 20% case. Notice that most of the additional work was in the 20% bucket.
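To make this concrete, here is a minimal sketch of what that first-afternoon script might look like. Everything in it is hypothetical: the warehouse endpoints, the field names, and the email addresses are stand-ins, not a real system.

import smtplib
from email.message import EmailMessage

import requests

# Hypothetical endpoints: in reality each warehouse would expose its
# inventory somehow (an API, a database, a CSV drop).
WAREHOUSE_URLS = [f'https://warehouse{i}.example.com/inventory' for i in range(1, 11)]

def build_report():
    lines = []
    total = 0
    for url in WAREHOUSE_URLS:
        # The happy path: every warehouse is online and returns clean JSON.
        count = requests.get(url).json()['item_count']
        total += count
        lines.append(f'{url}: {count}')
    lines.append(f'TOTAL: {total}')
    return '\n'.join(lines)

def send_report(body):
    msg = EmailMessage()
    msg['Subject'] = 'Weekly inventory report'
    msg['From'] = 'it@example.com'
    msg['To'] = 'coo@example.com'
    msg.set_content(body)
    with smtplib.SMTP('mail.example.com') as server:
        server.send_message(msg)

send_report(build_report())

Note that a single unreachable warehouse makes requests.get() raise an exception and the whole report dies, which is exactly the failure described above.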

The same is true with large language models like ChatGPT. An LLM is trained on an incredible amount of data and generates its response probabilistically. If you ask it to complete the sentence, “And they lived happily . . .” it will generally say, “ever after.” In other words, it’s good at the common cases, but not so much at edge cases. And sometimes it’s not even good at the common cases.
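As a toy illustration of the “probabilistic” part (a real LLM scores every token in a huge vocabulary; the two made-up completions here just show the sampling idea):

import random

# Toy next-phrase distribution for the prompt "And they lived happily . . ."
# A real LLM assigns a probability to every token it knows; these two
# invented entries just illustrate sampling the likeliest continuation.
completions = {'ever after.': 0.95, 'for a while.': 0.05}

phrase = random.choices(list(completions), weights=list(completions.values()))[0]
print('And they lived happily', phrase)

Run it many times and you will mostly, but not always, get the common completion; that is the common-case behavior in miniature.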

Here’s an example. A client of mine (I consult as a fractional CTO / CPO) asked me why we can’t just have AI scrape data for us. I tried to explain the complexities, but then he showed me that ChatGPT gave him the code to do it. (Note: the code below has some field names changed in order to keep the specific client confidential. In this case the request was to get the names and average home price of every housing development in the US.)

Below is the code. Don’t worry if you don’t know how to read code; we’ll walk through the key lines afterward.

import requests
from bs4 import BeautifulSoup
import pandas as pd

# Define the URL of the website to scrape
url = 'https://www.example.com/housing-developments'

# Send a GET request to the URL and get the HTML content
response = requests.get(url)
content = response.content

# Create a BeautifulSoup object to parse the HTML content
soup = BeautifulSoup(content, 'html.parser')

# Find all the articles about housing developments
articles = soup.find_all('article', class_='housing-development')

# Create an empty list to store the data
data = []

# Loop through each article and extract the relevant information
for article in articles:
    # Extract the title, link, development name, and average home cost
    title = article.find('h2').text
    link = article.find('a')['href']
    development_name = article.find('p', class_='development-name').text
    average_home_cost = article.find('p', class_='average-home-cost').text

    # Append the data to the list
    data.append([title, link, development_name, average_home_cost])

# Create a Pandas DataFrame from the data
df = pd.DataFrame(data, columns=['Title', 'Link', 'Development name', 'Average Home Cost'])

# Print the DataFrame
print(df)

The code above, which took seconds for ChatGPT to create, will do just that. HOWEVER, it assumes a number of things. First, it assumes each housing development has a website. They don’t. Then it assumes the developments’ websites are all formatted the same way, such that the title is the first h2 block on the page. It assumes data like the average home cost is on the page. It also assumes that, even if every housing development had such a website, we could get a list of them all to feed to this program. (Astute programmers might notice something about the class objects, too.)

Parsing a single website is easy. Finding thousands of websites, creating custom code for each one, and normalizing the data is anything but easy. AI can automate the simple part of the task.
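To see why, here is a rough sketch of what the real task looks like once you accept that every site is formatted differently. The site names, CSS classes, and price formats below are made up; the point is the shape of the problem.

from bs4 import BeautifulSoup

# Every site lays out its data differently, so each one needs its own parser.
def parse_site_a(html):
    soup = BeautifulSoup(html, 'html.parser')
    name = soup.find('h1', class_='community-title').text.strip()
    price = soup.find('span', class_='avg-price').text  # e.g. "$450,000"
    return name, price

def parse_site_b(html):
    soup = BeautifulSoup(html, 'html.parser')
    name = soup.find('div', id='development').text.strip()
    price = soup.find('td', class_='mean-cost').text  # e.g. "450000 USD"
    return name, price

PARSERS = {
    'site-a.example.com': parse_site_a,
    'site-b.example.com': parse_site_b,
    # ...one entry per site, multiplied by thousands of sites, each of
    # which can silently break the day the site is redesigned.
}

def normalize_price(raw):
    # Even normalizing is work: "$450,000" and "450000 USD" parse fine here,
    # but "450k" or "from the low 400s" would each need their own rule.
    digits = ''.join(ch for ch in raw if ch.isdigit())
    return int(digits) if digits else None

Writing, testing, and maintaining thousands of those parsers is where the real effort goes; the LLM-generated script above automates only the easy part.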

Going back to the hypothetical warehouse report above, AI can generate the code to fetch the data and send the email. But dealing with all those edge cases is where general solutions, like those provided by LLMs, won’t be sufficient. As the IT staffer meets the challenges, he can rely on AI to help him use a third-party library correctly, or to catch bugs, but he needs to create the code “strategy.”
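Continuing the hypothetical sketch from earlier, handling just the missing-warehouse case might look something like this (the timeout, the fallback logic, and the historical projection are all invented for illustration):

import requests

def fetch_inventory(url, history):
    """Return (count, is_estimate) for one warehouse."""
    try:
        count = requests.get(url, timeout=10).json()['item_count']
        history.setdefault(url, []).append(count)
        return count, False
    except (requests.RequestException, KeyError, ValueError):
        # The warehouse is offline or returned garbage: project from past
        # weeks so the COO sees an estimate instead of a silently low total.
        past = history.get(url, [])
        estimate = round(sum(past) / len(past)) if past else 0
        return estimate, True

And that handles only one of the edge cases; archiving, redundancy, and flagging estimated numbers in the report are each their own chunk of the 20% bucket.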

In the legal space there was the embarrassing, highly publicized incident in which a lawyer had ChatGPT generate a motion and it cited fake cases. It turned out ChatGPT made up the cases cited in the motion, because that’s what ChatGPT does. The lawyer had the right instinct but didn’t understand the limitations of the tool. A personal injury lawsuit follows a fairly standard pattern, and having an LLM draft the first version is actually a good use of the tool (when proper data protections are in place). But what it can’t do is find and properly cite prior case law, and it won’t be able to do that any time soon. Other types of AI (not an LLM) can suggest cases for the lawyer to look at (think of it like an advanced search function), but the lawyer is needed to decide the legal strategy. AI makes typing up the motion more efficient, not unlike how resume templates made writing a resume easier and faster, but it’s nowhere close to higher-order thinking.

[Image generated by Bing]

This is true across all disciplines. AI can automate rote tasks, but it doesn’t think and it’s still not that smart.

Consider graphic design. AI is fantastic at generating images. (You may have noticed that the image for this article was generated by AI.) When Pixar creates an animated film, it employs a number of artists to create the images. Consider the movie A Bug’s Life. The antagonists were grasshoppers. A quick search can tell you what a grasshopper looks like. Still, the designers created hundreds of drawings (maybe thousands) of grasshoppers. They needed it to look like a grasshopper, but also be anthropomorphized; it needed to look mean and a little scary (they are the bad guys) but not too scary (young kids would see the movie). AI can create cartoon grasshoppers, but it was years of experience that let the Pixar creators get it just right. AI won’t be doing that part anytime soon, even as it helps them generate more images faster during the revision process.

[Images generated by Bing]

Every year AI will get better. It will automate more complex tasks and will eat away at the time needed for a given role. However, the higher-order thinking in our jobs won’t be challenged anytime soon. If your job is as a paper pusher, figure out how to move up the food chain. For those whose jobs involve this higher-level work, make sure you understand how those tasks contribute to your organization's bottom line. That’s where you can continue to add value that will not be threatened by AI in the next few years.

By Mark A. Herschberg