I’ve been working in and around the AI field as a marketing advisor and writer for a few years now, and it’s time I answer the one question I always get.
As a writer, I’m regularly asked, “Aren’t you scared AI will steal your job?” by friends, family, strangers, you name it. The short answer is no; the long answer will be this article.
Another thing I say a lot is that we don’t have the type of Artificial Intelligence people commonly envision; we have Large Language Models (LLMs).
These models are exceptionally good at making people feel like they’re true AI, but really, LLMs are not learning or replicating human behavior in any way, shape, or form.
This is not an apologist piece for AI. Despite my love for and interest in the field, I despise how most businesses currently leverage it, and I’ll share my reasoning later.
If anything, I’m writing this article so I can share a targeted and proud rebuttal next time the questions come up.
Let’s ride.
Artificial intelligence refers to the simulation of human behavior and reasoning by computer systems.
The elephant-sized caveat of all current AI products is that none of them inherently know, or spontaneously learn, what they need to do without human training. And believe me, that’s a good thing.
Artificial intelligence, as portrayed in our favorite sci-fi movies, does not exist and is very far from existing. I don’t think I’ll see it in my lifetime, and I’m 35 years old.
The recent availability and democratization of AI tools like ChatGPT have made people think we are on some fast track to true AI in the next few years.
That’s a pretty flawed assumption that severely undersells the work and time required to reach the current milestones.
AI typically falls into three categories. The first, (ironically!) pegged as Narrow or Weak AI, covers basic execution stuff like facial recognition, voice assistants, and LLMs.
The reality is every AI tool currently on the market is in this category.
The next level up, General or Strong AI, is software that would closely replicate human capacities.
It’s still mostly theoretical, but there are companies claiming to be actively working on developing it. Despite rumors, it simply doesn’t exist at the moment.
For ChatGPT to be considered General AI, it would need to develop initiative and creativity to research and create content with a clear business goal. And that is not a problem we are close to solving.
The third and least talked about sphere is Superintelligent AI. It’s the kind you see in movies that supersedes human intelligence or tries to kill us, and it’s still so philosophical that it’s currently fair to say it may never actually happen.
Large Language Models are a subset or building block of AI designed to process and generate (to a degree) human language.
Like, the hypothetical General AI I mentioned earlier would need an integrated LLM to interact with humans, but that doesn’t make LLMs full-blown artificial intelligence.
These models are trained on extremely large quantities of human data to create connections between words, allowing them to perform basic tasks like answering questions, (kinda) translating text, and creating (pretty bad) written content.
With the right data set, you could technically train them to understand any kind of data and spit it back at us.
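To make that concrete, here’s a deliberately tiny sketch of the core idea in Python: count which words tend to follow which in a training text, then generate new text from those counts. Real LLMs use neural networks with billions of parameters, not lookup tables, and the corpus below is made up, but the principle is the same: predict the next word purely from patterns in the data the model was fed.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, which words tend to follow it."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(follows, start, length=8):
    """Generate text by repeatedly picking the most likely next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # pick the most frequently observed follower
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# Made-up training text for illustration
corpus = "the model predicts the next word and the next word follows the pattern"
follows = train_bigrams(corpus)
print(generate(follows, "the"))
```

Notice the model never "knows" anything; it only replays statistical patterns from its training data, which is exactly why it can’t spontaneously connect facts it was never trained to connect.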
LLMs are also entirely dependent on data being fed to them, unlike humans, who can have unique thoughts and create entirely new content with flair, subtleties, and daddy issues.
Since LLMs are Weak AI, they also don’t adapt their learning based on their previous experiences like we do. These programs are fundamentally task-specific and can’t really evolve past the individual requests we ask of them.
Another example I like to bring up is that LLMs don’t make links between data points unless told to.
If you say, "World War II ended in 1945," and separately add, "The United Nations was established in 1945," to an untrained LLM, it’ll never realize these two events are intricately linked unless a human trains it to make the connection.
Look, I get it; LLMs are very impressive and entertaining. I fully understand why people may think tools like ChatGPT are full-blown General AI.
It’s by far the closest thing we’ve seen to this kind of technology, and it’s certainly exciting. But just to be clear once again: we are nowhere close to seeing any job get fully replaced by an LLM.
I’m not saying LLMs are bad tools; they’re amazing. I use them daily, and they’ve been a wonderful addition to my toolset in work and life.
They’re excellent productivity tools, and I truly think they can make people’s lives easier when used within their abilities.
That should be the business goal right now: making your employees’ lives easier and their output stronger with LLMs.
Like, damn, let’s cool down a little bit here. Sure, some people have already lost their jobs because they’ve been “replaced by AI,” but there’s no way that’s been the true reason behind the layoffs yet.
The companies using AI to justify layoffs would’ve blamed it on the economy or changing consumer trends if they didn’t have this new convenient excuse.
I’m gonna touch on one last subset of AI: predictive models (I’m leaving out 10-15 others, so like, follow, and subscribe for part two).
I want to mention them because I have experience working with them, and I just think they’re really neat.
Unless you’re working in very specific tech fields, you’ll never hear of these models, yet they probably already handle some of your data.
If LLMs are translators and writers, predictive models are data scientists. However, just like LLMs, they’re not replacing that job anytime soon, as they rely on continuous human input.
While predictive models really do simplify the data analysis process, providing output that would take a team of humans weeks or months to match, they have no foresight into or understanding of the data they analyze without human oversight. All they do is analyze data based on the parameters provided, just way faster than a human ever could.
Common applications include analyzing financial data to forecast a business’s growth, or advertising data to predict how likely an online ad is to succeed.
They’re also used in a lot of hospitals to triage patients and determine how long they’ll need to stay to better allocate beds. Don’t freak out, this works really well and is probably the best application of AI we have at the moment.
The speed at which this complex data is analyzed is indeed faster than what humans can ever achieve, but these models are nowhere near being able to work without human input.
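For the curious, the forecasting case can be sketched in a few lines of Python. This is a bare-bones least-squares trend line, nothing like a production model, and the revenue numbers are made up for illustration; the point is that the model only extrapolates the pattern in the numbers a human hands it, with zero understanding of why they move.

```python
def fit_trend(values):
    """Ordinary least-squares fit of a line y = a + b*x
    to equally spaced observations (x = 0, 1, 2, ...)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

def forecast(values, steps_ahead):
    """Extrapolate the fitted trend; the model only projects the
    pattern in the numbers -- it knows nothing about the business."""
    a, b = fit_trend(values)
    return a + b * (len(values) - 1 + steps_ahead)

# Hypothetical quarterly revenue (in $k), made-up numbers for illustration
revenue = [100, 110, 121, 128, 141]
print(forecast(revenue, 1))  # next quarter's projected revenue: 150.0
```

Swap in any column of numbers and it will happily extrapolate, which is exactly why the human choosing the parameters, and sanity-checking the output, still matters.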
So, is AI stealing jobs? It sounds like an easy question to answer, but it actually has several layers. The general answer is yes; it is, and it already has.
However, when companies tell us, in thinly veiled terms, that they’re laying off skilled employees like developers and writers because of the rise of LLMs, they are simply lying to sugarcoat their corporate greed.
In fact, you should be terrified that businesses rely on LLMs to write code. It’s a massive cybersecurity risk and tends to break the platforms the code is used for.
While there’s been a wave of businesses using LLMs to write their marketing content, it’s becoming pretty obvious to everyone involved that they don’t create usable content that resonates with people.
LLMs are amazing productivity tools, and that’s how they should be used and integrated by businesses, nothing else.
The other issue people don’t quite realize is that AI tools are far from stable at the moment.
It’s easy to miss unless you use LLMs daily, but ChatGPT and other AI tools suffer major outages almost weekly, rendering them unusable for hours at a time.
Some would say it’s no different than employees being sick, but it becomes a big issue when we rely on LLMs or predictive models to accomplish things humans can’t do in a certain timeframe.
Could true AI eventually happen? I mean, yeah, sure. But even anticipating the speed at which it would arrive is almost impossible.
The term “Artificial Intelligence” was coined in 1955 and has seen several peaks and valleys of interest since then, most of them forgettable (or not, if you’re a huge nerd like me). We’ve already gone through two periods dubbed “AI Winters.”
There was excitement about AI in the seventies and eighties after its ideational debut. To put it simply, the first commercial forays in the field sucked so badly that they scared investors away (when does that ever happen?) and relegated AI to obscure university research projects for years for lack of funding.
In fact, OpenAI and the other players in the field are potentially on the verge of another season change. While LLMs have definitely re-sparked the worldwide interest in AI, they might be stretched a little thin. As more companies heavily invest in AI, they might start to realize that LLMs aren’t quite ready for prime time.
Look, I get it, a lot of us, me included, want to see AI grow to new heights. It’s normal to get excited about LLMs and the current state of AI because this is some very cool stuff. The potential is just insane, but we have to be careful in at least two ways.
First, don’t get hoodwinked; we are not yet at the General AI stage. If you are reading this, that technology is far away, and any company claiming otherwise is marketing their product misleadingly.
Second, and this feeds into most people’s fears, maybe it’s good that we’re not there yet. Let’s just say most companies worldwide don’t seem to have super noble plans for AI.
I haven’t even covered half of the many subsets of AI in this article, but the ones like predictive AI and chatbots are definitely here to stay, as they’ve been solidly proving their return on investment for a few years now.
Believe me, I’m bummed about the chatbots too, and I can’t believe they provide ROI, but the numbers don’t lie; 62% of consumers prefer interacting with a chatbot rather than waiting for a human agent.
I’m not as sure about LLMs, though. I think after a few years of initial excitement, the bar for making them feel novel and impressive to customers will keep rising until it becomes impossible to clear.
Despite all that, I’ll say that this is by far the most exciting chapter of the existence of AI and is absolutely cementing its place in our society. Some people might see it as terrifying, and I fully understand that, too. One thing is certain: we’re far past the point where you can ignore it.
Almost every job in the world will feel its impact, and if I were you, I’d learn as much as I could about AI so that impact happens on your terms as much as possible.