The AI will replace you and take your job…really?

I have been following the tech news about companies laying people off to invest in AI, and about people worried about their jobs because AI will replace them. Do they even understand what they are saying?

A while back I listened to an executive string together the words “machine learning” and “agile” a few dozen times while talking about the future of a company. “With machine learning and agile we will make a better world.”

What happened to blockchain? A few years ago, everyone was saying that blockchain was the future. Then NFTs were the future, and people were paying thousands of dollars to own one. I also remember when they coined the term IoT and said that was the future.

I also remember Google Glass and how it was going to be the future. I’ve never met anyone who owned one. And what happened to the Metaverse? Wasn’t that the future too?

But this time it’s serious. AI is here and it will come for our jobs. And if you are an employer, I guess that’s just a convenient excuse to lay people off, because you want to invest in AI.

Define “invest in AI”. Do you mean buy a ChatGPT license? The AI is not going to prompt itself. Or do you mean hire engineers to build AI products? And by AI products I mean LLMs. But most companies aren’t like OpenAI or Google, with their armies of PhDs from the top computer science schools. So how are these companies planning to build AI products?

I wonder if they actually mean to invest in machine learning, but call it AI, because from a corporate point of view it sounds cooler. They need the buzzword. And in a way, you don’t need to invent new machine learning algorithms; you could build something on top of what the AWS/GCP/Azure ML services already offer. Or do they mean to automate something? You don’t need AI for that. Sure, with Copilot you can ask it to write some automations in Power Automate, but you still need to correct it when it messes up. You don’t need a PhD to do that work, though.

Or are these companies planning to create custom GPTs inside ChatGPT? You don’t need to be an ML engineer or a PhD data scientist to make one.

Oh, I need a plumber. Can the AI be a plumber? Can the AI get up on my roof, because I need something fixed there? What about my taxes? Even if an AI could do my taxes, do I want to give that information to an AI? Is it a CPA? Can AI talk to the humans to gather the requirements for a report? And even if it generates an output, what would the (non-tech) humans do with that answer?

The AI will replace a game designer. But the AI was confused when I asked it questions about C# and Unity. It could answer the questions separately, but it had a hard time understanding how things integrate, since it could not read all the code and configurations in my environment. And I wouldn’t feel comfortable giving an AI access to my whole computer anyway.

The AI will replace accounting. I guess it could. I don’t know much about accounting, but if you mean that accountants take data, apply accounting concepts, and generate an output using software, then I suppose an AI could do that. But do you want it to? Is that safe? Does it meet compliance requirements? Wouldn’t you have to completely change the world of accounting to let an AI take over?

This is similar to believing that autonomous vehicles will replace what we currently have anytime soon. Every few years I hear about an autonomous flying taxi company. Even Sebastian Thrun, who led Google’s autonomous vehicle project and then founded a flying taxi startup that shut down, could not figure it out. They would probably need to change airspace as we know it. Same with drone pizza delivery: is the FAA going to let a bunch of drones fly all over?

Uber was onto something. They changed the business model of the taxi industry. Then they invested heavily in AI (autonomous cars) until one of their cars killed a pedestrian. Last year I read that the test driver was sentenced for it, and that the National Transportation Safety Board concluded the crash was avoidable had the test driver been alert and monitoring the car’s performance.

What does this tell us about AI? When it screws up, it screws up bad. It needs a human so that the AI does not misbehave. In Uber’s case, if the AI needs a test driver at all times, why not just let that person drive? What’s the point of having the AI in the first place? With LLMs, ChatGPT and co., you can’t just believe everything the computer says. Trust, but verify. For now, they are just a tool to help.

Whenever I hear or read that “AI is the future,” I take it with a grain of salt, the same as every other time they told us something else was the future.