AI Will Take Your Job, and What You Can Do About It

November 24, 2023

Opinion Piece

Originally published on LinkedIn here.


Article by Eman Zerafa

CTO at Cleverbit Software



We recently published an article in the Times of Malta, exploring various business applications of AI. The article received several comments that, I believe, reflect common views about AI.

Let’s address them.

Disclaimer:

While I’m not an AI expert, my background in software development, particularly in business applications, has deepened my interest in this field. At Cleverbit, we’re not just discussing the latest in AI; we’re actively experimenting with new tools and technologies in the field. This hands-on approach extends to our client projects, where we implement solutions that leverage AI, providing a real-world context to our exploration. Through practical involvement, along with insightful discussions with colleagues who are pursuing or have obtained master’s degrees in AI, I stay abreast of the latest developments in this rapidly advancing field.

When discussing the present capabilities of AI, I am referring to models like GPT-4 or others with comparable skills in certain areas. The difference in performance between GPT-3.5 and GPT-4 is significant; I suggest trying both. Additionally, using well-thought-out prompts can enhance their effectiveness in tasks like reasoning.
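As an illustration of what I mean by a well-thought-out prompt, here is a minimal sketch in Python using the OpenAI client library. The model name, prompt wording, and example question are my own assumptions for demonstration, not a prescription; the point is simply that asking the model to show its working tends to help with reasoning tasks.

```python
# Minimal sketch: comparing a bare prompt with one that asks for step-by-step
# reasoning. The model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

question = "A shirt costs 20 euro after a 20% discount. What was the original price?"

prompts = {
    "bare": [
        {"role": "user", "content": question},
    ],
    "step-by-step": [
        {"role": "system", "content": "Work through the problem step by step before giving the final answer."},
        {"role": "user", "content": question},
    ],
}

for label, messages in prompts.items():
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    print(f"--- {label} ---")
    print(reply.choices[0].message.content)
```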

One of the most popular phrases in 2023 must be “AI is not going to take your job, but someone using AI will.”

Tell that to the layoffs [1] [2] [3] [4].

Maybe AI is simply used as an excuse in these cases, or maybe redundancy caused by AI only applies to certain jobs. Conceded.

McKinsey’s report from earlier this year on how AI will affect the economy predicts a significant impact across various industries. The jobs least impacted, for obvious reasons, are those involving manual labour, such as agricultural roles.

So, let’s dive deeper into the facts.

Does AI simply predict the next word?

AI, and in particular modern Large Language Models like ChatGPT, does more than just predict the next word. While word prediction is a fundamental aspect, these models also understand context, interpret nuances, and generate coherent and relevant responses. But does it even matter? The technical details of how AI works are, for the most part, and especially for such discussions, irrelevant.
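For the curious, here is roughly what “predicting the next word” looks like in practice. This is a minimal sketch using the small, open GPT-2 model via the Hugging Face transformers library as a stand-in for far larger models like GPT-4; the model choice and prompt are assumptions purely for illustration.

```python
# Minimal sketch of next-token prediction with an open model (GPT-2 here,
# standing in for much larger models). Model and prompt are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Look at the model's distribution over the *next* token only,
# and print the five most likely continuations.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(int(token_id))), round(float(score), 2))
```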

The technical implementation and details would only be relevant to such discussions if we believed we were reaching the technology’s limits, and therefore had no reason to speculate about the future. Whether we’ve reached those limits is quite controversial in and of itself, and most definitely debatable - Bill Gates, however, seems to think so.

A few years ago, experts thought neural networks, which are the basis for today’s Large Language Models like ChatGPT, weren’t very useful. But now, it’s clear they are much more powerful than we first thought. The thing is, we still don’t fully understand how these models work so well, which means there’s still a lot about them we don’t know yet.

But crucially, the inner workings are irrelevant. What truly matters is the problems it addresses, not the methods it employs. While we might consider AI ‘dumb’, in reality, all current non-AI software is likely even more limited. This isn’t widely disputed, nor is the usefulness of these technologies. In the end, as long as the results are beneficial, nobody is concerned with how they were achieved.

If you need to get from point A to point B, imagine choosing between two options: being carried by four skilled people on a palanquin or driving a car. The people carrying the palanquin are intelligent and can navigate and react to their surroundings, but this method is slow and not very efficient. On the other hand, a car may be a ‘dumb’ machine, yet it’s quicker, more comfortable, and much more efficient for the journey.

Calculators Did Not Replace Mathematicians

This argument, like similar ones such as “the computer did not eradicate office work”, is frequently brought up, suggesting that just as mathematicians live alongside calculators, humans will continue to be relevant in their jobs, using AI as a tool.

The most significant misunderstanding lies in the potential of AI. AI is undoubtedly a valuable tool that enhances productivity — it’s already proving its worth, and those not utilising it are missing vital opportunities. Traditional tools have always relied on human intelligence for programming and setup, confined to executing a set of predefined instructions. The key difference with AI, especially machine learning, lies in its ability to learn, evolve, and develop capabilities beyond what humans have initially programmed. This has been notably demonstrated by current Large Language Models (LLMs). The boundaries of AI’s capabilities remain unknown; there is uncertainty as to whether, given sufficient data and computational power, it might surpass human intelligence. While calculators are confined to specific tasks and cannot contribute to theoretical advancements, AI possesses the potential to do so, potentially redefining roles traditionally reserved for human intellect.

Another common viewpoint is that the problems humans solve are far more complex and could never be tackled by a machine lacking creativity.

Whether Large Language Models are creative is up for debate. But what’s important to note is that most everyday problems aren’t new. They are issues that have been solved many times before by lots of people and continue to be addressed over and over again.

In software development, developers often encounter problems that seem unique and specific to their situation. However, when viewed on a larger scale, most of these problems share many similarities. With sufficient data, it becomes quite straightforward to identify recurring patterns in these challenges.

Often, people come up with clever ways to solve problems. However, these solutions are constrained by the person’s own knowledge. The broader and deeper a person’s knowledge, the more effective and efficient their solutions tend to be. This is because solutions usually draw from a combination of similar experiences and scenarios they have encountered before.

Today’s models have the capability to do this to a certain degree. They can create original poetry, brainstorm ideas, and generate code to tackle basic tasks. Essentially, they generate combinations of data that fit their understanding of the task or problem at hand.

A big issue with today’s models is their limited understanding of broad context. For instance, an employee at a company knows much more about that company’s history and current situation, which helps them solve problems better. In the near future, it’s likely that these models will have access to all of a company’s data. OpenAI has recently launched a service for businesses that could include company emails and other information in its context. This would allow the model to spot patterns and recommend actions based on similar cases (in other companies) it has learned from. This approach could be more efficient, using solutions that have worked in the past.

Picture having a conversation with a chatbot that understands your company’s current situation (or the context for the problem you are trying to solve) even better than you do.
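As a rough sketch of the idea (and not OpenAI’s enterprise offering itself), supplying company background as part of the prompt might look something like this in Python. The load_company_context helper, the model name, and the example data are hypothetical.

```python
# Rough sketch: giving a chat model company-specific background as context.
# load_company_context, the model name, and the example text are hypothetical.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def load_company_context() -> str:
    # Hypothetical helper: in practice this might pull from emails, wikis,
    # ticketing systems, or a CRM, and retrieve only the relevant excerpts.
    return "Support tickets rose 40% in Q3, shortly after the billing system migration."

question = "Why did support tickets spike last quarter, and what should we look into first?"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer using the company context provided."},
        {"role": "user", "content": f"Company context:\n{load_company_context()}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```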

A quick note here is that current technology struggles more with hallucination (making stuff up) when the context is very large. Leading LLMs are pushing the boundaries of how large the context can be, with Anthropic recently releasing an update to their model that allows for a larger context and fewer errors when dealing with such large windows.

But are we close to reaching AGI (Artificial General Intelligence) – making human intelligence redundant?

The understanding of what AGI (Artificial General Intelligence) is varies, as there’s no consensus on its definition. A recent paper tried to outline stages and definitions of AGI, suggesting some direction, but it hasn’t provided a complete set of answers yet.

In the context of this article, let’s define AGI as the stage where AI reaches a level of intelligence comparable to that of human beings. While not overly specific, this definition is sufficiently clear for our purposes.

The latest AI technology can already surpass most humans in a wide range of tests [4] [5]. Microsoft, in a paper published earlier this year, claimed that GPT-4 has demonstrated “Sparks of AGI”. It’s important to remember that Microsoft is partnered with OpenAI, so they have a vested interest in promoting ChatGPT. Nonetheless, the findings in the paper are generally reproducible.

Several large corporations are concentrating on AGI or on advancing AI in general. This includes major players like Microsoft with OpenAI, Google, Anthropic, and Elon Musk’s xAI with Grok. Additionally, companies such as Midjourney, ElevenLabs, and Runway AI are creating very specific models that are showing continual improvement.

This year has seen remarkable progress in AI. We’ve been treated to enhanced models and new features, with significant releases or major news emerging almost every week. It seems like everyone is working on AI, and notably, many are concentrating on similar structures and models. This gives AI research a level of focus and direction that it has never had before.

As highlighted in a previous article, predicting future developments isn’t our strong suit. However, there’s a growing consensus that AGI could be achieved within the next 5-10 years. This time frame appears to be narrowing; earlier this year, estimates leaned towards the longer end, but now many experts, including Shane Legg from DeepMind, speculate that AGI could be reached within this decade.

Why is reaching AGI significant?

Reaching AGI means computers would be as smart as people. Therefore, at that point, AI could improve itself. With enough computing power, it’s like having a large group of very fast-thinking and intelligent beings working on this. So, when AI gets to this level, it could theoretically become much smarter than us overnight.

Whether we are close to achieving AGI or not hinges on numerous factors. Up to now, neural networks have shown better performance with increased size and higher quality training data. However, it’s speculative whether this trend will continue in the future. Some experts believe it will, while others argue that we’ve reached the limits of current technologies and need an entirely different approach to progress further.

But what can we do in the meantime?

I’ve had several discussions with various people about the potential paths current AI technology could take. As mentioned, there are a few scenarios: we might already be at the peak of what this technology can achieve, or perhaps reaching AGI is simply a matter of more computing power and data. Alternatively, we might need to address fundamental issues such as accuracy, which could take a significant amount of time to solve.

In all these scenarios, there seems to be a consensus that we’re at least five years away from significant changes and implications. The exciting aspect of this is that the ongoing progress in AI presents a wealth of opportunities for individuals and businesses across various fields. These advancements can be leveraged to enhance services, increase efficiency, or simply achieve more. This period of development in AI is not just about waiting for the next big leap; it’s a time ripe with potential for practical applications and innovations.

In a previous article, we wrote about various applications of AI in industry.

Software developers, for example, are utilising AI to handle tedious coding tasks or to speed up their coding process with tools like GitHub Copilot. But the reality is that AI has potential benefits for all industries and businesses, whether it’s sifting through documents, accelerating the writing process, or analysing data.

Some individuals might hesitate to use available AI tools due to concerns about over-reliance or for other reasons. However, choosing not to embrace new methods and technologies can be a significant mistake. It can result in slower, less efficient, and less effective work, ultimately leading to redundancy.

This can be one of the greatest advancements in technology.

Conclusion

It’s likely a matter of time before AI can undertake tasks currently requiring human effort, though this is a highly debated topic. Estimates vary, with some experts predicting this shift within the next five years, while others foresee a longer timeline, and a few doubt its feasibility. Regardless of the exact timeframe for achieving artificial general intelligence (AGI), the key approach is to adapt, embrace AI innovations, and stay ahead in this dynamic field. As things stand, and at least for the short term, AI will make us more productive at our jobs. It’s a tool that most should be leveraging. Some jobs will see a dramatic shift in the very near future; others will become easier, better, and more productive.

If you’re keen to learn how AI can be leveraged in your context, or wish to discuss any related themes, feel free to reach out to me. I’m always eager to engage in conversations about these developments.

Founded in 2016 and headquartered in Malta, Cleverbit Software is a prominent custom software development company, employing over 70 skilled professionals across Europe. Specialising in custom software for business efficiency, we cater to a diverse international clientele across Malta, Luxembourg, Denmark, the United States, and the United Kingdom. Our commitment to delivering tailored, industry-specific software solutions makes us a trusted partner in driving business innovation and efficiency.
