What does the future of AI hold in store?


Eight trends to keep an eye on this Artificial Intelligence Appreciation Day

On 16 July the world celebrates Artificial Intelligence Appreciation Day. In the previous century, science fiction often covered topics and inventions that are now closer to science fact, such as humanoid robots. In the 1950s, artificial intelligence saw both great successes, including early algorithm development, and big setbacks caused by the compute power constraints of the era. Fast forward to today: AI is arguably the topic of the year, with products such as ChatGPT reaching a million users in less than a week and enterprises shifting their budgets to invest more. What is the future of AI going to look like? These trends offer a glimpse of what’s to come.

The industrial revolution reloaded with AI

AI is the epicentre of a new technological revolution. Roughly two centuries ago, as the industrial revolution unfolded, people were exploring what steam engines and railways could make possible. Nowadays, organisations are exploring ways to automate their tasks, optimise their operations and reduce costs. AI is changing people’s perspectives and views, challenging them to accept help more easily. Although it was often viewed with fear or scepticism in its early days, artificial intelligence is without a doubt benefitting from growing popularity. It brings to the world the idea that any pattern can be learned and decoded, giving people the freedom to creatively choose the automation that is implemented. The development path has accelerated for several reasons, including the desire of many countries, such as the USA and China, to position themselves as leaders. The role of open source is also noteworthy, as it has encouraged more adoption and contributions around the world.

Just as the industrial revolution improved productivity by introducing the steam engine and mechanised factories, AI is enabling people to move away from repetitive tasks and focus on meaningful activities. Enterprises now have the chance to rethink their strategies, and a new competition for leadership is emerging across different markets.

From experimentation to ROI

AI is moving away from being just a fun technology to try out. Companies now expect AI projects to deliver results, with clear performance expectations. PwC’s fourth AI survey found that 72% of respondents are able to assess and predict AI’s return on investment. Artificial intelligence is now mature enough to be included in the roadmaps of companies across various industries. Organisations are also gaining a better understanding of AI as an innovative solution. Stakeholders are able to capture and value not only hard costs, such as hardware purchases, but also soft gains such as better customer experience.

From another perspective, professionals working on artificial intelligence projects are more comfortable moving their work into production. Experimentation will always remain a crucial part of a project’s lifecycle, but nowadays teams have the tools, experience and data needed to move further. Model deployment, model monitoring and data drift are just some of the concepts that have received significant attention lately (a minimal drift-detection sketch is shown below). This comes with experience gained from working on projects, but also with clarity on the expected outcomes of each project, the performance metrics used to measure success, and what the next steps should be.
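To make this concrete, here is a minimal sketch of what a data drift check can look like in practice, using a two-sample Kolmogorov–Smirnov test to compare a feature’s training-time distribution with what the model sees in production. The feature values, sample sizes and threshold below are illustrative placeholders, not a prescription.

```python
# Illustrative data drift check: compare a feature's distribution at training
# time against production data. Feature values and thresholds are placeholders.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins for a feature column captured at training time and in production
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.3, scale=1.2, size=5_000)  # shifted distribution

result = ks_2samp(training_feature, production_feature)
statistic, p_value = result.statistic, result.pvalue

# A small p-value suggests the production distribution has drifted away from
# the training distribution, which is a common trigger for retraining.
if p_value < 0.01:
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.4f}) - consider retraining")
else:
    print(f"No significant drift (KS statistic={statistic:.3f}, p={p_value:.4f})")
```

In a production setting, a check like this would typically run on a schedule for every monitored feature, with the results feeding dashboards or retraining pipelines rather than a print statement.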

This trend is going to evolve, challenging both professionals and companies to upgrade their AI strategy. The expectation of a return on investment from projects is going to keep growing, as will the number of initiatives that scale up their footprint. Fewer models will get stuck in experimentation, which is likely to raise new challenges when running AI in production. On one hand, professionals will need to upskill to deploy models more easily; on the other hand, enterprises will need to build monitoring and retraining capabilities.

The future is hybrid

Both public and private clouds have pros and cons for running AI. Whereas public clouds allow users to get started quickly, on-prem solutions seem more attractive from a cost perspective. Hybrid clouds offer a middle ground that accelerates AI adoption, much as broadband did for the internet.

Data is at the heart of any AI project, and having it spread across multiple clouds represents a challenge. Hybrid cloud scenarios solve this problem by providing flexibility and accessibility. Monolithic architectures need to be rethought, allowing infrastructure to use data regardless of where it lives. Hybrid cloud strategies are appealing because they provide the data foundations to scale and operationalise AI, meaning the models the data feeds will be more accurate and enable more informed decision-making.

Hybrid cloud scenarios also help optimise IT costs. Because training and running models are often hidden, hard-to-predict costs for the business, IT leaders can be reluctant to invest in AI. Hybrid clouds will support their AI investments, allowing companies to run the costly parts of the lifecycle in a controlled environment. In the future, more organisations are going to adopt this strategy, using hybrid clouds as a meaningful tool to drive their AI projects to production. It will also push cloud providers to invest more in their hybrid cloud support, becoming more open to collaborating and providing the necessary tools to make it happen.

Open source AI

Going open source is a trend that changed companies’ approach to AI. It accelerated product adoption and encouraged enthusiasts to experiment more, but it also created a feedback loop where developers could quickly identify gaps. This enabled projects such as Hugging Face, MLflow and Kubeflow to quickly grow into mature solutions that are widely distributed, have enterprise support and are adopted in production environments.
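As one small illustration of how accessible these tools have become, the sketch below logs a training run’s parameters and a metric with MLflow’s tracking API. The experiment name, parameters and metric value are hypothetical; this is a minimal example of the workflow rather than a full training pipeline.

```python
# Minimal experiment-tracking sketch using MLflow's Python API.
# The experiment name, parameters and metric value are illustrative only.
import mlflow

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name

with mlflow.start_run():
    # Record the configuration used for this (hypothetical) training run
    mlflow.log_param("model_type", "random_forest")
    mlflow.log_param("n_estimators", 200)

    # ... model training would happen here ...

    # Record the resulting evaluation metric so runs can be compared later
    mlflow.log_metric("validation_auc", 0.87)
```

Runs logged this way can then be browsed and compared in the MLflow tracking UI, replacing ad hoc spreadsheets and notebooks for keeping track of experiments.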

This trend is not going to change. The future of AI is open source, with more projects following that path. It is going to lead to bigger communities supporting and contributing to products they believe in, as well as accelerated product adoption. Companies such as Canonical will grow their MLOps portfolios with other open source solutions, ensuring they are enterprise-grade. This includes products such as Charmed Kubeflow and Charmed MLflow, which are secured, can be upgraded and benefit from updates for AI projects.

One important aspect here is the open-sourcing of models, not just tools. Long term, machine learning models are likely to evolve in a similar way to tooling, with a growing community of contributors. They are going to develop organically, with people experimenting. Hugging Face started this trend, but the development of LLMs is going to accelerate the process.

Integrated ecosystem

Nowadays, the AI landscape is still quite disconnected. Building an environment takes time, due to tooling and versioning incompatibilities. The future is going to bring better collaboration between tool providers, open source or not, so that integration between them is no longer a challenge. They are going to partner to create solutions that cover the stages of the machine learning lifecycle, without trying to build everything from scratch. This change is going to have a huge impact on the experience professionals have, because they will be able to automate part of their work and spend less time on repetitive tasks.

Curious about the machine learning operations (MLOps) ecosystem?

Read our guide

Artificial General Intelligence (AGI): will it become real?

This is a question that many AI innovators are asking. AGI refers to highly autonomous systems or machines that possess the cognitive abilities of a human being and can perform any intellectual task that a human can do. It is a subcategory of AI that blurs the lines between machine intelligence and the human mind. AGI would bring major benefits socially, economically and politically. It could transform the way the world works by learning from pre-existing information, translating into a better ability to solve complex problems. An AGI would be able to adapt to its surroundings and context, so the range of tasks it could perform would be vast. The potential is enormous, raising hopes of tackling illnesses such as cancer or resolving underlying issues such as overburdened infrastructure. However, AGI is still in its early stages, and there is more groundwork to cover before it can reach production.

Quantum computing is here

After the 2022 Nobel Prize in Physics was awarded to quantum-entanglement pioneers Alain Aspect, John Clauser and Anton Zeilinger, trust in solutions that use quantum computing grew even more. McKinsey’s 2023 analysis shows that investment in quantum computing reached its highest annual level. In the future, the promise of solving very difficult computational problems will turn into reality. On one hand, more talent is being attracted to the topic; on the other hand, organisations are more optimistic about the future of quantum computing.

Long term, quantum computing will become a commodity, much like GPUs are today. It will be adopted at an enterprise level, and the shift will not involve hardware alone: a new suite of software suitable for quantum computing is going to be developed. Companies like IBM are including quantum computing in their roadmaps, showing there is industry investment and backing for this.

More specialists, less of a skill gap

Glassdoor lists more than 20,000 jobs for data scientists alone, to which we can add roles related to data engineering, big data, machine learning and data analysis. This shows both high demand on the market and a skills gap. Educational institutions are racing to fill this gap by including these topics in their curricula.

Read more about Data Science.

Go to our Medium publication

Across the globe, more bachelor’s and master’s programmes are focusing on data science or machine learning, which will lead to a decline in the skills gap. There is going to be a growing number of specialists who deliberately choose a career in the field, with a deep understanding of machine learning, model-building standards and MLOps principles. At the same time, another shift will appear as many professionals choose to specialise not only in AI but also in a particular industry. They will become subject matter experts, rather than software developers with great AI skills. This will constitute the right mix to build successful AI projects.

It’s a wrap: the future of AI is bright

The future of AI looks promising. There is a lot going on at the moment, with the news constantly sharing a new project that could change the world. While the hype around specific projects will always pass, the AI boom has accelerated developments across different areas of the landscape. We are seeing the emergence of an ecosystem where the hardware layer, software applications, professionals working on AI projects and enterprises are more synchronised. From quantum computing to open source machine learning models, we are going to see great advancements.
