AI Flow Talks: the true capabilities and limitations of AI

Watch/listen to the podcast (in Finnish)!

Our AI Flow Talks podcast series shares real-life stories from the world of AI to make AI transformation easier for organisations. In this episode, we discuss the true capabilities and limitations of AI - what AI can really do today, and where its limits still lie. Our guest is Esa Alhoniemi from Inoi, a data analytics specialist and AI expert. The discussion is led by Tomi Leppälahti, AI Director at Fluentia.

What is artificial intelligence?

The episode starts with a discussion of what AI means today - and how broad the concept has become. Where we once talked about machine learning, nowadays almost any inference drawn from data is classified as AI. This broad definition is reflected both in regulation and in companies' AI strategies, but from a technical perspective, most models are still "refined machine learning solutions", not autonomous thinking systems.

The capabilities and limitations of AI from a data perspective

The strengths of generative AI lie in finding unexpected connections and in modelling complex dependencies in data that humans can no longer perceive manually. Still, many limitations remain. The discussion focuses in particular on the fact that models:

  • do not perceive large wholes at once (the limits of the context window)
  • do not understand abstractions or structures the way humans do
  • do not know what they don't know - which leads to hallucinations
  • live in a "timeless space" and do not always perceive the present without guidance
  • operate only on their training data - and can reproduce false or skewed information.

Esa and Tomi also highlight the risks of synthetic data: if models are increasingly trained on data generated by AI itself, the quality and diversity of the available data as a whole will deteriorate.

The role of humans as interpreters and critical thinkers of AI

One of the key themes of the episode is the role of the user as an interpreter and evaluator of AI results. The discussion emphasizes that humans have a responsibility to understand why a model gives a particular answer and how it was generated. This requires the ability to read AI output as one would any analytics result: what are its limitations, its sources and its potential biases? For example, when a model produces erroneous or inconsistent output, the user needs to understand when the AI is operating within the limits of its own training and data - and when it begins to produce results that can no longer be considered reliable analysis. The key here is not just detecting the error, but understanding the logic behind it.

Assessing the limits and performance of AI

Esa and Tomi stress that AI is not the answer to everything. For certain tasks, traditional machine learning models are more reliable and predictable. The discussion in the episode also raises the issue of measuring AI performance. The performance of models should be evaluated on the basis of repeatability, reliability and margin of error, rather than blindly relying on individual results. The performance of different models can be compared through practical tests, such as success rates in programming tasks or the dispersion of responses, to understand how well an AI really performs on a given task.
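To make that kind of practical test concrete, here is a minimal Python sketch (not from the episode) of one way to summarise repeated runs of the same task: the success rate, its margin of error, and the dispersion of response quality scores. The function names and the example data are illustrative assumptions, not a real benchmark.

```python
import math
from statistics import mean, stdev

def evaluate_repeatability(outcomes: list[bool], z: float = 1.96) -> dict:
    """Summarise repeated runs of the same task: success rate and its margin of error.

    `outcomes` holds pass/fail results from re-running the same prompt or test case
    several times against a model (hypothetical data, purely for illustration).
    """
    n = len(outcomes)
    rate = mean(1.0 if ok else 0.0 for ok in outcomes)
    # Normal-approximation margin of error for a proportion (95% confidence by default).
    margin = z * math.sqrt(rate * (1.0 - rate) / n) if n > 0 else float("nan")
    return {"runs": n, "success_rate": rate, "margin_of_error": margin}

def response_dispersion(scores: list[float]) -> float:
    """Standard deviation of a numeric quality score across repeated responses."""
    return stdev(scores) if len(scores) > 1 else 0.0

# Made-up results from 20 repeated runs of one programming task.
outcomes = [True] * 16 + [False] * 4
print(evaluate_repeatability(outcomes))          # ~0.80 success rate with its error margin
print(response_dispersion([0.9, 0.7, 0.85, 0.6, 0.8]))  # spread of quality scores
```

The point of a summary like this is exactly what the episode argues: a single impressive answer says little, whereas repeated runs reveal how reliable and consistent a model actually is on the task.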

Competitiveness brings relevance to the AI revolution

Towards the end of the episode, the discussion turns to the adoption of AI in organizations and how technological change is affecting the culture of work and people's identity at work. New AI tools evoke a wide range of emotions - excitement, uncertainty, fear and even resistance. Esa stresses that training alone is not enough; organizations need to build psychological safety around AI: a space where people dare to experiment, fail and openly discuss what AI will really change.

The discussion stresses the importance of adopting AI not only for efficiency but also for competitiveness. It is important for organizations to show that AI is being used to strengthen their position in the market, not just to improve efficiency internally. When competitiveness and relevance go hand in hand, AI can shift from something to defend against into a capability to develop together.

Listen to tips from AI experts!

The episode is also available on Spotify

Thinking about AI issues? Leave a message and let's explore together how and where to use AI.
