AI in practice: what we’ve learned so far

Blog | January 12, 2026

AI has reached a strange phase. The technology is powerful, widely available, and improving fast, yet most AI initiatives still fail to deliver lasting value. Not loudly. Quietly. They stall, get sidelined, or remain stuck as experiments.

This is not a model problem. It’s a structural one. Over the past few years, we’ve tested AI tools, evaluated agent frameworks, built internal AI policies, explored new interfaces, and worked hands-on with data-heavy systems.

This article brings together what we’ve learned from that practical work: a hands-on summary of recent developments in the AI space, focused on what matters in real systems.

Across all of this work, one pattern keeps repeating:

  • AI fails when it’s treated as a feature.

  • It works only when it’s treated as part of the system.

Some of the linked articles are available only in Finnish. If you’d like to discuss any of these topics in more detail, or want an English walkthrough of a specific area, feel free to get in touch.

AI tools alone don’t solve real problems

Modern AI models are impressive on their own. They write code, summarize documents, and generate convincing answers. But capability alone does not translate into usefulness.

When we tested tools like Gemini CLI, the conclusion was clear: raw access to a powerful model is not the same as reliable productivity. Without boundaries, expectations, and context, output quality varies wildly.

The same pattern appeared when evaluating agent-based approaches. As we noted in our analysis of ChatGPT Agents, the promise of autonomy quickly collapses without a strict scope and clear responsibility:

AI systems do not fail because they are unintelligent. They fail because they are asked to operate without a clearly defined environment.

AI only becomes useful when it’s connected to real systems

A common assumption is that better models automatically lead to better results. In practice, most value comes from something else entirely.

AI starts to matter when it is connected to real systems, real data, and real permissions.

That’s why we’ve emphasized integration and context over raw intelligence. As described in our article on the Model Context Protocol, models become useful only when their interaction with services is explicit and controlled.

A model that “knows things” is interesting. A model that can safely operate inside an existing system creates value.
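
To make this concrete, here is a minimal, illustrative sketch of the underlying idea, not the Model Context Protocol itself: the model can only invoke operations that have been explicitly registered, and every registered operation declares the permissions it needs. All tool names and scopes below are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """One explicitly registered operation the model may call."""
    name: str
    handler: Callable[..., str]
    scopes: frozenset[str]  # permissions this tool requires

class ToolRegistry:
    """The model never touches services directly; every call goes through here."""
    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def call(self, name: str, granted_scopes: set[str], **kwargs) -> str:
        tool = self._tools.get(name)
        if tool is None:
            return f"error: unknown tool '{name}'"  # nothing implicit is allowed
        if not tool.scopes <= granted_scopes:
            return f"error: missing scopes {tool.scopes - granted_scopes}"
        return tool.handler(**kwargs)

# Illustrative tool: read-only access to one well-defined system.
def lookup_order(order_id: str) -> str:
    return f"order {order_id}: shipped"  # placeholder data

registry = ToolRegistry()
registry.register(Tool("lookup_order", lookup_order, frozenset({"orders:read"})))

# A model request for this call is executed only within the scopes it was granted.
print(registry.call("lookup_order", {"orders:read"}, order_id="A-1001"))
```

The design choice is the point: the boundary between the model and the surrounding system is explicit, inspectable, and enforced outside the model.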

The same applies to user interfaces. Chat-based UIs can outperform traditional search and navigation, but only when the underlying systems are coherent. Otherwise, AI doesn’t simplify complexity; it exposes it.

Most AI risks come from normal usage, not failures

Many of the risks related to AI services are not caused by failures but by normal operation. These services process data in ways that are not always obvious: inputs can be reused, transformed, or retained. Without clear rules, sensitive information can leak through everyday use, not through mistakes or crashes.

That’s why we’ve taken a hard look at data security in AI services. And it’s why we formalized internal AI usage guidelines early on. As we’ve stated, AI governance is not about restriction. It’s about making AI use safe and repeatable at scale.

If you don’t know:

  • what data AI can access,

  • where that data flows,

  • and who is accountable for its use,

then scaling AI will amplify risk faster than value.
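
One way to make those three questions explicit is a small, auditable policy check in front of every AI service call. The sketch below is purely illustrative and assumes hypothetical data classifications, service names, and owners; it is not a complete governance solution.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataPolicy:
    """Answers the three questions for one class of data."""
    classification: str                 # what data AI can access
    allowed_services: frozenset[str]    # where that data may flow
    owner: str                          # who is accountable for its use

# Hypothetical policies, normally maintained alongside the data catalogue.
POLICIES = {
    "public": DataPolicy("public", frozenset({"chat-assistant", "search-index"}), "comms-team"),
    "customer-pii": DataPolicy("customer-pii", frozenset(), "data-protection-officer"),
}

def may_send(classification: str, service: str) -> bool:
    """Return True only if this data class is explicitly allowed to reach the service."""
    policy = POLICIES.get(classification)
    return policy is not None and service in policy.allowed_services

# Everyday use: the check happens before any data leaves the organisation.
assert may_send("public", "chat-assistant")
assert not may_send("customer-pii", "chat-assistant")  # blocked by default
```

The details will differ per organization; what matters is that the rules exist somewhere they can be reviewed, versioned, and enforced.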

Data readiness decides AI readiness

AI cannot compensate for missing or poorly structured data. Across our AI and data work, a consistent requirement keeps surfacing: data-driven features and AI use cases are only possible if the right data is being collected, understood, and maintained.

Whether AI can be used effectively depends on the current state of an organization’s systems, data, and practices. This connects directly to how organizations collaborate and share knowledge. Fragmented ownership and unclear data responsibilities block AI long before models enter the picture. AI does not create insight from nothing. It reflects the structure that already exists, or exposes when that structure is missing.

Productivity gains are real, but limited

AI can improve productivity, especially in knowledge work and software development. We’ve shown practical examples of this in our coding productivity article. But these gains are not unlimited. AI works best when outputs are reviewed, responsibility remains with people, and usage is limited to clearly defined contexts. When AI is pushed toward autonomous decision-making too early, the same failure patterns seen in agent experiments tend to repeat.

Used well, AI removes friction. Used without constraints, it introduces new kinds of uncertainty.

AI is an operating model decision

The most common AI failure mode is simple: AI is added after systems are built. Organizations that succeed tend to do the opposite. They treat AI as a design constraint early, especially in:

  • data architecture,

  • access control,

  • interface design,

  • governance.

This matches what we’ve consistently warned about in broader AI adoption discussions. When AI is treated as infrastructure rather than a feature, it stops being fragile. It becomes predictable. Sometimes even boring.

That’s usually a good sign.

TL;DR

AI does not fail because the technology isn’t ready. It fails because the surrounding systems aren’t. Treat AI as a tool, and you’ll get experiments. Treat it as part of your operating model, and you’ll get results. Everything else is noise.

Want to know more?

Our expertise is at your service. Whether you’re starting a new project or need assistance with an existing one, we’re here to help.

Say hello to get the ball rolling.