AI That Works
Why smart strategy beats shiny tools every time.
By Yared Rodríguez, VP of Technology.
Image generated using Midjourney.
AI is everywhere. The latest McKinsey survey on the state of AI found that 78% of organizations were using AI in at least one business function, up from 72% in 2024 and 55% a year earlier. And the numbers continue to climb.
But not all of it is needed in the way we might think. And not all of it works as expected.
We live in a moment where generative AI steals the spotlight. Its ability to create at speed is compelling. But speed is not the only metric that matters.
What matters is relevance. What matters is fit. What matters is outcomes.
It’s not just about plugging in AI. We should ask better questions: What problem are we solving? What kind of “intelligence” does it need? What’s the most effective way to deliver results, not just deliverables?
What happens when we get it wrong.
We have seen it before: AI deployed as a proof of concept but never scaled; generative tools used for workflows that demand precision; projects that start with “let’s add AI” instead of “what is the smartest way to solve this?”
And let’s face it: everyone wants to be part of the hype. We are in the grip of AI FOMO, rushing solutions and missing the actual opportunities.
Intelligence is not in the model. It’s in the match. The match between problem and method. Constraint and capability.
The result of poor alignment? Bloated infrastructure. Teams in the dark. Trust eroded.
The intelligence spectrum.
Artificial intelligence is not just one thing. It is a toolbox of different minds, each with its own way of learning, reasoning or creating. Knowing which one to use is what separates hype from value.
Let’s look at four of the most common tools in this box, each applied to solutions we use every day.
Symbolic AI.
What is it? Structured, logic-based systems.
Think: if-this-then-that rules, legal workflows, compliance logic. It’s applied in many conventional programs that approve or reject processes based on predefined rules, for example insurance policies, healthcare procedures or administrative records.
Uses: Symbolic AI is great for traceability, auditability and rules that do not change overnight.
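To make that concrete, here is a minimal sketch in Python of what a rule-based approval step could look like; the claim fields, the rules and the 10,000 threshold are hypothetical, chosen only to show how every decision traces back to an explicit rule.

```python
from dataclasses import dataclass


@dataclass
class Claim:
    amount: float
    policy_active: bool
    procedure_covered: bool


# Hypothetical rule set for a claims triage step. Because every rule is explicit,
# every approval or rejection can be traced back to the rule that produced it.
RULES = [
    ("policy must be active", lambda c: c.policy_active),
    ("procedure must be covered", lambda c: c.procedure_covered),
    ("amount must not exceed the auto-approval limit", lambda c: c.amount <= 10_000),
]


def evaluate(claim: Claim) -> tuple[str, list[str]]:
    failed = [name for name, rule in RULES if not rule(claim)]
    return ("approved" if not failed else "rejected", failed)


decision, reasons = evaluate(Claim(amount=12_500, policy_active=True, procedure_covered=True))
print(decision, reasons)  # rejected ['amount must not exceed the auto-approval limit']
```

The audit trail is simply the list of rules that failed; nothing is learned from data, which is exactly why this approach stays predictable and reviewable.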
Machine learning (ML).
What is it? Pattern recognition from data.
Think: fraud detection, recommendations, demand forecasting. An example of a product that provides machine learning is Google Cloud Vertex AI.
Uses: Machine learning is great when applied to historical data to reveal patterns that anticipate future behavior.
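As a rough illustration of pattern recognition from data, the sketch below trains a classifier on synthetic “transactions”; the features, the label and the choice of scikit-learn’s random forest are assumptions made for the example, not a production fraud model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical transaction features (e.g. amount, hour of day, distance from home, merchant risk).
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 4))
# Synthetic "fraud" label: true when two of the features push past a threshold, plus noise.
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=5000) > 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# The model has learned a pattern from historical examples; the report shows how well it generalizes.
print(classification_report(y_test, model.predict(X_test)))
```

The value comes from the data, not the algorithm: if the history does not contain the pattern, no model will find it.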
Generative AI (GenAI).
What is it? A subfield of machine learning focused on content creation at scale.
Think: summaries, variations, drafts, creative augmentation. A well-known product offering GenAI is Midjourney.
Uses: GenAI is great when time-to-first-draft matters, or when inspiration needs a nudge.
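As a minimal sketch of time-to-first-draft, assuming the OpenAI Python SDK and an API key in the environment (a text model stands in here, since the image example above has no one-line equivalent); the model name and prompt are illustrative choices, not recommendations.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ask for several quick variations so a human can pick and refine the strongest one.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a concise marketing copywriter."},
        {"role": "user", "content": "Draft three subject lines for a product-launch email about a planning tool."},
    ],
)
print(response.choices[0].message.content)
```

The draft is a starting point, not a deliverable; the judgment about which variation fits still belongs to a person.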
Large Language Models (LLMs).
What is it? A specific application of GenAI to understand and generate human language.
Think: assistants, copilots, semantic search. Well-known products built on LLMs are ChatGPT and Claude.
Uses: LLMs are great when context matters and language is your interface.
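Semantic search is a small, concrete case of language as the interface. The sketch below assumes the sentence-transformers library and an off-the-shelf embedding model; the documents and query are made up for illustration.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative embedding model; any sentence-embedding model would behave similarly.
model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "How to reset your account password",
    "Quarterly travel expense policy",
    "Setting up two-factor authentication",
]
doc_embeddings = model.encode(docs, convert_to_tensor=True)

query = "I forgot my login credentials"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank documents by semantic similarity to the query rather than by keyword overlap.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = int(scores.argmax())
print(docs[best])  # likely "How to reset your account password"
```

Note that the query shares no keywords with the best answer; matching meaning rather than words is what the language model adds.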
Some cases need logic and traceability. Others call for predictions drawn from patterns. Others still demand speed and creativity.
These are only examples of the approaches most widely discussed and applied. Others solve different types of problems or complement these methods, such as reinforcement learning (decision-making through trial, error and reward), neuro-symbolic AI (combining statistical learning with explicit logic for explainability), and evolutionary or genetic algorithms (optimization inspired by biological evolution). Each of these approaches has its strengths, and they can often be combined to match the shape of a problem. Ultimately, they can be used to build intelligent agents designed to pursue specific goals with increasing autonomy.
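For a flavor of the evolutionary family mentioned above, here is a toy genetic algorithm on the classic one-max problem (maximize the number of 1s in a bit string); the population size, mutation rate and objective are arbitrary choices made only to illustrate selection, crossover and mutation.

```python
import random

POP_SIZE, GENOME_LEN, GENERATIONS, MUTATION_RATE = 30, 20, 50, 0.05


def fitness(genome: list[int]) -> int:
    # Toy objective: the more 1s, the fitter the genome.
    return sum(genome)


def mutate(genome: list[int]) -> list[int]:
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]


def crossover(a: list[int], b: list[int]) -> list[int]:
    # Combine two parents at a random cut point.
    cut = random.randint(1, GENOME_LEN - 1)
    return a[:cut] + b[cut:]


population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Selection: keep the fitter half, then breed and mutate to refill the population.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
    population = parents + children

print(fitness(max(population, key=fitness)))  # should approach GENOME_LEN
```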
Regardless of the model, there is something more important than what the system is: it’s how well it is set up to reason, to respond with relevance, and to produce value in context.
And that’s where many implementations fall short.
It’s not just about choosing the right kind of intelligence; it’s about optimizing how that intelligence infers.
The missing layer: inference optimization.
No matter how advanced the model, what truly determines its usefulness is how well it infers. In other words, how it interprets signals, generates meaning, and delivers actionable output. Inference is often described in artificial intelligence as the mechanical process of applying a trained model to new data. However, strictly speaking, inference is not merely the process, but the result of reasoning. While the field has adopted the term to refer to algorithmic execution, the essence of inference lies in reaching a meaningful and contextually grounded conclusion.
That distinction is important. Optimizing inference is not limited to improving algorithms or accelerating execution. It also involves designing and structuring inputs and broader contexts in ways that enhance the system’s ability to reason, incorporating human strategic and critical thinking into its reasoning processes, enabling feedback loops, designing relational depth, shaping interactions, and cultivating heuristic learning processes that help the system reach more meaningful conclusions. It even involves creating reciprocal flows of inference, not only from human to machine but also from machine to human.
You can have the smartest AI model. But if the reasoning layer is off, the results will still miss the mark.
That is why we should focus on inference optimization:
Feeding the right knowledge.
Structuring the reasoning process.
Reducing time-to-impact without increasing complexity.
Whether it is a legal triage system powered by symbolic logic, a personalization engine tuned with ML, or a GenAI tool behind guardrails and goals, what makes it work is not the buzzword: it’s the blueprint.
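As a minimal sketch of that blueprint, assuming the OpenAI Python SDK once more, the example below feeds the right knowledge by retrieving only the most relevant snippets and structures the reasoning process with an explicit prompt layout; the keyword retrieval is deliberately naive, and the knowledge base, helper names and model choice are all hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def retrieve_context(question: str, knowledge_base: dict[str, str], top_k: int = 2) -> list[str]:
    """Feed the right knowledge: pass only the most relevant snippets, not everything we have."""
    words = question.lower().split()
    ranked = sorted(
        knowledge_base.values(),
        key=lambda text: sum(word in text.lower() for word in words),
        reverse=True,
    )
    return ranked[:top_k]


def answer(question: str, knowledge_base: dict[str, str]) -> str:
    context = retrieve_context(question, knowledge_base)
    # Structure the reasoning process: context first, explicit constraints, then the question.
    prompt = (
        "Answer using only the context below. If the context is not enough, say so.\n\n"
        + "\n".join(f"- {snippet}" for snippet in context)
        + f"\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


kb = {
    "refunds": "Refunds are processed within 5 business days of approval.",
    "shipping": "Standard shipping takes 3 to 7 business days.",
}
print(answer("How long does a refund take?", kb))
```

The point is not this particular pipeline but the discipline behind it: curate the inputs, make the reasoning structure explicit, and keep a clear seam where human review can shape the output.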
Extending intelligence beyond the model.
The word “intelligence” comes from the Latin “intelligere,” meaning to read between, to choose wisely, to discern. And that’s exactly what intelligent experiences should help people do. Not just accessing more data, but reaching clearer understanding. Not just acting faster, but deciding better.
AI is not just a model: it’s a multiplier, according to IBM Chairman and CEO Arvind Krishna. Something that expands our capacity to explore, compare, infer and, ultimately, discern what really matters.
“AI is the ultimate amplifier of human intelligence,” Krishna has stated. “It’s not about replacing humans, but augmenting their capabilities.”
An Intelligent Experience is not defined by how futuristic it feels but by how deeply it enhances our ability to navigate complexity and arrive at meaningful outcomes.
It means:
A product that adapts not just to behavior, but to intention.
A workflow that surfaces clarity, not more noise.
A system that learns with us and sharpens the way we learn.
A platform that evolves with context, not just content.
Real intelligence lies in discernment. And artificial intelligence, when embedded intentionally, can extend that discernment, accelerating our ability to choose well, reason clearly and act meaningfully.
This is how we design solutions with intelligence:
By embedding it in the real work people do.
By making complexity feel natural.
By giving teams more time to think, act and impact.
By bringing the future to the present.
Because the smartest system is not the one that amazes. It’s the one that amplifies our ability to know, decide and move forward.
And when that intelligence is paired with breakthrough creativity and exceptional craft, that’s when an experience becomes truly transformative.
What this looks like in the real world.
A useful way of approaching AI is seeing it as a helper that lets us do more of what we are already great at, with more clarity, flow and confidence.
In real organizations, intelligent experiences do not look like magic. They look like:
A sales tool that knows which content actually converts — like Seismic or Highspot.
An onboarding system that teaches faster and learns what works — like Apty’s Digital Adoption Platform (DAP).
A planning interface that reduces friction and increases foresight — like Hive.
A knowledge base that gives back more than it takes — like Moveworks Knowledge Studio.
These are not isolated AI moments. They are connected experience systems built to make every interaction smarter, more fluid and more valuable.
Studies published in 2023 by McKinsey predicted that, when used effectively, GenAI could unlock between $2.6 trillion and $4.4 trillion annually in value across industries, not by replacing human decision-making, but by multiplying its reach and speed.
Effectiveness will come with inference optimization. And when we combine that intelligence with storytelling, design and attention to the overall experience, we do not just optimize: we shape systems that feel intuitive, effective and distinctly human.