AI-Assisted Delivery: What Clients Need to Know
Integrating AI in outsourcing companies is like assembling a complex jigsaw puzzle. Each piece is an answer to a distinct question. How do you bring AI into large, distributed teams? How do you balance speed with long-term quality? Which tools actually make a difference in practice?
In September’s issue of Techtonic, we’re exploring the pros and cons of AI-assisted delivery with Klika’s Muamer Ribica (VP, Engineering), who shared his perspective on the opportunities, risks, and the future of AI in outsourcing.
Give us a quick overview of what AI-assisted delivery is and how it can be integrated into large outsourcing teams that are not traditional product teams.
AI-assisted delivery is about using LLM-powered tools as a companion in your day-to-day work. Think of it like pairing with a programming buddy: during the collaboration, it’s insightful, optimistic, and agreeable, but it has a short memory span. All of these traits can be beneficial or challenging depending on how you use them.
We’re currently working in four main modes of AI-assisted delivery:
Simple chat involves copying and pasting from ChatGPT or similar tools to explore ideas, generate snippets, or draft content.
Completion includes IDE-integrated tools like Cursor that autocomplete or suggest code inline as you type.
Agentic use is a more advanced setup, like Claude Code, that can orchestrate multi-step tasks, run commands, or enforce quality gates.
Headless involves independent agents running in CI/CD, completing tasks without any human in the loop.
For outsourcing teams, the key is narrowing the model’s latent space to your specific domain. That means providing clear guidelines, domain language, and well-defined workflows so the AI aligns with established practices.
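One way to picture "narrowing the latent space" is prepending project-specific context to every model request. A minimal sketch, assuming illustrative guideline text, glossary terms, and workflow steps (none of these names or conventions come from the interview):

```python
# Hypothetical sketch: steering an LLM toward a team's domain by always
# prepending guidelines, vocabulary, and workflow to the system prompt.
# The prompt layout and example content are illustrative assumptions.

def build_system_prompt(guidelines: str, glossary: dict, workflow: list) -> str:
    """Assemble a system prompt that pins the model to the team's domain."""
    glossary_lines = "\n".join(
        f"- {term}: {meaning}" for term, meaning in sorted(glossary.items())
    )
    workflow_lines = "\n".join(
        f"{i}. {step}" for i, step in enumerate(workflow, start=1)
    )
    return (
        "You are a coding assistant on this project. Follow these rules strictly.\n\n"
        f"Guidelines:\n{guidelines}\n\n"
        f"Domain vocabulary (always use these terms):\n{glossary_lines}\n\n"
        f"Required workflow:\n{workflow_lines}\n"
    )

prompt = build_system_prompt(
    guidelines="Prefer small, reviewable changes. Never touch generated files.",
    glossary={"SKU": "stock-keeping unit", "GTIN": "global trade item number"},
    workflow=["Restate the task", "Propose a plan", "Implement", "List follow-up risks"],
)
```

The same assembled prompt would be sent with every request, so the model's answers stay anchored to the team's established practices rather than generic training-data conventions.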
Integration usually starts small, targeting repetitive, error-prone tasks, and then grows into workflows where AI meaningfully augments delivery.

What challenges are common when rolling out AI tools across distributed teams? How can companies encourage consistent adoption among all users and teams?
The first challenge is separating hype from value. We’re likely at the hype peak, and expectations can easily overshoot reality. At Klika, we frame adoption as an investment in quality and resilience, not a shortcut for cutting engineering headcount.
Another challenge is avoiding what is colloquially known in engineering circles as “tab monkey” behavior: developers accepting AI suggestions blindly and flooding projects with unreviewed code. The best antidote is workflow integration plus guardrails: clear prompting guidelines, embedding AI into CI/CD checks, and keeping generated features small and reviewable.
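One of those guardrails, keeping generated changes small and reviewable, can be enforced mechanically in CI. A minimal sketch, with an illustrative threshold; a real pipeline would feed it the output of `git diff --numstat`:

```python
# Hypothetical sketch of a CI review gate that fails when a change set is
# too large to review carefully. The 400-line threshold is an assumption.

def review_gate(numstat_lines: list, max_changed_lines: int = 400):
    """Return (passes, total_changed_lines) for `git diff --numstat` output."""
    total = 0
    for line in numstat_lines:
        added, deleted, _path = line.split("\t", 2)
        # git prints "-" for binary files; count them as zero text changes.
        if added != "-":
            total += int(added)
        if deleted != "-":
            total += int(deleted)
    return total <= max_changed_lines, total

ok, changed = review_gate(["120\t30\tsrc/app.py", "-\t-\tassets/logo.png"])
# → ok is True, changed is 150
```

A gate like this doesn't judge code quality, but it forces AI-generated work to arrive in portions a human reviewer can actually absorb.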
Consistency comes not from forcing tools on teams but from making AI the natural option for repetitive, error-prone work that humans shouldn’t waste their time on.
This behavior leads to hastily generated AI solutions that may introduce hidden complexity or lower-quality code. The risk of hidden technical debt is, so to speak, omnipresent. What are the most common pitfalls, and how can teams avoid them?
The biggest pitfall is what might be called “YOLO vibe coding”: generating large pull requests with AI, pasting them in, and assuming someone else will handle quality checks. That accelerates technical debt instead of reducing it.
And about vibe coding specifically: it’s a misleading term. In my view, “vibe” means developing a sense for what makes a solution good or bad. That taste is craftsmanship, and you can’t blindly outsource it to an LLM. If you stop practicing your craft, you dull that sense because, like any other craft, coding skills come from typing and implementing solutions, not just from reading code or, worse, blindly accepting completions.
Use AI as a sidekick, not a replacement. Let it handle the repetitive tasks, but keep yourself in the loop where judgment, taste, and design sense matter most.

In your experience, which types of AI tools have proven most valuable for different use-cases?
Code assistants, especially for onboarding. An agentic tool can parse a codebase, highlight entry points, summarize architectural choices, and cut ramp-up time.
Product discovery assistants are chat tools that help structure requirements, generate options, and align domain vocabulary across teams.
Natural language reporting systems, when tied to MCPs or domain data sources, produce clear, contextual insights without manual reporting overhead.
Personally, I value agentic assistants with specific roles more than raw completion because they allow me to delegate repetitive checks while maintaining control over design.
When first meeting new clients, what is their initial perception of AI-assisted delivery in outsourcing? Added value, must-have, or maybe skepticism?
Most clients begin with healthy skepticism, but it’s encouraging that everyone is eager to find their position in this new context. They’re curious and open, even if they’re cautious.
We focus on showing real, measurable value rather than making big promises. Rather than jumping on the hype train that AI will replace entire teams, we emphasize improving quality and automating human-error-prone tasks.
Once clients see concrete outcomes like that, the conversation shifts from “Is this hype?” to “How do we integrate it safely?”
Do you dare to predict or imagine the ways AI will change the outsourcing delivery model in the next 3-5 years?
It’s almost impossible to predict the future with precision, but here are some bold ideas.
It looks like we are approaching the limits of today’s LLM architectures. The next phase will focus on fine-tuning models for specific use-cases and hosting them securely within local environments, making MLOps and fine-tuning pipelines essential skills once workflow mastery is in place.
It’s also interesting that, unexpectedly, LLMs have arrived as inherently democratized technology: they give huge leverage to individuals and small teams, but they don’t magically make enterprise-wide adoption simple. So I don’t see a radical replacement of human engineers without significant consequences on the output. Instead, I expect AI to let an adequate number of people deliver far more, while also exposing and normalizing past over-hiring where it exists.
Another likely effect is that LLMs are igniting a wave of in-house product building. As more companies create internal tools and niche products, the total software surface area will expand, resulting in more systems that need integration and long-term maintenance.