How Developers Should Build AI Tools – So The EU Doesn’t Lose IT

The August 2026 deadline for the EU AI Act is getting close, and companies and developers building AI products are starting to feel it.
High-risk AI systems need to be compliant by then, and the ones doing it well aren’t treating it as a last-minute legal scramble. They’re building compliance in from the start.
We sat down with Ervin Jagatic (AI Business Unit Director, Infobip) to talk about what that actually looks like at Infobip, and why compliance-by-design is turning into something engineers think about, not just lawyers.
Compliance starts in the design phase
AI Act compliance doesn’t start at deployment. Ervin is clear on this: it has to enter during system architecture, before a single line of agent code is written:
Compliance enters during the design phase – system architecture, data flow planning. Every layer of our AI Agents product, from planning to memory to tool execution, needs to be designed with traceability and human oversight in mind. We can’t bolt that on after the orchestrator is already coordinating multiple sub-agents autonomously.
The AI Act is changing product development in three ways
That shift has already changed how Infobip’s teams design and ship AI-powered features. Ervin points to three major changes that came directly from the AI Act.
1. Transparency and auditability
Transparency is the first. Infobip’s AI Agents documentation is explicit: “you cannot script exact responses” – agents “generate responses dynamically.”
That unpredictability is exactly why the company expanded its logging and analytics infrastructure, Ervin explains:
The AI Act’s transparency obligations pushed us to build comprehensive logging into our Insights and Analytics layer. Every agent execution now produces detailed logs – requests, responses, processing steps. That’s not just good engineering, it’s a direct response to auditability requirements.
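Infobip doesn’t publish its log schema, so take the following as a minimal sketch of the idea: one structured record per agent execution, capturing the request, the response, and the processing steps in between, so that an audit question can be answered with a query rather than an anecdote. All class and field names here are illustrative, not Infobip’s actual schema.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentExecutionLog:
    """One audit record per agent execution: who asked what, which
    steps ran, and what came back. Field names are illustrative."""
    agent_id: str
    request: str
    response: str
    processing_steps: list[str]
    execution_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def emit(self) -> str:
        # Structured JSON so an analytics layer (or a regulator)
        # can query executions instead of grepping free text.
        return json.dumps(asdict(self))

log = AgentExecutionLog(
    agent_id="support-agent-01",
    request="Where is my order #1234?",
    response="Your order shipped yesterday and arrives Friday.",
    processing_steps=["plan", "tool:order_lookup", "generate_response"],
)
print(log.emit())
```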
2. Explicit guardrails instead of assumptions
The second shift relates to behavioral boundaries and guardrails. Infobip now requires customers to define capability boundaries, mandatory restrictions, and compliance rules directly inside every agent’s system prompt, Ervin points out:
Our own documentation warns that if you do not explicitly define these constraints, the agent makes assumptions. That design philosophy, forcing explicit guardrails rather than relying on implicit model behavior, comes directly from the Act’s emphasis on risk mitigation by design.
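The documentation describes the principle rather than a concrete format, so here is a hedged sketch of what explicit guardrails in a system prompt can look like: capability boundaries, mandatory restrictions, and compliance rules written down and rendered into the prompt verbatim, with nothing left for the agent to assume. The structure and names below are hypothetical.

```python
# Hypothetical guardrail spec; the point is that every constraint
# is stated explicitly, so the agent never has to assume one.
GUARDRAILS = {
    "capability_boundaries": [
        "Answer questions about orders, shipping, and returns only.",
    ],
    "mandatory_restrictions": [
        "Never reveal another customer's data.",
        "Never promise refunds; escalate refund requests to a human agent.",
    ],
    "compliance_rules": [
        "Identify yourself as an AI assistant at the start of a conversation.",
    ],
}

def build_system_prompt(role: str, guardrails: dict[str, list[str]]) -> str:
    """Render the guardrails into the system prompt itself,
    instead of relying on implicit model behavior."""
    sections = [f"You are {role}."]
    for heading, rules in guardrails.items():
        sections.append(heading.replace("_", " ").title() + ":")
        sections.extend(f"- {rule}" for rule in rules)
    return "\n".join(sections)

print(build_system_prompt("a customer-support agent for an online retailer", GUARDRAILS))
```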
3. Human oversight is part of the architecture
The third shift is human oversight – not as an external policy layer, but built directly into the product architecture. Ervin explains:
AgentOS uses a human-in-the-loop model where complex issues are escalated from AI agents to human agents. We are talking about a core architectural decision that satisfies human oversight requirements while also improving the product.
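AgentOS internals aren’t public, so the sketch below only illustrates the pattern Ervin describes: the AI agent answers what it is confident and permitted to answer, and everything else escalates to a human, with the handoff itself recorded as an auditable event. The routing rules and threshold are assumptions.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff; tune per use case

@dataclass
class AgentReply:
    text: str
    confidence: float  # model's self-assessed confidence, 0..1
    topic: str

# Topics that always go to a human, regardless of confidence.
ALWAYS_ESCALATE = {"refund", "account_closure", "legal_complaint"}

def route(reply: AgentReply) -> str:
    """Human-in-the-loop routing: escalate on sensitive topics or
    low confidence; otherwise let the AI agent answer."""
    if reply.topic in ALWAYS_ESCALATE or reply.confidence < CONFIDENCE_THRESHOLD:
        # The handoff itself becomes an auditable event.
        print(f"ESCALATED: topic={reply.topic}, confidence={reply.confidence:.2f}")
        return "human_agent"
    return "ai_agent"

print(route(AgentReply("Your parcel arrives Friday.", 0.92, "shipping")))  # ai_agent
print(route(AgentReply("I can refund that for you.", 0.95, "refund")))     # human_agent
```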
Why compliance-by-design is becoming the standard
Ervin believes compliance-by-design is quickly becoming the new industry standard, particularly for teams building enterprise-grade AI systems:
For developers and ML engineers at Infobip, compliance-by-design means several practical things. It means every AI agent we build has a defined architecture where an orchestrator coordinates sub-agents, each with explicit scope, tools, and behavioral rules.
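That quote describes an architecture rather than an API, but a small configuration sketch makes it concrete. The dataclass and the particular sub-agents below are hypothetical; the point is that each sub-agent’s scope, tools, and behavioral rules are declared explicitly, and the orchestrator routes only within those declarations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SubAgent:
    # Every sub-agent declares what it may do up front;
    # the orchestrator never routes outside these declarations.
    name: str
    scope: str                    # what it is allowed to handle
    tools: tuple[str, ...]        # the only tools it may call
    behavioral_rules: tuple[str, ...]

SUB_AGENTS = (
    SubAgent(
        name="billing",
        scope="invoices and payment status",
        tools=("invoice_lookup",),
        behavioral_rules=("Never quote prices absent from the invoice system.",),
    ),
    SubAgent(
        name="shipping",
        scope="order tracking and delivery estimates",
        tools=("order_lookup", "carrier_status"),
        behavioral_rules=("Escalate lost-parcel claims to a human agent.",),
    ),
)

def orchestrate(intent: str) -> SubAgent | None:
    """Route an intent to the sub-agent whose declared scope covers it;
    return None (i.e., escalate) when nothing matches."""
    for agent in SUB_AGENTS:
        if intent in agent.scope:
            return agent
    return None

print(orchestrate("payment status"))   # -> the billing sub-agent
print(orchestrate("job application"))  # -> None: out of scope, escalate
```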
It also changes how engineering teams think about data. “It means our engineers think about data lineage and provenance from the moment they design a training pipeline, not because someone from legal asked them to, but because the architecture demands it,” Ervin points out.
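One way to read “the architecture demands it”: provenance becomes a required field, so a training record without lineage metadata can’t even be constructed, let alone enter the pipeline. A minimal sketch, with illustrative field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TrainingRecord:
    text: str
    # Provenance is required, not optional: a record with no
    # lineage simply cannot be constructed.
    source: str          # where the data came from
    collected_on: date
    consent_basis: str   # legal basis for using it

    def __post_init__(self):
        if not self.source or not self.consent_basis:
            raise ValueError("record rejected: missing lineage metadata")

ok = TrainingRecord(
    "How do I reset my password?", "support_tickets_2025", date(2025, 3, 14), "contract"
)
print(ok.source)

try:
    TrainingRecord("mystery text", "", date(2025, 3, 14), "")
except ValueError as e:
    print(e)  # record rejected: missing lineage metadata
```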
To support that approach, Infobip invested heavily in tooling and analytics infrastructure that now serves both operational and regulatory purposes, Ervin says:
Our Insights and Analytics platform is our compliance infrastructure. When a regulator asks ‘show me how this AI system made this decision,’ we need to answer that question with structured evidence, not anecdotes.
Risk assessment depends on the use case
Internally, the company approaches risk assessment through a framework closely aligned with the AI Act’s four-tier classification model: unacceptable, high, limited, and minimal risk. However, Ervin notes that Infobip applies this framework at the feature level rather than only at the system level:
This is important because a platform like Infobip’s serves vastly different use cases. An AI gamification tool for lead generation on WhatsApp has a fundamentally different risk profile from an AI agent that handles authentication.
The company evaluates risk based on several factors, including the sensitivity of the data involved, the autonomy of the AI component, and the intended use case, Ervin explains:
Our internal process follows a lifecycle approach. During identification, we map known and foreseeable risks, including risks from reasonably foreseeable misuse. During estimation, we assess probability and severity. During mitigation, we implement design controls, testing procedures, and human oversight.
Monitoring continues after deployment through analytics infrastructure designed for drift detection, incident investigation, and performance tracking. For enterprise customers, risk assessment also becomes a collaborative process between Infobip and client compliance teams.
A bank using our AI agents to automate customer support has different risk considerations than a retail brand using the same technology for product recommendations. The platform is the same; the risk profile is not.
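To make the feature-level framing concrete, here is a toy classifier over the Act’s four tiers, driven by the factors Ervin lists: data sensitivity, autonomy, and intended use. The rules are stand-ins for illustration, not Infobip’s actual process.

```python
from enum import Enum

class RiskTier(Enum):
    # The AI Act's four-tier classification model.
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def classify_feature(handles_sensitive_data: bool,
                     acts_autonomously: bool,
                     affects_access_to_services: bool) -> RiskTier:
    """Toy feature-level classifier: the same platform hosts features
    at different tiers depending on data sensitivity, autonomy,
    and intended use. These rules are illustrative only."""
    if affects_access_to_services and acts_autonomously:
        return RiskTier.HIGH
    if handles_sensitive_data:
        return RiskTier.HIGH if acts_autonomously else RiskTier.LIMITED
    return RiskTier.MINIMAL

# A WhatsApp gamification tool for lead generation vs. an agent
# that gates authentication: same platform, different tiers.
print(classify_feature(False, True, False))  # -> RiskTier.MINIMAL
print(classify_feature(True, True, True))    # -> RiskTier.HIGH
```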
August 2026 is approaching…
As August 2026 closes in, Ervin says the conversation has shifted:
The question is no longer whether to integrate compliance into product development. The question is whether you’ve built the infrastructure to do it at speed.


