Want Better Security? Test Like Attackers Would
AI moves fast, and so do the threats that come with it.
Roland Liposinović, Security Governance Generalist at Infobip, sees a critical shift: security should no longer be an afterthought or a compliance checkbox:
Make security a growth tool, not a tax. Build safety in from day one, and audits finish faster, big customers say yes sooner, and purchasing roadblocks disappear.
This mindset shift is one many organizations still struggle to make.
Too often, security is treated as a necessary evil, something to appease auditors and regulators. But Roland argues that security-first development is a competitive differentiator. By integrating controls early, companies can unlock new markets more quickly, shorten sales cycles, and establish trust in ways that directly impact the bottom line.

Security isn’t red tape. It’s quality control for modern, AI-powered products. Trust drives sales. Secure design grows trust over time.
Embed security across the entire development lifecycle
The question, then, is how to embed these practices across the entire software development lifecycle, from planning and coding to testing, deployment, and operations, without slowing down AI-driven innovation.
Roland’s answer is to make security invisible, automated, and developer-friendly.
“Think automatic seatbelts, not checklists,” he says. At Infobip, the team embeds rules directly into their cloud setup and delivery pipelines, ensuring that “the safe way happens by default.” Automated checks scan for vulnerabilities, exposed secrets, risky dependencies, and unvetted model files every time developers save code. If something is off, the build fails fast with clear feedback.
Feedback should arrive in minutes while developers are still working, not days later. Security runs next to the team, not in front of it, blocking the door.
One additional thing you can do before major projects is run lightweight risk assessments focused on a few key questions: How could this feature be misused or abused? What data does it touch? Who could misuse it? This practice, repeated whenever something changes, enables threat modeling to remain fast and continuous.
When it comes to testing, you should test like attackers would. “We throw malicious prompts, poisoned data, and guardrail-breaking attempts at our AI systems before release,” he says. “If our AI misbehaves, we fix it before anyone else can exploit it.”
Make it zero-trust!
AI helps defenders, but it also helps attackers. Adaptive phishing, deepfakes, and model inversion attacks are no longer hypothetical – they’re real. Roland advocates for a layered defense strategy that combines privacy-preserving techniques, governance frameworks, and culture change.
For model inversion, he points to regularization techniques, API access controls, and specialized defenses such as trapdoors to misdirect attackers.
However, data discipline matters just as much: teams minimize personal data, de-duplicate records, and apply strong consent and retention policies before training models. On the human side, Roland is blunt:
Make it zero-trust. Strong login, least privilege, and constant verification for people, services, and AI models.
His team conducts frequent, audience-tailored awareness sessions and real-world drills, ranging from deepfake scenarios to phishing simulations, so that employees can recognize emerging threats.
Strict communication rules help as well: sensitive actions like payments, access changes, and data requests must go through verified channels with two-person approval, never via informal messages or DMs.
If a request is urgent and secret, slow down. We give staff explicit cover to pause and verify even if it is “the CEO” on the line.
Security isn’t a cost center – it’s a revenue enabler
Roland emphasizes measurement as the bridge between technical controls and leadership buy-in. He recommends tracking metrics such as time to close deals, incident rates, audit duration, and verification rates before taking high-risk actions. Mapping controls to established frameworks, such as CIS Controls v8, NIST 800-53, and ISO 27001/27002, streamlines audits and makes funding more defensible.
When you can prove that certifications and clear proof of controls shorten sales cycles and open partnerships, suddenly security isn’t just a cost center. It’s a revenue enabler.
Treat the pipeline like production
As AI accelerates software delivery, the CI/CD pipeline has become the beating heart of modern development, but it has also become an increasingly attractive target for attackers. Roland warns that organizations can’t afford to treat their delivery pipelines as second-class citizens.
Treat the pipeline like production. It runs the factory, protect it like crown jewels.
Securing automated delivery flows starts with proof, not trust. Only signed code, images, and models are allowed through. “If it isn’t signed, it doesn’t ship,” he emphasizes. The system automatically scans dependencies, containers, secrets, and cloud configurations, and it halts the build immediately when it finds critical issues.
Teams isolate access using short-lived tokens and separate runners, eliminating the need for “kubectl from a laptop” shortcuts. The pipelines themselves are under constant surveillance.
Alert on strange runner behavior or workflow changes. If something looks off, pause and investigate before it spreads.
Track behavior, not just components
This rigor extends beyond internal code. As AI ecosystems grow more interconnected, the supply chain, spanning third-party libraries, pretrained models, datasets, and vendors, has become a prime target for sophisticated attacks. Roland advocates for a “trust, but verify” posture.
Track what’s inside. Ship an SBOM with every release, apps, containers, and model bundles, so you know every ingredient.
Teams must sign and verify every artifact, from model files to data packages, before using it. Teams don’t take third-party components at face value; they vet vendors for lineage, update habits, and incident history, and they require formal attestations.
Pretrained or open-source models are quarantined by default until they are scanned, wrapped, and continuously monitored for security vulnerabilities. Once the system is in production, teams can track behavior in real-time.
Don’t just list components, watch what they actually do.
Classify your AI by risk
With the EU AI Act and new data protection laws reshaping the regulatory landscape, Roland sees compliance not as a scramble at launch but as an architectural principle.
Design for the EU AI Act and friends from day one. Classify your AI by risk, attach the right controls, and plan human oversight where it is required.
This regulatory-first mindset drives concrete engineering practices: teams bake data minimization and purpose limitation into schemas and pipelines, not just policy documents. Model cards, decision logs, and clear appeal paths make AI decision-making explainable for both auditors and end users. Teams maintain immutable logs and model lineage with retention policies that align with legal obligations.
Be audit-ready, always. It’s much cheaper than retrofitting compliance later.
Get all signals in one place
As attackers adapt their tactics in real time using AI, defensive strategies must also become equally dynamic. Roland emphasizes the importance of observability, telemetry, and AI-driven defense in detecting anomalies before they escalate.
“Get all signals in one place,” he says. Teams aggregate logs from endpoints, identity systems, APIs, data jobs, and models into a single observability layer. Crucially, security teams monitor not just applications but the models themselves, tracking drift, unsafe outputs, and suspicious prompt behavior.
Detection should be trained on your own operational environment rather than generic threat baselines, so anomalies stand out quickly. When something triggers, automated playbooks in SOAR systems can take immediate action: isolating systems, rotating secrets, revoking tokens, or rolling back versions within minutes.
You can’t wait for a human ticket queue when the attack is adapting on the fly.
Make security a team habit
For all the technology and governance, Roland insists that the real force multiplier is culture. Security can’t live in a silo, it has to become a team habit.
Plant security champions in every squad. Peer-to-peer help beats one central bottleneck.
Regular tabletop exercises tied to real-world projects, such as handling deepfake scams, leaked credentials, or prompt injection attacks, keep teams alert and well-practiced.
Teams can also use positive reinforcement: they can celebrate clean audits, sharp threat models, and early bug catches publicly. To make good behavior the default, Infobip provides “paved roads,” opinionated templates, and secure defaults that make the safe path the easiest one.
With these layered strategies Roland is helping redefine what “secure AI” looks like in practice.
Security isn’t a brake on innovation. It’s what lets you innovate safely and keep that advantage.


