4 Lessons We Learned from Bringing AI to Our Company
We’re living in an era where everyone is focused on transforming how you work by pairing you with a copilot of sorts, one that makes you more productive by handling mundane tasks.
Beautiful, smart deep learning models are exposed through an ugly CLI where you type in your thoughts. While this can lead to success, you have to play it smart – it doesn’t always work out, nor is it as straightforward as it sounds.
That’s why it’s fascinating to explore the common pitfalls that come with the many types of GenAI assistants being aimed at individuals and teams.
Prepare for the AI Assistant Era
Microsoft has over 30 copilots on its roadmap. When success is uncertain, it seems the strategy is to take out the Gatling gun and spray it across the field. Here, the limitation isn’t so much the technology as it is imagination and identifying the right use cases (hence, 30+ versions). To succeed, preparation is key.
And this is where top-down support becomes essential. Whether it’s company policies that give you the freedom to explore new technology safely or having C-level executives as sponsors of the initiative, this support is crucial for success.
Luckily, at Infobip, we have both, and our team has received tremendous support from the top for various initiatives where new technologies and processes are key.
Around the same time, several new people joined the team within a short period. Given our work with complex Product Development and Engineering topics, we were the perfect candidates to test how well a piece of technology could help us cover all the bases. During this time, we became heavy power users of a wide array of GenAI tools.
Code assistants? Check.
Custom-deployed internal GenAI workspace with our own data and custom APIs serving function calls? Check.
FastAPI endpoints for specific, often novel tasks like GraphRAG? Double check.
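To make the GraphRAG item less abstract, here is a toy sketch of the core idea: rather than matching text chunks directly, you walk a small entity graph and pull in facts about neighbours of the entities mentioned in the question. The graph, the facts, and the one-hop expansion are all hypothetical simplifications of what a real GraphRAG pipeline does.

```python
# Toy GraphRAG-style retrieval (hypothetical, illustrative only):
# entities and their relations form a graph; retrieval expands the
# entities found in the question by one hop before building context.
graph = {
    "billing": {"invoices", "payments"},
    "invoices": {"pdf-export"},
    "payments": set(),
    "pdf-export": set(),
}
facts = {
    "billing": "Billing aggregates usage per account.",
    "invoices": "Invoices are generated monthly.",
    "payments": "Payments settle via the payment gateway.",
    "pdf-export": "Invoices can be exported as PDF.",
}

def graph_retrieve(question: str) -> list[str]:
    # Entities literally mentioned in the question
    mentioned = [e for e in graph if e in question.lower()]
    # Expand one hop so related facts also reach the LLM context
    selected = set(mentioned)
    for e in mentioned:
        selected |= graph[e]
    return [facts[e] for e in sorted(selected)]

context = graph_retrieve("How does billing work?")
```

The point of the graph hop is that “invoices are generated monthly” ends up in the context even though the question never mentions invoices – something plain chunk retrieval would likely miss.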
It wasn’t easy, but we wouldn’t go back. This is our story of lighting the match and keeping the fire burning.
Do the Intro – But Don’t Stop There
So, you have a shiny new piece of technology that everyone’s talking about and at least one enthusiast ready to light the fire. How do you start?
You begin with an introduction to the team – a bit of theory, a bit of practice.
Show concrete examples that people can try right away or at least comment on immediately. I’d emphasize the ‘a bit of theory’ part – after all, you can’t turn everyone into a Machine Learning Engineer overnight.
But that’s not the only potential pitfall. Often, people give just one demo, and that’s it. Everyone returns to their usual tasks, and you don’t get any real traction. You might catch a few intrinsically motivated individuals, but that’s not enough to make a wider impact.
So, after the demo, we rolled up our sleeves and started using it for everything – chatting with web search, brainstorming, you name it. The ice was broken.
Deploy On-Premise and Make Your Security Team Happy
Now that people are aware of the technology, you want them to freely use it every day without any obstacles.
Next, you’ll face potential roadblocks, as privacy and security teams will be looking into where the models you use are hosted and where the data is stored.
ChatGPT, Gemini, or anything free of charge – and a privacy nightmare – is out of the question.
This is when we deployed our custom GenAI workspace on our own VMs, connecting it to the GenAI models.
For starters, you don’t need the on-prem hardware, so we hosted the models in our tenant with our cloud provider. This ensures data security, allows us to leverage internal data for analysis, broadens the context and understanding of the LLM, and even enables fine-tuning with our own data.
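As a sketch of what “connecting to models hosted in our own tenant” looks like in practice, the snippet below builds a request against an OpenAI-compatible chat endpoint. The base URL and model name are placeholders for your own deployment, not real services.

```python
# Hypothetical sketch: target models hosted in your own cloud tenant
# instead of a public chatbot. URL and model name are placeholders.
import json

def build_chat_request(base_url: str, model: str, user_message: str):
    url = f"{base_url}/v1/chat/completions"  # OpenAI-compatible route
    headers = {"Content-Type": "application/json"}
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "https://genai.internal.example", "internal-llm", "Summarise this ticket."
)
# The request itself would go out via urllib.request or httpx,
# so all traffic stays inside the company network.
```

Because the endpoint speaks the same API shape as the public services, most open-source workspaces can be pointed at it with a single configuration change.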
When making the app or framework approachable, don’t lock it behind a wall of code that only your ML engineers understand. It’s crucial to make it easy to access and personalize.
As engineers, we prefer deploying solutions we’ve created or customized to suit our needs. The best approach is to explore open-source projects or build your own solutions using low-barrier frameworks like Gradio or Streamlit for the UI and LangChain, LangGraph, or LlamaIndex for the backend.
For the first option, you’d need some DevOps know-how; for the second, some developer resources. Since we had both, it was easy. We use open-source workspaces for day-to-day tasks and custom implementations via FastAPI with LangChain or LlamaIndex, and we often integrate the two so our no-code colleagues can benefit as well.
Don’t Let People Get Lost in the Resources
If you’ve come across new terms, acronyms, or frameworks in the previous paragraph, I feel your pain. With so much new terminology and architecture constantly emerging, it can be overwhelming.
Also, it’s a mess right now if you’re blindly venturing into the field of (Gen)AI. Everyone is trying to sell you their academies, courses, and whatnot. Learning platforms like Udemy and video platforms like YouTube offer hundreds of thousands of different versions of the same thing, and unfortunately, not everyone on the other side of the screen is an expert, nor should you listen to all that noise.
The truth is, quality beats quantity every time.
Trying to get through an 18-hour learning path just to get your bearings? Think again – you can spend your time more wisely, and so can your colleagues.
So, we skipped all of that and took a focused approach, compiling the best terminology and architectural patterns with plenty of examples.
For instance, something like RAG (Retrieval Augmented Generation) could be explained in plain English to anyone in the company, without delving into why ada-002 produces 1536-dimensional vectors.
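That plain-English explanation of RAG fits in a few lines of code, too. The sketch below uses naive word overlap instead of real embedding vectors, and a prompt template of our own invention – it only illustrates the retrieve-then-generate shape, not a production pipeline.

```python
# Minimal RAG sketch (illustrative): pick the most relevant chunk by
# word overlap, then ground the prompt in it. Real systems use
# embedding vectors and an actual LLM call instead.
import re

def tokenize(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, chunks: list[str]) -> str:
    q = tokenize(question)
    # Return the chunk sharing the most words with the question
    return max(chunks, key=lambda c: len(q & tokenize(c)))

def build_prompt(question: str, context: str) -> str:
    return (
        f"Answer using only this context:\n{context}\n\n"
        f"Question: {question}"
    )

chunks = [
    "Our messaging APIs support SMS and WhatsApp.",
    "The cafeteria opens at nine.",
]
question = "Which channels do the APIs support?"
prompt = build_prompt(question, retrieve(question, chunks))
```

The key takeaway for a non-technical audience is the same as in the prose: the model answers from retrieved company context, not from whatever it memorized during training.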
How you want to do it is up to you: workshops are fine, as are microlearning courses on your internal LMS; in-person hackathons are even better. Don’t limit yourself to just one approach.
It’s hard to over-communicate here and very easy to fail fast with just one intro session and call it a day. If you have a Learning and Development department or someone specializing in instructional design, partner up to create internal content or at least compile a custom learning path. Your colleagues will be grateful.
Don’t Know What to Do? Do a Hackathon!
Hopefully, I have shed some light on how to avoid common mistakes by now, but there is still some low-hanging fruit for you to pick.
You’ve got your tools set up and resources ready as knowledge support, and along the way, you’ve partnered with some incredible new people. Best of all, your team is now organically upskilled in the new tech just by actually building things with it and using it daily.
However, the team is not the whole company, and it would be foolish not to set up some kind of feedback loop now that you’ve got a few hundred or a few thousand users daily.
The business looks wildly different between what we do in Engineering and what our colleagues do in, say, Marketing, Legal, or other departments. The beauty of GenAI is that everyone can use it for some aspect of their work, but this is also where people need help when they run into a wall – or, even better, when they have an idea for an improvement.
Reaching that point offers a valuable opportunity to innovate. GenAI is still in its discovery phase (though some things, like coding assistants, work well), and applying it to internal processes can save departments hours or even weeks of work – an excellent outcome.
Without going into how your company does culture internally, I think hackathons are a great way to break the ice and focus on innovation. We used them to give the process time, space, and the right people.
With over a year of experience and the right resources, we used feedback loops to identify and tackle low-hanging fruit.
Things are very different now, with projects emerging unexpectedly. GenAI isn’t the solution to every problem, nor the key to bringing any idea to life, but its unique advantage is speed. The iteration and experimentation speed it enables is unmatched, making this an exciting time for builders and tinkerers.