Career story: From artist to self-taught developer to 3D rendering

Marko Crnjanski

The path to a career in tech can start from many different directions and interests. Maya, Senior Software Engineer, shares her story.

Maya Nediljkovic’s entry into the world of technology was quite unusual, even by today’s standards. She says she finished high school wanting to be a chemist but ended up graduating with a degree in painting from the Faculty of Fine Arts in Belgrade. After two years of unfinished sketches and dried paint came the eternal question every artist faces at some point in their career: how to make a living?

So Maya realized she was interested in something else besides art:

I have always been interested in computers. Even before I enrolled in art school, the art I did during my adolescence was primarily digital, created on a now-ancient Wacom Intuos3 tablet.

As she told us, given that her father and one of her sisters were programmers, Maya thought it was something she could try herself. She started learning JavaScript from old books and Lynda.com video content. Then, with some courage, she found her first technical job just before leaving art school in her third year.

Creative + coding = game development

What was your education like as a tech and IT specialist?

Maya: I was entirely self-taught when I started working in the tech industry. However, I soon realized there were gaps in my knowledge that haphazard tutorials and books couldn’t fill. About a year in, I started pursuing a software engineering degree at a private university that allowed me to study remotely, which made it simpler to keep working while getting my degree.

When the software engineering courses descended into more UML diagramming and specification writing than programming, I struggled to stay interested. Having always been creative, I switched majors to game development. The new program taught me about computer graphics, more complex algorithms, artificial intelligence techniques in games, and even some concept art and storytelling.

You are developing a computer vision editor. What exactly does that mean, and what does building such a tool in the front end look like?

Maya: A computer vision editor for labeling bridges the gap between human perception and what computers can “see.” Its primary purpose is to enable humans to encode their understanding and recognition of images into a format that AI models can learn from. Building such an editor means creating intuitive, fast tools so labelers can do this job accurately and efficiently. As you can imagine, it’s quite frontend-heavy.
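To make that idea concrete, here is a rough sketch of what one such label might look like as data. The field names and structure are purely illustrative assumptions, not the format Maya’s team actually uses.

```typescript
// Hypothetical shape of a single image annotation; all field names are
// illustrative, not the editor's real data model.
interface Annotation {
  imageId: string;     // which image the label belongs to
  classLabel: string;  // what the labeler recognized, e.g. "bicycle"
  // Segmentation mask as a run-length-encoded bitmap covering the image.
  mask: { width: number; height: number; rle: number[] };
}

// A labeling editor's whole job is to let a human produce records like this
// quickly and accurately so that a model can later be trained on them.
const example: Annotation = {
  imageId: "street-0001",
  classLabel: "bicycle",
  mask: { width: 1920, height: 1080, rle: [0, 5120, 3, 117] },
};
```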

From a product perspective, that means including best-in-class tools that make labeling easier. One example is the auto-segment tool, which uses Meta’s Segment Anything Model (SAM) to automatically detect objects within a given region of an image.

Other more manual examples include the brush and fill tools (which I worked on), like the ones you might be used to in programs like Photoshop. Our job was to implement these tools in a web application so they worked correctly and reliably.
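To give a flavor of what a brush tool involves, here is a minimal sketch built on the standard Canvas 2D API: stamp a filled circle into an overlay canvas as the pointer moves. It is a simplification and assumes a hypothetical canvas element with the id "mask"; the team’s real implementation is certainly more involved.

```typescript
// Minimal brush tool: paint opaque circles into a mask overlay as the pointer moves.
// Assumes <canvas id="mask"> positioned over the image being labeled (hypothetical setup).
const canvas = document.getElementById("mask") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
const brushRadius = 12;
let painting = false;

function stamp(x: number, y: number) {
  ctx.beginPath();
  ctx.arc(x, y, brushRadius, 0, Math.PI * 2);
  ctx.fillStyle = "rgba(255, 0, 0, 1)"; // opaque red marks "labeled" pixels
  ctx.fill();
}

canvas.addEventListener("pointerdown", (e) => {
  painting = true;
  stamp(e.offsetX, e.offsetY);
});
canvas.addEventListener("pointermove", (e) => {
  if (painting) stamp(e.offsetX, e.offsetY);
});
window.addEventListener("pointerup", () => {
  painting = false;
});
```

Even in this toy form you can see where the hard parts hide: interpolating between pointer samples on fast strokes and keeping the mask aligned with the underlying image pixels.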

From a technical perspective, building an intuitive editor means meticulously focusing on performance. Since computer vision problems often require running complex algorithms on huge images, we must fully utilize all the tools we have at our disposal. These tools include Rust and WebAssembly, image manipulation with WebGL shaders, and plenty of web workers to offload the main thread.
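As one hedged example of the web-worker part of that list, heavy pixel processing can be handed to a worker and the resulting buffer transferred back without copying, keeping the main thread free for the UI. The worker file name and message shape below are invented for illustration.

```typescript
// Main thread: send a frame's pixels to a worker and receive a processed mask.
// "mask-worker.js" and the message fields are hypothetical names.
const worker = new Worker("mask-worker.js");

function processOffMainThread(pixels: Uint8ClampedArray, width: number, height: number) {
  // Passing pixels.buffer in the transfer list moves it instead of copying it,
  // so even large images don't block the UI thread.
  worker.postMessage({ pixels, width, height }, [pixels.buffer]);
}

worker.onmessage = (event: MessageEvent<{ mask: Uint8Array }>) => {
  // Draw event.data.mask onto the editor's overlay canvas here.
  console.log("mask ready:", event.data.mask.length, "bytes");
};
```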

We faced some unique problems most frontend devs never encounter, such as optimizing image data transfer between the GPU and CPU, managing canvas contexts, and ensuring our tools create pixel-perfect masks.
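For instance, the GPU-to-CPU side of that list ultimately comes down to reading rendered pixels back out of a WebGL context. The naive version below uses the standard readPixels call, which blocks until the GPU finishes; the actual engineering challenge (not shown here) is avoiding that stall.

```typescript
// Naive GPU -> CPU readback: render the mask, then copy it into CPU memory.
// gl.readPixels synchronously waits for the GPU, which is exactly the kind of
// stall a real editor has to engineer around.
function readMaskPixels(gl: WebGL2RenderingContext): Uint8Array {
  const { width, height } = gl.canvas;
  const out = new Uint8Array(width * height * 4); // RGBA, one byte per channel
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, out);
  return out;
}
```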


Representing a 3D world with 2D images

What exactly does 3D rendering mean?

Maya: 3D rendering is the process of taking a virtual representation of a 3D world stored in a computer’s memory and translating it into a 2D image, usually shown as pixels on a screen. This can be done in many ways, from mathematical representations and full-on simulations of light and physics to clever shortcuts and approximations. Most of what we see as 3D rendering sits somewhere in between.
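A tiny example of the “clever shortcut” end of that spectrum is a pinhole-style perspective projection, which maps a 3D point to a 2D pixel by dividing by its depth. This is a deliberately simplified sketch: it assumes a camera at the origin looking down the negative z-axis and ignores rotation and clipping.

```typescript
// Simplified perspective projection: 3D world point -> 2D pixel coordinates.
// Assumes the camera sits at the origin looking down -z; no rotation, no clipping.
function project(
  point: { x: number; y: number; z: number },
  focalLength: number, // larger value = narrower field of view
  screenWidth: number,
  screenHeight: number
) {
  const scale = focalLength / -point.z; // objects farther away appear smaller
  return {
    x: screenWidth / 2 + point.x * scale,  // offset from the screen center
    y: screenHeight / 2 - point.y * scale, // flip y: screen y grows downward
  };
}

// A point 10 units in front of the camera lands near the center of a 1920x1080 screen.
console.log(project({ x: 1, y: 1, z: -10 }, 500, 1920, 1080)); // { x: 1010, y: 490 }
```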

Tell us more about tools like Three.js, WebGL, and Three Fiber. How do they work?

Maya: When discussing 3D rendering for the web in a modern browser, we are most likely talking about WebGL. WebGL, short for Web Graphics Library, is an API implemented by browsers that allows you to take advantage of hardware graphics acceleration. This means you can write code that will be executed directly on your computer’s graphics card, allowing for a wide range of 3D graphics effects.
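To make that concrete, here is roughly what just getting a context and compiling the simplest possible shader pair looks like with the raw WebGL API. It is a trimmed sketch: real code still needs vertex buffers, attribute setup, and proper error handling before anything interesting appears on screen.

```typescript
// Bare-bones WebGL setup: get a context and compile a trivial shader program.
const canvas = document.querySelector("canvas")!;
const gl = canvas.getContext("webgl")!;

function compile(type: number, source: string): WebGLShader {
  const shader = gl.createShader(type)!;
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader) ?? "shader compile failed");
  }
  return shader;
}

const program = gl.createProgram()!;
gl.attachShader(program, compile(gl.VERTEX_SHADER, `
  attribute vec4 position;
  void main() { gl_Position = position; }
`));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, `
  precision mediump float;
  void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }
`));
gl.linkProgram(program);
gl.useProgram(program);

gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
// ...and there is still no triangle on screen yet.
```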

As you can imagine, the code you’d need to write to interface with the GPU through WebGL can be tedious and verbose. That’s where Three.js comes in. It’s a library for building 3D applications that abstracts away many low-level WebGL concepts so developers can work with more dev-friendly JavaScript patterns.
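The contrast shows in Three.js’s canonical spinning-cube setup, which follows the library’s standard getting-started pattern: a scene, a camera, a renderer, and a mesh, with the raw GL calls hidden away.

```typescript
import * as THREE from "three";

// Scene graph, camera, and renderer - Three.js hides the raw WebGL calls.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A cube: geometry (shape) + material (surface) = mesh (drawable object).
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x00ff00 })
);
scene.add(cube);

// Render loop: rotate the cube a little on every frame.
renderer.setAnimationLoop(() => {
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
});
```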

There are further levels of abstraction going out from there, such as React Three Fiber, which allows you to interface with Three.js through React components! These tools make it easier for developers to manipulate those 3D virtual representations, display them on the screen, and integrate them into web applications.
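One step further up, the same cube in React Three Fiber becomes declarative JSX, with the library’s documented Canvas component and useFrame hook driving the per-frame rotation (a minimal sketch):

```tsx
import { useRef } from "react";
import { Canvas, useFrame } from "@react-three/fiber";
import type { Mesh } from "three";

// The same spinning cube, expressed as React components.
function SpinningCube() {
  const meshRef = useRef<Mesh>(null!);
  // useFrame runs on every rendered frame, like a render-loop callback.
  useFrame((_, delta) => {
    meshRef.current.rotation.x += delta;
    meshRef.current.rotation.y += delta;
  });
  return (
    <mesh ref={meshRef}>
      <boxGeometry args={[1, 1, 1]} />
      <meshBasicMaterial color="green" />
    </mesh>
  );
}

export default function App() {
  return (
    <Canvas camera={{ position: [0, 0, 5] }}>
      <SpinningCube />
    </Canvas>
  );
}
```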

JavaScript just enables people to build

What challenges do you face daily when working on demanding tech projects?

Maya: As I see it, a tech project is only ever demanding because of a lack of information. If I’m struggling with a project, I must be missing some vital piece: either a deeper knowledge of the codebase or the technology being used, or a better understanding of the use case or project requirements.

For example, a project might be challenging because the requirements aren’t precise. By speaking to a product manager or the clients themselves, I can figure out what problem the project is trying to solve and the best tools to solve it. Every demanding project starts and continues with lots and lots of research.

You say JavaScript is your favorite language; why is that so?

Maya: Almost all of my work is done in JavaScript or TypeScript, but I have had the occasional foray into Python and Rust. I strongly prefer TypeScript to “plain” JavaScript since its type safety feels like the guiding hand of a wise sage. However, the two are so similar in the grand scheme of things that it’s not worth arguing over semantics.
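A toy example of that guiding hand: the compiler flags a category of mistake that plain JavaScript would only reveal at runtime (the interface here is invented for illustration).

```typescript
// TypeScript catches shape mismatches at compile time; plain JavaScript
// would only fail when the code actually runs.
interface LabelTask {
  imageUrl: string;
  deadline: Date;
}

function describe(task: LabelTask): string {
  return `${task.imageUrl} due ${task.deadline.toISOString()}`;
}

describe({ imageUrl: "street-0001.png", deadline: new Date() }); // OK
// describe({ imageUrl: "street-0001.png", deadline: "tomorrow" });
//   ^ compile error: Type 'string' is not assignable to type 'Date'.
```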

I love them because they are incredibly accessible, both for developers and for end users. For developers, now that JavaScript can run anywhere, we have a much greater surface area to apply our skills. And because of its popularity, there is a plethora of tooling to make development easier.

For end users, the browser has become the de facto “place to do everything on the web.” This means that the applications we build can be accessible to our users on various devices and locations with just a click of a link. No installation is required. It also means that when our users need a dedicated mobile or desktop app, we can ship them the same web application experience with tools like Electron or React Native.

I could get into the features of the language and how it’s more flexible and adaptable than others, but JavaScript’s ever-growing ecosystem and presence are a testament to its versatility. It may not be the most elegant or robust language, but it has proven to be the language that enables people to build. From a tool, I could ask for nothing more.
