How I Built a Full-Stack App in 6 Days with the Help of AI
By the end of 2025, I had read a lot about people building entire apps with AI, going from an idea to a product in just weeks or even days. That made me ask myself: how are they doing this?
I have more than ten years of experience in enterprise backend development. Seeing these fast, AI-built apps sparked my curiosity. I use AI in my day-to-day work, but only for small tasks, nothing close to building a full product.
So I thought, why not try it myself and see if it’s really possible?
I started thinking about what I actually wanted to build, and it quickly became clear that it had to be something I’d use myself. I’m a runner and I work in enterprise, so the idea came pretty naturally: a running app, but with a twist – tracking fitness while mapping it onto a corporate-style career journey.
That’s how RunCorp came to life.
AI as a fast MVP builder and UI scaffold
My main stack is Java and Python, but I deliberately avoided what I know best. I wanted something familiar, yet still a bit uncomfortable. For the backend, I chose Laravel, since I’ve done some freelance work with it. For mobile, I picked Flutter, even though I had never built a mobile app before.
I bought a Claude Code subscription and started building. I quickly learned that you can’t just tell it to do something and expect a good result. That realization led me to throw away the very first thing it generated.
What really works is using plan mode. You let the AI analyze the problem, ask you questions, and create a plan. Only after you feel happy with the plan do you let it generate code. It feels more like collaborating with a teammate than typing commands at a tool.
The downside is that this approach burns through tokens. If you don’t want to pay for extra usage, you have to wait a few hours for the limit to reset. Even so, I managed to build a basic MVP in six days.
I started with just the UI and mocked data. By the end of the first day, I had an app with all the MVP screens in place. You could actually “use” it – move from screen to screen and see something – even though real functionality didn’t exist yet and everything ran on mocked data. For this kind of work, AI proved very effective at generating basic Flutter layouts like this:
class HomeScreen extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('RunCorp')),
      body: ListView(
        padding: EdgeInsets.all(16),
        children: [
          Text(
            'Welcome back',
            style: Theme.of(context).textTheme.headlineSmall,
          ),
          SizedBox(height: 16),
          // Mocked data: the "career level" is hardcoded at this stage
          Card(
            child: ListTile(
              title: Text('Current Level'),
              subtitle: Text('Senior Runner'),
            ),
          ),
        ],
      ),
    );
  }
}
Nothing complex, but very fast. This is where AI clearly shines: it scaffolds the UI, wires screens together, and handles repetitive setup.
Outdated libraries and testing challenges
After that, I worked screen by screen, building front-end and back-end functionality together. I followed the same workflow every time: create a plan, answer AI questions, refine the plan, answer again, and generate code only after I felt satisfied with the plan.
Here’s the first big problem for someone without much experience: when you use plan mode, the AI asks questions you might not know how to answer. It asks things like: do you want to use OAuth? How do you want to encrypt passwords? Do you want a separate table for user profiles, or do you want everything in one table? You have to make these decisions early. Otherwise, the AI generates something that either doesn’t work or becomes hard to change later.
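The “one table or two” question, for example, is really a schema decision you commit to in the migration. Here’s a minimal sketch of the separate-profile option as a Laravel migration – the column names are illustrative, not copied from the real app:

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

// Keep profile data in its own table instead of widening users.
// Column names here are illustrative.
Schema::create('user_profiles', function (Blueprint $table) {
    $table->id();
    $table->foreignId('user_id')->constrained()->cascadeOnDelete();
    $table->unsignedInteger('highest_streak')->default(0);
    $table->timestamps();
});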
One concrete backend issue came up during the Strava integration. When I asked the AI for help with Strava authentication, it suggested an existing Laravel package.
At first glance, the suggestion looked reasonable. But after I checked it more closely, I realized the package hadn’t received updates for several years and didn’t support the current Laravel version. It relied on outdated dependencies and failed in a modern Laravel setup.
This example shows where AI falls short. It suggests known libraries, but it doesn’t judge whether teams still maintain them or whether they fit today’s ecosystem.
Instead of forcing the package into the project, I implemented the Strava OAuth flow directly using Laravel’s HTTP client. The core token exchange looked like this:
use Illuminate\Support\Facades\Http;

// Exchange the authorization code from Strava's redirect for tokens
$response = Http::asForm()->post('https://www.strava.com/oauth/token', [
    'client_id' => config('services.strava.client_id'),
    'client_secret' => config('services.strava.client_secret'),
    'code' => $request->get('code'),
    'grant_type' => 'authorization_code',
]);

$data = $response->json();
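Strava access tokens are short-lived, so the same client also has to handle refreshes. A minimal sketch of that call, assuming the refresh token is stored on the user record (the column name is my placeholder, not from the actual app):

// Trade a stored refresh token for a fresh access token
$response = Http::asForm()->post('https://www.strava.com/oauth/token', [
    'client_id' => config('services.strava.client_id'),
    'client_secret' => config('services.strava.client_secret'),
    'grant_type' => 'refresh_token',
    'refresh_token' => $user->strava_refresh_token, // placeholder column name
]);

$tokens = $response->json(); // new access_token, refresh_token, expires_at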
A similar issue appeared on the frontend. The AI generated Flutter UI that looked great on a large simulator. Once I tested it on smaller phones, problems started to show up. Text became barely readable, layouts overflowed, and some screens turned difficult to use. A typical example looked like this:
Text(
  'Weekly Distance',
  style: TextStyle(fontSize: 24),
);
Hardcoded font sizes worked fine on larger screens but failed badly on smaller ones. Fixing this required a shift toward responsive design. I switched to theme-based typography, and where text had to adapt to the available width, I sized it with LayoutBuilder:
Text(
  'Weekly Distance',
  style: Theme.of(context).textTheme.titleMedium,
);

// Where the theme alone wasn't enough, size text from the available width
LayoutBuilder(
  builder: (context, constraints) {
    return Text(
      'Weekly Distance',
      style: TextStyle(
        fontSize: constraints.maxWidth < 360 ? 16 : 20,
      ),
    );
  },
);
The UI technically worked before – it just didn’t work well everywhere. Issues like this only show up when you test on real devices.
When AI-generated code meets real users
After six days, I had an app with enough functionality to put in front of testers. I asked people from my running club if they wanted to try it, and they said yes. I sent them a beta build, and 25 of them started using it. That’s when the real problems began to appear.
After just two days, people reported that the home screen took a long time to load and that the stats screen kept crashing. That was the moment I realized the “just talk to the AI” phase was over. I had to do what I’ve been doing for years: actually debug and review all the code.
The AI-generated backend code looked clean. It passed basic tests and worked perfectly with small datasets, so I shipped it. But once real users arrived, reality hit hard. Requests were timing out, the stats and home pages barely loaded on mobile, database CPU spiked, and every request ran over 20 queries. Classic red flags.
These were problems that wouldn’t have made it to production if I had written the code myself from the start. Experience teaches you where bottlenecks usually hide. AI didn’t optimize anything; it just made things work.
Here’s what the AI-generated backend code got wrong all at once:
- No eager loading, causing N+1 queries
- Queries running inside loops
- Loading entire models into memory just to calculate aggregates
- No caching at all
- Missing database indexes
- Recalculating values that were already stored
- Doing heavy computation in PHP instead of letting the database handle it
None of these were bugs – they were experience problems. To make it concrete, the sketch below shows the shape the “before” code tended to take.
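This is an illustration rather than the actual generated code, but the weekly-stats logic looked roughly like this – one query per week, full models loaded into memory, and the aggregation done in PHP instead of SQL ($weeks is a stand-in for however the date ranges were built):

$weeklyTotals = [];
foreach ($weeks as $week) {                              // one query per week...
    $weeklyTotals[] = Activity::where('user_id', $userId)
        ->whereBetween('start_at', [$week['start'], $week['end']])
        ->get()                                          // ...loading full models
        ->sum('distance_meters');                        // ...to aggregate in PHP
}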
At that point, I had to step in and do what I’ve been doing for years: rewrite the entire statistics pipeline. Some of the key fixes were:
Eager Loading Instead of N+1
$user = $request->user()->load('profile');
Aggressive Response Caching
return Cache::remember("user_stats:{$user->id}", now()->endOfDay(), function () {
    // heavy calculations
});
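Caching reads is only half the job – stale stats also need to be cleared when new data arrives. A one-line sketch, assuming the same key format (where exactly this hook lives is up to you):

// After syncing a new activity, drop the cached stats so the next
// request recomputes them (assumes the key format shown above)
Cache::forget("user_stats:{$activity->user_id}");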
Query Consolidation (7 Queries → 1)
$stats = DB::table('activities')
    ->where('user_id', $userId)
    ->selectRaw("
        SUM(distance_meters) as total_distance,
        COUNT(DISTINCT DATE(start_at)) as active_days,
        SUM(CASE WHEN start_at >= ? THEN distance_meters ELSE 0 END) as weekly_distance
    ", [$weekStart])
    ->first();
GROUP BY Instead of Query Loops
->selectRaw('WEEK(start_at) as week_num, SUM(distance_meters) as distance')
->groupBy('week_num')
Using Stored Values Instead of Recomputing
$longestStreak = $user->profile->highest_streak ?? 0;
Strategic Indexes
$table->index(['user_id', 'start_at', 'distance_meters']);
The results were immediate:
| Metric | Before | After |
| --- | --- | --- |
| Queries per request | 20+ | 7–9 |
| Response time | >30s (timeout) | 300–500ms |
| Cached response | N/A | 10–50ms |
| Memory usage | 200MB+ | 40–60MB |
| Timeouts | Constant | Zero |
After these changes, the app finally felt usable in real conditions. The home screen loaded instantly, the stats screen didn’t crash, and the database handled multiple users at the same time. Experience made the difference.
I shipped a working app quickly, but I still had to do a lot of work myself. Coding the basics – the tasks any junior developer can handle – is easy, and that’s exactly where AI excels. It can speed up development, but you need to know when to step in, what to change, what to ask, and how to plan everything properly.
Coding isn’t even the hardest part, at least for mobile apps. You can build the app fast, but you still have to deploy the backend, set up servers, configure domains, and prepare everything for the app stores.
Then comes the part no one talks about: Google Play beta testing, Apple App Store review, waiting for developer accounts and licenses to get approved, fixing small issues reviewers reject, and updating screenshots, descriptions, and privacy policies. None of this is hard, but all of it takes time.
From the moment I started building to the moment the app went public, the process took just over a month – still very fast.
I realized that today AI becomes incredibly powerful in the hands of someone who knows what they’re doing. For experienced engineers, it can provide a huge speed boost, almost like a superpower. AI acts like a team of juniors handling repetitive work while you focus on high-level decisions.
Someone with solid experience can now build projects that used to require a small team. Can a non-technical person build an app with AI? Yes, to some extent. They can create something that looks decent and runs, but most of the time it will start falling apart quickly.
The main takeaway: the basics still matter. Understanding how things work under the hood makes all the difference. AI doesn’t replace experience, but it amplifies it, and that’s where it delivers real value.