On AI-Assisted Coding in 2024

Setting the Scene

I used the Christmas break at the end of 2024 to sit down with an AI coding assistant and do some real coding. For this, I picked a project that I had prototyped some 10 years ago and that I had actually tried to get a patent for in 2003 or so. It's an Android Java project.

Mind you, I'm not a Java coder. I've worked in Java teams for many years, and you can ask any Java coder who has paired with me: they'll probably tell you that I can be a pain in the rear with all my questions.

But put me solo in front of a Java project and I'm mostly lost. After years of functional programming and more years of not programming at all, my OOP skills are rusty at best.

I also don't bring a lot of Android expertise. Yes, I did read up on the relevant parts years ago, and some of that memory was – surprisingly – still accessible, but it clearly doesn't amount to expertise.

So, for this project, I consider myself a very junior programmer.

After some research, I settled on Codeium with their Windsurf editor. I also installed the latest Android Studio. When I first launched the emulator, I was pleasantly surprised to find my old prototype still there. I also created a private project on GitHub and connected Android Studio to my Google account so I could ask Gemini questions.

High Level Summary

Project Start

This was impressive. On the first evening alone, I used 34 prompts and 70 "flow actions" in Windsurf. I started with an empty project and had the AI build a full project just from my description. My Android project was not a standard app but rather an _Input Method_ (IME). The necessary declarations and permissions ended up in the right places, and a central Java class was created that implemented the methods needed to hook into the API contracts. That worked really well.
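For reference, the IME wiring typically lands in two places: a service declaration in `AndroidManifest.xml` and a Java class extending `InputMethodService`. Here is a minimal sketch of the manifest part – class and resource names are illustrative, not the ones from my project:

```xml
<!-- Inside <application>: declare the IME service. The BIND_INPUT_METHOD
     permission and the intent filter are what let the system recognize
     and bind the keyboard. -->
<service
    android:name=".MyImeService"
    android:permission="android.permission.BIND_INPUT_METHOD"
    android:exported="true">
    <intent-filter>
        <action android:name="android.view.InputMethod" />
    </intent-filter>
    <!-- Points to res/xml/method.xml describing the IME and its subtypes. -->
    <meta-data
        android:name="android.view.im"
        android:resource="@xml/method" />
</service>
```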

After a bit of fiddling with Android Studio (I hadn't done that in a while; I even found an old implementation of the same idea from several years ago), I could run the code on the emulator.

Again: impressive.

But also: weirdly off the mark at times. I wanted to position a few things in a circle, and Claude continuously told me how Android implements angles, only to then completely fail to do what I asked. I tried several phrasings and got completely inconsistent results – every time something else. And each time, the machine proudly told me how it had now implemented the logic I described. Just that ... well, it hadn't. I had to find the code parts that did the computation, understand the approach, and then write the correct computations myself. Because the code was foreign to me, this took a while.
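For the record, the geometry itself is only a few lines. Here is a plain-Java sketch of the kind of computation I ended up writing by hand (the class and method names are mine, not from the project). The subtlety an assistant can trip over is that Android's screen y-axis grows downward, so increasing angles walk the circle clockwise:

```java
public class CircleLayout {

    /** Returns n (x, y) points evenly spaced on a circle around (cx, cy). */
    public static double[][] positions(int n, double cx, double cy, double r) {
        double[][] pts = new double[n][2];
        for (int i = 0; i < n; i++) {
            // Start at 12 o'clock; with screen y growing downward,
            // increasing angles walk the circle clockwise.
            double angle = 2 * Math.PI * i / n - Math.PI / 2;
            pts[i][0] = cx + r * Math.cos(angle);
            pts[i][1] = cy + r * Math.sin(angle);
        }
        return pts;
    }

    public static void main(String[] args) {
        // Four items, radius 100, centered at the origin.
        for (double[] p : positions(4, 0, 0, 100)) {
            System.out.printf("%.1f %.1f%n", p[0], p[1]);
        }
    }
}
```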

Takeaways from the project start:

The First Week

I kept notes while running the experiment, so even after some time has passed, I can reliably relay my impressions of the first week.

Day 2 was already a complete disaster. Nothing worked, and I even broke what I already had.

The project was moving so fast on day 1, and I was more into experimenting than into keeping a working state, so there was no git involved yet. Yes, stupid me. I lost a lot of the working code from day 1, because telling an AI coding assistant to roll back to the last working state just means rolling forward to it. It has no concept of a 'previous state'.

Another lesson learned: Commit every piece of code that works to version control.

The main reason is that the AI coding assistant will happily destroy all your code, even the parts you're totally not working on. I was working only on some aspect of the display, and the machine messed with the fully functional logic behind the scenes. Why? I don't fully understand. One thing that seems pretty clear is that separation of concerns was not well established in the object-oriented design. This will keep sneaking in: even if you refactor the code to your current skill level, the assistant will work on classes that belong to a different aspect than the one you're currently working on. Yes, it let me review all the changes every time, but let's be honest: where's the fun in that? Should I really review 15 diffs when I want to make progress? Yes, I should. But I bet you don't want to. You want to move faster, because that's the promise. So: accept all changes, compile, and test.

On day 2, nothing worked. I introduced git so I could reliably roll back over and over again. I tried many different ways of describing what I intended to do, messed with it a little, rolled back, and tried again. At the end of the session (several hours), I had made zero progress (let's just say that adding version control and the code lost compared to day 1 cancelled each other out).
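The safety net I should have had from the start is tiny. A hypothetical sketch (file names are made up) of the commit-early loop that makes real roll-backs possible:

```shell
# One-time setup, then snapshot every state that compiles and runs.
git init -q myproject
cd myproject
echo 'public class App {}' > App.java
git add App.java
git commit -qm "working: first running build"

# Simulate the assistant rewriting code you never asked it to touch ...
echo 'public class App { /* broken rewrite */ }' > App.java

# ... and get a true roll-back, instead of asking the AI to "restore" it.
git checkout -- App.java
cat App.java
```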

This is something that can happen when you code: you have a full day of failed experiments, approaches, tests. Nothing worked. But usually, you still end up knowing more than before. Usually, you've read documentation, tutorials, or blogs, maybe watched a video. In doing so, you've made yourself more familiar with the environment you're working in. You've learned about concepts, APIs, and available methods, and likely learned from the failures of like-minded people who shared their knowledge online. Not so with AI-assisted coding. At least, I didn't feel like I had learned anything – just that "all those prompts didn't work". I was totally frustrated by the end of day 2.

Days 3 and 4 went the same way. I was working on some haptic feedback. Only later did I learn about modern approaches and the standard ways of doing it, because I didn't invest the time to dive into the documentation. After another two sessions, nothing worked. Similarly, I was dabbling in emojis as a developer for the first time: I wanted to create some popups and do something with emojis. The experience was exactly the same. Prompt, try it out, it doesn't work, follow the path for some time, revert with git, and start again – learning very little along the way.
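For completeness, the standard approaches I later found in the documentation: on current Android you either trigger predefined feedback constants on a view, or drive the vibrator with a `VibrationEffect`. A sketch under those assumptions (not runnable outside an Android app; `view` and `context` are assumed to exist):

```java
import android.content.Context;
import android.os.Build;
import android.os.VibrationEffect;
import android.os.Vibrator;
import android.view.HapticFeedbackConstants;
import android.view.View;

public final class Haptics {

    /** Preferred: let the framework play the user-configurable feedback. */
    public static void keyTap(View view) {
        view.performHapticFeedback(HapticFeedbackConstants.KEYBOARD_TAP);
    }

    /** Lower level: a one-shot buzz via the vibrator service (API 26+). */
    public static void buzz(Context context, long millis) {
        Vibrator vibrator =
                (Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE);
        if (vibrator == null) return;
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            vibrator.vibrate(VibrationEffect.createOneShot(millis,
                    VibrationEffect.DEFAULT_AMPLITUDE));
        }
    }
}
```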

Over the next days, I spent more time reading. That enabled me to guide the assistant better, but to me it strips the assistant of much of its usefulness. Writing the code is not the hard part. And if the assistant can't guide me toward the right approaches, I might as well read the documentation myself.

To Be Continued

I plan to update this document with more details and more insights.

Expect: