Built several games using Generative AI and the Processing API.
The AI handled most of the code and even generated the graphics, requiring only minor adjustments on my part. This showcased how GenAI can significantly accelerate creative development work.
I wish you had gone a bit deeper and been more critical in your assessment.
The output seems impressive, until you realize that those games are all built off well-known game mechanics, all classics in their own right. So of course there are countless codebases out there for the LLMs to lean on. Output based on prompts that boil down to “build Pacman”, “build Frogger” or “build Flappy Bird” should honestly be approached with limited admiration and not be regarded as a litmus test for how powerful GenAI is.
Don’t get me wrong, I do agree that using GenAI for sketching and prototyping can be a great use case.
I’ve been using Claude to race through many an idea that has been bouncing around in my mind but would have taken me days, weeks, or longer to get running even at a rudimentary level. For getting a first feel of an idea and seeing whether I want to pursue it further, GenAI is mighty good. But all the magic falls apart as soon as you want to flesh out and detail a sketch. OR BUILD AN ORIGINAL CONCEPT. LLMs are truly abysmal at originality. They’re great at repeating popular things. Any advanced functionality you want in your sketch, in the way you envision it, is essentially a manual coding job again, even if you prompt it into existence.
I’m not dismissive of GenAI: it’s here to stay, it will only grow more capable, and we need to learn to deal with that new reality.
But let’s remain critical and really look at what GenAI output is. And what it is not.
One of my prompts recently was literally four words long: “build an emergent sketch”. And Claude just spewed out a neat boids/flocking simulation, one that would take real effort to build yourself, reading Nature of Code and using intermediate coding structures and knowledge. At first I felt impressed, but then somewhat underwhelmed. It’s so uncreative to equate “emergence” with a flocking simulation. Of course it would gravitate to that! In a sense, LLMs will always pick the low-hanging fruit. Which in turn means that, from now on, these classics can no longer be used as entry-level coding references. All those game mechanics we imitated to hone our coding skills, or that were part of coding courses: no longer impressive. Knowing how to build Pong, Tetris, Pacman? That will no longer net you an “A”. The new challenge lies in taking those concepts, envisioning something unique, developing it further, and finding the thing an LLM can’t just vomit out in three seconds.
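To put “low-hanging fruit” in perspective: a bare-bones boids-style flocking sketch is maybe forty lines of Processing. The sketch below is only my own illustration of the textbook separation/alignment/cohesion pattern (the radii and weights are arbitrary numbers I picked), not what Claude generated, but it shows how well-trodden this territory is.

// Minimal boids-style flocking sketch (Processing, Java mode).
// Illustrative only: arbitrary radii and steering weights.
ArrayList<PVector> pos = new ArrayList<PVector>();
ArrayList<PVector> vel = new ArrayList<PVector>();

void setup() {
  size(640, 480);
  for (int i = 0; i < 80; i++) {
    pos.add(new PVector(random(width), random(height)));
    vel.add(PVector.random2D().mult(2));
  }
}

void draw() {
  background(30);
  for (int i = 0; i < pos.size(); i++) {
    PVector align = new PVector();
    PVector cohere = new PVector();
    PVector separate = new PVector();
    int neighbors = 0;
    for (int j = 0; j < pos.size(); j++) {
      if (i == j) continue;
      float d = PVector.dist(pos.get(i), pos.get(j));
      if (d < 50) {                       // neighborhood radius
        align.add(vel.get(j));            // match neighbors' heading
        cohere.add(pos.get(j));           // move toward neighbors' center
        if (d < 20) {                     // personal-space radius
          separate.add(PVector.sub(pos.get(i), pos.get(j)));
        }
        neighbors++;
      }
    }
    if (neighbors > 0) {
      align.div(neighbors).setMag(0.05);
      cohere.div(neighbors).sub(pos.get(i)).setMag(0.05);
      vel.get(i).add(align).add(cohere).add(separate.mult(0.05));
      vel.get(i).limit(3);
    }
    pos.get(i).add(vel.get(i));
    // wrap around the screen edges
    pos.get(i).x = (pos.get(i).x + width) % width;
    pos.get(i).y = (pos.get(i).y + height) % height;
    circle(pos.get(i).x, pos.get(i).y, 6);
  }
}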
I find this intriguing and am optimistic that this shift will happen. Predominantly for the better.
It bothers me no end that there is probably someone who made and shared code very similar to each of these games, and these people get no credit at all.
Some time ago someone showed me a Tetris made by an LLM; it looked cool, and the blocks had a particular decorative detail. I searched a bit and found a repo with the same Tetris, the same decorative detail on the blocks, and that was it. The automatic parrot-with-plagiarism machine worked very well doing what it does… so I feel very bad about all of this.
Should I stop sharing my code because I don’t want the crooked AI corporations profiting from it?
If I use CC or GPL licenses on my projects, this use is not fair and does not show respect for my work.