Day 1

I got quite a bit of code out yesterday. I very quickly got a terrain demo up and running in pygfx:


Terrain in pygfx
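
For flavour, here is a minimal sketch of roughly what that involves, assuming pygfx's plane_geometry helper and its gfx.show quick viewer; the heights are just noise here, not my actual terrain:

```python
import numpy as np
import pygfx as gfx

# A grid plane displaced by a heightmap (random noise, purely for illustration)
n = 128
heights = (np.random.rand(n + 1, n + 1).astype(np.float32)) * 5.0

geometry = gfx.plane_geometry(200, 200, n, n)
geometry.positions.data[:, 2] = heights.ravel()  # displace the plane along z
# Note: the normals still point straight up; they'd need recomputing for proper
# shading, which is omitted here.

terrain = gfx.Mesh(geometry, gfx.MeshPhongMaterial(color="#6a8f3c"))
gfx.show(terrain)
```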

But I wanted to write shaders, and while pygfx has an API for that, it's obscure and barely documented. ChatGPT barely spits out correct pygfx code; it doesn't know the library very well. Nor do I. I found that ChatGPT and Copilot are pretty good at spitting out moderngl + numpy code. All programming is now AI pair programming, so I went with what my AI pair programmer was comfortable with and switched to moderngl.
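
For anyone who hasn't used it, moderngl keeps the GLSL front and centre: you hand it shader source as strings and wire up the buffers yourself. A minimal full-screen-pass sketch (not the jam code; the fragment shader is just a placeholder) looks something like this:

```python
import numpy as np
import moderngl

ctx = moderngl.create_standalone_context()  # offscreen; a windowed context works the same way

prog = ctx.program(
    vertex_shader="""
        #version 330
        in vec2 in_pos;
        out vec2 v_uv;
        void main() {
            v_uv = in_pos * 0.5 + 0.5;
            gl_Position = vec4(in_pos, 0.0, 1.0);
        }
    """,
    fragment_shader="""
        #version 330
        in vec2 v_uv;
        out vec4 f_color;
        void main() {
            f_color = vec4(v_uv, 0.0, 1.0);  // placeholder: visualise the UVs
        }
    """,
)

# One oversized triangle covers the whole screen, so no index buffer is needed
vbo = ctx.buffer(np.array([-1, -1, 3, -1, -1, 3], dtype="f4").tobytes())
vao = ctx.simple_vertex_array(prog, vbo, "in_pos")

fbo = ctx.simple_framebuffer((512, 512))
fbo.use()
fbo.clear()
vao.render(moderngl.TRIANGLES)
```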

The next thing I tried was to get o3-mini-high to produce a bunch of separate graphics demos that I could then refactor into a whole. First, the classic 2D ripples algorithm. It was everywhere in the noughties, but this version runs on the GPU:

Ripples demo

This took a lot of debugging, though: the AI didn't store a velocity component, and when I added one manually I didn't spot some code that reallocated a texture with only one component.
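
The algorithm itself is the classic two-buffer scheme: height and velocity live in the two channels of a float texture, and each frame a shader pulls every texel's height towards the average of its four neighbours while damping the velocity, ping-ponging between two textures. A rough sketch of the update shader, not my exact code:

```python
# State texture: r = height, g = velocity.
RIPPLE_UPDATE_FRAG = """
#version 330
uniform sampler2D state;   // previous frame
uniform vec2 texel;        // 1.0 / texture size
in vec2 v_uv;
out vec4 f_state;

void main() {
    vec2 hv = texture(state, v_uv).rg;
    float neighbours = 0.25 * (
        texture(state, v_uv + vec2(texel.x, 0.0)).r +
        texture(state, v_uv - vec2(texel.x, 0.0)).r +
        texture(state, v_uv + vec2(0.0, texel.y)).r +
        texture(state, v_uv - vec2(0.0, texel.y)).r);
    // accelerate towards the neighbour average, then damp slightly
    float velocity = (hv.y + (neighbours - hv.x)) * 0.995;
    f_state = vec4(hv.x + velocity, velocity, 0.0, 1.0);
}
"""
```

In moderngl the state texture would be created with something like ctx.texture((w, h), 2, dtype="f4"); recreate it with one component and the velocity channel silently vanishes, which is essentially the bug I hit above.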

Then I got it to do a water plane with a cube map and a Fresnel term. The input is a heightmap, which should pair with the ripples code.

Water with reflection
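
The reflection boils down to sampling the cube map along the reflected view direction and blending it over the water colour with a Fresnel factor; Schlick's approximation is the standard shortcut. A sketch of the fragment-shader core, with made-up uniform and varying names:

```python
WATER_FRAG_EXCERPT = """
uniform samplerCube env_map;
uniform vec3 camera_pos;
in vec3 v_world_pos;
in vec3 v_normal;          // derived from the heightmap

vec3 shade_water() {
    vec3 view = normalize(camera_pos - v_world_pos);
    vec3 normal = normalize(v_normal);
    vec3 sky = texture(env_map, reflect(-view, normal)).rgb;

    // Schlick's approximation; ~0.02 is water's reflectance at normal incidence
    float f0 = 0.02;
    float fresnel = f0 + (1.0 - f0) * pow(1.0 - max(dot(normal, view), 0.0), 5.0);

    vec3 water_colour = vec3(0.0, 0.2, 0.3);
    return mix(water_colour, sky, fresnel);
}
"""
```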

After a lot of refactoring of the demo code, I finally got the water and the terrain rendering together, with water transparency based on screen-space depth. The AI got me into the ballpark but then started offering changes at random, so I got it over the line manually.
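
The screen-space depth transparency works by rendering the terrain first, binding its depth buffer while drawing the water, and fading the water's alpha by how far behind each water fragment the terrain sits. A sketch with made-up names (near and far are the camera planes):

```python
DEPTH_FADE_EXCERPT = """
uniform sampler2D terrain_depth;  // depth buffer from the opaque terrain pass
uniform vec2 resolution;
uniform float near, far, fade_distance;

float linear_depth(float d) {
    // undo the perspective projection's non-linear depth
    return 2.0 * near * far / (far + near - (2.0 * d - 1.0) * (far - near));
}

float water_alpha() {
    vec2 uv = gl_FragCoord.xy / resolution;
    float terrain = linear_depth(texture(terrain_depth, uv).r);
    float water   = linear_depth(gl_FragCoord.z);
    // transparent at the shoreline, opaque over deep water
    return clamp((terrain - water) / fade_distance, 0.0, 1.0);
}
"""
```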


Comments

What did you expect from it in a large-scale project?
Nevertheless, that's some serious progress.