Dandelion: Thanks for the feedback!

Thanks for the kind comments and generous ratings! It seems undecided whether mini-games are a good idea for PyWeek. From the production perspective they definitely are — you can always add more if you have time or drop some if you don't. It's not easy to make them satisfying but since they are short, we hoped the player would not mind that much.
  • SO PRETTY, a beautifully crafted game, unfortunately the fun aspect was let down by it being really easy with no way to lose
Hah! You just did not try hard enough! You can actually avoid collecting any fairies and lose. It's easy to make a mistake and get one by accident, so this is a more challenging way to play the game.
  • Beautiful. No frustratingly difficult sections you have to master to get past. Just pretty things to look at, pleasant varied tasks to attempt, and more shinies the better you do instead of a numerical score. The only negative thing is it took me an hour of installing non-free Nvidia drivers, breaking my X11 setup catastrophically, booting a previous kernel version, manually editing /etc/X11/xorg.conf and finally getting the PC working again, before the thing would actually run. Actually, it was STILL worth it. I will be plundering the code shamelessly to learn how to do OpenGL properly.
Wow, thanks for the perseverance and the praise! It's a shame we couldn't make the game more portable. Video drivers offer a surprising diversity of OpenGL implementations... As for learning from our code, know that it is the product of a struggle between Alex and myself :). One thing we couldn't agree on for example is whether to have the simulation depend on the frame rate. He says it's more correct, I say it's more error-prone. What do you think?



My thoughts about the framerate issue:

I personally like to make games request vsync to eliminate tearing, and then scale the simulation by the resulting framerate. If you don't scale, then some people are playing at 60fps, but others, with slower machines that can't manage the full framerate, presumably end up at 30fps, playing the game at half speed. That doesn't seem acceptable to me.
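The scaled approach can be sketched in a few lines of plain Python. The function name `advance`, the speed value, and the frame times below are illustrative, not taken from any of the games discussed:

```python
def advance(x, speed, dt):
    """Move `x` by `speed` (pixels per second) scaled by the frame
    time `dt` (seconds), so on-screen speed is independent of the
    framerate the machine achieves."""
    return x + speed * dt

# Two frames at 60fps cover the same distance as one frame at 30fps:
x60 = advance(advance(0.0, 120.0, 1.0 / 60), 120.0, 1.0 / 60)
x30 = advance(0.0, 120.0, 1.0 / 30)
```

Both machines see the object move 4 pixels over the same wall-clock interval, even though one drew twice as many frames.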

So I scale. But if the framerate drops very low (I choose 25fps) then I stop scaling below that point, so below that my sim starts to run gradually slower. This is so that transient slowdowns in framerate (maybe due to other processes or inopportune IO) don't produce any very long frames (e.g. 100ms) that make my sim explode. Maybe at these slow speeds one could display a warning message or 'tortoise' icon, to warn the player that their slower hardware is impacting game speed.

I recently saw some C library or gamemaker (I forget which) with a nice variation on this: if the framerate dropped below the expected 60fps, it turned off vsync, so that the game didn't suddenly drop to 30fps but instead degraded gracefully, albeit with tearing. This seems like a nice idea which I'd like to try out.

I'm aware that there is the "most correct" solution of decoupling sim updates from screen updates, which seems to then lead to running several or many sim updates per screen refresh. This is something I've heard recommended, but have never personally tried. Presumably this assumes that sim updates are cheaper than screen refreshes? If I were going to try this, I'd want to check that was actually the case.
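For reference, the decoupled approach is usually written as a fixed-timestep loop with an accumulator: the sim always advances in constant-size steps, and each screen refresh runs as many whole steps as the accumulated frame time allows. A minimal sketch with illustrative names (`SIM_DT`, `run_sim_steps`, `update`); a real loop would also interpolate rendering by the leftover accumulator:

```python
SIM_DT = 1.0 / 120  # constant sim step in seconds (an assumed value)

def run_sim_steps(accumulator, frame_dt, update):
    """Add this frame's duration to the accumulator and run whole
    fixed-size sim steps; return the unconsumed remainder, which is
    carried over into the next frame."""
    accumulator += frame_dt
    while accumulator >= SIM_DT:
        update(SIM_DT)       # every sim step sees exactly SIM_DT
        accumulator -= SIM_DT
    return accumulator
```

Because every `update` call receives the same `dt`, the physics behaves identically regardless of the rendering framerate; the cost is that a slow frame triggers several sim updates before the next draw.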
Nice post, tartley! Interesting thoughts there...
I have been doing mainly time-based updates in all my games for the last few years. I don't think it's too difficult once you remember your constant acceleration formulas and compound interest formulas etc.
Sorry, was on Android and the editor widget stopped accepting typing, so the last comment was lacking a lot of caveats.

As tartley says, large time steps interfere with things like collision detection. The cheap way I've fixed this in the past is just to clamp the maximum time step I pass to my world update, e.g.

dt = clock.tick(FPS)  # milliseconds since last frame
dt = min(dt, MAX_DT)  # clamp transient long frames

You could instead run several steps, each of duration at most MAX_DT:

dt = clock.tick(FPS)
steps = int(math.ceil(dt / float(MAX_DT)))  # enough sub-steps of at most MAX_DT each
sdt = dt / float(steps)
for i in xrange(steps):
    world.update(sdt)  # one sub-step of the world update

You can also increase the number of world updates you do anyway, useful if your game involves fast-moving objects, since doing more updates gives better collision detection unless you're using über-correct "point of first intersection" calculation.

This kind of time step manipulation is all cheap if you've written it with time-based physics in the first place. I also find it's easier to guesstimate velocities and accelerations if it's in pixels/second rather than pixels/frame.
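The "constant acceleration formulas" mentioned above are what make a time-based update well behaved for arbitrary step sizes: using the closed-form kinematics equations, the result doesn't depend on how the elapsed time is divided into steps. A minimal sketch (names are illustrative):

```python
def step(x, v, a, dt):
    """Advance position `x` and velocity `v` under constant
    acceleration `a` for a time step `dt`, using the closed-form
    kinematics formulas rather than Euler integration, so any
    subdivision of the same elapsed time gives the same result."""
    x_new = x + v * dt + 0.5 * a * dt * dt
    v_new = v + a * dt
    return x_new, v_new
```

For example, one step of 1.0s lands in exactly the same state as two consecutive steps of 0.5s, which a naive `x += v * dt` update would not.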

I don't like the idea of completely decoupling time steps and screen refreshes. I think that could produce bad results if the framerate was only just slower than one world step: sometimes you'd run two constant-time-step updates per draw, sometimes one.

@mauve: Yes, where I wrote " if framerate drops very low... then I stop scaling below that point", then what I meant by that was exactly the kind of clamping mechanism that you explained with precision. Thank you.

I understand (and share) your reservation about decoupling leading to temporal aliasing problems, but I've seen that approach recommended by oft-cited blog posts of unknown provenance. :-)



As I understand it (without ever having implemented this myself) I think the problem has two solutions:

a) Solve it approximately: run several sim updates per screen refresh, so the temporal quantisation drops in size. That is, instead of running 1 sim update per frame with an occasional 2 sim updates per frame, you are running 8 sim updates per screen refresh with an occasional 9, and the difference isn't visible to users.

b) If fix (a) is too expensive, then reduce the number of sim updates until it becomes cheap enough to run on the chosen target hardware. This decision can be made at design time, not runtime. Then you might be running 2 sim updates with an occasional 3 per screen refresh. This is better than the original case before fix (a), but still not enough to reduce the problem below perceptibility. To correct for that, instead of running N sim updates each of exactly 'dt / N' duration (where dt is the time between screen refreshes), do N + 1 sim updates per screen refresh, where the final update isn't a whole 'dt / N' duration, but only the fraction of it needed so that the sim advances by the same total time between screen refreshes.
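One way to read fix (b) as code: run whole sub-steps of some maximum size the sim tolerates, then one fractional final step, so the sim always advances by exactly the frame's duration. A sketch under that reading, with illustrative names (`MAX_DT`, `stepped_update`, `update`):

```python
MAX_DT = 0.02  # largest sub-step the sim tolerates, in seconds (assumed)

def stepped_update(dt, update):
    """Advance the sim by exactly `dt` seconds: whole sub-steps of
    MAX_DT, plus a final fractional step for whatever time is left."""
    whole = int(dt // MAX_DT)   # number of full-size sub-steps
    for _ in range(whole):
        update(MAX_DT)
    remainder = dt - whole * MAX_DT
    if remainder > 0.0:
        update(remainder)       # the fractional final step
```

So a 50ms frame would run two 20ms sub-steps and one 10ms sub-step: three updates, totalling exactly the frame's duration.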

I don't know anything about how difficult this is to get right, how prevalent it is, or whether it's controversial. Personally, I'd be interested in trying fix (a) at least - it seems to me that even if you only do 3 or 4 sim updates, you might reduce the problem to imperceptibility.

Sorry to resurrect this old conversation: I just got back from a two-week vacation.
Plus! I just remembered: last time I used it (a couple of years ago), Chipmunk was documented to be unhappy if you supply a fluctuating 'dt' to its update steps. I don't know if this is fixed in recent versions of Chipmunk, nor what symptoms it would provoke, nor whether this is a limitation that is perhaps fundamental to the problem, and hence will be exhibited in some form by other physics engines too.
I just want to say this game is an inspiration. Do you have it on a git repo somewhere? I'd love to see it evolve. Congratulations! PS: I tried to give you an award, but it says it won't recognize my valid png. I guess it's because it's so late now.