pygame-based video capture on Linux

I haven't had a ton of luck using things like recordmydesktop to get footage of games in Linux. So today I put together a rough draft of a module that lets you "record" a video directly in your game. To get the video, it simply dumps the window to an image file each frame; the frames can then be sequenced into a motion-PNG video or similar afterward with mencoder. To get the audio... well... any ideas?
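
The video half is almost embarrassingly small. A rough sketch of the idea (the filenames, frame rate, and mencoder flags here are just examples, not anything the module fixes):

import pygame

frame_count = 0

def dump_frame(screen):
    # Call this once per frame, right after pygame.display.flip():
    # it writes the window contents out as a numbered, lossless PNG.
    global frame_count
    pygame.image.save(screen, "frame%05d.png" % frame_count)
    frame_count += 1

Afterward, something like

mencoder "mf://frame*.png" -mf fps=30:type=png -ovc lavc -lavcopts vcodec=png -o game.avi

stitches the frames into a lossless AVI.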

The horrible hack I currently have for the audio is to write down the times that every sound is played, then use pygame.sndarray to reconstruct what the audio was during the game. I'm not even handling things like fadeout yet, and it's already a mess.
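
In rough outline, the hack is something like this (a sketch only, assuming a stereo 16-bit mixer, full volume, and that every sound runs to completion):

import numpy
import pygame
import pygame.sndarray

play_log = []  # (milliseconds since start, Sound) for every play

def play(sound):
    # Wrapper around Sound.play() that also writes down when it happened.
    play_log.append((pygame.time.get_ticks(), sound))
    sound.play()

def reconstruct(total_ms, rate=22050):
    # Afterward: mix every logged sound back into one array at the
    # moment it was played. int32 scratch space avoids overflow.
    mix = numpy.zeros((total_ms * rate // 1000, 2), dtype=numpy.int32)
    for t, sound in play_log:
        samples = pygame.sndarray.array(sound)  # (n, 2) int16 samples
        start = t * rate // 1000
        end = min(start + len(samples), len(mix))
        mix[start:end] += samples[:end - start]
    return numpy.clip(mix, -32768, 32767).astype(numpy.int16)

You can see why fadeout, stop(), volume changes, and music would all make this much hairier.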

Still, it works okay. I think the results are better than what I was getting with recordmydesktop. Here's a video of my Ludum Dare game, made using it. The original AVI I uploaded to YouTube looks even better, since the capture is completely lossless. For comparison, here's a video of an old pyweek game I recorded with recordmydesktop.

Is this something anyone would be interested in? Any suggestions on how to do this better, particularly the audio part? Thanks!


Comments

Yes, you can configure PulseAudio to expose a post-mixer stream on a network socket (module-simple-protocol-tcp). You can then read raw PCM bytes from that socket as you are recording.
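
Loading the module looks something like this (the format, rate, and port here are just examples, and source= should point at the monitor source of whichever sink the game plays through; pactl list short sources will list them):

pactl load-module module-simple-protocol-tcp source=<your-sink>.monitor format=s16le rate=44100 channels=2 record=true port=8000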
That sounds perfect, thanks! Unfortunately I don't know the first thing about network sockets or PulseAudio. I haven't yet been able to find an explanation of how to access the socket that PulseAudio is streaming to, or of how to read raw PCM bytes from it. Does anyone have any tips on either of these?
One way would be to construct a GStreamer pipeline: probably chain together tcpclientsrc to read the data from the socket, a caps filter to specify the raw format, some encoder (wavenc, or encode straight to MP3 or Ogg), and then filesink to save it to a file. You pretty much just build the correct gst-launch commandline and run it, and it keeps processing until you kill it. This is easier than it sounds, because GStreamer is awesome:

Effectively the commandline would look a bit like

gst-launch tcpclientsrc ! audio/x-raw-int,<format details> ! wavenc ! filesink

With more effort you could even build a commandline that reads your screen grabs at the same time (over another network socket) and multiplexes the video and audio streams. You might also be able to read the sound from the soundcard rather than reconfiguring PulseAudio, but I'm not sure.
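
For instance, if the module were loaded as 44.1 kHz stereo s16le on port 8000, the filled-in commandline might look something like this (untested; the caps have to match whatever format you actually configured):

gst-launch tcpclientsrc host=localhost port=8000 ! audio/x-raw-int,rate=44100,channels=2,width=16,depth=16,signed=true,endianness=1234 ! audioconvert ! wavenc ! filesink location=game-audio.wav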
Awesome, thanks! I didn't even know where to start. I'm keeping the audio and video capture separate so that even slow computers have a shot at making a low-latency final product. So all I want to do in this step is capture the raw bytes to a file, along the lines of the sketch below.
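
Something as dumb as this is what I have in mind (completely untested; the host, port, and filename are guesses):

import socket

# Connect to wherever module-simple-protocol-tcp is listening and dump
# the raw PCM stream straight to a file until interrupted.
sock = socket.create_connection(("localhost", 8000))
with open("game-audio.raw", "wb") as out:
    try:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            out.write(data)
    except KeyboardInterrupt:
        pass

The result would be headerless PCM; wrapping it in a WAV header afterward should be easy enough.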

I need to pass a port number to tcpclientsrc, correct? Where do I get that from? I assume it's specified in the PulseAudio config, but I don't see it.