Thread safety of the memory allocator

After getting up to date with the HmH code recently, I've been getting random but consistent crashes (more specifically, general protection faults). I first managed to nail the cause down to the audio mixer, and then narrowed it further to BeginTemporaryMemory & EndTemporaryMemory inside the mixer. The reason these calls are causing problems is that CoreAudio on OS X is based on a pull model, so providing samples to the OS happens inside a callback that is called on a high-priority thread (i.e. I call GetSoundSamples from this callback instead of from the game loop).

I don't recall, but was there ever talk of making the allocator thread-safe at some point? Or is there some way to make calls to BeginTemporaryMemory & EndTemporaryMemory thread-safe?

I could be wrong, but my feeling is that the problem arises when there are unbalanced calls to Begin/End temporary memory across the two threads, and this leaves the used & size values in the memory arena inconsistent, so subsequent allocations return incorrect addresses. Hence crash. ;)
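To illustrate, the Begin/End pattern looks roughly like this (a sketch from memory, not the exact HmH source):

```cpp
#include <cstddef>
#include <cstdint>

struct memory_arena
{
    size_t Size;
    uint8_t *Base;
    size_t Used;
};

struct temporary_memory
{
    memory_arena *Arena;
    size_t Used;  // snapshot of Arena->Used at Begin time
};

temporary_memory BeginTemporaryMemory(memory_arena *Arena)
{
    temporary_memory Result;
    Result.Arena = Arena;
    Result.Used = Arena->Used;  // plain, non-atomic read
    return Result;
}

void EndTemporaryMemory(temporary_memory Temp)
{
    // Rolls back everything allocated since the matching Begin.
    Temp.Arena->Used = Temp.Used;  // plain, non-atomic write
}
```

If the audio callback runs Begin/End on the same arena the game thread is pushing onto, the rollback in EndTemporaryMemory can discard memory the other thread just allocated, and from then on new allocations hand out overlapping addresses.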

Yes, you definitely cannot do what you are doing, assuming your description is accurate. It is actually not just the memory allocator that is not thread-safe; the storage of the playing sound list is not thread-safe either.

In general, nothing in Handmade Hero is thread-safe at the moment except the parts that run in threads :)

With respect to memory allocation in general, we do make a thread-safe version of this on the stream. Our task system allocates sub-stacks that the tasks manage on their own, to avoid having to interlock with the main code. You can see this at work in, for example, the background ground chunk builder.
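Roughly, the shape of it is this (sketching from memory here - the stream has the real code; memory_arena / temporary_memory are as in the sketch earlier in the thread):

```cpp
#include <atomic>

struct task_with_memory
{
    std::atomic<bool> BeingUsed;
    memory_arena Arena;           // sub-stack carved out of permanent storage at startup
    temporary_memory MemoryFlush; // everything the task pushes gets flushed at the end
};

task_with_memory *BeginTaskWithMemory(task_with_memory *Tasks, int TaskCount)
{
    for(int TaskIndex = 0; TaskIndex < TaskCount; ++TaskIndex)
    {
        task_with_memory *Task = Tasks + TaskIndex;
        bool Expected = false;
        // The only interlocked operation is claiming a free slot; after that,
        // the task owns its arena outright and never touches anyone else's.
        if(Task->BeingUsed.compare_exchange_strong(Expected, true))
        {
            Task->MemoryFlush = BeginTemporaryMemory(&Task->Arena);
            return Task;
        }
    }
    return 0;  // no free task memory right now; caller tries again later
}

void EndTaskWithMemory(task_with_memory *Task)
{
    EndTemporaryMemory(Task->MemoryFlush);
    Task->BeingUsed.store(false, std::memory_order_release);
}
```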

- Casey
Ah yes, I will take a look at the SubArena system and use that as a starting point for making the allocator thread-safe. I'm going to do this as an exercise because I don't have that much experience dealing with multithreading at this level (I really enjoyed the episodes where you built the queue system for this reason).

The alternative is to call GetSoundSamples from the game loop, but write the samples into a ring buffer which the callback then reads from. But this gets fussy because they're called at different rates, so you have to constantly monitor and possibly adjust the number of samples you're reading/writing to make sure the read & write pointers don't cross.
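For reference, the ring buffer version would look something like this (a minimal single-producer/single-consumer sketch using C++11 atomics; the names and sizes are mine, not from HmH):

```cpp
#include <atomic>
#include <cstdint>

// Power-of-two capacity so the monotonically increasing counters can be
// masked into array indices and wrap around cleanly.
#define RING_CAPACITY 65536u  // in samples; > 1 second at 48kHz mono

struct sample_ring
{
    int16_t Samples[RING_CAPACITY];
    std::atomic<uint32_t> WritePos{0};  // only the game thread advances this
    std::atomic<uint32_t> ReadPos{0};   // only the CoreAudio callback advances this
};

// Game loop: copy in as many samples as fit; returns how many were written.
uint32_t RingWrite(sample_ring *Ring, const int16_t *Src, uint32_t Count)
{
    uint32_t Write = Ring->WritePos.load(std::memory_order_relaxed);
    uint32_t Read  = Ring->ReadPos.load(std::memory_order_acquire);
    uint32_t Free  = RING_CAPACITY - (Write - Read);
    if(Count > Free) Count = Free;
    for(uint32_t I = 0; I < Count; ++I)
    {
        Ring->Samples[(Write + I) & (RING_CAPACITY - 1)] = Src[I];
    }
    Ring->WritePos.store(Write + Count, std::memory_order_release);
    return Count;
}

// CoreAudio render callback: read what's available, zero-fill on underrun.
uint32_t RingRead(sample_ring *Ring, int16_t *Dst, uint32_t Count)
{
    uint32_t Read  = Ring->ReadPos.load(std::memory_order_relaxed);
    uint32_t Write = Ring->WritePos.load(std::memory_order_acquire);
    uint32_t Avail = Write - Read;
    uint32_t N = (Count < Avail) ? Count : Avail;
    for(uint32_t I = 0; I < N; ++I)
    {
        Dst[I] = Ring->Samples[(Read + I) & (RING_CAPACITY - 1)];
    }
    for(uint32_t I = N; I < Count; ++I)
    {
        Dst[I] = 0;  // underrun: output silence rather than stale samples
    }
    Ring->ReadPos.store(Read + N, std::memory_order_release);
    return N;
}
```

The rate-matching problem then becomes keeping `WritePos - ReadPos` near a target fill level: if it grows over several frames, write fewer samples; if it shrinks toward zero, write more.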
Sorry to resurrect an old thread but I had some questions on the same topic.

I was also thinking about which approach to take in the OS X layer for my project: filling a ring buffer from the main thread, or writing samples directly in CoreAudio's render callback I/O thread.

I remember Casey mentioned in one of the early videos that it might be a good idea to move the audio output code to a separate thread on Windows. Is this a common approach in games?

Also, even if the output is happening on a separate thread, is it still a good idea to use the ring buffer approach (in order to make sure that the audio output code is as fast as possible)?

I was also wondering whether doing the audio render on a separate thread is a bad idea from an architectural point of view, since it makes the game-side code worry about thread synchronisation issues even though no threading code is visible on the game side (I mean that we didn't push any work onto a work queue).
Yes, it is a very common approach in Windows games to have audio on a separate thread. This is to avoid unexpected delays/spikes that are not caused by game code, but by the OS messing with your process.

With a separate audio thread you have two approaches for synchronization.

The first one is to do the mixing on the main thread and just pass fully prepared buffers to the audio thread. The only synchronization you need to worry about is for the fully prepared buffers, not individual audio sources. The disadvantage is probably higher latency, because once a buffer is submitted to the audio thread, you cannot overwrite it (depending on what is actually available from the OS). So you need to prepare a buffer large enough to cover a full frame of delay.
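A minimal sketch of that handoff, assuming the audio thread drains one buffer per period (the names and sizes are illustrative, not from any particular codebase):

```cpp
#include <atomic>
#include <cstdint>

#define BUFFER_COUNT 4          // power of two; depth * buffer size bounds the latency
#define SAMPLES_PER_BUFFER 2048

struct mixed_buffer
{
    int16_t Samples[SAMPLES_PER_BUFFER];
};

struct buffer_queue
{
    mixed_buffer Buffers[BUFFER_COUNT];
    std::atomic<uint32_t> Submitted{0};  // advanced by the main thread after mixing
    std::atomic<uint32_t> Consumed{0};   // advanced by the audio thread after playback
};

// Main thread: next buffer to mix into, or 0 if the audio thread hasn't caught up.
mixed_buffer *AcquireBufferForMixing(buffer_queue *Q)
{
    uint32_t Submitted = Q->Submitted.load(std::memory_order_relaxed);
    uint32_t Consumed  = Q->Consumed.load(std::memory_order_acquire);
    if((Submitted - Consumed) == BUFFER_COUNT) return 0;  // queue full
    return &Q->Buffers[Submitted & (BUFFER_COUNT - 1)];
}

// Main thread: publish the buffer just filled; after this it must not be touched.
void SubmitMixedBuffer(buffer_queue *Q)
{
    Q->Submitted.fetch_add(1, std::memory_order_release);
}

// Audio thread: next fully prepared buffer, or 0 on underrun (play silence).
mixed_buffer *NextBufferForPlayback(buffer_queue *Q)
{
    uint32_t Consumed  = Q->Consumed.load(std::memory_order_relaxed);
    uint32_t Submitted = Q->Submitted.load(std::memory_order_acquire);
    if(Consumed == Submitted) return 0;
    return &Q->Buffers[Consumed & (BUFFER_COUNT - 1)];
}

void ReleasePlayedBuffer(buffer_queue *Q)
{
    Q->Consumed.fetch_add(1, std::memory_order_release);
}
```

Note how the "cannot overwrite once submitted" rule falls out of the structure: the main thread only ever writes into slots the audio thread has already released, so the queue depth directly bounds the latency.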

The second one is to do the mixing on the audio thread. In this case you need to synchronize the actual audio sources and make sure you don't change your data structures in a way that causes a race condition. Ideally, if you can load all audio sources at startup / level loading and guarantee they will be alive the whole time, then synchronization becomes trivial. The only tricky thing is to figure out which sources are active, and that can probably be done with some interlocked ops. The advantage of this is that you can get lower latency, because you can mix in smaller chunks - all the mixing happens on a background thread, so delays in the main game thread don't affect it.
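A sketch of the preloaded-sources case with interlocked activation (assuming all sample data is loaded up front and never freed; the names are illustrative):

```cpp
#include <atomic>
#include <cstdint>

enum voice_state : uint32_t { Voice_Free, Voice_Pending, Voice_Playing };

struct voice
{
    std::atomic<uint32_t> State{Voice_Free};
    const int16_t *Samples;  // points into data loaded at startup, alive forever
    uint32_t SampleCount;
    uint32_t Cursor;         // only touched by the audio thread once Playing
};

#define MAX_VOICES 64
static voice Voices[MAX_VOICES];

// Game thread: claim a free voice and publish it to the mixer.
bool PlaySound(const int16_t *Samples, uint32_t SampleCount)
{
    for(int I = 0; I < MAX_VOICES; ++I)
    {
        uint32_t Expected = Voice_Free;
        if(Voices[I].State.compare_exchange_strong(Expected, Voice_Pending))
        {
            Voices[I].Samples = Samples;
            Voices[I].SampleCount = SampleCount;
            Voices[I].Cursor = 0;
            // Release store: the audio thread must see the fields above
            // before it sees the state flip to Playing.
            Voices[I].State.store(Voice_Playing, std::memory_order_release);
            return true;
        }
    }
    return false;  // all voices busy
}

// Audio thread: mix every playing voice into the output accumulator.
void MixVoices(int32_t *Mix, uint32_t Frames)
{
    for(int I = 0; I < MAX_VOICES; ++I)
    {
        voice *V = &Voices[I];
        if(V->State.load(std::memory_order_acquire) != Voice_Playing) continue;
        uint32_t N = V->SampleCount - V->Cursor;
        if(N > Frames) N = Frames;
        for(uint32_t S = 0; S < N; ++S)
        {
            Mix[S] += V->Samples[V->Cursor + S];
        }
        V->Cursor += N;
        if(V->Cursor == V->SampleCount)
        {
            V->State.store(Voice_Free, std::memory_order_release);  // reusable
        }
    }
}
```

Stopping a sound early or adjusting its volume from the game thread needs a bit more care (e.g. an atomic "stop requested" flag the mixer checks), but the active/free bookkeeping itself stays this simple.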
Thanks, that makes sense. I'll try doing the mixing on the high-priority thread first, as this approach is the most straightforward at this point in my project.