It is true that, pre-C++11, the Visual Studio compiler provided volatile with both atomicity and memory-visibility (acquire/release) guarantees.
Not sure how to say this any more forcefully: we have never talked about, or used, any meaning of volatile that is not the traditional one guaranteed by the C specification.
I think what you are saying (please correct me if I am wrong) is this: because the volatile keyword forces the compiler to generate code that always reads from or writes to memory (which could be just the core's local cache) for the current value of a volatile variable, the variable is guaranteed to see a value written by another thread.
Correct. All x64 processors ensure that if a write is issued to a memory location, a corresponding read from that memory location on another CPU core will see the result of the write. So the only requirement for memory visibility on x64 is that the compiler must not optimize out the loads and stores to that memory, hence the need for volatile.
Note, again, that this has nothing to do with atomicity. InterlockedCompareExchange is what guarantees the atomicity; volatile is only about preventing optimization.
That is a statement about memory visibility (and hence about the memory model), and it is not true in a portable sense.
Well, it may not be true in a porting scenario that we are not interested in: one where the machine's cache was completely manual, and you had to issue instructions to write it back to memory or invalidate it. But if we were to port to a platform like that, we would have to do a lot of other work as well that has nothing to do with std::atomic. For example, how would the thread that does the blit see all the rendering work that all the other threads did? None of the memory accesses performed by the other threads would be seen by the blit thread, so technically we would have to go through and manually invalidate every single tile's memory in order to run correctly on that platform.
So, yes, volatile doesn't help in that scenario, but neither does std::atomic :) This is why the queue code lives very specifically in the platform-specific portion of our code: you really do want to know what platform you're on when writing multithreading code, because you need to take different steps to ensure correctness on different platforms. And performance is a concern, so you cannot just mark up your entire framebuffer with std::atomic, or you'd run pathetically slowly for obvious reasons.
I am worried about whether the writes to CompletionGoal and CompletionCount are guaranteed to be visible when compiled with, say, clang for Windows (when that is feasible).
There is nothing compiler-specific that we are relying on here; clang for Windows, or for Linux for that matter, will work just fine. You have to change the atomic intrinsic, of course, because clang uses GCC syntax (you change InterlockedCompareExchange to __sync_lock_test_and_set), but other than that the code is the same.
I am only going by the sources I have found by people I feel I should be able to trust (Stroustrup, Meyers and Sutter).
I wouldn't trust that trio to program my VCR, let alone my queue code. But I also suspect that they are not actually contradicting what I'm saying, if you read them carefully (I'm not going to; that's going to have to be up to you :P). Rather, they are just trying to say that since people often think volatile does something better than just preventing load/store optimizations, they would prefer it if those people started using std::atomic instead, since that does provide atomicity.
If you feel my interpretation of what volatile guarantees in portable code is incorrect, I would really appreciate a source where I can go to update my knowledge. I am, after all, following HMH to learn and improve.
Well, as long as you understand what volatile actually does (prevent the compiler from optimizing out loads and stores), then I think you don't really need any more sources; you just need to think about what you are doing a little more carefully! It sounds like you understand what volatile does; you are just coming to some odd conclusions about portability, and I'm not sure exactly where that confusion is coming from (perhaps those three fine fellows listed above, who often tend to write things obtusely).
We could try to talk about this on the stream some more if it would help.