Again, compiler fences don't offe...pleted (e.g. after you read the
Yes. (Agree)
In x86(_64), what guarantees that...enforced by the cache subsystem.
As I understand it, what you say here is not correct, which is why I posted this:
"8.2.3.4 Loads May Be Reordered with Earlier Stores to Different Locations"
The example that follows this section (note that each processor does a store and then a load, which is what the heading describes):
Processor_0:
mov [ _x], 1   ;store
mov r1, [ _y]  ;load
Processor_1:
mov [ _y], 1   ;store
mov r2, [ _x]  ;load
Initially _x = _y = 0
r1 = 0 and r2 = 0 is allowed
The way I see it, what 8.2.3.4 means is that a store followed by a load from a different location can be reordered, so code that relies on that order will sooner or later break. The fix is a full fence (MFENCE) between the store and the later load; with that fence in place on both processors, r1 and r2 cannot both read 0. Note that SFENCE alone is not enough here: it orders stores against other stores, but not a store against a later load, and LFENCE likewise only orders loads.
When you want to read a value, th... all, and will have to fetch it.
Don't forget that each core has its own store buffer and private L1/L2 caches (only the last-level cache is typically shared). Fences are what make the core's pending writes "globally visible" to the others.
Citation:
SFENCE
"Description
Performs a serializing operation on all store-to-memory instructions that were issued prior the SFENCE instruction.
This serializing operation guarantees that every store instruction that precedes the SFENCE instruction in program
order becomes *globally visible*
"
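Worth noting: for ordinary stores to write-back memory, x86 already keeps stores in program order, so the case where SFENCE really earns its keep is weakly-ordered stores such as non-temporal (streaming) ones. A sketch using SSE2 intrinsics (the function and parameter names are mine, purely illustrative):

```c
#include <emmintrin.h>   /* SSE2: _mm_stream_si32, _mm_sfence */
#include <stdatomic.h>

/* Fills `data` with non-temporal stores, which bypass the cache and are
   weakly ordered, then issues SFENCE so every streaming store is globally
   visible before the `ready` flag is published. */
void publish(int *data, int n, atomic_int *ready) {
    for (int i = 0; i < n; i++)
        _mm_stream_si32(&data[i], i * 2);  /* streaming (non-temporal) store */
    _mm_sfence();  /* drain write-combining buffers: stores become globally visible */
    atomic_store_explicit(ready, 1, memory_order_release);
}
```

Without the `_mm_sfence()`, another core could observe `ready == 1` while some of the streaming stores are still sitting in a write-combining buffer.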
I'm not sure what you mean by
...record to what to notify, etc.).
If you have 4 cores running, each working on the backbuffer of a software renderer, then each can signal an EVENT at the end of its work to say it is ready (done). You then wait for that state to appear in all of them.
This way you know that any read after the wait can only proceed once all cores are done.
But unless each core orders its last writes before its done signal (an SFENCE if those writes are non-temporal; for ordinary x86 stores a release-ordered flag store is enough), there is a real chance of the "8.2.3.4"-style trouble described above.
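That signal-and-wait pattern can be sketched with C11 atomics, where a release store on each core's done flag plays the role of the fence and the waiter's acquire loads pair with it. The worker/strip layout here is my own invention, not from the thread:

```c
#include <pthread.h>
#include <stdatomic.h>

#define CORES 4
#define STRIP 1024

static int backbuffer[CORES * STRIP];
static atomic_int done[CORES];   /* one "event" flag per worker */

static void *worker(void *arg) {
    int id = (int)(long)arg;
    for (int i = 0; i < STRIP; i++)
        backbuffer[id * STRIP + i] = id;   /* render this core's strip */
    /* release ordering: every strip write above is visible before the flag */
    atomic_store_explicit(&done[id], 1, memory_order_release);
    return NULL;
}

/* Starts the workers, waits for all done flags, then reads the buffer.
   Returns a checksum of the frame. */
int render_frame(void) {
    pthread_t t[CORES];
    for (long i = 0; i < CORES; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);

    /* wait for every worker's "done" event; acquire pairs with the release */
    for (int i = 0; i < CORES; i++)
        while (!atomic_load_explicit(&done[i], memory_order_acquire))
            ;

    for (long i = 0; i < CORES; i++)
        pthread_join(t[i], NULL);

    /* any read here is guaranteed to see all completed strip writes */
    int sum = 0;
    for (int i = 0; i < CORES * STRIP; i++)
        sum += backbuffer[i];
    return sum;
}
```

With strips of 1024 entries each filled with the worker id, the checksum is (0+1+2+3)*1024 = 6144. In a real event/wait API the release/acquire pairing is done for you by the signaling primitive.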