About using intrinsics

If you are wondering whether or not to use the compiler switch for intrinsics, this might be an interesting read for you: http://randomascii.wordpress.com/...-error-bounds-by-1-3-quintillion/

Basically, the fsin instruction may be extremely inaccurate, and it seems like libc vendors have been switching back to implementing sin on their own.
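
If you want a feel for where the precision goes, here is a minimal Python sketch (not fsin itself, which you can't reach from Python; it assumes you have sympy installed as a high-precision reference). For x near π, sin(x) is dominated by π - x, so the quality of the π used for argument reduction is everything:

```python
# Sketch: how accurate is sin at the double closest to pi?
# math.sin goes through the C library, not x87 fsin, so a modern libm
# should match the high-precision reference; historically, fsin did not.
import math
import sympy

x = sympy.Rational(math.pi)            # exact rational value of the double math.pi
reference = sympy.N(sympy.sin(x), 30)  # sin of that exact value, to 30 digits

print("math.sin(math.pi) =", math.sin(math.pi))
print("reference          =", reference)
```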

Edit: Depending on what you are using sin for, this might of course be totally acceptable.

I read that article. But what if the problem is in his print routines? :)

And what are the cases where this could hurt you? And why has it not been seen before? I don't know, but I am not sure I have confidence in this one article... What exactly must I do to verify it? And for which cases do I need to verify?
I spent a couple of minutes writing an answer, but let's just say my mum wouldn't have been proud if she saw it. Instead, let me just recommend reading the article again; virtually all of your questions are answered there. For example, there is a link to Intel confirming the issue.

Larry_Croft
I spent a couple of minutes writing an answer, but let's just say my mum wouldn't have been proud if she saw it.


:))

Larry_Croft

Instead, let me just recommend reading the article again, virtually all of your questions are answered there. For example, there is a link to Intel, confirming the issue.


Digital computing suffers from the "table-maker's dilemma", and as far as I can tell the failure was in not communicating this in the manual?

I am not an expert on floating point, but this topic seems extremely important to me from another standpoint, because it means computers can never accurately simulate reality. Reality always calculates with a 100% exact π, and computers can never do this? I think that's pretty interesting.

Sorry if I misunderstood.
The manual was wrong/imprecise, whatever you want to call it, and that is probably the most important point. To be more specific, the stated precision was wrong. Most programmers (and oftentimes even users, think 10 / 3 * 3 on a cheap calculator) are well aware of the fact that computers normally operate on approximate values. This is reflected in the fact that a precision was given in the first place - the value just happened to be wrong.
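
If anyone wants to see the "approximate values" point on their own machine instead of on a cheap calculator, the classic example works in any language with IEEE doubles; here it is in Python:

```python
# 0.1, 0.2 and 0.3 are not exactly representable in binary floating point,
# so the rounding error becomes visible:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```

That behavior is documented and expected; the fsin problem was that the documented precision was wrong.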

Besides the docs, the article talks about whether or not this can be considered a bug in the CPU. In a nutshell, since IEEE is supposedly unspecific about sin, it can't really be considered a bug. Ultimately, the docs have been updated and the standard libraries do not use the instruction (at least not by default, afaik).

On the philosophical part: You can do something called symbolic computation to compute exact numbers. To give some intuition about that: If you do math with pen and paper, you would probably simplify something like e^2 * e^3 to e^5 because you know some algebraic rules. You can teach a computer these rules so that it only has to approximate a value when you want to see its decimal representation. Whether that's sufficient for an accurate simulation of reality, I don't know...
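
To make that concrete, here is a tiny sketch with sympy (assuming it's installed); the e^2 * e^3 simplification happens symbolically, and the approximation only happens when you ask for digits:

```python
import sympy

expr = sympy.exp(2) * sympy.exp(3)
print(sympy.simplify(expr))  # exp(5) -- exact, no rounding has happened yet
print(sympy.N(expr, 30))     # 148.4131591025766... (digits on demand)
```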
Larry_Croft
On the philosophical part: You can do something called symbolic computation to compute exact numbers. To give some intuition about that: If you do math with pen and paper, you would probably simplify something like e^2 * e^3 to e^5 because you know some algebraic rules. You can teach a computer these rules so that it only has to approximate a value when you want to see its decimal representation. Whether that's sufficient for an accurate simulation of reality, I don't know...


Yeah, Computer Algebra Systems are definitely a thing (and a very useful thing at that! Thank you, TI-Nspire!) though not yet at the point where they can be used for real-time calculations. That being said, I've just learned recently about Taylor series, which make me a lot more optimistic about how accurately computers can model reality. They just need to be able to evaluate polynomials to as many terms as the simulation requires!
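
For fun, here's what that looks like as a Python sketch; a handful of terms of the Taylor series of sin already gets you very close near 0 (with the caveat that for large inputs you need argument reduction first, which is exactly where fsin went wrong):

```python
import math

def taylor_sin(x, terms):
    """Sum `terms` terms of the Taylor series sin(x) = x - x^3/3! + x^5/5! - ..."""
    total = 0.0
    term = x  # first term: x^1 / 1!
    for n in range(terms):
        total += term
        # next term: multiply by -x^2 / ((2n+2)(2n+3))
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

for terms in (2, 4, 8):
    print(terms, "terms:", taylor_sin(1.0, terms), " (math.sin:", math.sin(1.0), ")")
```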
Since we brought up the philosophy of symbolic computation, it might be worth a quick look at the formal limits of this.

WARNING: Some advanced computer science follows. If you've completed an undergraduate degree in computer science, you will probably be able to follow most of this. Don't feel bad if you get lost; just skip to the next comment.

The theoretical setting for reasoning about various systems of numbers is "satisfiability modulo theories", or SMT for short. You start with classical boolean logic (which is the SAT part; the satisfiability problem is known to be NP-complete), and build your theory as a set of axioms on top of it.

The theory of natural numbers (0, 1, 2...) with addition and multiplication is known as Peano arithmetic. It is known to be undecidable; this follows from Gödel's first incompleteness theorem (the undecidability itself is due to Church and Turing).

You can weaken the system and get decidability back. If you drop multiplication, you get the theory of Presburger arithmetic. Presburger showed that this system is complete, consistent, and decidable.
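
As an aside, this is not just theory: SMT solvers ship decision procedures for Presburger-style linear integer arithmetic. A sketch with z3 (assuming the z3-solver Python package), proving the made-up but illustrative claim that every integer is even or odd:

```python
from z3 import Int, ForAll, Exists, Or, prove

x, y = Int('x'), Int('y')
# Note: 2*y is just y + y, so this stays within Presburger arithmetic.
prove(ForAll([x], Exists([y], Or(x == 2 * y, x == 2 * y + 1))))  # prints: proved
```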

What's interesting is that real numbers are, in a theoretical sense, easier to work with. Tarski arithmetic is the theory of real closed fields: real numbers with an ordering and all of the field operations (addition, subtraction, multiplication, and division). Tarski showed that it is decidable.

You don't want to use Tarski's original algorithm for doing this. Its runtime is NONELEMENTARY, which leaves exponential time in the dust as far as impractical algorithms go. There are better algorithms (doubly exponential, such as cylindrical algebraic decomposition) which are hard to understand if you're not well-versed in the area. Nonetheless, the point is that the system is decidable.

Do you want to know if the positive square root of 2 is greater than the positive cube root of 3? Well, this is a closed formula which asserts it:

∃x. ∃y. x⋅x⋅x = 3 ∧ x > 0 ∧ y⋅y = 2 ∧ y > 0 ∧ x < y

Tarski gave an algorithm into which you can feed this formula, and it will return "true" or "false".
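
You don't have to take my word for it; z3's nonlinear real arithmetic engine (nlsat) is a decision procedure for exactly this kind of existential formula over the reals. A sketch, again assuming the z3-solver Python package:

```python
from z3 import Reals, Solver

x, y = Reals('x y')
s = Solver()
s.add(x * x * x == 3, x > 0)  # x is the positive cube root of 3
s.add(y * y == 2, y > 0)      # y is the positive square root of 2
s.add(x < y)                  # assert: cbrt(3) < sqrt(2)
print(s.check())              # unsat -- the formula is false over the reals
```

(And indeed, the cube root of 3 is about 1.442, while the square root of 2 is about 1.414.)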

(If you find it surprising that real numbers are more "decidable" than natural numbers, consider that linear programming over the rationals is "easier" than linear programming over the integers. If you know why that is, then perhaps this makes a bit more sense.)

So the next obvious question is how far you can push this system and still retain decidability. Tarski showed that if you include the trigonometric functions, then the system is no longer decidable. Why? Because you can "construct" the integers: Z = { x/π : sin x = 0 }. The theory of the integers is undecidable, so this system is too. So... yeah, we'll always need help with trigonometry.
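
You can even watch a computer algebra system carve out those integers. A sympy sketch:

```python
import sympy

x = sympy.symbols('x')
# The real zeros of sin are exactly the integer multiples of pi:
print(sympy.solveset(sympy.sin(x), x, sympy.S.Reals))
# Union(ImageSet(Lambda(_n, 2*_n*pi), Integers),
#       ImageSet(Lambda(_n, 2*_n*pi + pi), Integers))   i.e. pi * Z
```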

As a final aside, Tarski asked (as far as I know, the question is still open) whether the theory of real closed fields with e^x is decidable, and the danger comes from what happens in the complex numbers. If you have e^(ix), you have sin and cos, and hence you can construct the integers. Over the reals there is no i, which is why the same trick doesn't settle the question.