Friday Facts #35 The lighthouse keeper

Re: Friday Facts #35 The lighthouse keeper
kovarex wrote: "the release is planned for friday, lets see what can we do"
O yeah! Let's hope for a quick delivery!

Re: Friday Facts #35 The lighthouse keeper
kovarex wrote: "Right now, I'm writing my own trigonometric functions, so yes I'm kind of deep in kind of nonsense"
You should never have to write your own trigonometry functions.
http://stackoverflow.com/questions/2387 ... lculations
A little googling turns up this, although I haven't used it myself: http://lipforge.ens-lyon.fr/www/crlibm/
PS: Are you using the same compiler for the Windows build too? If you're using MSVC, you might want to verify that the floating point optimization mode is set to fp:precise instead of fp:fast. There might also be other floating point consistency issues caused by differences in the instruction sets the compilers use (e.g. one compiler might use SSE registers, which round to double after every operation, while another keeps intermediates in more precise x87 registers).
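For what it's worth, a quick way to check how a given build treats floating point intermediates is the standard FLT_EVAL_METHOD macro. A minimal sketch (nothing Factorio-specific, just standard C++):

```cpp
#include <cfloat>
#include <cstdio>

int main() {
    // 0: expressions are evaluated at their stated precision (typical of SSE)
    // 2: intermediates are kept in long double, i.e. 80-bit x87 registers,
    //    a classic source of cross-platform floating point differences
    std::printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
    return 0;
}
```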
Re: Friday Facts #35 The lighthouse keeper
The reason why I'm writing this is not to get better precision; that is not needed at all.
The reason is that the results of sin/cos/asin/acos/atan etc. are different on different platforms, and I need the results to be exactly the same to achieve determinism.
Re: Friday Facts #35 The lighthouse keeper
kovarex wrote: "The reason is that the results of sin/cos/asin/acos/atan etc. are different on different platforms, and I need the results to be exactly the same to achieve determinism."
Hi, why don't you use lookup tables to get an approximate value?
Re: Friday Facts #35 The lighthouse keeper
Hm. I have just discussed this problem with a friend who is a doctor of physics. He said that for this type of problem those libraries are not a good choice, because everything that was formerly calculated on the FPU in one CPU cycle is now calculated in software in hundreds of cycles. He said this is needed for other problems, but for a game it would be a killer.
We had no good idea about that, but we found some aspects that might be worth thinking about:
- Where is this problem relevant? It only matters when you have a floating point number that is the result of an operation whose operands differ greatly in magnitude; in that case the result depends very much on how you perform the operation. This is a deep field in science (numerical analysis, measurement, etc.), so perhaps in some cases a different algorithm simply works better. If the problem can be reduced to only a few cases, another solution could be to calculate in two steps or with more precision.
- How did others do this? StarCraft had the same problem, and many other games too. Or did they just replace all calculations with integer calculations?
- A nearly equivalent problem is the calculations in a sound program. I'm not sure, but I think Propellerhead's Reason has something that lets it reproduce exactly the same calculation of a sound over and over. How do they do those calculations for a simulated DSP? They have to, because Reason calculates with 32-bit floating point!
@darkweaver:
The idea sounds clever, but that is not really the problem, because sine mostly still works a bit like that anyway. The problem, as I see it, is the calculation of longer formulas. Every single step needs to be rounded, and depending on how, and in which order, such a formula is evaluated, the result differs. This is quite normal. And by "step" I don't mean just +, -, * and so on; a step is, for example, determining whether a result is positive or negative, because the next step depends on that.
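To illustrate that order dependence, a minimal C++ sketch using the classic 0.1 + 0.2 digits (assuming IEEE 754 doubles; not code from the game):

```cpp
#include <cstdio>

int main() {
    double a = 0.1, b = 0.2, c = 0.3;
    // Same three numbers, same operations, different grouping:
    double left  = (a + b) + c;  // 0.60000000000000009
    double right = a + (b + c);  // 0.59999999999999998
    std::printf("%.17g\n%.17g\n", left, right);
    return 0;
}
```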
Re: Friday Facts #35 The lighthouse keeper
As far as I know, the best way to make sure everything is the same across all platforms is to remove all floating point calculations.
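For illustration, a minimal 16.16 fixed-point sketch in C++ (hypothetical, not Factorio's actual representation): a value x is stored as the integer x * 65536, and since integer arithmetic is bit-identical on every platform, determinism comes for free.

```cpp
#include <cstdint>

struct Fixed {
    int32_t raw;  // the represented value times 65536
    static Fixed fromInt(int32_t v) { return {v * 65536}; }
    Fixed operator+(Fixed o) const { return {raw + o.raw}; }
    Fixed operator-(Fixed o) const { return {raw - o.raw}; }
    Fixed operator*(Fixed o) const {
        // Widen to 64 bits so the intermediate product cannot overflow,
        // then shift back down to the 16.16 format.
        return {static_cast<int32_t>((static_cast<int64_t>(raw) * o.raw) >> 16)};
    }
    double toDouble() const { return raw / 65536.0; }  // for display only
};
```

The trade-off is range: 16.16 in 32 bits only covers about ±32768, so a real game would pick the split (or a 64-bit representation) to match its world coordinates.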
Re: Friday Facts #35 The lighthouse keeper
darkweaver wrote: "why don't you use lookup tables to get an approximate value?"
This is a very good suggestion. The tables don't have to be large, since you don't care about precision (say 1/10 of a degree per entry; use symmetry to shrink the tables further). It will also be far less of a drain on processor resources than computing a truncated Taylor series.
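A rough sketch of what that could look like in C++. The table size and the folding are my own assumptions; note that in a real deterministic build the entries would be baked in as constants generated offline, because filling the table with std::sin at startup would reintroduce exactly the cross-platform differences kovarex wants to avoid:

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

// 901 entries cover 0..90 degrees in 0.1-degree steps; the other
// quadrants come from sin's symmetries.
constexpr int QUARTER = 900;
static double sinTable[QUARTER + 1];

void initSinTable() {
    for (int i = 0; i <= QUARTER; ++i)
        sinTable[i] = std::sin(i * 0.1 * PI / 180.0);
}

double lutSin(double degrees) {
    double d = std::fmod(degrees, 360.0);  // fmod is exact in IEEE 754
    if (d < 0) d += 360.0;
    int idx = static_cast<int>(d * 10.0 + 0.5) % 3600;  // nearest 0.1 degree
    bool negate = idx >= 1800;             // sin(x + 180) = -sin(x)
    idx %= 1800;
    if (idx > QUARTER) idx = 1800 - idx;   // sin(180 - x) = sin(x)
    return negate ? -sinTable[idx] : sinTable[idx];
}
```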
Re: Friday Facts #35 The lighthouse keeper
It used to be that computing was always better than memory ops; however, this is changing as storage (specifically cache, running at speeds close to the CPU clock) gets faster while human perception and interfaces remain relatively constant.
I wonder if/when CPUs will start allowing processes to load such lookup tables when the writer of the program can guarantee that computations always fall into a static set of inputs. This could be especially nice for branching control, allowing even shorter-circuiting of predictive processing when a prediction is wrong and a jump is needed...
Sorry, this is all a bit off-topic, heh.
Re: Friday Facts #35 The lighthouse keeper
Can't you just make the calculations with some reserve and, after every calculation step, cut off the last digits to some level of guaranteed precision? Maybe you'll get less final precision than you want, but this way you could safely use the FPU methods.
Re: Friday Facts #35 The lighthouse keeper
Noro wrote: "Can't you just make the calculations with some reserve and, after every calculation step, cut off the last digits to some level of guaranteed precision?"
This looks like an easy solution, but it is actually not correct.
Logically, whatever kind of rounding you use, you are basically assigning groups of numbers to other numbers. But there will always be borders where one number (in computer floating point terms) belongs to one group while the number right next to it belongs to a different group, which means that even a tiny lack of precision can change the rounded result.
For example: you get the result 0.9999999999 on one system and 1.0 on the other. The difference is 0.0000000001, but when I round both to, let's say, 0.1 precision, I still get 0.9 versus 1.0.
Re: Friday Facts #35 The lighthouse keeper
kovarex wrote: "You get the result 0.9999999999 on one system and 1.0 on the other. The difference is 0.0000000001, but when I round both to, let's say, 0.1 precision, I still get 0.9 versus 1.0."
Wouldn't that technically be truncating (0.999 -> 0.9) instead of rounding (0.999 -> 1.0)? The terminology is similar, but there is quite a difference.
Although I might be completely wrong :/
Re: Friday Facts #35 The lighthouse keeper
You're right, but Kovarex was replying to noro, who talked about "cutting off" digits.
Re: Friday Facts #35 The lighthouse keeper
WiduX wrote: "Wouldn't that technically be truncating (0.999 -> 0.9) instead of rounding (0.999 -> 1.0)? The terminology is similar, but there is quite a difference."
The same logic can also be applied to rounding; it is also a division of the set of double numbers into a smaller set, just with different examples of the border numbers:
0.049999999999999
versus
0.05
One will be rounded to 0.0, the other one to 0.1, yet the difference is again only 0.0(...)1.
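A tiny C++ demonstration of that border effect, using the digits from the example above:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Two values that differ by only about 1e-15...
    double a = 0.049999999999999;
    double b = 0.05;
    // ...still land in different buckets when rounded to 0.1 precision.
    std::printf("%.1f\n", std::round(a * 10.0) / 10.0);  // prints 0.0
    std::printf("%.1f\n", std::round(b * 10.0) / 10.0);  // prints 0.1
    return 0;
}
```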
Re: Friday Facts #35 The lighthouse keeper
I would also try some kind of precomputed table. I did some experiments: using linear interpolation between points would require two constants at 5610 points to approximate sin and cos to 1e-8 (taking symmetry into account). That would probably not be very cache friendly, especially if several additional functions are needed.
I tried a second-order polynomial (a + b*x + c*x*x) instead, and we only need 150 points or fewer (I didn't seek the optimal coefficients), with three constants each, to achieve the same level of precision. This may be a good compromise.
I made a small IJulia notebook with graphs comparing those two approaches and a Taylor series expansion: https://dl.dropboxusercontent.com/u/176 ... 81%29.html (warning: it is quick and dirty, and my first use of Julia).
A similar approach could be taken for the other needed functions. For tan, I would use an asymptotic approximation as x -> pi/2; for atan, when x > some threshold, one can also use an asymptotic approximation.
Obviously, the choice of approach depends on whether you really need 1e-8 precision and whether you need a continuous derivative, in which case a cubic spline may be better.
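A sketch of the piecewise-quadratic idea in C++. The coefficients here come from a Taylor expansion around each segment's midpoint, which is my own construction rather than the fitted coefficients from the notebook; and as with the lookup table above, a deterministic build would bake precomputed coefficients in as constants rather than derive them from std::sin at runtime:

```cpp
#include <cmath>

const double HALF_PI = 1.57079632679489661923;

// Split [0, pi/2] into N segments, each approximated by a + b*x + c*x*x.
constexpr int N = 150;
struct Seg { double a, b, c; };
static Seg segs[N];

void initSegs() {
    const double h = HALF_PI / N;
    for (int i = 0; i < N; ++i) {
        double m = (i + 0.5) * h;                  // segment midpoint
        double s = std::sin(m), co = std::cos(m);
        // sin(x) ~ s + co*(x - m) - (s/2)*(x - m)^2, expanded in powers of x:
        segs[i] = { s - co * m - 0.5 * s * m * m,  // constant term a
                    co + s * m,                    // linear term b
                    -0.5 * s };                    // quadratic term c
    }
}

double quadSin(double x) {                         // valid for x in [0, pi/2]
    int i = static_cast<int>(x / (HALF_PI / N));
    if (i >= N) i = N - 1;                         // guard the x == pi/2 edge
    const Seg& sg = segs[i];
    return sg.a + x * (sg.b + x * sg.c);           // Horner evaluation
}
```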
Re: Friday Facts #35 The lighthouse keeper
I'm losing my mind hitting the refresh button. It's friday. It's friday. It's friday. It's friday. It's friday. IT IS FRIDAY. DO IT. DO IT NOW.
edit: If someone has knowledge of the 0.10.0 release date being pushed back further than today, for the love of god put me out of my misery.
Re: Friday Facts #35 The lighthouse keeper
It looks like the developer has fallen asleep before pressing the "press" button xD
Re: Friday Facts #35 The lighthouse keeper
As many bugs were marked as fixed in the last hours, I'd say the devs have not yet given up the race.
Re: Friday Facts #35 The lighthouse keeper
therapist wrote: "I'm losing my mind hitting the refresh button. It's friday. It's friday. It's friday. It's friday. It's friday. IT IS FRIDAY. DO IT. DO IT NOW. edit: If someone has knowledge of the 0.10.0 release date being pushed back further than today, for the love of god put me out of my misery."
+1276713
Re: Friday Facts #35 The lighthouse keeper
therapist wrote: "It's friday. IT IS FRIDAY. DO IT. DO IT NOW."
This!!! Awaiting deployment.
Re: Friday Facts #35 The lighthouse keeper
therapist wrote: "I'm losing my mind hitting the refresh button. [...] IT IS FRIDAY. DO IT. DO IT NOW."
I feel your pain; I was really looking forward to it today. It is now Saturday morning where I live, so hopefully it'll be out when I wake up. Eager to do a video on it.