Saw a Windows Calculator bug on reddit. Since calc.exe was open-sourced I thought I’d try to find the bug and fix it. Cloned the code, recreated the bug, and found a minimal fix.
Exactly what it says on the tin.
Great work, but personally I hate the Metro interface. Although I use Arch / XFCE most of the time, I have one machine on which I installed the old calculator:
https://winaero.com/blog/get-calculator-from-windows-8-and-windows-7-in-windows-10/
Unsigned values are a sin, since they can overflow, especially when you deal with relative values that can be negative…
Signed values also overflow. They are better at handling differences, though.
Yes, they can overflow as well, but coders rarely use the full range of an unsigned value, especially with ‘unsigned int’ (0 to 4294967295), so it’s often safer to consider the signed alternative instead. Sure, you might find people who disagree with me and put unsigned values on a pedestal using a few selected cases picked out of context. From my experience, sticking to signed values as much as possible has spared me many headaches, and I use unsigned values only with care. Keep in mind that a signed value can express three states (< 0, 0, > 0) that can map to error, ok and info respectively; an unsigned value can only map to two states. Plus, adding (+) a negative value inherently performs a subtraction, while with unsigned values you have to perform a subtraction explicitly, most of the time requiring a value check to select the arithmetic operation. Signed values simplify many aspects of a piece of software. Physical units are signed values just like iPhones are round-cornered: that’s how nature works.
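To make that concrete, here is a minimal C sketch of what I mean (hypothetical function and variable names, purely for illustration): the signed return value carries three meanings, and a signed delta is applied with one plain addition, while the unsigned version needs a branch plus a wrap-around check.

#include <stdio.h>

/* Hypothetical example: a signed return value can carry three states at once
 * (negative = error, zero = ok, positive = extra info); an unsigned one only
 * gives you "zero" and "non-zero". */
static int read_sensor(int *out)
{
    if (!out) return -1;   /* error */
    *out = 42;
    return 0;              /* ok */
}

int main(void)
{
    int value;
    if (read_sensor(&value) < 0)
        printf("error\n");

    /* Applying a delta: a signed offset is one plain addition... */
    int pos = 100;
    int delta = -30;       /* may be negative */
    pos += delta;          /* pos == 70 */

    /* ...while unsigned arithmetic needs a branch and a wrap-around check. */
    unsigned upos = 100, umag = 30;
    int backward = 1;
    if (backward) {
        if (umag <= upos) upos -= umag;   /* avoid wrapping below zero */
    } else {
        upos += umag;
    }
    printf("pos = %d, upos = %u\n", pos, upos);
    return 0;
}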
Kochise,
I’d say a significant part of the problem is C’s lack of overflow detection. As elementary as it sounds, it is notoriously difficult to detect overflow given both C’s undefined behaviors and lack of overflow flags.
https://stackoverflow.com/questions/2633661/how-to-check-for-signed-integer-overflow-in-c-without-undefined-behaviour
https://stackoverflow.com/questions/199333/how-do-i-detect-unsigned-integer-multiply-overflow
The easiest option is often to use oversized output data types and constrain the inputs, but really it stems from a deficiency of the language. Assembly programmers don’t have this problem since they can trivially use the CPU flags as needed, but C did away with that because real men don’t need overflow checking.
/sarcasm
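To illustrate the usual workarounds, here is a rough C sketch, assuming a C99 compiler; the __builtin_add_overflow part is GCC/Clang specific:

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

/* Portable pre-check: test the operands *before* adding, so the signed
 * overflow (undefined behaviour) never actually happens. */
static bool add_would_overflow(int a, int b)
{
    if (b > 0 && a > INT_MAX - b) return true;   /* would exceed INT_MAX */
    if (b < 0 && a < INT_MIN - b) return true;   /* would go below INT_MIN */
    return false;
}

int main(void)
{
    int a = INT_MAX, b = 1;

    if (add_would_overflow(a, b))
        puts("portable check: overflow detected");

#if defined(__GNUC__) || defined(__clang__)
    /* GCC/Clang builtin: performs the addition and reports whether it overflowed. */
    int sum;
    if (__builtin_add_overflow(a, b, &sum))
        puts("builtin check: overflow detected");
#endif
    return 0;
}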
How is distance (a physical unit) ever negative? The distance between two points is always positive, or zero (degenerate case), no matter which point you start from. Same goes for perimeter, circumference, and surface area.
IOW, show me an iPhone with a negative surface area, or even a negative side length.
Just like time can be negative. When you’re going backward relative to your reference, your speed can be negative, hence the distance you’ve journeyed can be negative. That simplifies the arithmetic of distance calculation when you sum up all your travels. The same goes for areas or volumes when you do constructive solid geometry.
Kochise,
A bit pedantic, but you are referring to velocity rather than speed. In physics speed is the magnitude of velocity and is always positive or zero. Although I realize in practice many people ignore the distinction.
As for distance, the formula for measuring distances between coordinates explicitly squares the offsets. So unless you’ve got imaginary or quadrature numbers in the input (which is something you’ve got to explicitly support), it will always eliminate negative output.
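For reference, a tiny C sketch of that formula (my own example values): squaring the offsets discards their sign, so the result is never negative for real inputs.

#include <math.h>
#include <stdio.h>

/* Euclidean distance: the squared offsets are non-negative, so the
 * square root of their sum is always >= 0 for real coordinates. */
static double distance(double x1, double y1, double x2, double y2)
{
    double dx = x2 - x1, dy = y2 - y1;
    return sqrt(dx * dx + dy * dy);   /* equivalently: hypot(dx, dy) */
}

int main(void)
{
    printf("%f\n", distance(0, 0, -3, -4));   /* 5.000000 */
    printf("%f\n", distance(-3, -4, 0, 0));   /* 5.000000, same either way */
    return 0;
}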
The difference of two arbitrary times could be positive or negative, however if you were to measure the difference of times taken in sequential order (ie stopwatch time) then that should always be positive. This may need to be qualified with “from a constant frame of reference” in relativistic physics, but in practice for daily use that’s a given for typical applications.
Obviously it all depends on the application. As programmers we could treat all numbers as signed integers (or doubles, or even quadrature vectors), but that doesn’t necessarily mean our variables should have those components; it depends on the specific application.
Is it harmful to let data types be a superset of what’s needed? Perhaps not in principle, but in practice the implementation of signed numbers decreases the range, which may increase the likelihood of overflow. If your code/language is checking properly for overflow, then there isn’t an advantage to using signed variables “just in case”. Using floating point by default has similar issues.
This is an interesting discussion 🙂
@Alfman, perhaps the example wasn’t the best one. Take temperature: it is a physical unit that in one sense cannot be negative (Kelvin) but can also be considered negative (Celsius, Fahrenheit) depending on your reference point. And the claim that a time difference should always be considered positive is not true either. Don’t people generally use B.C. and A.D. to express, quite literally, a time below (before) and a time above (after) a reference point?
Just apply the test of real life: have coders ever complained about such issues with floating-point numbers? OK, those cannot really overflow in quite the same way, but simply always having signed values already solves quite a load of troubles.
Kochise,
Those are interesting examples; of course I only measure absolute time since the big bang, haha. I don’t know that we could come up with any hard and fast rule that applies to 100% of applications; it depends what we’re after.
You’ll definitely see complaints about floating point errors once in a while. Also, I once converted time units into float variables, figuring that floating point seconds would be better for my application, and defined a unix time function that used seconds rather than microseconds. However, I hadn’t factored in that a single precision float only has 23 bits of stored precision. Unixtime (billions of seconds past the epoch) requires about 30.5 bits of precision to return time accurate down to the second. Needless to say, a single precision float has insufficient accuracy to represent that large a number in a meaningful way, and the results were useless. Floats would have been fine to represent the elapsed time that I was interested in measuring (ie t1-t0), but they were inadequate for representing the elapsed time since the unix epoch (ie t1 and t0). The easy fix was just to use a double instead, but using monotonic clocks might be a better solution in some cases.
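As a rough sketch of that failure mode (arbitrary example timestamp): around 1.5 billion seconds, adjacent single-precision floats are 128 seconds apart, so a 42-second difference simply vanishes.

#include <stdio.h>

int main(void)
{
    /* A unix timestamp around 1.5 billion seconds needs ~31 bits of
     * precision; a single-precision float carries roughly 24 significant
     * bits (23 stored), so nearby timestamps collapse to the same value. */
    long t0 = 1500000000;        /* arbitrary example timestamp */
    long t1 = t0 + 42;           /* 42 seconds later */

    float f0 = (float)t0;
    float f1 = (float)t1;
    printf("float:  t1 - t0 = %f\n", (double)(f1 - f0));   /* 0.000000: the difference is lost */

    double d0 = (double)t0;
    double d1 = (double)t1;
    printf("double: t1 - t0 = %f\n", d1 - d0);             /* 42.000000 */
    return 0;
}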
Speaking of time and signed versus unsigned ints…
Unixtime will overflow 32-bit signed integers on 01/19/2038.
Unixtime will overflow 32-bit unsigned integers on 02/07/2106.
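Those dates are easy to verify on a platform with a 64-bit time_t (a quick sketch; exact output formatting depends on the C library):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Largest values representable in 32-bit signed/unsigned seconds. */
    time_t signed_max   = (time_t)INT32_MAX;    /* 2147483647  */
    time_t unsigned_max = (time_t)UINT32_MAX;   /* 4294967295  */

    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&signed_max));
    printf("32-bit signed rolls over after:   %s UTC\n", buf);   /* 2038-01-19 03:14:07 */

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&unsigned_max));
    printf("32-bit unsigned rolls over after: %s UTC\n", buf);   /* 2106-02-07 06:28:15 */
    return 0;
}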
Can I also bugfix Windows by switching back to XP?
No one prevents you from using XP. In fact, I do on my old Via C7, which still runs like it did on day one (and better, with the updates). Of course it’s off the internet.
I think I have also found a bug in the Windows calculator (calc) in Windows 10. I am using it for some government deductions comprising 37 brackets, and I copied the result of each computation in order to put it into a web app. Moments later, pressing CTRL+C to copy the result no longer worked. I will test this further to confirm that this really is a bug.