I know you can't rely on equality between double or decimal type values normally, but I'm wondering if 0 is a special case.

While I can understand imprecisions between 0.00000000000001 and 0.00000000000002, 0 itself seems pretty hard to mess up since it's just nothing. If you're imprecise on nothing, it's not nothing anymore.

But I don't know much about this topic so it's not for me to say.

```
double x = 0.0;
return (x == 0.0) ? true : false;
```

Will that always return true?

It is **safe** to expect that the comparison will return `true` if and only if the double variable has a value of exactly `0.0` (which in your original code snippet is, of course, the case). This is consistent with the semantics of the `==` operator: `a == b` means "`a` is equal to `b`".

It is **not safe** (because it is **not correct**) to expect that the result of some calculation will be zero in double (or, more generally, floating point) arithmetic whenever the result of the same calculation in pure mathematics is zero. This is because as soon as actual calculations are involved, floating point precision error appears - a concept which does not exist in real-number arithmetic.
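That precision error is easy to observe. A sum that is exactly zero in real arithmetic is usually not zero in doubles, because `0.1`, `0.2`, and `0.3` have no exact binary representation:

```
using System;

class PrecisionError
{
    static void Main()
    {
        double result = 0.1 + 0.2 - 0.3;   // exactly zero in real arithmetic

        Console.WriteLine(result == 0.0);  // False
        Console.WriteLine(result);         // a tiny non-zero value (about 5.55E-17)
    }
}
```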

If you need to do a lot of "equality" comparisons, it might be a good idea to write a little helper function, or an extension method (.NET 3.5 and later), for comparing:

```
public static bool AlmostEquals(this double double1, double double2, double precision)
{
    return Math.Abs(double1 - double2) <= precision;
}
```

This could be used the following way:

```
double d1 = 10.0 * .1;  // mathematically 1.0
bool equals = d1.AlmostEquals(1.0, 0.0000001);
```

Licensed under: CC-BY-SA with attribution
