Well it’s always fun to find new things and this week was no exception. An aircraft manufacturer who shall remain nameless (though adroit readers may be able to identify them) supplied sample data for a new aircraft, along with 390 pages of excellent documentation. Most data conversion went well, but after some time struggling with the latitude and longitude conversion we realised that something was very odd and some serious head scratching was needed.
So here are the clues that point to the scene of the crime. OK, it wasn’t Colonel Mustard in the Dining Room with the Lead Pipe, but almost as much of a mystery.
Firstly, although the data source is in the normal ARINC 717 format with 12-bit words, the latitude and longitude were recorded using more space than usual. It is normal to see three words holding both parameters, thereby giving 18 bits to each. On some newer aircraft two words are used per parameter, giving 24 bits each and so better resolution. In this case a full 32 bits was made available to each of latitude and longitude. This was unusually generous.
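For anyone decoding similar frames, the reassembly of a wide value from 12-bit words can be sketched as below. The word order and packing here are assumptions for illustration only; the real layout always comes from the aircraft's dataframe documentation.

```python
# Sketch: reassembling a wide value from ARINC 717 12-bit words.
# Assumed layout: most significant word first, surplus low bits discarded.

def combine_words(words, bits_per_word=12, total_bits=32):
    """Concatenate 12-bit words (most significant first) and keep
    the top `total_bits` bits of the result."""
    value = 0
    for w in words:
        value = (value << bits_per_word) | (w & 0xFFF)
    # Three 12-bit words hold 36 bits; drop the surplus 4 to keep 32.
    surplus = len(words) * bits_per_word - total_bits
    return value >> surplus if surplus > 0 else value
```

With two words and `total_bits=24` the same sketch covers the 24-bit case mentioned above.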
Secondly, the resolution was given as 0.000000168 degrees per bit. Using the fact that a nautical mile covers one minute of arc (true for latitude everywhere, and for longitude at the equator) and that there are 1852 metres per nautical mile, we can compute this resolution to be 0.000000168 × 60 × 1852 ≈ 0.0187 metres.
Or to put it another way, 1.9cm. This is astonishing accuracy for an aircraft navigation system. A runway light is ten times bigger than this, so if this resolution were true, we would be able to show which part of the light we ran over, not just whether the aircraft was left or right of the centreline.
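The arithmetic is simple enough to check in a couple of lines:

```python
# 1 minute of arc = 1 nautical mile = 1852 m (for longitude, at the equator)
deg_per_bit = 0.000000168
metres_per_bit = deg_per_bit * 60 * 1852   # degrees -> minutes of arc -> metres

print(round(metres_per_bit * 100, 2), "cm")   # about 1.87 cm
```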
Thirdly, none of the scaling factors we applied came up with an answer on an airfield. That is, we could convert the airspeed and heading and tell when the aircraft was on the ground and stationary, but the position data multiplied by any binary fraction of 90deg resulted in locations over the water, in a field or up a mountain.
Fourthly, we found some data where the aircraft was flying due north and, by integrating the groundspeed, could estimate the distance travelled. This gave an estimate of the resolution of 0.000229 nm/bit (about 42cm per bit), more than twenty times coarser than the 1.9cm given in the documentation, but far more realistic.
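The cross-check can be sketched as follows. The function name and the numbers here are synthetic, chosen purely to illustrate the method, not taken from the actual flight data.

```python
# Sketch: estimate position resolution by integrating groundspeed over a
# due-north leg and dividing by the change in the raw recorded counts.

def resolution_nm_per_bit(groundspeed_kt, dt_s, delta_counts):
    """groundspeed_kt: samples in knots taken every dt_s seconds."""
    distance_nm = sum(gs * dt_s / 3600.0 for gs in groundspeed_kt)
    return distance_nm / delta_counts

# e.g. ten minutes at a steady 240 kt covers 40 nm; if the raw latitude
# count changed by 174,672 over that leg, the estimated resolution would
# be 40 / 174672, roughly 0.000229 nm per bit.
```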
Fifthly, we could see that the data clearly related to a test flight and included an octagon where the aircraft was flown for a fixed period on each of the eight points of the compass. This should result in a similar change in latitude and longitude, but the change in the recorded longitude number was about eleven times greater than the change in the recorded latitude number.
As with all things, it’s obvious when you know the answer. I was tempted to leave this until next week’s blog, and offer a prize for the person who guessed, but there are probably some people who made the system who know the answer so here it is. The data is in IEEE 754 floating point format.
On Floating Point
To quote Wikipedia verbatim: “The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point computation established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE)”. A quick mathematical bit to explain floating point. In decimal, we have already seen 0.000000168 as the coefficient. This can also be written as 1.68 × 10⁻⁷.
This is a more easily read presentation and shows that the decimal place has been shifted seven times (easier than counting the zeros). Combine this with positive and negative numbers and any number like this can be assembled from a sign, a mantissa (here 1.68) and an exponent (here −7): ±mantissa × 10^±exponent.
I was curious about why the 1.68 part is called the mantissa. This comes (says Wikipedia) from the Latin meaning “something added”: if you take the logarithm of the value, the power of 10 supplies the −7 (the exponent) and the bit you add on to the logarithm comes from the 1.68 part, log10(1.68) ≈ 0.225. Those old enough to have thumbed through tables of logarithms will understand.
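The log-table relationship is easy to demonstrate:

```python
import math

# The documented coefficient: its base-10 logarithm splits into the
# exponent part (-7) plus the "something added" from the 1.68 part.
log_value = math.log10(0.000000168)
print(log_value)   # -6.7747... = -7 + 0.2253
```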
Unsurprisingly we can do the same thing in binary as in decimal notation, and the result is the IEEE 754 single-precision format: a sign bit s, an 8-bit biased exponent e and a 23-bit fraction f, representing the value (−1)^s × 1.f × 2^(e−127).
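Python will do the binary normalisation for us, which shows our decimal coefficient in its base-2 clothing:

```python
import math

# 0.000000168 normalised in base 2 instead of base 10:
m, e = math.frexp(0.000000168)   # 0.000000168 == m * 2**e, with 0.5 <= m < 1
print(m * 2, e - 1)              # IEEE-style "1.f" form: about 1.4093 * 2**-23
```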
The result was that the recorded values (latitude = 3267763, longitude = 11225584) converted to position (43.434277, 5.2277756) which is, evidently, a helicopter pad at an airport in southern France.
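The generic decode of a 32-bit pattern as an IEEE 754 single can be sketched as below. To be clear, this is the standard bit-pattern interpretation, not a reproduction of this recorder's word layout, and the example value is just the latitude quoted above.

```python
import struct

def decode_float32(bits):
    """Interpret a raw 32-bit pattern as an IEEE 754 single-precision value."""
    return struct.unpack('>f', struct.pack('>I', bits))[0]

def encode_float32(value):
    """The reverse: the 32-bit pattern that value would be stored as."""
    return struct.unpack('>I', struct.pack('>f', value))[0]

bits = encode_float32(43.434277)
sign = bits >> 31                  # 0 for positive
exponent = (bits >> 23) & 0xFF     # biased: 132 -> 2**(132 - 127) = 2**5
fraction = bits & 0x7FFFFF         # the 23-bit mantissa fraction
print(sign, exponent, decode_float32(bits))
```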
The issue with using a floating point representation is that the resolution changes as the value changes. As the exponent increases, each step of the mantissa represents a larger change in value and (because the mantissa is recorded with finite resolution) the resolution of the recorded data gets coarser. So at Marignane above (oops, I gave the game away) the resolution in longitude is about 70cm. Take this same aircraft to Darwin in Australia and the resolution is 173cm – less than half the accuracy.
The resolution quoted in the documentation (1.9cm) was not achieved in Marignane (70cm) and would get worse the larger the latitude or longitude. For example, in Darwin the resolution would be degraded to 173cm, and at the worst-case airport I could find, Anadyr-Ugolnyye Kopi, it would be 397cm. It’s an interesting example of how a floating point representation of a parameter has a resolution that changes with its value.
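The principle can be sketched in a few lines, assuming a full 23-bit single-precision mantissa (the exact centimetre figures on a real aircraft depend on how the bits are packed into the recorded words):

```python
import math

def float32_ulp_degrees(angle_deg):
    """Gap in degrees between adjacent single-precision values at this
    angle, assuming a full 23-bit mantissa."""
    _, e = math.frexp(abs(angle_deg))   # angle = m * 2**e, 0.5 <= m < 1
    return 2.0 ** (e - 24)              # one mantissa step: 2**(e-1) * 2**-23

for lon in (5.2277756, 130.8):          # Marignane vs (roughly) Darwin
    print(lon, float32_ulp_degrees(lon))
```

The step size jumps by a factor of two every time the angle crosses a power of two, which is why the resolution degrades as you move further from the 0° meridian.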
After drafting this blog, I kept thinking about why we got almost exactly eleven times the change in binary value for longitude compared with latitude when the aircraft flew a circle. Here is the representation of latitude at the helicopter pad above: 43.434277 = +1.35732 × 2⁵.
And here is the longitude: 5.2277756 = +1.30694 × 2².
The exponents differ by three: 2⁵ for latitude against 2² for longitude, a factor of 2³ = 8. So for a similar change in angle, the mantissa will change eight times more for longitude than for latitude, as it is being multiplied by a smaller power of two.
At 43.43 degrees north, the lines of longitude are closer together by a factor of cos(43.43deg) ≈ 0.726, so a movement east produces a change in longitude which is 1/0.726 ≈ 1.38 times more than the change in latitude for the same movement north.
These two factors together multiply to about eleven, which matches the eleven times ratio I found from experiment.
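The final check is a one-liner:

```python
import math

exponent_factor = 2 ** (5 - 2)                       # latitude 2**5 vs longitude 2**2
convergence = 1 / math.cos(math.radians(43.434277))  # meridians converge with latitude
ratio = exponent_factor * convergence
print(round(ratio, 2))
```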