The POLARIS Suite
Dave Jesse on October 24, 2012

Before our slight air crash deviation in the last post, we were talking about what exactly Flight Data Software does. Today, we’re going to have a chat about how FDS’s particular Flight Data Software was developed, and where the name came from.

Where did POLARIS come from?

You may, or may not, know that FDS’s software is called POLARIS. Starting from an empty office, we recruited and trained a team of skilled software engineers to implement the ideas we had, which were born of many years working in the air safety industry.

During their time with us, the engineers have selected the most advanced software tools, methodologies and platforms – indeed in some areas, such as the interactive web graphing, we had to wait for technology to catch up with our ideas. And it was with the engineers’ expertise that the POLARIS project came into being.

From the outset we intended to divide the project into three parts:

  • transfer of data from the customer to us,
  • analysis of that data and finally,
  • presentation of the results on a website.

The first and last of these parts face the customer, so this is where our efforts were concentrated over the first two years.

No more complex than using a toaster…

Our data transfer process is no more complex than using a toaster. Users put the media into a computer, wait, and take it out, ready for reuse. Why make a simple task more complex? There’s a video here if you’d like to see it in action.

The marketing people now want me to say, “we can transfer data from all the different media types (except tapes, because you can’t buy tape decks now).”

Enough marketing?

Good, back to the facts.

Viewing the Data

We decided from the outset that software you install on a desktop computer was on its way out, so POLARIS was designed from the beginning as a web application. (The next blog post will be about our open source project; as a precursor to that, the web viewer is not included in the open source project simply because it’s huge and relies upon a very complex platform to operate.)

For the web site, our customers have been an invaluable source of ideas, encouragement and feedback, especially on usability, and we’d like to pay tribute to the contribution they have made to the project. Is it successful? One US customer emailed our support team about the new website with just one word – “Awesome”.

Data Analysis

All data is converted into engineering units, validated, analysed and tested against event thresholds in separate stages. This allows flexibility in building the hosted suite (of which more in a future post). I covered data conversion and analysis last time, so let’s look at testing for events.

Testing is really simple. If the maximum permitted speed to run an engine is 100%, then we calculate a KPV (Key Point Value) for the highest speed the engine reached – one KPV per flight. Each engine speed KPV is checked against the 100% limit, and if it exceeds the limit an event is raised.
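To make the idea concrete, here is a minimal Python sketch of that test. The function names, event name and sample values are illustrative only – they are not taken from the POLARIS code.

    # Minimal sketch of a KPV and event test - illustrative names, not POLARIS code.

    ENGINE_SPEED_LIMIT = 100.0  # maximum permitted engine speed, in percent

    def engine_speed_max_kpv(engine_speed_samples):
        """Derive one KPV per flight: the highest recorded engine speed."""
        return max(engine_speed_samples)

    def check_event(kpv_value, limit=ENGINE_SPEED_LIMIT):
        """Raise an event if the KPV exceeds the permitted limit."""
        if kpv_value > limit:
            return {'event': 'Engine Speed Exceedance', 'value': kpv_value, 'limit': limit}
        return None

    # One flight's recorded engine speeds (percent): the KPV is 101.3,
    # which exceeds the 100% limit, so an event is raised.
    flight_samples = [82.4, 95.1, 101.3, 97.8]
    print(check_event(engine_speed_max_kpv(flight_samples)))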

The beauty of this technique is that the distribution of engine speeds can be checked and analysed for variation by airport, aircraft and so on, to see if there are any places where engines are running close to their limits. In this way we can use the data in a preventative manner, rather than waiting for an event to arise which may require expensive engine maintenance.
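As a rough illustration of that kind of grouping (the data, field names and airport codes here are invented for the example, not taken from POLARIS):

    # Illustrative only: group engine-speed KPVs by airport to see where
    # flights routinely run close to the 100% limit.
    from collections import defaultdict
    from statistics import mean

    kpvs = [
        {'airport': 'AAA', 'eng_speed_max': 96.2},
        {'airport': 'AAA', 'eng_speed_max': 98.7},
        {'airport': 'BBB', 'eng_speed_max': 99.4},
        {'airport': 'BBB', 'eng_speed_max': 99.9},
    ]

    by_airport = defaultdict(list)
    for kpv in kpvs:
        by_airport[kpv['airport']].append(kpv['eng_speed_max'])

    for airport, values in by_airport.items():
        print(airport, 'mean %.1f%%, max %.1f%%' % (mean(values), max(values)))

    # A consistently high mean at one airport flags engines being worked near
    # their limit there, before any single flight triggers an event.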

This step has taken us into the realms of statistics, where we need to summarise the huge number of KPVs in a manner that allows decisions to be taken. A few years ago printed statistical charts were considered excellent, but today’s managers expect more: modern business intelligence tools allow interactive, web-based statistics with drill-down facilities.

Sorry, I can feel myself slipping into enthusiasm mode and must return to the task in hand! In the next post, as already mentioned, I’ll get back to the community concept and explain why we went open source.

TTFN
Dave