OBASHI Think

See things clearly

Formula 1 racing is one of my favourite sports.

 

From a technical standpoint, what is really interesting about F1 is how data-intensive the industry has become over the last ten years or so.

 

Today, the driver, engineers and car never stop communicating with one another.  Everything the driver does with the car on the track, in testing or during a race, is monitored and recorded using telemetry: a data-transfer system that allows measurements to be taken remotely and reported back to the team.

 

Lots of data is picked up from sensors all over the car - engine revs, oil pressure, fuel flow, exhaust temperature and so on - and transmitted to the team’s computers in the pits.
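
To make that a little more concrete, here is a rough sketch, in Python, of what a single telemetry sample might look like once it reaches the pit computers.  The channel names, units and values are all made up for illustration, not taken from any real system.

    from dataclasses import dataclass

    # A minimal, hypothetical telemetry sample: one reading from one sensor channel.
    @dataclass
    class TelemetrySample:
        channel: str       # e.g. "engine_revs", "oil_pressure", "exhaust_temp"
        timestamp_ms: int  # milliseconds since the start of the session
        value: float       # the reading, in the channel's native unit
        unit: str          # e.g. "rpm", "bar", "degC"

    # A few illustrative samples as they might arrive in the pits.
    samples = [
        TelemetrySample("engine_revs", 120_001, 11850.0, "rpm"),
        TelemetrySample("oil_pressure", 120_001, 5.2, "bar"),
        TelemetrySample("exhaust_temp", 120_002, 842.7, "degC"),
    ]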

 

Numerous channels record all the various parameters, and engineers then sit down and translate the data feeds into more meaningful information.  Their job is to identify the minute clues in that information that may lead to the driver saving a few precious fractions of a second per lap.

 

The teams have so much recorded telemetry data that, on test rigs, they can test the entire dynamics of the car, including simulations of how each racetrack affects the various individual parts.  The car can do ‘virtual’ laps without leaving the test centre.  All of which means that under various conditions - racetrack, weather, surface - the car can be set up to perform optimally, based on the data the teams have analysed.

 

The quality of the data is key - without high-quality data the teams would be working blindfolded.  So only the fastest, most accurate, highest-quality sensors are used to capture data.

 

The way the data is filtered prior to storage is also highly engineered, to maintain the quality of that data.  During filtering there is usually a trade-off between the amount of data stored, the resolution it is stored in and the speed of access to the data.  You could store every piece of data received, but is that really necessary?  If we’re reading the tyre temperature every thousandth of a second and it doesn’t change significantly for 3 seconds, why store the full 3,000 values?
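
As a very rough illustration of that kind of filtering, here is a simple ‘deadband’ style filter in Python.  The threshold and the tyre-temperature figures are invented for the example, not taken from any real team or system.

    def deadband_filter(readings, threshold):
        """Keep a reading only when it differs from the last stored value
        by more than `threshold`; otherwise discard it."""
        stored = []
        last_kept = None
        for t, value in readings:
            if last_kept is None or abs(value - last_kept) > threshold:
                stored.append((t, value))
                last_kept = value
        return stored

    # Tyre temperature sampled every millisecond for 3 seconds, barely changing.
    readings = [(t, 95.0 + 0.001 * (t % 2)) for t in range(3000)]

    kept = deadband_filter(readings, threshold=0.5)  # 0.5 deg C is an arbitrary choice here
    print(f"received {len(readings)} samples, stored {len(kept)}")  # stores just 1

The whole trade-off lives in that threshold: set it too coarse and you throw away genuine clues; set it too fine and you are back to storing everything.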

 

And a key question is: who decides what is significant for each sensor?

 

Is it the IT guy in charge of the system?  Is it a race engineer with years of experience?  Is it the regulators who need to determine if a severe crash was caused by driver error, or mechanical/technical failure?

 

If “significant” is interpreted wrongly, then valuable data is corrupted and lost, to the detriment of the team and, therefore, the business.

 

To make the best decision, three things need to be considered: where does the data come from, who uses the data, and for what purpose?

 

The manufacturer of each sensor provides a technical specification describing its accuracy.  If a temperature probe is only accurate to +/- 0.1 deg C then it is no use looking for a change of 0.05 deg C.  If that temperature reading is passed through an analogue-to-digital converter with an effective resolution of 0.2 deg C, then there’s no point relying on the manufacturer’s spec either.  Understanding where the data comes from, and what assets it passes through, is critical.
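
To put some numbers on that, here is a tiny Python sketch using the hypothetical figures above: the smallest change worth looking for is set by the weakest link in the chain, not by the best component.

    # Hypothetical figures: probe accuracy from the manufacturer's spec sheet,
    # effective resolution after the analogue-to-digital converter.
    probe_accuracy_degC = 0.1
    adc_resolution_degC = 0.2

    # The smallest temperature change worth looking for is limited by the
    # coarsest stage the reading passes through.
    smallest_meaningful_change = max(probe_accuracy_degC, adc_resolution_degC)
    print(smallest_meaningful_change)  # 0.2 deg C: the converter, not the probe, sets the limit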

 

Who uses the data, and for what purpose?  Identifying who uses the data (the stakeholders) can be a challenge.  There are usually some obvious groups of stakeholders.  In our F1 example, stakeholders could be race-side engineers, drivers, tyre manufacturers, aerodynamic designers back at the factory and so on.  Each of these groups may require data of differing levels of accuracy, because of the types of systems they use and into which the data flows.

 

However, there may well also be some non-obvious stakeholders.  How about the manufacturers of the re-fuelling rigs, or the makers of the wheel-nut guns?  Temperature information may play a part in their equipment design, and the data may be passed on to them, not from the teams themselves but from the FIA, which regulates the sport and sets the specifications for those assets.

 

Understanding how the data is used, even when cascaded through multiple systems or companies, can affect the required quality of the data and, ultimately, who should contribute to the costs. 
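
One way to picture this, purely as an illustration with invented stakeholders and numbers, is to say that the data only has to be captured once, so it must satisfy the most demanding consumer, and the cost of that quality could then be shared in proportion to how demanding each consumer is.

    # Hypothetical accuracy requirements (deg C) for each consumer of the
    # tyre-temperature data, including the non-obvious ones downstream.
    requirements = {
        "race_engineers": 0.5,
        "tyre_manufacturer": 0.2,
        "aero_designers": 1.0,
        "wheel_gun_maker_via_FIA": 0.5,
    }

    # The data is captured once, so it must meet the most demanding requirement.
    required_accuracy = min(requirements.values())
    print(required_accuracy)  # 0.2 deg C, driven by the tyre manufacturer in this sketch

    # One of many possible ways to split the cost: more demanding users pay more.
    total_cost = 100_000.0
    weights = {who: required_accuracy / need for who, need in requirements.items()}
    shares = {who: total_cost * w / sum(weights.values()) for who, w in weights.items()}

How the costs should really be shared is a business decision; the point of the sketch is simply that you cannot make that decision sensibly without a map of who uses the data and how accurate they need it to be.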

 

And this is applicable to all businesses, not just F1.

 

It is by understanding where data flows from and to, and by seeing all the assets it touches along the way, that we can make better decisions about data quality.

 

But it’s only when that understanding is given the context of who uses the data and why that sensible, credible and auditable decisions can be made about the data quality required by an organisation, and who should be paying for it.

 

 

 
