See things clearly
“More data flow disasters are inevitable – which means, because we as individuals are ever more reliant on flows of data to live our lives, some of us are going to share that pain.
So don’t get comfortable...strap yourselves in. It’s going to be a bumpy ride.”
A couple of years ago, in “IT and the Steam Boiler”, I argued that a lengthy period of serious data flow-related problems was only just beginning for government, business and society.
Why do I believe that to be true? Simply because, compared with well-governed physical flows such as steam, electricity or oil and gas, organisations lack clarity on how their data flows.
In today’s data-flow-reliant world, there are no standards for how people, process and technology operate together to enable flows of data. As complexity and connectivity increase, interruptions to data flow – caused by software glitches, hardware failures, or human error and misconduct – are becoming commonplace.
Proof? Consider some very recent ‘data flow disasters’ that have seriously affected people in various parts of the world.
When such events happen, we as individuals are on the sharp end, but they also have a serious financial and reputational impact on the businesses responsible for the data flow, and often on whole industries.
For example, in early December 2013 a critical telecoms system of the UK’s National Air Traffic Services (NATS) failed to switch properly to ‘daytime’ mode. As a consequence, data flow between air traffic controllers stopped, and 1,300 flights (8% of all air traffic in Europe) were “severely delayed”.
Estimating the overall cost to businesses of such an interruption to data flow is tricky, but in this case it will certainly have run to millions of pounds.
One airline, addressing industry regulators, stated publicly that the performance of NATS was “simply not good enough”.
In government, as connectivity increases both within and between organisations, it is often a challenge just getting data to flow as intended through complex interacting systems, some of which are decades old. The U.S.’s Healthcare.gov and the UK’s Universal Credit system demonstrate how difficult this can be to achieve securely, on time and on budget.
When government IT projects falter, with taxpayers’ money lost and political careers potentially at stake, all too often a blame game starts, heads roll and expensive lawsuits follow.
In the last few months California, Queensland and Massachusetts have all sued global IT companies over controversial IT projects. It remains to be seen whether any of the parties will count themselves ‘winners’ once the cases are over.
The examples above demonstrate how ‘data flow disasters’ can negatively affect us as individuals as we go about our daily lives, and how the tiniest error in software code can damage businesses, and not just in financial terms. Governments, meanwhile, continue to try to reconcile policy with technical capability, but all too often come up short.
Until clarity is created on how data flows through people, process and technology, our ‘bumpy ride’ is set to continue for a long time yet.