OBASHI Think

See things clearly

Three assertions that I hold to be true:

  1. The understanding of the flow of data is fundamental to an organisation’s financial well-being.
  2. Business resources, including people and IT assets, are either providers of data, consumers of data, or the conduits through which data flows.
  3. IT exists for one reason: to enable the flow of data between business assets.

I’ve explained these assertions in previous blogs, so I’ll not drill into them too much here. I mention them now because, in today’s world, the flow of data around a business is akin to blood flowing around the body. Data is the lifeblood of the modern business - without it, the business stops working, withers and dies.

 

Understanding which data flows support the various business processes in an organisation gives us insight into how the business works, and puts the assets that support and carry those flows into a business context.

 

In my latest blog, “What explains the IT problems in banks”, I touched on the current lack of standards for showing how data flows through a business, and the impact this is having on today’s organisations. Here is my take on how, with the rise of the IT Department, the documentation of data flow has changed over the last 40 years.

 

Back in the early 1970s there was no “IT” function as such, just centralised mainframe computers operated by boffins in white coats. Documentation at the time related to the operational running of the computers. Individual programs were documented within the code, with brief functional specifications detailing what they did.

 

With the widespread adoption of departmental minicomputers in the early 1980s, the “systems analyst” became the link between the business and the computer department. The need for documentation grew in tandem with the need to involve departments outside the computer department in discussions about the tasks the minicomputer would perform.

 

As the systems analyst role became established, formalised methods appeared for capturing and documenting how the business worked, and in the main these centred on how data flowed through the various processes within the business.

 

With the proliferation of minicomputers came the local area network, and computers started sharing data and working together. Data flow between assets became more established as it broke free from the constraints of a single physical mainframe.

 

Data Flow Diagrams, introduced in the late 1970s, were popularised in the 1980s through the systems analysis and design techniques used by the systems analyst. Anyone with a computer science background from the period will recognise, and will probably have been taught, the Structured Systems Analysis and Design Method (SSADM).

 

A core component of SSADM was data flow modelling: identifying, modelling and documenting how data flows through and between systems. SSADM was owned by the UK Government and became best practice within the industry.
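
For readers who never met these diagrams, a quick illustration may help. A data flow diagram describes a system in terms of external entities, processes, data stores and the data flows between them. The sketch below - plain Python with entirely hypothetical names, and not SSADM’s actual notation - shows the kind of structure such a diagram captures.

    # The four classic elements of a data flow diagram, captured as plain data.
    # All names here are hypothetical and purely illustrative.
    EXTERNAL_ENTITIES = ["Customer"]
    PROCESSES = ["Validate order", "Fulfil order"]
    DATA_STORES = ["Orders", "Stock"]

    # Each flow is (source, data, destination): what moves, and between which elements.
    DATA_FLOWS = [
        ("Customer",       "order details",      "Validate order"),
        ("Validate order", "validated order",    "Orders"),
        ("Orders",         "outstanding orders", "Fulfil order"),
        ("Stock",          "stock levels",       "Fulfil order"),
        ("Fulfil order",   "dispatch note",      "Customer"),
    ]

    # Even this crude model answers the analyst's basic questions: what data
    # enters the system, where it is transformed, where it rests, and what leaves.
    for source, data, destination in DATA_FLOWS:
        print(f"{source} --[{data}]--> {destination}")

The notation varied between methods, but the underlying questions were always about flow.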

 

Typically, SSADM practitioners used the notation devised by Gane & Sarson to draw data flow diagrams, but during the 1980s alternative data flow methods began to emerge.

 

Yourdon/DeMarco notation became popular because it was derived from a freehand “paper and pencil” technique. The Ward & Mellor method gained ground because of its ability to capture the real-time aspects of data flow. Programmers started using Jackson Structured Programming, which lent itself well to the procedural languages of the time.

 

In the late 1990s the emergence of Object Oriented Analysis and Design brought with it the Unified Modeling Language (UML). This became a favourite with programmers who were moving from procedural languages to object-oriented languages such as C++, and the popularity of SSADM began to wane. UML placed less emphasis on data flow modelling and more on data relationships, software object definition, class structures and user interactions with systems. UML, at its heart, was designed to allow programmers to build better and more maintainable applications.

 

From the 1980s to the present day, the shift has been from diagrams supporting systems analysis techniques that are closely coupled to the business, to diagrams that support programming techniques, which are closely coupled to the delivery and support of IT functionality.  

 

As IT has become a business function in its own right, the documentation it produces has evolved to focus on its own internal functions and delivery mechanisms, rather than on how the information technology itself supports people and business processes. 

 

While this evolution is understandable, and necessary in its own right, it hasn’t been helpful from a wider business perspective, accompanied as it was by a loss of clarity about how IT actually supports the business.

 

Given the complexity of modern business and IT, it’s hardly surprising that the methods used by the 1980s systems analysts have proved ill-suited to cope, and have fallen out of favour.

 

But depicting how data flows around a business is still fundamental to understanding how the business works, and how the assets of the business link the flow together.

 

This kind of documentation shows clearly how the business is impacted when a link in the chain of assets that support a dataflow breaks.  Modern dataflow diagrams, like the Dataflow Analysis View (DAV) in OBASHI, provide exactly that.
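
To make that concrete, here is a minimal sketch of the idea - written in Python, with entirely hypothetical asset and process names, and not a representation of the DAV itself. Each dataflow is recorded as an ordered chain of assets supporting a business process; when an asset fails, the impacted flows and processes drop out of a simple lookup.

    # A minimal, illustrative model: each dataflow is an ordered chain of assets
    # that carry data in support of a business process. All names are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Dataflow:
        name: str              # e.g. "Customer payment processing"
        business_process: str  # the business process this flow supports
        asset_chain: tuple     # the ordered assets the data passes through

    DATAFLOWS = [
        Dataflow("Customer payment processing", "Payments",
                 ("Branch terminal", "Payments app", "Core banking DB",
                  "Batch scheduler", "Settlement gateway")),
        Dataflow("Online balance enquiry", "Online banking",
                 ("Web portal", "API gateway", "Core banking DB")),
    ]

    def impacted_by(failed_asset, dataflows=DATAFLOWS):
        """Return the dataflows, and the business processes they support,
        that break when a single asset in the chain fails."""
        return [(f.name, f.business_process)
                for f in dataflows
                if failed_asset in f.asset_chain]

    # A failed batch scheduler breaks only the payments flow; a failed core
    # banking database breaks both business processes.
    print(impacted_by("Batch scheduler"))
    print(impacted_by("Core banking DB"))

The point is not the code but the discipline: once the chains of assets behind each dataflow are documented, the question “what stops working if this asset changes or fails?” becomes a simple lookup rather than guesswork - which is exactly the business context a dataflow view is there to provide.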

 

Using dataflow modelling techniques helps put IT in context, a business context, which makes IT’s decision making much more effective, especially when it comes to the management of change.

 

But without such techniques, it’s difficult for most businesses today to make the best decisions.

 

Here at OBASHI, we regularly mention “data flow disasters” in our blogs and tweets. The incidents we highlight mainly involve changes made to systems that interrupt the flow of data and cause a subsequent loss of service to customers.

 

The highest-profile recent data flow disaster is that at RBS/NatWest/Ulster Bank, where it appears that a software update failed, causing massive disruption to customers. But it’s not an isolated incident.

In the past couple of weeks alone:

  • Glitches left customers of both O2 and France Télécom unable to use their mobile phones for extended periods.
  • Salesforce.com has suffered two major outages, disrupting CRM systems all over the world.
  • A computer glitch at Germany's federal police headquarters deleted evidence in organised crime and terrorism cases.

 

Meanwhile, the problems in banking IT continue, with some customers of both Lloyds TSB and Halifax unable to access their online accounts for a morning.

 

Are there lessons to be learned from looking outside IT for inspiration on understanding and documenting flow? We could look to electrical engineers, who analyse the flow of electricity, or to air traffic controllers, who analyse the flow of planes in a highly regulated industry.

 

As discussed last time, one of the best places to look is the process industries, where successful product flow is critical to business success. Every system and asset change is reviewed and analysed, with flow at the heart of the decision making process.

 

Taking a process industry concept of how to document, model and analyse flow, and applying it to IT, might just be the answer to creating a standard for data flows.

 

After all, in the process world, a loss of product flow is seen as critical.

 

As modern businesses come to rely more and more on flows of data, a loss of data flow will become just as critical.

 

 

 
