
Managing Risk in IT

There’s an old adage that says, if it ain’t broke, don’t fix it

The IT debacle at RBS has highlighted the dependency large financial organisations (and other companies) have on their IT infrastructure. From what has leaked out into the press, the RBS issue relates to a piece of software called CA-7, used for mainframe batch job scheduling. When I first started in IT in 1987, CA-7 and its sister product CA-1 (used for tape management) were already legacy technology. From memory, I believe CA acquired the products from another company; both had archaic configuration processes and poor documentation. However, they did work and were reasonably reliable.

If it Ain’t Broke…
There’s an old adage that says, if it ain’t broke, don’t fix it; meaning if the software works, why change it? Any change inherently introduces risk; make no changes and you don’t introduce unnecessary risk. However, IT infrastructure doesn’t run forever. Change is necessary to accommodate new features and functionality and to cope with growth. Eventually, vendors stop supporting certain versions of software and hardware as they entice (and force) you to upgrade and purchase new products.

The hardware risk profile is pretty well understood by most organisations. As servers and storage, for instance, get older, the cost of support increases as parts become more difficult (and more expensive) to obtain. There’s a tipping point where maintenance costs outweigh the cost of an upgrade or a new purchase, at which point replacing the old hardware can be justified. There are also a number of other factors involved for hardware, including space, power and cooling costs, all of which help create a reasonably mature TCO model that can be used as part of a technology refresh.
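To make that tipping point concrete, here is a minimal sketch of the kind of TCO comparison involved. The figures and the 25% annual rise in maintenance cost are purely illustrative assumptions, not real vendor pricing; the point is only that there is a year in which cumulative spend on keeping old kit alive overtakes the cost of replacing it.

# Illustrative only: the cost figures below are assumptions, not real hardware pricing.
def tipping_point(old_maintenance, replacement_cost, new_maintenance, horizon=10):
    """Return the first year in which replacement is cheaper overall, or None."""
    keep_total = 0.0
    replace_total = replacement_cost
    for year in range(1, horizon + 1):
        keep_total += old_maintenance(year)   # rising support cost on ageing kit
        replace_total += new_maintenance      # flat support cost on new kit
        if keep_total > replace_total:
            return year
    return None

# Assumed profile: support on the old kit starts at 20k and rises 25% a year;
# a replacement costs 60k up front but only 5k a year to support.
print(tipping_point(lambda y: 20_000 * 1.25 ** (y - 1), 60_000, 5_000))  # -> year 3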

The Software Risk Profile
However, I’m not sure we can say the same for software upgrades.  Working out the risk profile for software is more complex.  Firstly, software has no equivalent of hardware parts replacement; software components don’t wear out.  Bugs do get discovered in code, however these usually get fixed with service packs and patches.

Going back to CA-7, this software originally ran in mainframe environments supporting perhaps hundreds or a few thousand batch jobs in an overnight schedule.  In an organisation like RBS, the software may be supporting tens if not hundreds of thousands of complex batch interactions.  These may have dependencies on platforms other than the mainframe, which make things even more complex.
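To give a feel for why the schedule itself becomes such a concentrated point of risk, here is a toy sketch of a dependency-driven overnight batch. The job names are hypothetical and this makes no claim about how CA-7 actually stores its definitions; it simply shows that each job can only be released once everything upstream of it has completed, so losing the scheduler or its input queue stalls the entire chain downstream of the break.

from graphlib import TopologicalSorter  # Python standard library, 3.9+

# Hypothetical job names; a real bank schedule would contain tens of thousands of these.
schedule = {
    "POST_TRANSACTIONS": set(),
    "UPDATE_BALANCES": {"POST_TRANSACTIONS"},
    "FEED_ATM_NETWORK": {"UPDATE_BALANCES"},        # dependency on a non-mainframe platform
    "CUSTOMER_STATEMENTS": {"UPDATE_BALANCES"},
    "REGULATORY_REPORTS": {"UPDATE_BALANCES", "CUSTOMER_STATEMENTS"},
}

# The scheduler's job is to release work in an order that respects every dependency.
# If the schedule state is lost mid-run, nothing downstream can safely be released
# until that state is reconstructed.
print(list(TopologicalSorter(schedule).static_order()))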

If a failed upgrade could result in such disastrous consequences, it’s easy to see that too much risk had been concentrated into a single piece of infrastructure software. When software becomes this complex, upgrades are likely to be deferred and deferred until they become critical, at which point a failed upgrade has massive consequences.

The risk of failure in this instance was clearly not understood. The upgrade took place midweek, on a system that appears to handle the account updates for every customer across three banks. With such a high risk profile, this change should have been scheduled for a quiet period such as a bank holiday, and the change and subsequent backout should have been covered by senior staff; The Register article implies junior staff were involved.

Finally, questions have to be asked as to how a junior member of staff could delete the entire input queue updating millions of customer records, then requiring “manual” input. That statement either makes no sense or demonstrates huge flaws in RBS’ batch structure.

The Architect’s View
Software and application upgrades are complex, and in large organisations that complexity can be one risk too many. The desire to centralise in order to reduce costs shouldn’t be pursued at the expense of introducing excessive risk. RBS (and probably many other financial organisations) need to reflect on their system designs and look to mitigate these kinds of scenarios. From my own experience, I know we could see another one of these incidents happen at any time.

Read the original blog entry...

More Stories By Chris Evans

With over 23 years’ experience in the IT industry, Chris M Evans has spent the majority of his career consulting to large organisations on storage issues. Starting on the mainframe, his knowledge extends to Open Systems and Windows environments, looking not purely at the technology but also at the wider business implications of implementing it. Chris runs his own consultancy with a number of like-minded colleagues. He maintains blogs at www.thestoragearchitect.com and www.thevirtualisationarchitect.com as well as writing for online publications.
