How to handle reliability of (software) systems that fulfill critical roles in society?
Author(s) - Aarnout Brombacher
Publication year - 2013
Publication title - Quality and Reliability Engineering International
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.913
H-Index - 62
eISSN - 1099-1638
pISSN - 0748-8017
DOI - 10.1002/qre.1532
In the last 2 weeks, several banking systems in the Netherlands suffered from problems that seriously affected Dutch society. When customers looked into their bank accounts using online banking systems, invalid entries appeared, valid entries had been removed, and balance totals were wrong; people trying to pay in the supermarket or at other locations got the message 'invalid balance' in spite of the fact that there was more than enough money in their account. Ordering products online was often impossible as well, either because of the above problems or simply because the online payment modules were unavailable. At one point, one of the cabinet ministers suggested that every citizen should keep adequate cash available 'just in case'. The problem lasted for about 3 days, during which the banks offered apologies and several, not always consistent, explanations. After a while, the messages became more consistent, and it appeared that the banks had been suffering from massive 'Denial of Service' attacks by, as yet unidentified, hackers.

The event is very interesting for professionals in the field of Quality and Reliability. First of all, it demonstrates how strongly modern society depends on modern high-tech (ICT) systems. Although final calculations have not been made, the immediate material damage is certainly over €10m and probably (much) more. Second, it shows how difficult such problems are to predict and/or handle. Although these banks have entire departments working on the analysis and prevention of problems like this, the problems still happen.

This is probably the reason that companies are currently also looking for unconventional approaches to the analysis and prevention of such problems. It is currently not uncommon for 'experienced hackers' to be hired as consultants to analyze the vulnerability of critical systems. Although the use of these consultants seems to be quite effective, I wonder whether this is the right way to proceed. Let us, for the sake of argument, assume that this approach were also used in the protection of hardware systems. It would imply that people who have proven experience in damaging third-party systems are hired as consultants to prevent exactly what they have carried out before. Ethically, this seems similar to 'rewarding the offender'. Unfortunately, when dealing with software systems, it seems, at this moment, to be one of the very few effective strategies.

Personally, I would be more in favor of a combination of two other strategies: (i) designing systems that are, structurally, far more robust against (deliberate) misuse by others, even if this would mean that functionality is reduced, and (ii) developing better methods and models that can analyze the vulnerability of systems under a wide range of (adverse) operational conditions. This probably means a lot of work for us as professionals in this field, but, personally, I would prefer it over hiring 'experienced hackers'.
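As a purely hypothetical illustration of strategy (i), not drawn from the editorial itself, the Python sketch below shows a simple per-client token-bucket rate limiter: the service deliberately gives up some functionality (legitimate bursts beyond the limit are rejected) in exchange for structural robustness against flooding-style misuse such as Denial of Service traffic. The class and parameter names are invented for this example.

```python
import time


class TokenBucket:
    """Per-client token bucket: each request consumes one token; tokens
    refill at a fixed rate, so sustained floods are rejected while
    normal traffic passes."""

    def __init__(self, rate_per_s: float, capacity: float):
        self.rate = rate_per_s      # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # rejected: robustness is preferred over functionality


# Usage sketch: one bucket per client, so a flood from a single source is
# throttled to rate_per_s requests per second once its burst is exhausted.
buckets: dict[str, TokenBucket] = {}


def handle_request(client_id: str) -> str:
    bucket = buckets.setdefault(client_id,
                                TokenBucket(rate_per_s=5.0, capacity=20.0))
    return "processed" if bucket.allow() else "rejected (rate limit)"
```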