The CIA of information security doesn’t cut it anymore. We have relied on Confidentiality-Integrity-Availability for so long that even ‘managers’ in the most stale of government departments now by and large know the concepts. Which may tell you that, by that very fact, the system of thought has probably calcified into ineffectiveness.
At least we should reconsider where we are, and where we’d want to go.
Let’s tackle Confidentiality first. And maybe foremost. Because it is here that we see the clearest reflection of our deepened understanding of the value of information not being in line with the treatment(s) that the information (data!) actually gets. Which is a cumbersome way of saying that the value estimation of data, and the control over that data, is a mess.
Add in the lack of suitable (!) tools. User/Group/World, for the few among you who would still know what that was about, is clearly too simple (already by being one-dimensional), but any mesh of access rights as can be implemented today turns into a mess as well. Access blocks? Access based on (legitimate, but how to verify?) value(s), points in time, intended and actually enabled use, non-loss copyability, etc.?
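To make that one-dimensionality concrete, here is a minimal sketch; all names, attributes, and policies are invented for illustration, not any particular product’s API. It contrasts the classic User/Group/World bit check with an attribute-based check that folds in data value, time, and intended use:

```python
# Illustrative sketch only; every attribute and policy below is invented.

def ugw_allows(mode_bits, is_owner, in_group, want="r"):
    """Classic one-dimensional User/Group/World check: nine permission bits."""
    shift = {"r": 2, "w": 1, "x": 0}[want]
    if is_owner:
        cls = (mode_bits >> 6) & 0b111
    elif in_group:
        cls = (mode_bits >> 3) & 0b111
    else:
        cls = mode_bits & 0b111
    return bool(cls & (1 << shift))

def abac_allows(subject, resource, action, context):
    """Attribute-based check: the decision can depend on data value,
    time of day, and the intended action, not just identity class."""
    if resource.get("value") == "high" and not subject.get("cleared", False):
        return False
    if context["hour"] not in resource.get("allowed_hours", range(24)):
        return False
    return action in subject.get("allowed_actions", set())

# 0o640: owner rw-, group r--, world ---
assert ugw_allows(0o640, is_owner=False, in_group=True, want="r")
assert not ugw_allows(0o640, is_owner=False, in_group=False, want="r")

analyst = {"cleared": True, "allowed_actions": {"read"}}
payroll = {"value": "high", "allowed_hours": range(9, 17)}
assert abac_allows(analyst, payroll, "read", {"hour": 10})
assert not abac_allows(analyst, payroll, "read", {"hour": 3})
```

Even this toy shows why real-world meshes grow messy: every attribute added to the richer check multiplies the policy space that someone has to get, and keep, right.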
But what is the solution ..?
Not just a redo of the usual thing. If only it were normal: establish the value of ‘confidentiality’, then label the data with an H/M/L (or 1-2-3) rating, and then secure it accordingly. Which is wrong in so many, many ways. Vincent Damen’s thesis already had the Bow Tie effect as a side-effect error: for any data (set) or system, all H threats to Confidentiality are put into one basket, and all sorts of mostly technical and ‘organisational’ (i.e., empty paperwork) protection measures are put in place for those, without discriminating which measures are meant for which H-risk data, let alone for which H threats. Put all the H-on-Confidentiality data/systems into the same category, and the picture blurs further.
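The information loss from that bucketing can be shown in a few lines; the data sets and threats below are invented for illustration:

```python
# Invented example data: which threats apply to which data set.
datasets = {
    "payroll":       {"rating": "H", "threats": {"insider leak", "ransomware"}},
    "trade_secrets": {"rating": "H", "threats": {"espionage"}},
    "newsletter":    {"rating": "L", "threats": {"defacement"}},
}

# After H/M/L labelling, the control set only sees the rating:
buckets = {}
for name, d in datasets.items():
    buckets.setdefault(d["rating"], set()).update(d["threats"])

# All 'H' threats now sit in one basket; which measure was meant for
# which threat on which data set can no longer be reconstructed.
assert buckets["H"] == {"insider leak", "ransomware", "espionage"}
```

The mapping from (data set, threat) to measure is not refined by the rating; it is destroyed by it, which is exactly the Bow Tie effect described above.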
But we already glossed over the distinction between data value (and the time variance of that; information value seems to be perishable, and don’t forget the non-loss copyability!), the enormously differing value for various holders and recipients, transaction costs everywhere, threat levels (don’t get me started on the universe of potential threats, which is continuous in all dimensions so we’ll never be sure we are complete), inherent vulnerabilities (hmmm, this comes into intellectually close proximity to value…), the degree and quality of implementation of measures, and their effectiveness within the mesh of threats, mesh of measures, and mesh of vulnerabilities, where vulnerable stuff (e.g., humans) may also be part of your threats, and of your measures. And sweep in the feed-forward but in particular also the feedback loops, time-variant and all.
My point here: data and systems have different values for the various creators, holders, transporters, recipients/users, and others involved, varying in time, too, and varying per combination of creators, holders, transporters, users, and others. One data point may be innocuous today, but tomorrow it may be a privacy nightmare even to become aware that it’s out there somewhere where it could be abused. The same data point may have too little value to you to protect it much, today. But for an ‘attacker’, it may have more value (than you (can) realise or get out of it) than the cost of the attack effort, so it may be worthwhile to try to get at it. But still, you don’t want the data point to get breached. It’s just that you have no clue about the value, protection levels, and effectiveness, everywhere, all the time, for all your data points (and combinations of them).
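That value asymmetry can be put in toy numbers (all invented purely for illustration) to show why a ‘rational’ defence budget can still invite the attack:

```python
# All numbers invented purely for illustration.
defender_value = 10    # what the data point is worth to you, today
attack_cost    = 50    # the attacker's effort to obtain it
attacker_value = 200   # what the attacker can get out of it

# You won't rationally spend more on protection than the data is
# worth to *you*, so your defence budget stays small...
rational_defence_budget = defender_value
assert rational_defence_budget < attack_cost

# ...yet the attack still pays, because the *attacker's* valuation,
# not yours, drives the decision to try.
assert attacker_value - attack_cost > 0
```

Now multiply this little sum over every data point, every holder, and every moment in time, with none of the numbers actually knowable, and the paragraph’s conclusion follows.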
Again, we need a re-think.
This goes in similar ways for Integrity, too. With an angle. Because ‘technical’ integrity can be controlled quite well, by means of checksums and error-correcting codes, covering for glitches deep in the technology. From a technical perspective, integrity can be guaranteed all but completely.
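As a checksum example of that technical layer, using Python’s standard zlib CRC-32 (the payload is invented): it reliably catches a flipped bit, while saying nothing about whether the content is meaningfully right.

```python
import zlib

payload = b"account=42;balance=1000"
stored_crc = zlib.crc32(payload)

# A single flipped bit, the 'glitch deep in the technology':
corrupted = bytearray(payload)
corrupted[5] ^= 0x01

assert zlib.crc32(payload) == stored_crc            # the intact copy verifies
assert zlib.crc32(bytes(corrupted)) != stored_crc   # the corruption is caught
```

Note that the CRC would just as happily verify a balance that was *entered* wrongly in the first place; that failure mode belongs to the layers discussed next.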
But already at the software level, so many things can go wrong. The specs can be wrong, leading to the wrong operations on certain data, leading to the wrong outcomes vis-à-vis the intended ‘logic’ and the subsequent use as ‘information’. The data can be entered wrongly, even with the best intentions, when fitting some fuzzy data format into the predefined, harder format in the system (software). Allowing for the hard-required flexibility will inevitably lead to even more flexibility requirements being found out later, wrecking point and referential integrity and, again, delivering output that differs from the ‘correct’ output. At, say, above-OSI levels, things get worse, much, much worse, because human interpretation is involved as sole processor and integrity ‘guardian’. ’nuff said about things being worse.
And what would you want to do about all this? For software, we have a most partial measure in Testing, and in Change Management, but these apply to actual errors only, having assumed a ‘correct’ starting point, quod non. And the processes are so wrought with error and omission at all abstraction levels that their usefulness is limited; the understatement for this year, and next year, in one. Total data integrity checks? Hah, be my guest. Are you the fool who thinks that referential integrity checks in relational databases will solve it all? Are you the fool who doesn’t understand that much data lives outside databases, abstractly embedded in software code, in software parameters, in unstructured data, in Big Data? Etcetera.
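To underline the referential-integrity point, a sketch with Python’s built-in sqlite3 (tables and values invented): every foreign key resolves, yet the data is still wrong in meaning.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, "
            "dept_id INTEGER REFERENCES dept(id))")
con.executemany("INSERT INTO dept VALUES (?, ?)",
                [(1, "Finance"), (2, "HR")])
# Semantically wrong (this employee actually works in HR),
# yet referentially perfectly valid:
con.execute("INSERT INTO emp VALUES (100, 1)")
con.commit()

# The referential check finds nothing to complain about:
orphans = con.execute(
    "SELECT e.id FROM emp e LEFT JOIN dept d ON e.dept_id = d.id "
    "WHERE d.id IS NULL").fetchall()
assert orphans == []   # all clean, as far as the database can tell
```

And this only covers data that is in a database at all; the parameters, code, and unstructured data mentioned above never even reach such a checker.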
Here, too, we need a re-think of what we want out of ‘securing the integrity’. Integrity of Meaning ..? I had a blog post on the information pyramid already, below; much to be worked on there, and much to be established in the realm of science altogether.
[No, no pretense I’ll write it all out in a magazine article sometime soon, though I’ll give it a shot. Better to fail knowingly and brilliantly than not to try at all – and get shot down all the same.]
We’ll need a better understanding of what we mean by Integrity at all abstraction levels, and build an OSI-model for integrity. Any takers?
Finally (?) then, Availability. The odd one out, as we can do so much at the technical level, and short-term (!) availability is so much the opposite of Confidentiality. The picture begins to blur again at the software layers. Is a system that won’t start, available? Depends on who you ask. Is a system that starts, but produces erroneous results due to software errors (either in executable code, or in code in memory), available? Yes again, in a technical sense. Is a system that works all fine, but doesn’t have the functionality that a user would need out of her/his tool (sic) for proper discharge of function, available? At what level would we regard end user operators as part of the ‘system’ that produces reports to higher-ups, and what if those end users aren’t there due to some flu or traffic breakdown?
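The layered yes/no of ‘is it available?’ can be made explicit in a small sketch; the system state and layer names below are invented for illustration:

```python
# Invented system state: it starts, but computes wrongly and
# isn't the tool the user actually needs.
system = {
    "process_running": True,    # technical layer: it starts
    "results_correct": False,   # software layer: output is wrong
    "fits_user_task":  False,   # functional layer: wrong tool for the job
}

LAYER_CHECKS = {
    "technical":  ["process_running"],
    "software":   ["process_running", "results_correct"],
    "functional": ["process_running", "results_correct", "fits_user_task"],
}

def available(state, layer):
    """'Available' means all checks up to and including this layer pass."""
    return all(state[check] for check in LAYER_CHECKS[layer])

assert available(system, "technical")       # 'yes', it runs
assert not available(system, "software")    # 'no', one layer up
assert not available(system, "functional")  # and 'no' for the user
```

The same system state yields different answers depending on which abstraction layer you ask from, which is exactly the ‘depends on who you ask’ of the paragraph above.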
So, we have a mix-up of abstraction layers throughout every element and aspect we pick, and all elements and aspects play out together. Looks despairingly like the real world…
More will follow, here and elsewhere. If you have any good pointers, I’m glad to hear from you!