For once, Bruce is not at the right end. Maybe not at the opposite end either, but still.
As per this here blog post of his, a repeat of one of his, and others', recurring threads.
The argument: we make things like security too difficult for users, and hence (?) we shouldn't try to change them towards secure behaviour.
The contra: 'Guns kill people', or was it that the men (mostly) firing the guns kill people? Plus the many toddlers shooting their next of kin because, being at the approximate maturity of the Original gun pwner, they have no clue.
The contra, too, and much more to the point when it comes to 'information' 'security': we should make cars run at a maximum of 5Mph, since 'users' are waaaay too stupid to drive carefully.
Just don't mention that 'security' is a quality, not an absolute pass-or-fail thing, and that 'information' could not be more vague. [Except 'cyber', which is so vacated of any meaning that it's a black hole.]

And don't mention that we still seem to let cars be used by any moron who once, possibly literally decades ago before 'chips' were invented, passed some formal test. The American idea of the test comes very, dangerously, close to … was it the Belgian? system where one could pick up one's driver's license at the post office. Able, and allowed, to buy cars that drive not just 5 but 250Mph, on busy roads, without protection against using socmed mid-traffic…

One thing could be to introduce Finnish-style booking for unsafe behaviour (if caught, not when, as per the next paragraph [think that through…]), and/or huge fines for the producers of bad equipment (hw/sw), comparable to fines on car makers, or outright laws to build airbags in, etc.
And then, if we'd design 'secure' systems, e.g., the Apple way, we'd end up with even worse Shallows-sheeple, with so much less clue than before… And all in the hands of whom? Even in ultra-liberal countries the suggestion would be either Big Corp or Big Gov't, both options being Big Brother, literally, in an atrocious dystopia of humanity.
So, you want safe systems? You get the loss of humanity before actual safety.
[Yes, I get the Humans Are The Cause Of Much Infosec Failure thing (including Human Flexibility Can (still!) Solve More Than Machines Can, Against System (!) Malfunction), but I am also completely in favour of both the Humans Must Through Tech Be Completely Shielded From Being Able To Do Anything Wrong and the Humans Should Retain All Freedom To Act Responsibly solutions.]