_ __ | |__ | | ___ __ _
| '_ \| '_ \| |/ _ \ / _` |
| |_) | | | | | (_) | (_| |
| .__/|_| |_|_|\___/ \__, |
|_| ...2017-07-27 |___/
They are lies, or at least, any freshly developed system is. My view is that a
great amount of care should be taken in designing and implementing a system, if
one is to have any reason to hope it will some day become secure. But careful design
and implementation alone is not enough to achieve security, it is only the very
important foundation on which security can be achieved. Security is not only an
inherent property of a system, it is an emergent property. This is true of all
systems, software systems and legal systems alike. Any orchestration of
events has an inherent capacity for unknown state. In short: your code will be
buggy, your law will fall short and in time, the floor-plan of the house that
you helped design will reveal itself not to be absolutely perfect for that one
unforeseen use. So, and I may be bold in saying this: systems will have
bugs; new systems will have many, re-implementations will have fewer, and old
systems (if they are maintained) will have the fewest. That is because, as a system
is used, examined and probed for exploits, more and more of those
unknown states become known and, unless the flaw is inherent to the design, fixed.
I recently stated the obvious, in response to a discussion about a recent
IT security scandal in Sweden (Denmark doesn't have IT security scandals, we
simply don't care enough about our critical infrastructure to call it a scandal).
My comment was: "I have no trust that our politicians can keep our data safe.
We punish those who reveal sloth, the hackers, instead of amending our bugs
and sending them small presents for their help." - This was, only in part, meant
as a provocation. I know of several instances, both domestic and foreign, where
well-meaning people have been ignored by companies, when they point out security
issues. Even worse, I know of instances where attempts have been made to silence
those people, and worse yet, instances where those people have been prosecuted.
I was called by the radio station, and they asked if I would come on the show
and elaborate on my point, which I politely declined, not because I do not think
that my point is valid, but because, even though this is written after no small
amount of consideration, it becomes ranting, and I'd end up sounding (even more)
like a complete idiot if I attempted to convey my point before a live audience.
But my comment was read aloud, and the host remarked on it being an interesting
point of view, so it might be, in some small way, that I did contribute to the
public debate, maybe I sparked an interest in looking at software systems, not
as something that has to be perfect in the first try, but something that can be
improved upon in an iterative manner, to let stronger security emerge.
Domestic hacking should not only go unpunished, it should be encouraged
and rewarded. Anyone who finds and documents a security hole should be paid by
the company responsible for the software, and the amount paid should increase for
every day after initial publication that the bug can be exploited.
If an exploit exists 24 hours after it was reported, its existence may be made
public. The company is responsible for any damage incurred by the exploit.
Any citizen who has their information compromised is entitled to full coverage
of any damage, meaning, for example, that if your personal identification number,
phone number, bank account, or even home address is made publicly available, the
company is responsible for providing you with new ones. If you do not
wish to live at an address to which you can now be tracked, they had better find you
a new home of the same quality as the one you have been forced to leave.
- This part is overly ranty, and should be taken with several grains of salt.
It is not my intention to suggest that the developers of these systems are
incompetent, nor that they do not care about their craft. I believe
that the main problem here is the way management, legal and PR departments
view these "problems". Instead of trying (and failing) hard to look
perfect (more often than not, ending up looking like giant, arrogant
jackasses), companies as a whole need to accept that their software will
have bugs, sometimes really bad ones, and they need to be graceful in facing
them and serious in amending them.
The only way to avoid security-through-obscurity is to eliminate the possibility
for obscurity. Architectural diagrams, source code, documentation, it must all
be made public. The idea is not that more eyes will help eliminate bugs, we've
seen bugs hiding in plain sight in OpenSSL for years. But it will force a school
of thought where nobody tries to slack off. It's too easy to think "nobody would
ever figure out that this could happen, because they don't know how the system
is put together".
- This would likely remove, or drastically reduce, the need for the second step.
I am not suggesting here that open-source solves security problems, simply
that the knowledge that the finer details of one's work will be public makes
for a difference in approach to design and implementation. In cryptography,
how many proprietary, undisclosed algorithms are widely accepted as correct?
End of rant