WASHINGTON – Today, the White House Office of the National Cyber Director (ONCD) released a report calling on the technical community to proactively reduce the attack surface in cyberspace. ONCD makes the case that technology manufacturers can prevent entire classes of vulnerabilities from entering the digital ecosystem by adopting memory safe programming languages.
Let me be clear: there is no such guarantee. Safety cannot be assured beyond the level of the individual piece of code.
The premise here is that if something is "memory safe" then an attacker cannot cause an overflow or underflow that results in arbitrary (conveniently chosen by the attacker) code being executed. It sounds like the basic principle is "well, just protect the environment so that can't happen when the code is written, and all is well."
The problem is that the premise is false, but not for the reason you think it is.
All software is written by people (or machines at the direction of people). Humans, and therefore the programs written by them, contain errors. We call them "bugs" in the computer vernacular.
If I write a piece of software and it contains a memory-unsafe error, then by exploiting that error an attacker can potentially cause that software to execute something I did not intend but the attacker does. If that program has the capacity to escalate privilege it is especially bad, because the compromise can extend beyond the software in question to anything else running on that particular computer. In a "cloud" environment this is especially nasty: other people's software, unrelated to me, may be running on the same machine, which means that, in theory, my bad code can cause your security to be compromised even though you are uninvolved with, or even unaware of, me.
It sounds like the answer is to use a programming language that makes this impossible. And, if you do not think about it for more than 30 seconds, that sounds smart.
But -- what if the language software itself contains a memory-unsafe bug?
Then every piece of software written in that language and compiled by that flawed toolchain can be compromised, now or in the future. Much worse, because the scope of the potential compromise now includes every piece of software written in that language, the impact of such a flaw is orders of magnitude larger.
Now you could say "well, but we'll make sure our top men write these compilers." Ok, fine and well enough. If the average language compiler is used to produce, say, ten thousand programs that are then executed, are those "top men" ten thousand times better than the average programmer who is hired to do work where secure programming is important?
I doubt it very much, simply because I don't believe the difference between "a good programmer" and the best there is ever reaches that multiplication factor.
When it comes to common languages, incidentally, that 10,000 multiplier is laughably low.
Second, actually enforcing these features requires CPU cycles. Said tests must be performed every time, whether the programmer is skilled and does things properly or not. This overhead is not trivial; it both makes the software larger and costs more CPU cycles to execute.
So the second question becomes this: Would you like to take said overhead once, when the software is written, to make sure it in fact does not result in memory constraint violations, or would you like to take it every single time the software is used?
Nothing is free; paying once for quality work is always cheaper than paying every time you use something to ensure that someone didn't do a stupid thing. Further, those integrity "assurances" presume the compiler and language itself contain no flaws, as discussed above, and that you cannot actually obtain assurance of, any more than you can assure that any individual piece of software is free of flaws.
And finally, if you're wrong exactly how many programs would you like to have compromised at once?
Why do I bring this up?
Because some of the most ridiculous problems in this regard have been in the microcode for CPUs. Are not those programmers "top men"? And by the way, where is the accountability for the flaws in said microcode, and for the quite-significant performance impact, an utterly enormous loss of value, every time an update requirement is discovered that effectively ruins the performance of CPUs sold to both individuals and businesses?
What was that you said about "memory safe" and "accountability" again?
And why is the discussion not instead about the quality of work in coding applications that have security impacts -- such as systems holding financial, business and medical data?
If your police force cannot manage to shoot straight, and thus, when attempting to apprehend a bank robber, shoots innocent civilians in the general area, the answer to the problem is not to issue bullet-resistant vests to every citizen in town.