Android runs Java applications on Dalvik, a virtual machine (register-based, and not technically a standard JVM, though it fills the same role) which runs on a Linux kernel. As it's open source, Dalvik was straightforward to port to QNX, the sophisticated embedded Unix that RIM acquired in 2010, and which powered its PlayBook tablet (released the following year).
RIM promised that this Android Player would also appear on its first QNX-based phones. But not all apps could run, and there was an insurmountable stumbling block in the way. Android apps may also call native extensions, which are ARM Linux binary libraries. And there was no way of running these on the phones - so the apps couldn’t run either.
Well, that's not quite true... but let's keep going.
While Linux and QNX are “Unix like”, that hardly helped. The Linux extensions looked like “binary blobs”, so the RIM engineers couldn’t be sure what was code and what was data. Which meant they couldn’t inspect and patch the Linux libraries on the fly, something called opcode substitution. It also ruled out pre-processing.
"We had to let the SWIs trigger live and discern whether it came from a Linux binary or a QNX binary at runtime, without sacrificing performance of QNX code," a source familiar with the work told us....
Now this sounds like a breakthrough. But it's not.
In fact exactly the same techniques were used when WINE was first introduced.
What is Wine? A compatibility layer that runs Windows applications on various Unix machines -- the name is a recursive acronym for "Wine Is Not an Emulator" -- and it dates to the early 1990s.
In short it did exactly what BlackBerry does -- it trapped the runtime calls from Windows applications and remapped them to the X Window System, a completely different windowing environment.
It worked too, for a large number of applications. Not all of them, to be sure, but many.
So despite what The Register claims, there was no real new ground broken here. This technique -- intercepting a trap at execution time, inspecting it, and determining in real time whether it needs to be mapped to some other native function -- is not a new concept. It dates back a full 20 years, and as one of the people involved in the industry at the time I chuckle both at the debates over whether such a thing is "legitimate" (Microsoft tried to come after the Wabi and Wine people -- and failed) and at whether it will "work" (of course it will, and does.)
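The trick the quote above and the Wine history both describe can be sketched in a few lines. This is a toy model, not BlackBerry's (or Wine's) actual implementation: each trapped call carries the origin of the calling binary, and the dispatcher either passes it straight through or remaps it to a native equivalent. The syscall numbers and QNX service names here are made up purely for illustration.

```python
# Toy model of runtime trap dispatch: when a software interrupt (SWI)
# fires, decide at runtime whether the caller is a foreign (Linux)
# binary or native (QNX) code, and route accordingly. A real kernel
# would inspect the faulting address against the process memory map;
# here we just pass the origin in explicitly.

LINUX_TO_QNX = {
    # Hypothetical mapping of Linux syscall numbers to QNX services.
    5: "qnx_open",
    6: "qnx_close",
}

def handle_swi(origin_abi, syscall_no):
    """Dispatch one trapped syscall based on the caller's origin."""
    if origin_abi == "qnx":
        # Native code: pass straight through, no translation overhead.
        return ("native", syscall_no)
    if origin_abi == "linux":
        # Foreign binary: remap to the equivalent native service.
        target = LINUX_TO_QNX.get(syscall_no)
        if target is None:
            raise OSError(f"unsupported Linux syscall {syscall_no}")
        return ("translated", target)
    raise ValueError(f"unknown ABI {origin_abi!r}")
```

The key property -- and the hard engineering part the source alludes to -- is that the "qnx" path must cost essentially nothing, since native code takes it on every call.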
But now let's add to this, because there is a new "debate" brewing with regard to Google and Android -- that of ART vs. Dalvik.
Back up to the beginning of Android. One of the criticisms that I and others had is that Java is, quite frankly, a piece of crap when it comes to both performance and capability. It's fine for some things but for others it is simply inappropriate. The overhead is material, even with the implementation of an object cache (which Dalvik has.)
Eventually developers got *****ed off at the limitations and Google was basically forced to allow the inclusion of native embedded code in applications (via the NDK). This sort of resolved the performance problems, but it led to a new one -- now an application was tied to a particular processor family!
That was the exact situation Google had intended to avoid up front when Android was released.
When this first happened it wasn't that big of a deal however because virtually every mobile device used an ARM processor. But innovation doesn't care about your particular little political and business game, and eventually it overtakes you. In this case that overtaking has come from Intel, which is starting to make inroads in mobile device CPUs.
That spelled disaster for Google and Android, because a consumer might buy a device running Android that had an Intel chip in it and then discover that a huge percentage of applications would not run on it as they included native ARM machine code!
That problem can't be reasonably resolved with trap-and-rewrite as the instruction sets are not compatible.
The immediate fix came in the form of Google changing the development toolkit to compile both native Intel and ARM code into the same package. But that sucks too for a whole host of reasons -- who says Intel will be the only new entrant in this race down the road? -- and in addition it meant that until and unless all developers using native code recompiled and resubmitted their apps, those apps wouldn't run on Intel-CPU devices.
Google has now introduced (in "beta" in KitKat) what it calls "ART." ART is billed as a performance improvement, but IMHO that's a beneficial side effect. Essentially ART puts the equivalent of a compiler and linker on the device, then compiles the application's portable bytecode -- which amounts to source code from the device's perspective -- into device-specific object code at installation time.
This does resolve the problem in that now you are fully device-independent, provided you follow the rules (e.g. integer and pointer declarations being size-independent, paying attention to endian issues and similar.) It is effectively the same thing as putting a Unix machine's "cc" and "ld" programs on the device, then causing them to be run when an application is installed.
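The cc/ld analogy above can be made concrete with a crude sketch using Python's own toolchain. This is an analogy only, not how ART works internally: the "app" ships as portable source (standing in for DEX bytecode), and "installation" compiles it once into a local, machine-ready artifact that later runs without any compiler involved.

```python
# Crude analogy for ART's install-time step: ship something portable,
# compile it once per device at install, then run the compiled form.
import marshal

PORTABLE_APP = "def main():\n    return 6 * 7\n"  # shipped with the "app"

def install(source):
    # Runs once, at install time -- the moral equivalent of running
    # cc/ld (or ART's install-time compiler) on-device.
    return marshal.dumps(compile(source, "<app>", "exec"))

def run(installed_blob):
    # Runs on every launch: no compiler in the loop anymore.
    code = marshal.loads(installed_blob)
    namespace = {}
    exec(code, namespace)
    return namespace["main"]()
```

The point the post makes falls out of the structure: the shipped artifact is CPU-independent, and only the install-time product is tied to the device.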
But in point of fact as ART is adopted it will make the emulation of Android easier rather than harder on other devices, because now you have what amounts to source code in the application that is compiled at installation.
So while wide adoption of ART will improve performance (a lot) it will also make the movement to devices that are not "official" Android but can run Android apps easier.
In any event the amusing part of this story is how many people refused to look at history a couple of years ago, when so-called "technology wonks and analysts" were talking about Android compatibility and whether BlackBerry could pull it off. Those who argued that this was "impossible" or some sort of magical wizardry conveniently forgot that this particular software engineering trick was performed some 20 years ago -- and that provided you have a reasonably compatible instruction set, it isn't particularly difficult to do.
Indeed, BlackBerry's BB10 has been able to run native-code Android apps since long before 10.2.1. The company, however, locked that capability behind a wall: to install a native-code application you had to generate and install a debugging token on your device. Whether this was due to not trusting the quality of the implementation prior to 10.2.1, or was a business decision to "entice" (force?) app developers to submit via their app store, is something I can't answer -- but the fact of the matter is that I've been running native-code APKs on my BB10 handset almost literally since the first day I owned it.
In the meantime in the reality of today QNX brings efficiency that Dalvik-on-Linux does not, and as a result BB10 devices often meet or exceed the performance of a "native" Android device running the same app even though BB10 is trapping and redirecting system calls in real time. BlackBerry's "opening" of this capability to everyone without the need to generate signing keys and a debug token is simply the dropping of what was an artificial wall they had previously erected.
The lack of understanding of all of this by media such as The Register, and their attempt to trumpet this as an "exclusive", along with the utter ignorance displayed by the various so-called "analysts", points up what I've argued for years, going all the way back to the 1990s: you should not listen to most Wall Street and other so-called "analysts" and "reporters" when it comes to technology matters, as most of them don't know what the hell they're talking about.
Perhaps what Google ought to do is buy BlackBerry and offer QNX (BB10) as an alternative licensed (as opposed to "free") base operating system for manufacturers.
Or you could just buy a BB10 device if you want to run Android apps.
After all, the bottom line here is, like most other technology choices, found here:
Better, faster, cheaper: Pick any two.
NEWS: The American intelligence service -- the NSA -- infected more than 50,000 computer networks worldwide with malicious software designed to steal sensitive information. Documents provided by former NSA employee Edward Snowden, and seen by this newspaper, prove this.
A management presentation dating from 2012 explains how the NSA collects information worldwide. In addition, the presentation shows that the intelligence service uses ‘Computer Network Exploitation’ (CNE) in more than 50,000 locations. CNE is the secret infiltration of computer systems achieved by installing malware, malicious software.
I'm not sure what's more-disturbing -- that the NSA engaged in such a wide-scale infiltration or that they (mostly) got away with it for so long.
The latter says as much about the so-called "security" of the targets as anything else, and it ought to frighten you enough to harden and monitor your corporate network immediately, including without exception the immediate abandonment of all outside "cloud resources" from companies such as Amazon and Salesforce.
If you don't know how to both harden and monitor said network you need professional assistance and you need it right now. It has to be in-house, controlled by you and accountable only to you.
For many years now I have watched amusedly as the Russians (and their "splinter" nations that used to be part of the USSR) hacking contingent and then the Chinese flailed away at various corporate and government networks, stealing anything that wasn't nailed down and some stuff that was. This continues to the present day, incidentally -- my overnight logs (and I'm not exactly a "high value" target!) typically show hundreds of attempts against the security infrastructure here.
The problem I have with this article and what it documents is that it shows quite-clearly that not only did the NSA get in but the break-in and its exploitation went undetected. The former happens due to sloppiness or active cooperation of some sort. The latter only happens if you're as dumb as a box of rocks or trust someone who is -- sometimes because they're paid to be dumb.
Let's take Google, for example. Who thought that leasing a dark fiber somehow made it impervious to being picked off? The technological know-how in terms of doing that has been known almost-literally forever. So why would anyone ever run an unencrypted link over a cable that exits your fully-protected sphere of physical control if you are asserting that anything on there is "safe" or "secure"? The only rational reason is that you don't give a damn. The other reasons are worse (e.g. active cooperation with people who are interested in stealing whatever travels on said cable.)
10 or 20 years ago there was probably an argument for this approach, since encryption was expensive in terms of CPU (and thus money) -- you had to balance out whether encryption was worth the cost. This was definitely true when I designed private IP-based networks for people back in those days -- while it was possible to encrypt the price was astronomical to do so on high-speed links, and thus nobody did unless you were carrying high-level state secrets.
But today such an excuse rings hollow with the inclusion of AES-NI instructions in common commodity Intel processors, as just one example. While data transport speeds have grown dramatically the price-per-computation has come down at a close-to-exponential decay rate. I can now run my laptop disks in encrypted mode with a modest performance hit for this very reason -- never mind that many drives now include encryption on the chipset in the drive itself.
Now those might be "back doored" but the fact of the matter is that the NSA proves again and again that it's not breaking in by brute force or "back dooring" someone's equipment -- it is simply exploiting the fact that most people are stupid.
But back to my point on the NSA and these 50,000 networks -- breaking in is one thing. Actually getting the data you steal back out to the NSA is another, as that generates a flow that, if you're paying attention, can be trivially detected and as soon as it is the entity doing the spying is busted.
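The "trivially detected" claim above amounts to simple egress accounting: exfiltrated data has to leave the network, and that flow shows up if anyone compares outbound volume against a baseline. Here's a toy sketch of that idea -- the field names, thresholds, and host names are illustrative, not from any particular monitoring product.

```python
# Toy exfiltration detector: flag internal hosts whose outbound byte
# counts to a single destination far exceed their historical baseline.
from collections import defaultdict

def flag_exfiltration(flows, baseline_bytes, factor=10):
    """flows: iterable of (src_host, dst_host, bytes_out) records.
    baseline_bytes: expected total outbound bytes per src_host."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        totals[(src, dst)] += nbytes
    suspects = []
    for (src, dst), total in totals.items():
        expected = baseline_bytes.get(src, 0)
        if expected and total > factor * expected:
            suspects.append((src, dst, total))
    return suspects
```

Nothing here is sophisticated -- which is exactly the point: a victim with even this level of flow monitoring would have noticed the theft.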
So why haven't they been busted?
There is only one explanation when it comes to those who would not have given consent -- they're incompetent.
Here's looking at all the various suppliers of this technology to various business and government interests, kid.
Now about buying that cloud computing and network resource (and all the companies selling same, especially the public ones.....)
BlackBerry finally has BBM out for both Android and iOS, and the "wait list" line has gone away.
The original introduction wasn't the benchmark. The line disappearing, however, is.
The detractors will say that there are already other highly-penetrated options such as WhatsApp. But those have problems, chief among them that they want your email address or (worse!) your phone number as a key -- and they expose it to your friends, because that identifier is how people "find" you in searches.
BBM doesn't do this -- it assigns a PIN which is yours, but is not linked to your device or to anything usable by anyone for any purpose other than BBM. This also means you can change what you want to call yourself (i.e. your "name") with impunity.
This is an important distinction -- if someone who you believe is your "friend" starts spamming you or worse, engages in some other sort of reprehensible conduct (such as cyberstalking) you can ban them from contacting or sending to you on BBM and doing so does not leave them with the ability to harass you by other means.
You will never understand how important this is until it happens to you. If you talk with people who it has happened to you'll understand immediately -- being effectively forced to change your phone number (and to a lesser extent your email address) because someone is being a jackass is a fairly serious problem. While spam blocking can handle the email address issue it's tougher when it's your phone number that got exposed.
Some commentators (The "Verge" being one) have claimed that it's "tough" to add your friends. Really? If your friends are really your friends (e.g. you are sitting in a bar with them or otherwise seeing them in "meatspace") it's dirt trivial -- bring up BBM and either put your phones together (if both have NFC) and they will add one another, or if one of you doesn't have NFC (cough-iPhoney-cough!) click the button to get the QR code (the boxy "bar code"), point your phone's camera at the code and bingo -- done. You can also text or otherwise send your friends your PIN of course, but if you're in the same place at any given point there's no need to do anything of the sort.
Spammers won't like this, of course, because (1) they're not with you in real life and (2) you actually have to confirm association with them. That is, I can't add you and you can't add me unless we both consent. This is a good thing, not a bad one, and to demonstrate that I'm not skeered of the spammers my barcode is found at the end of this post.
The other distinction on BBM that I really like (and now am glad I have available to those not on BlackBerry devices!) is that unlike a text message you inherently get both delivery and read confirmation.
Android phones (and most others) have had "delivery" confirmation for a long time for text messages, but it's always been klunky for both text and picture messages -- to the point that I shut it off because the toast notifications got so damned annoying they weren't worth keeping enabled. On BB10 at least this becomes part of the original message (the "check mark" changes to a "D" if you turn on the option) but it doesn't tell you whether the person saw the message.
Who among us hasn't sent a text message, seen our phone claim it was sent, and then had the other person claim they never got -- or saw -- it? It happens all the time and that sucks.
BBM inherently changes the message status to show that it's being sent, was sent (checkmark), was delivered (D), and finally was seen (R).
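The status progression described above is a simple one-way state machine, which can be modeled in a few lines. The state labels follow the post; the implementation itself is purely illustrative, not BBM's code.

```python
# One-way message-status progression: status only ever moves forward --
# a read message never regresses to merely "delivered".
STATES = ["sending", "sent (checkmark)", "delivered (D)", "read (R)"]

class Message:
    def __init__(self):
        self.state_index = 0

    @property
    def status(self):
        return STATES[self.state_index]

    def advance(self):
        # Advance to the next state; stay put once the message is read.
        if self.state_index < len(STATES) - 1:
            self.state_index += 1
        return self.status
```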
BBM also supports user-defined groups and contact group lists that you can assign people to, which some other packages do (but "bare" SMS and MMS do not -- at least not easily), and a "channel" function is in beta (available only on BlackBerry devices -- thus far). The group capability is quite nice: each user can set up their own private group communication channels, with a number of administrators who control who is in the group, picture display and storage, and whether ordinary users can add new members or only an administrator can invite them. "Channels" are like groups but intended for broadcast distribution (and are where BlackBerry likely intends to try to monetize the offering in the future.)
Power consumption for BBM is also extremely light, unlike many other "group" and "chat" apps. In particular both Skype and GroupMe have been known to hammer your battery if left active and able to alert you to things, while shutting that off destroys the entire reason you have the application in the first place. BBM has a negligible-to-nil impact on your phone's battery life, and IMHO power budget is everything in a mobile device -- and app.
Finally, available on BlackBerry devices now, and promised for the next update on Android and iOS, is the capability to handle not only text and pictures (which everyone wants and has) but also voice and/or video calls. The cute part of the voice calling system is that the quality is insanely good -- better than an actual cell call in most cases -- unlike Skype, which does work but often has horrible quality.
As someone who came to BlackBerry from Android I was struck immediately by how powerful -- and nice -- BBM was when I first got my Z10. But the lack of cross-platform availability meant that only BlackBerry users could talk to me using it. Now that has changed, and what is arguably the nicest messaging and personal communication service available on smartphones and similar devices, formerly a BlackBerry exclusive, is open to both Android and iOS users as well.
I didn't know if the cross-platform availability would go over well with users or not, but Android's Play Store now shows somewhere between 10 and 50 million installs -- in less than a week (Play Store only gives ranges, not precise counts.)
I'd say that's damned fast uptake.
For those who do actually know me -- no matter what your platform -- point BBM's "add by barcode" camera capture at this:
Until now, Google hasn’t talked about malware on Android because it did not have the data or analytic platform to back its security claims. But that changed dramatically today when Google’s Android Security chief Adrian Ludwig reported data showing that less than an estimated 0.001% of app installations on Android are able to evade the system’s multi-layered defenses and cause harm to users. Android, built on an open innovation model, has quietly resisted the locked down, total control model spawned by decades of Windows malware. Ludwig spoke today at the Virus Bulletin conference in Berlin because he has the data to dispute the claims of pervasive Android malware threats.
Like, for instance, this?
That's a real attempt from a real app in the real Play Store that really did ask for my permission to, well, do anything. And it's recent too -- just a couple of weeks back.
No, I didn't say "yes."
And saying "yes" is giving permission to breach many of the so-called "walls" that are cited in that link -- with your permission, not by some trick-of-the-code that "breaks in."
I might have said yes, incidentally, if I hadn't been damned suspicious because I knew the app was bull****. It was in fact a claimed BBM client for Android, which was in the Play Store (and which reappeared multiple times despite being deleted multiple times, presumably by Google) while BlackBerry was in the process of attempting to roll out its actual, real client.
There were, in fact, many such fake clients in the Play Store:
Now I didn't get ****ed, but I didn't say "Yes." And as such I'm not really sure what was in that app. I have the tools here to deconstruct an application and look at it, but in order to do that I have to load it first, and that means I had to say "Yes."
Since one of the permissions the app wanted was to be able to autostart, and another was to be able to edit shortcuts, there was no guarantee that it wouldn't immediately rename and try to hide itself.
In other words, once I said "Yes" I was committed to re-flashing the firmware on the device in question -- my Android tablet -- to be sure I had really deleted whatever I downloaded, and I was unwilling to intentionally brick (and re-flash via ODIN) that device for this purpose.
Even a hard reset is not necessarily good enough if the app in question has figured out how to break out of the sandbox -- and it might have. In that case it could conceivably mount the system partition for write, and if that happens even a "hard reset" won't get rid of it.
In other words if you downloaded something like this and let it install the only way to know you're safe is to overwrite all of the flash memory in the device, effectively "reformatting" it.
For Android devices this is not typically possible for the ordinary, non-techie-oriented user as a "hard reset" only erases the user data partition -- it does not overwrite the system partition. If anything has managed to modify that partition (for good or bad) it remains as it was even after a hard reset.
Google may claim "innovation" in this regard, but before you buy that claim I direct you to the install screen where it asked for these permissions. Note the "Hide" tab.
I expanded that -- by default none of those permissions under the "Hide" tab were visible to me, and those are the nasty ones.
In fact, for a messenger client the permissions above the "Hide" tab look pretty reasonable. Only the last one might be unreasonable -- being able to look at and change browser information. But then again, maybe not: in this integrated world, being able to bookmark things looks pretty reasonable on balance to most people.
So why is there that second, "must press to expand" category? Why, when you try to install an application that wants any of those bottom permissions, aren't you warned quite explicitly that the application is asking for account information for other accounts you're signed into, access to protected storage, and the ability to make itself persistent, uninstall and change the apparent installed apps on the device, and close other applications -- potentially including any security software you may have on the unit?
Hiding that by default and expecting people to push a button to see it rather than screaming when an application wants any of these things is so "proactive" when it comes to security...... right?
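The behavior being argued for here is straightforward to express: classify certain permissions as high-risk and surface them loudly instead of burying them behind a tab. The permission names below are real Android manifest permissions; the risk classification is this post's argument, not Google's, and the function is a sketch rather than anything Android actually does.

```python
# Sketch of an install-time permission review that refuses to hide
# high-risk permissions. Risk set chosen per the post's argument.
HIGH_RISK = {
    "android.permission.GET_ACCOUNTS",              # other accounts' info
    "android.permission.RECEIVE_BOOT_COMPLETED",    # autostart/persistence
    "android.permission.WRITE_EXTERNAL_STORAGE",    # protected storage
    "android.permission.KILL_BACKGROUND_PROCESSES", # close other apps
}

def review_install(requested_permissions):
    """Split a request into routine permissions and ones that should
    each trigger an explicit, unhidden warning."""
    must_warn = sorted(p for p in requested_permissions if p in HIGH_RISK)
    routine = sorted(p for p in requested_permissions if p not in HIGH_RISK)
    return routine, must_warn
```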
That's kind of like claiming that Chrome is "very secure" while not telling you if perfect forward secrecy is enabled unless you go look specifically (and know how) -- while most "secure" web servers around the net have it disabled on purpose.
Oh wait, that's not a contrived example either -- Chrome (and the rest of the browsers!) do that too.
I'd take Google's claims a lot more seriously in this regard if any attempt to install using those "below the line" permissions popped up a very explicit warning for each, and required your specific approval for each, without you having to "drill down" to even see that they were requested.
But as things stand today that's not how Android works.
As noted in the first comment, let me guess -- Android's people would not call that app "malware" as it actually asked for all the permissions it got and then used -- right?
You, of course, might see it a bit differently.
Just because this is a big-ass block of salt that should be rubbed into the eyes of the banksters and other commercial sites.
There are ads displayed on the top page of Tickerforum, which is why you get the "there are other resources that are not secure" warning. But note the bottom block -- ECDHE_RSA is a Perfect Forward Secrecy negotiated key exchange.
While this is not easily-decipherable by the common man what it means is that if you steal the private key you cannot retrospectively decode any saved encrypted session using it. You can step in the middle of the transmission and intercept what I do in the future, but you are NOT helped with previous transmissions you might have recorded in encrypted form.
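For the common man mentioned above: the key-exchange part of the negotiated cipher name is what tells you whether a recorded session can later be decrypted with a stolen server key. A small helper makes the rule explicit -- this classifies by OpenSSL-style name prefix, which covers the common cases but is a simplification, not a complete TLS taxonomy.

```python
# Classify a negotiated TLS cipher name: ephemeral (EC)DH key exchange
# means session keys are discarded after use, so a later compromise of
# the server's long-term key does not unlock recorded traffic.
def has_forward_secrecy(cipher_name):
    # ECDHE-/DHE-/EDH- prefixed suites use ephemeral Diffie-Hellman.
    return cipher_name.startswith(("ECDHE-", "DHE-", "EDH-"))
```

So "ECDHE_RSA" in the screenshot above is the good case; a plain "RSA" key exchange (cipher names like AES256-SHA) is the retrospectively-decryptable one.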
Now let's look at a few other sites around the net... including some important ones.
Do you bank with Bank of Scamerica? Oh good, because any so-called "encrypted" sessions with them can be retrospectively decrypted if their private key is compromised. And we know the NSA has never "asked" for their private key because nobody has ever done a bad thing using Bank of America, right?
Do you believe in Santa Claus?
Schwab is even "better"! Not only does Schwab not allow PFS and thus any session with them can be retrospectively decrypted but they use MD5 for message authentication -- which is quite-breakable. Thanks a lot guys.
How about one of the favorites, Interactive Brokers?
Thank you for trading with the NSA; your options and futures positions have been dutifully recorded and will be retrospectively decrypted whenever we feel like it.
How about Fidelity?
**** you very little, Fidelity.
Jamie Dimon is also happy to present your so-called "encrypted" data in a form and fashion that allows perfect retrospective decryption.
And again I'm absolutely certain that JP Morgan/Chase has never had a "bad guy" use their services, and thus they would never have been "asked" for their secret key.
How about something more-mundane? Microsoft's Office365. After all, the cloud is a good place to store all your secure business documents, like, for instance, legal documents -- right?
All retrospectively decryptable, provided the NSA has the secret key. Once again, you believe that there has never been a bad guy on Microsoft's "secure" services that would prompt a demand for a secret key (and that Microsoft would refuse to turn it over if "asked"), right? Never?
Do you use Bitcon to transact? Might you use Mtgox? Everything you do -- or ever did -- through them is retrospectively decryptable. Thank you very little, *******s.
There are some good guys though among "large" commercial sites. One of them is Twitter.
I'll be damned.
Finally, you know the government would always protect your data, right? Especially when you're signing up for health insurance under Obamacare. And particularly given that this is a brand new thing, and thus the old "we always did it the other way" isn't an excuse!
**** you Mr. President.
And here's a giant "**** You" to all the browser writers. Firefox, Internet Explorer and Safari all make it nearly impossible to determine if PFS is enabled or not. Opera and Chrome expose the data but do not explain what it means.
None of the browser writers does something like turning the SSL box "green", as they do for so-called "Extended Validation" certificates, which are allegedly "more secure" than a "standard" certificate.
Yet whether or not the connection has PFS enabled is far more important in terms of "hackability" than whether or not a site has an "Extended Validation" certificate!
Again, if you're not using a forward-secret ("PFS") key exchange mechanism, any encrypted session that is stored can be retrospectively decrypted at any time in the future should the secret key be compromised at the remote end -- and that is never under your control.
Further, when these jackasses come knocking at some web server operator's door (or at the owner of a "cloud" service you bought space on), 99.9% of the time the target will be served with a gag order, and most targets will neither fight it nor do anything to tell you about it or mitigate the risk (such as changing the secret key.)
Finally, the odds of any large business having never had a "bad guy" on their network that might "attract" such a demand are zero.
In today's world where CPU cycles are quite cheap there is utterly no excuse, other than intentionally exposing your traffic to retrospective decryption, for a site to not enable PFS.
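Enabling this is not hard, either. As one illustration (a sketch, not a complete server configuration), Python's standard ssl module can be told to refuse any non-PFS handshake outright, so a client that can't do ephemeral key exchange simply fails to connect:

```python
# Restrict a TLS context to ephemeral (ECDHE/DHE) key exchanges, so a
# non-forward-secret cipher suite can never be negotiated.
import ssl

def pfs_only_context():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # OpenSSL cipher-string syntax: ECDHE or DHE key exchange only.
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+AES:DHE+AESGCM:DHE+AES")
    return ctx
```

Equivalent one-line settings exist in the configuration files of every mainstream web server, which is rather the point: the sites above are exposed by choice, not by necessity.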
The content on this site is provided without any warranty, express or implied. All opinions expressed on this site are those of the author and may contain errors or omissions.
NO MATERIAL HERE CONSTITUTES "INVESTMENT ADVICE" NOR IS IT A RECOMMENDATION TO BUY OR SELL ANY FINANCIAL INSTRUMENT, INCLUDING BUT NOT LIMITED TO STOCKS, OPTIONS, BONDS OR FUTURES.
The author may have a position in any company or security mentioned herein. Actions you undertake as a consequence of any analysis, opinion or advertisement on this site are your sole responsibility.
Market charts, when present, used with permission of TD Ameritrade/ThinkOrSwim Inc. Neither TD Ameritrade or ThinkOrSwim have reviewed, approved or disapproved any content herein.
The Market Ticker content may be reproduced or excerpted online for non-commercial purposes provided full attribution is given and the original article source is linked to. Please contact Karl Denninger for reprint permission in other media or for commercial use.