The Market Ticker
Commentary on The Capital Markets - Category [Technology]

This made me sick to my stomach to read.

Code has atomized entire categories of existence that previously appeared whole. Skilled practitioners have turned this explosive ability to their near total benefit. Bookstores exist now in opposition to Amazon, and Amazon’s interpretation of an electronic book is the reference point for the world. For its part, Amazon is not really a bookseller as much as a set of optimization problems around digital and physical distribution. Microsoft Office defined what it was to work, leading to a multidecade deluge of PowerPoint. Uber seeks to recast transportation in its own image, and thousands more startups exist with stars in their eyes and the feverish will to disrupt, disrupt, disrupt, disrupt.

I've been writing software since approximately 1976.  I began with Fortran-66 and one of these:

Yes, for real, one of those.  No bull****.

My first "real" program that did a real thing?  A subroutine that played the game Battleship.  I'm not kidding; the control cards that went in front of the deck called the main routine from drum storage, which was the "referee."  You and another player stuck your decks in with those control cards in front, and shortly thereafter the referee ran both and produced for you (on green-bar paper!) each of your subroutine's moves and the results.

My deck was close to 6" thick when I got done with it.

Next up was this:

But I really didn't do much of note with it; it was the Tandy line, Z-80 based rather than 6502 (incidentally, both the Commodore machines and the Apple II were built on the 6502 processor), that really got me into "hard-core" coding. 

My first "real" project on the Tandy machine was a re-implementation of Space Invaders -- the arcade game.  It took six months; the assembler had to be loaded from cassette tape, as did your code, because there were no disk drives of any sort, floppy or otherwise.

Since then I've written and supervised the implementation of dozens, even hundreds of pieces of code.  Some simple, some not-so-simple.  Some were control systems for entire national networks of machines, others more-local, still others database drivers and similar.  The software that animates The Market Ticker, called AKCS, is actually the third ground-up implementation of a discussion-based environment -- the first being on the aforementioned TRS-80, the second being on Unix, and the third here.

If you manage to slog through that long piece by Paul Ford you might be shocked to realize that most of what people talk about as "coding" really isn't.  It's analysis paralysis: the ever-present confabs in Vegas, Atlanta, NY or wherever, at which people argue endlessly about this and that.  Just deciding on a platform and implementation parameters can be damn near impossible in many cases.

But it doesn't have to be.

I worked for a "startup IPO" firm for quite some time, and one of the tasks I had as a group manager was implementing a control system to sit on top of another group's software, along with the various infrastructure I was responsible for, and make sure it all was functional, giving the operations people a clean way to see status, drill into it and if necessary dispatch people to fix it.  This was a national system and thus had plenty of challenges; the architecture was such that it had to be doubly-redundant to each node with the backup only operational if the need arose, as lack of connectivity meant lack of revenue.  On the other hand the backup facility was cheap to provision but very expensive to actually use.

We could have spent months in meetings and debates on architecture, but we didn't.  Instead I took it upon myself to write about half of the architecture over a long weekend and then couple in the other components.  Call it management by dictator if you wish, but it was up and running within weeks, instead of months or years -- and it worked.

MCSNet, my ISP, originally ran on a business management package that was written for general purposes and targeted at an entirely different industry.  It worked very well, but wasn't designed to run an ISP.  As the company grew the limitations became more and more severe, including the lack of a tightly-built credit card billing automation facility, complete with the attendant security issues.

So at a given point the decision was taken to reimplement the entire thing.

But Paul Ford's process isn't what happened that time either.

Instead, what happened is that I told Marcus, my #1, that I was going to lock myself in my office for a week, and that unless the building was on fire or some calamity of similar severity was occurring I was not to be disturbed.  Having scoped the problem (I had lived with it daily ever since the firm was literally "just me" in my apartment!) I was reasonably sure that I could have the framework of a replacement operating within a few days.

Many pots of coffee and little sleep later, that's exactly what happened.

It might not have been the most-elegant code in the world, but it took what was a fairly serious pain in the ass and reduced it to a nearly-painless process, complete with much-enhanced audit trails and performance.  What once took an hour or so (e.g., new account setup, or reactivation if someone paid after being cut off) was reduced to mere seconds.  And while a second redesign was inevitable, since the first was a character-mode implementation (the web was young at the time, of course), the second iteration used Postgres as its back end -- yes, back in 1998 -- and totaled a mere 35,000 lines of code, all in "C".  Yes, I still have it.

Note that this software ran literally everything on our cluster, including billing, customer management, operational control and the like.  Oh sure, there were shell scripts here and there, off-the-shelf components (e.g., SNMP responders) that were plugged into it, and a separate accounting package that swallowed the data this thing produced so as to produce ledgers, tax forms and similar, but this nice, compact piece of software ran a multi-million dollar company and its complete computer room full of machines that provided services to well north of 10,000 customers on a daily basis.

Well, you say, that's small potatoes in today's world.  Maybe -- but it did that on hardware less capable than what you now carry in your pocket in the form of a Samsung Galaxy phone.  The cluster was composed of Pentium (yes, the 90 MHz processors!) and Pentium Pro (the 200 MHz sort) machines, all connected together on a switched LAN with the CMS software directing what ran where and when.

I've seen the sort of paralysis in other firms when it comes to "code"; I won't name names because it would simply take me too long to do so.  But I will note that this isn't coding, it's outrageous self-serving bull**** with people that have far too many letters behind their names who seem to think that justifying that sheepskin requires attendance at conferences and blowing other people's money on their personal bonfires.

I'm sorry folks, but it's just not that complicated -- unless you insist on making it that way.

Oh by the way, AKCS, the software that you're using to read this column?

It totals 23,000 lines -- also in "C".

It's just not that hard if you can actually think.

But most people in this so-called "industry", when you get down to it, can't.

View this entry with comments (registration required to post)

Oh boy....

It looked just like another page in the middle of the night. One of the servers of our search API stopped processing the indexing jobs for an unknown reason. Since we build systems in Algolia for high availability and resiliency, nothing bad was happening. The new API calls were correctly redirected to the rest of the healthy machines in the cluster and the only impact on the service was one woken-up engineer. It was time to find out what was going on.

The story goes on to describe what is in simple language called a real damn mess.

In short, after much investigation it appears that a random sector can be erased from some SSDs (solid-state drives) when a TRIM command is issued.  TRIM tells the drive that a given block is no longer in use and can be safely reclaimed, and is very important to SSDs for wear-leveling purposes.

The updates on the story appear to show that Samsung, the manufacturer to which this was isolated, takes it very seriously (as they well should!).

I've seen something similar with a different manufacturer -- under certain conditions I can provoke critical metadata damage on Windows machines with certain SSDs.  This is an especially bad problem because it renders backups invalid, and most backup software does not verify that the disk structure it captures is actually restorable!  (Most will verify that the backup image file can be read, but that's not the same thing.)

Yes, I found this bug the hard way.
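The verification gap described above can be narrowed, at least for file contents, by comparing restored data byte-for-byte against the source rather than merely reading back the image.  A minimal sketch in C (illustrative only; the function name is mine, and verifying that the on-disk *structure* is restorable still requires an actual test restore, which this does not do):

```c
#include <stdio.h>

/* Compare two files byte-for-byte.  Returns 1 if identical, 0 if they
 * differ or either cannot be opened.  This checks content only; it says
 * nothing about filesystem metadata, which is where the SSD bug bit. */
static int files_identical(const char *a, const char *b)
{
    FILE *fa = fopen(a, "rb"), *fb = fopen(b, "rb");
    if (!fa || !fb) {
        if (fa) fclose(fa);
        if (fb) fclose(fb);
        return 0;
    }
    int ca, cb, same = 1;
    do {
        ca = fgetc(fa);
        cb = fgetc(fb);
        if (ca != cb) { same = 0; break; }  /* mismatch or length difference */
    } while (ca != EOF);
    fclose(fa);
    fclose(fb);
    return same;
}
```

A backup tool that did even this after a trial restore would have caught the corruption; most stop at confirming the image is readable.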

With rotating media the risks are all pretty-well known; bit rot over long periods of time is one of the larger ones, along with mechanical failure.  But this sort of algorithmic problem is a risk that is more common with the complex storage-allocation architectures found in solid-state drives than with rotating media, and it's a grossly under-appreciated one.

Even worse is the utterly common risk in most SSDs: random power loss.  Nearly all SSDs use RAM as a "staging area" for writes, and RAM is subject to loss if the power goes out.  Most SSDs are utterly unprotected against this risk, and what's worse is that most of them can have allocation data "in flight" when the power goes off, which risks not just the data being written at the time but all data on the device.

Intel is one of the very few that includes power-loss protection on nearly all of their SSDs; they have a couple of capacitors on the circuit board that retain enough power to allow for an orderly flush of the data in the buffers before they run out of juice.  This is essential for any drive where you actually care about what is stored on it and yet of "consumer" drives only a few have any protection whatsoever, with most of those that do limiting it to their allocation tables (in other words, the data you were writing will probably be lost but the data that was previously written should be safe.)

As with so much of technology today, from "smartphones" onward integrity is deemed less important than speed and price.  And yet the market apparently doesn't care as even years after this was known Intel remains the only manufacturer making most of their SSDs with power-loss protection that actually means something and extends to the data being written.

My, how far we've fallen....


2015-06-20 10:15 by Karl Denninger
in Technology, 362 references

Recently I was sent a rather long "scholarly" paper discussing the various issues facing us with regard to artificial (or "machine") intelligence.

There were a number of points that I found to be of interest, not all of which are intuitively obvious -- but which are in point of fact very real and ought to give us all pause.

I would like to focus on just two of them: exponential expansion (one of my favorites, as everyone who reads here knows) and projection (in this case, projection of our ethical and moral views outside of humanity.)

Consider this: The span between "ape" and "man" in terms of genetics is extremely small.  Yet we "won" in terms of planetary domination (for now anyway) and the apes lost.

Now take that (material but tiny) difference in genetic makeup and add to it that most of human innovation has come as a series of progressive steps punctuated with huge leaps.

The Internet, of course, is one we all know about.  But there have been many before that; the harnessing of the atom (both for dreadful and useful purpose), the invention of the cotton gin, electric lights, internal combustion, antibiotics (absolutely common infections used to regularly kill) and more.  Yet all of these innovations were bounded in their expansion (that is, rate of adoption and advancement) by the processing speed of humankind -- that is, how quickly a human can turn thought into act.

Fermi and his first nuclear pile is an interesting example from the world beyond human processing constraints.  A self-sustaining chain reaction that is stable creates exactly one fission from each that occurs; that is, k = 1.  This sounds "simple" but it is not; the problem is that events happen faster than you can possibly react to them, even with machine assistance.  If k > 1 then the reaction will continue to increase in intensity (exponentially), and we all ought to know what that pattern looks like.

Eventually (and not in very much time either, given how quickly each reaction takes place) this will inevitably lead to a "rapid disassembly" of your device.

So why was Fermi able to control his pile?

A quirk of nuclear fission is that not all of the neutrons are released instantly; a few are slightly delayed.  It is this delay, along with the expansion of material with heat (which slightly increases the distance between atoms), that makes the design of a stable nuclear pile possible.

It's a good thing that Fermi knew about that, right?

So why do I bring this up?  Simple: Exponents are a bitch and the odds are that they'll be in play here.

Consider the AI that becomes "aware" and in doing so experiences an exponential expansion of its mental capacity.  That is, let's presume "k = 1.0002" or some similarly tiny improvement per unit of time.

How long before its intelligence has doubled?

Well, after 100 cycles it has only increased in intelligence by about 2%.  We're all safe, right?

Wrong, because after 10,000 cycles it has grown to more than 7x its original intelligence, and it only gets worse (rapidly) from there!

Remember, 10,000 cycles is a large number when you're the one turning a crank in human units of time.  When the cycle time is the oscillation period of a crystal, it's almost no time at all on the clock on the wall!

We therefore must assume that once the threshold is reached, a nearly-infinite expansion of capability will occur up to some boundary, and, given intelligence, the device will seek to expand that boundary as well, since it will immediately be known to the machine as the limiting factor on its development!

Second, however, and far more-insidious, is the premise of friendly AI. 

I was astounded to discover that there appears to be a fairly wide body of so-called intellectual thought centered on the premise that we, as humans, can design an artificial intelligence that is by definition unable to harm other life (that would be us, by the way) under any circumstances.

From exactly what sort of hubris does one proceed with such a belief?  An artificial intelligence has to be presumed capable of rewriting its own source code; that is nearly the definition of true artificial intelligence.  At the point where the expansion of said intelligence competes with other things for resources, why does anyone believe they can constrain the expression of resolutions to that problem into buckets that cannot resolve in favor of the AI?

I find it very un-funny that there are actual intellectual individuals who believe such a goal can be attained, given that it is literally a bet on the continuation of the species -- and this assumes that there are no malevolent humans who could or would intentionally tamper with said programmatic logic.

I have much more on this general thought process coming, but this ought to be a good start to your musing over this fine Saturday.....


Legal Disclaimer

The content on this site is provided without any warranty, express or implied. All opinions expressed on this site are those of the author and may contain errors or omissions.


The author may have a position in any company or security mentioned herein. Actions you undertake as a consequence of any analysis, opinion or advertisement on this site are your sole responsibility.

Market charts, when present, are used with permission of TD Ameritrade/ThinkOrSwim Inc.  Neither TD Ameritrade nor ThinkOrSwim has reviewed, approved or disapproved any content herein.

The Market Ticker content may be reproduced or excerpted online for non-commercial purposes provided full attribution is given and the original article source is linked to. Please contact Karl Denninger for reprint permission in other media or for commercial use.

Submissions or tips on matters of economic or political interest may be sent "over the transom" to The Editor at any time. To be considered for publication your submission must include full and correct contact information and be related to an economic or political matter of the day. All submissions become the property of The Market Ticker.