Here we are again.
Once again an analyst asked the key question on the conference call: Where are the revenue applications from customers (not you selling chips, but the customers selling goods and services), when will they appear, and how do you know they're economic?
It wasn't said in exactly those words, but that was the question.
He didn't get a straight answer -- he got 10 minutes of word salad which amounted to "every business will have to have and use this."
Really?
Let's clear up some facts.
First, computers do not "think." We don't know how humans think and thus we cannot build a computer to do that which we cannot describe. Every one of us engages in what we see as a routine and trivial undertaking almost every day: We drive a car. It does not require kilowatts or even megawatts of power to do this; in fact if you're reasonably good at it and have a Garmin you can drive eight hours in your car and your bodily stress level will never leave the "resting" state. I do it all the time.
This is a pretty-extreme problem for a computer, however. Tesla claims somewhere between 150 and 300 or so miles per "critical disengagement" which we can reasonably presume means if you didn't take control back you'd bend the car (and probably something else not owned by you) or worse. That's a nastily-bad record for what humans think of as a trivial task -- clearly, for a computer, it is not.
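To put a rough number on that gap, here is a minimal back-of-the-envelope sketch in Python; the human baseline below is an assumed round figure used purely for illustration, not a statistic from this piece.

```python
# Back-of-the-envelope comparison of "miles per incident."
# The 150-300 range is the claim cited above; the human baseline is an
# ASSUMED round number for illustration, not a figure from this article.

tesla_miles_per_critical_disengagement = (150, 300)
human_miles_per_crash_assumed = 500_000   # illustrative assumption only

for miles in tesla_miles_per_critical_disengagement:
    ratio = human_miles_per_crash_assumed / miles
    print(f"At {miles} miles per critical disengagement, the assumed "
          f"human baseline is roughly {ratio:,.0f}x better")
```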
But what computers are very good at is pattern analysis. That is, mathematics. Thus the current rage is all about applying mathematical "models" to data in order to try to answer questions. This is done by applying mathematical "weights" (numerical values) to some set of data and attempting to produce a result. But unlike a spreadsheet where 2 + 2 = 4 each and every time, the data set is extremely vast and precisely how a human would arrive at their answer is not so clear. What we do know for certain, however, is that said machine is not mimicking the process that a human brain uses to evaluate a question.
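As a toy illustration of the difference between spreadsheet arithmetic and a weighted "model", here is a minimal sketch; the inputs and weights are invented for illustration and stand in for what a real system would learn from data.

```python
# Toy contrast: deterministic arithmetic vs. a weighted "model".
# The weights and inputs below are invented purely for illustration.

def spreadsheet() -> int:
    return 2 + 2  # always 4, every single time

def weighted_score(features: list[float], weights: list[float]) -> float:
    # A "model" is, at bottom, inputs multiplied by weights and summed;
    # change the weights (or the data they were fit on) and the answer changes.
    return sum(f * w for f, w in zip(features, weights))

print(spreadsheet())                                      # 4
print(weighted_score([1.0, 0.3, 5.2], [0.8, -1.1, 0.2]))  # depends entirely on the weights
```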
It is quite easy, for example, to calculate the moments that forces exert on a thing through their lever arms, what the load is at some point on that thing (e.g. over a fulcrum or other point of impingement), and then figure out what material qualities you need to withstand that without damage. That sort of analysis has an exact physical answer and we do it all the time. The same is true with CAD/CAM; you have your 3-D model and with it you can "trial" the assembly to make sure (1) the parts will go together in some order such that you can in fact assemble them and (2) the assembly, when complete, will take the loads designed -- provided you can describe them mathematically -- without failing. You can then take that set of files and send it to a robot that will produce the parts with a given set of tolerances.
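For instance, here is a minimal sketch of the kind of exact, deterministic calculation meant here, using an assumed point load on the end of a cantilever; every number is arbitrary.

```python
# Deterministic engineering arithmetic: one set of inputs, one correct answer.
# Example: a point load on the free end of a cantilever (all numbers arbitrary).

force_N = 2_000.0             # applied load
arm_m = 1.5                   # moment arm: distance from the support
allowable_stress_Pa = 250e6   # e.g. a mild-steel yield strength, no safety factor

bending_moment = force_N * arm_m                                  # M = F * d  (N*m)
required_section_modulus = bending_moment / allowable_stress_Pa   # S = M / sigma  (m^3)

print(f"Bending moment at the support: {bending_moment:.0f} N*m")
print(f"Minimum section modulus:       {required_section_modulus * 1e6:.1f} cm^3")
```

Run it twice, or a thousand times, and you get the same answer; that is what makes it computational science rather than something else.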
But that is not AI -- it's computational science.
Where is the line between discernment of a deterministic process that has one correct answer and creation, which often does not have only one answer? That is, how does a machine weigh the intangibles and what error rate, as determined by the humans involved, occurs?
Given a business policy, can a machine take input and determine an outcome with fewer errors than a human?
Maybe; that depends on what we want the machine to do. What we do know is that the machine is faster -- for example, it can handle 100 calls with voice recognition and response at once, where a human can only handle one or perhaps two at once. This means it can be 50 to 100 times as efficient, but that is only true if its error rate is equal to or better than the human error rate.
If it's not, it might do 50 or 100 times as much damage as the human does when an error is committed, simply because it's 50 or 100 times as fast!
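Here is that arithmetic as a sketch, with every rate and volume assumed purely for illustration: errors scale with volume just as output does.

```python
# Throughput cuts both ways: errors scale with volume just as output does.
# All figures below are assumptions chosen only to illustrate the point.

workers = [
    ("human",   10,    0.02),   # ~10 calls/hour, assumed 2% mishandled
    ("machine", 1_000, 0.05),   # ~100 concurrent sessions, assumed 5% mishandled
]

for label, calls_per_hour, error_rate in workers:
    mishandled = calls_per_hour * error_rate
    print(f"{label}: {mishandled:.1f} mishandled calls per hour")
```

With these assumed numbers the machine produces a hundred times the output but hundreds of times the damage, which is the entire point: speed only pays if accuracy holds.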
Now are there tasks which today humans do but machines can do better and faster -- and at lower cost? Sure. Take the ordinary "voice response" system; I put one in. Why? Because finding a human who would show up reliably, answer the frapping phone and direct calls to a department based on what the customer wanted proved nearly impossible, so I spent the money. Today that same task (which was quite expensive then) can literally be done on a $25 computer the size of a pack of cigarettes! But as the complexity of the task goes up for the computer, the price goes up exponentially.
Further, contemplate the premise of the "new hardware" Nvidia says is coming online. Let's assume it's faster, cheaper per unit of processing and uses less power. That's how computers typically work out, right?
Ok, so how about the billions of dollars customers spent on the last version?
You see, those are no longer viable on a revenue-generating basis: They cost more to operate than the new ones and produce less -- which means if your competitor has the new one and you don't, you get buried. This is always the problem with emerging technologies: If you can't produce revenue with it right now -- and so far nobody in this space is -- then when the next evolution in capacity occurs your "investment" becomes competitively worthless. Your competitor who bought the new version has a lower cost of operation than you do, always has a lower capital cost of acquisition per unit of output the device can produce (or he'd never buy the new offering in the first place), and you've not defrayed any of your capital expense by producing revenue during the interim period!
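A minimal sketch of that cost arithmetic, with every figure assumed for illustration only:

```python
# Why last generation's capex "turns to smoke": cost per unit of output.
# Every number here is an assumption purely for illustration.

def cost_per_unit(capex, life_years, power_cost_per_year, units_per_year):
    """Annualized capital cost plus power, divided by annual output."""
    annual_cost = capex / life_years + power_cost_per_year
    return annual_cost / units_per_year

old_gen = cost_per_unit(capex=100e6, life_years=3,
                        power_cost_per_year=20e6, units_per_year=1e9)
new_gen = cost_per_unit(capex=100e6, life_years=3,
                        power_cost_per_year=10e6, units_per_year=2e9)

print(f"Old generation: ${old_gen:.4f} per unit of output")
print(f"New generation: ${new_gen:.4f} per unit of output")
# If the market price of the output falls toward the new-generation cost,
# the old gear can no longer cover its own operating and capital costs.
```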
Of course the markets have gone ahead and added a huge amount of valuation to this company despite these facts. Never in the history of computing, however, has the above failed to hold. For forty years I've witnessed it occur countless times -- in CPUs, RAM, storage, data transport (e.g. Internet connections) and more. You've personally seen it yourself countless times -- in personal computers, laptops, cellphones, televisions (tube, 3-tube HD behemoths, DLP, LCD, OLED, 2k, 4k, etc.), video disc players (LaserDisc to DVD to BluRay to 4k) and more. There has never been an exception across the entire modern electronics and computer age; in fact, what has both made and destroyed every computer and electronics-related firm ever is exactly that progression, when the firm's cost of revenue gets undercut by a competitor who can deliver the same or better, faster and cheaper.
I know, I know: This time it's different.
No it isn't.
Nvidia makes very fine graphics cards that serve people's needs for years. But the fact that your competitor can render out a movie in half the time doesn't mean you can't make the release date in the theater or on Netflix -- and so you don't have to throw away the one-year-old card.
In a competitive market where you're all trying to put out "AI" services and the other guy has bought the faster, cheaper one -- never mind if the new one consumes half the electricity to produce the same output -- your billions of "investment" get turned into smoke every time new silicon shows up, because in this case time is the entire point. To beat him you need twice the resource, which costs twice the money, and worse, you must use twice the power on a recurring basis, which means your cost of delivery is much higher than your competitor's and he takes all the business away from you. If the output quality improves with density the same thing happens; your "investment" in the former generation again gets turned into smoke.
This is inherent in all emerging technologies, but unlike in the 1990s when I was selling Internet service, today there's no revenue coming in on the current-generation devices. We were getting revenue in by the truckload -- the trick in the 1990s was to make enough to pay for the gear before it had to be replaced, lest your competitor hit you over the head with the newer hardware (and, if he was writing it more quickly than you could, software too) that was better and faster, burying you. There was a roughly 18-month "turn cycle" at the time and it was brutal; one serious mistake and you were probably finished, and as such I spent a huge amount of my time working to avoid said mistakes -- and there were several close calls.
But during all of that, revenue was coming in -- and today it isn't. Yes, people are "buying" the stuff, although one has to wonder: how much of this is vendor-financed rather than paid for in cash, how much of that is at risk given the terms, and, if the customer blows up, what are the odds of meaningful recovery of anything close to the invoiced amount? Repossessing used gear in a fast-moving technological field may as well be taking ownership of a warm bucket of spit, as Lucent discovered when Winstar and others got turned into smoking craters during the tech wreck.
The key question I heard no meaningful answer to -- and this is now on two earnings calls in a row -- is this:
Where is the customer's revenue stream? What does it look like on a coverage basis among your customers, consolidated -- that is, how long does it take that revenue to pay for your stuff plus the housing and operating costs? How does a new cycle of devices impinge on that from a competitive-market point of view on their (your customer's) end? How much visibility do you have into any of that, and what reason do you have to believe that said revenue from end users HAS AND WILL cover said acquisition and operating cost AT THE CUSTOMER'S END?
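Framed as arithmetic, that is a simple payback (coverage) calculation at the customer's end; here is a minimal sketch with invented figures, none of which come from any earnings call.

```python
# The coverage question, as arithmetic: how many months of end-user revenue
# does it take to pay for the gear plus housing and operating costs?
# All inputs are invented for illustration.

gear_capex = 500e6               # accelerators, networking, etc.
facility_capex = 150e6           # datacenter build-out / housing
monthly_operating_cost = 12e6    # power, cooling, staff
monthly_end_user_revenue = 25e6  # what end users actually pay for the service

monthly_margin = monthly_end_user_revenue - monthly_operating_cost
payback_months = (gear_capex + facility_capex) / monthly_margin

print(f"Payback period: {payback_months:.0f} months")
# If that number exceeds the gear's competitive lifespan (i.e. the next
# silicon cycle), the "investment" never pays for itself.
```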
If I sell a lot of Beanie Babies it's great so long as people want to buy them -- but how do I justify a stock priced at forty times sales unless I can demonstrate that the people buying my stuff can turn a profit by doing so and that their investment in my products -- the entire reason my stock isn't selling for five bucks a share -- doesn't get reduced to smoke six months from now when the next generation of chips starts to ship?