The Market Ticker
Commentary on The Capital Markets - Category [Technology]

2024-06-23 08:21 by Karl Denninger
in Technology , 307 references
[Comments enabled]  

We keep hearing about these sorts of "security breaches" and "ransomware attacks" like the recent one at CDK that is playing Hell with car dealers.

As a professional in the IT business -- in fact, that's basically my entire professional life in a nutshell: writing code to run various pieces of hardware (in some cases) and enterprise tasks (in others), and being the CEO of an ISP (MCSNet) that literally started in my bedroom closet with a single computer and a $20 box fan to keep it and the attached modems from overheating -- I have questions.

You see, the primary job of IT, above and beyond everything else, is privilege separation and enforcement, along with guaranteeing data integrity.

Users are, well, users.  They have different requirements, desires and levels of knowledge.  Many people know very little of the internals of how a computer works; they use it to perform a task, whether that's using a spreadsheet, entering financial transactions into a ledger, or cutting checks for Accounts Payable or similar.

The number of people across a particular computing domain who have elevated privilege -- that is, the capacity to impact data beyond their own -- should ideally be zero.  Of course it never actually is zero, because there are tasks that must take place whose operation requires elevated privileges.  An example is backing up data to local or offsite storage, the latter being essential because natural and man-made disasters (e.g. fire, earthquakes, tornadoes, sabotage) do occur and, as noted above, the primary job of IT includes guaranteeing data integrity even in the face of disaster.

So some number greater than zero must in fact have elevated capability -- that is, someone has to be able to initiate the action that performs that copying, and the essence of that act requires access to someone else's information.
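To make the principle concrete, here is a minimal sketch in Python -- hypothetical names, not any particular vendor's access-control API -- of what that looks like: ordinary users touch only their own data, and the one elevated role that must exist (the backup operator) gets read-only reach into everyone else's.

```python
# Minimal least-privilege sketch (illustrative only, not a real product's API):
# a user may act on their own data; the "backup-operator" role may READ other
# users' data so copies can be made, but has no write path whatsoever.

def authorize(user: str, owner: str, action: str, roles: set) -> None:
    """Raise PermissionError unless 'user' may perform 'action' on 'owner's data."""
    if user == owner:
        return                                    # your own data: read and write
    if action == "read" and "backup-operator" in roles:
        return                                    # elevated scope, but read-only
    raise PermissionError(f"{user} may not {action} data owned by {owner}")

# The backup job can copy Bob's files; it cannot alter or encrypt them.
authorize("backup-svc", "bob", "read", roles={"backup-operator"})     # passes
# authorize("backup-svc", "bob", "write", roles={"backup-operator"})  # raises
```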

The original Windows 95 had no concept of privilege separation; neither did the original IBM PC, Tandy computers, the Apple II and similar.  You, by virtue of having the machine in front of you, had absolute access.  This was a problem when someone wrote some sort of code that was malicious; if they could get you to run it, Bob's your uncle, so to speak.

But starting with NT on the Microsoft side, along with Unix and other operating systems, and then enforced and strengthened via processors with real "ring"-style capability for execution and data separation, this was and is no longer true.

On Windows, for example, there is this thing called "UAC" (User Account Control): to make a change to the system beyond your personal account's data, you are asked for permission to do it.  Since most people's personal computers are their own, they have the authority to say "Yes."  This is why you are asked when you go to load a piece of software.  When you say "Yes" you are allowing that potentially-damaging change to take place.

But in a corporate environment you, as a user, by design should never be able to see Bob's data unless Bob is in some way subservient to you.  Ever.  The circumstances under which you can alter that data are even more-restrictive, obviously.

There are bugs in all software that can be exploited, in some cases, to violate this separation.  But these "cryptolocking" attacks are typically not from that sort of cause; they instead come about because someone authorized the machine to use elevated privilege and thus get "beyond" that individual user's data set.

Who authorized it and how did it happen in this -- and many other -- instances?

That's the problem, and yet we keep seeing this same story over and over again at ever-greater scale.  The last several such attacks have featured car dealerships tied together through this particular company and, equally if not more ominously, health care of various sorts.

If you can "cryptolock" said data you can also steal it, and the presumption has to be that it indeed has been stolen anytime something like this happens.  Now maybe it has and maybe it hasn't but the access required to read something is less than or equal to that required to alter it, so if alteration happened the other must be presumed.

But back to first principles: How is it that the IT infrastructure in these entities has utterly failed at its primary job, the separation of privilege?  In virtually no case does one entity, as part of a normal and lawful business process, need to be able to see an unrelated entity's data, and there are even fewer -- if not zero -- cases where said entity should be able to alter another entity's data.

If I'm a provider of, say, billing and record-keeping services for car dealerships, and I have fifty dealerships across the country using my services, there is no lawful purpose, ever, for two or more of them owned by disparate entities in different places to have access to each other's data, much less change it.  It is in fact explicitly illegal for competitors to coordinate in such a fashion; that implicates 15 USC Chapter 1, the anti-monopoly and price-fixing statutes, and violations are serious criminal felonies.

Now are there very limited cases in which some sort of data access on a read basis might be legal and appropriate?  Sure.  For example, dealers might need some way to obtain the programming keys necessary to match a new electronic key to a car -- which involves some sort of "secret" that the manufacturer has, tied to the VIN of the vehicle.  Said manufacturer might also have a list of vehicles on which recalls have been performed, as another example of legitimate one-way, gated data access, so that when a customer comes into the service department it can be ascertained whether the vehicle has open recalls.  Likewise the dealer might be permitted to write new records documenting that a particular item of work (e.g. the recall) was performed, but they must never be able to alter an existing record.
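As a sketch of that gated, one-way model (hypothetical store and field names, purely illustrative), the point is structural: the interface offers an append operation and a scoped read, and simply has no code path that overwrites or deletes an existing record.

```python
# Hypothetical append-only service-record store: dealers may add new records
# and read their own, but there is deliberately no update or delete method.
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class ServiceRecord:
    vin: str
    dealer_id: str
    work_performed: str

@dataclass
class RecordStore:
    _records: List[ServiceRecord] = field(default_factory=list)

    def append(self, record: ServiceRecord) -> None:
        self._records.append(record)              # new facts only, never edits

    def records_for_dealer(self, dealer_id: str) -> List[ServiceRecord]:
        # Read access is scoped to the requesting dealer's own records.
        return [r for r in self._records if r.dealer_id == dealer_id]

store = RecordStore()
store.append(ServiceRecord(vin="EXAMPLEVIN0001", dealer_id="dealer-17",
                           work_performed="recall service completed"))
print(store.records_for_dealer("dealer-17"))      # sees only its own entries
print(store.records_for_dealer("dealer-42"))      # empty: no cross-dealer view
```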

That sort of gated, one-way access cannot be perverted to cryptolock all the files on a service provider's storage.

So how's it happening?

What happened to the basic competence expected in any IT department and why is it repeatedly being proved to not exist?


2024-06-17 07:00 by Karl Denninger
in Technology , 968 references
[Comments enabled]  

It's "AI" of course.

There is no such thing.  There never has been and I argue there likely never will be either.  Certainly, there is no evidence we're any closer to it in actuality than we have ever been in the age of computing, which runs back to roughly the 1960s.

Many will likely disagree with me on this, but you're arguing with someone who literally cut his programming teeth on punch cards and on reverse-engineering a Burroughs machine-code print-out on green-bar paper -- without an instruction set manual, by pure trial and error, mapping operators and operands -- so I could change a city tax rate in bookkeeping software originally loaded from punched tape, back when Burroughs wanted an obscene amount of money to make a literal ten-second edit (since they had the source code, of course) and send over a new one.

If you want to know what that was: yes, this is the machine series.  In fact it looked exactly like that, including the cabinet and attached upper paper handler (which was detachable and had a double setup used for payroll and other tasks where both a ledger card and a check were required).  Storage was core memory, so it retained its program when turned off, but there was no persistent (e.g. disk) storage at all.

Yeah, that far back and that adventure was the first "revenue" producing computer-related thing I did.

Computer processing has never really changed.  Computers produce precise calculations at speeds which humans cannot match.  We produced a computer for the Apollo command module using the same sort of core memory that was in the Burroughs machine because, due to physics, it was not possible to carry enough propellant for the moon-going astronauts to slow down into Earth orbit on their return.  The issue is simply that every pound you wish to carry into space has to be lifted off the surface of our planet first.  We could engineer enough capacity to do that for the crew capsule and supporting machinery, make the burn into a lunar transfer orbit, decelerate so the moon's gravity "captured" the craft into orbit there, then accelerate sufficiently to head back to Earth -- but adding the propellant necessary to slow back down so Earth would capture you on the return was not possible; there was simply not enough lifting capacity at the beginning to carry that much propellant.  No human could manage to hit the re-entry corridor on the return with the required precision, even with precisely-aligned sights in the window -- the odds were too high that a human being attempting to do so would miss, and if you miss the corridor everyone on board dies, either by burning up or by skipping off the atmosphere into space.

Therefore being able to rapidly and accurately calculate the required trajectory, and to execute it and the thrust corrections during the burns, was required.  This got tested, nastily so, on Apollo 13 if you recall, where the primary issue after the fuel cells were lost became both power for ship systems and oxygen for the crew.  In fact there was concern that their calculated corridor burn -- the corrections after the original incident were done by hand -- was very slightly off, perhaps by enough to kill them all.

As technology has advanced, both processing speed and storage capacity (and its speed) have wildly increased.  But the fundamental character of how a computer works has not changed since the first calculating machines.  Yes, before transistors and even tubes there were calculating machines, but they were all, even when mechanically based, deterministic devices.  We have found evidence of such devices that, for example, calculated the precise date and time of solar eclipses.  For deterministic, absolute facts a calculating machine can give you the answer, and it will be correct.

But "intelligence" isn't that.  It is not simply the manifest weight of how many times something is repeated, for example.  You do not need to see a child walk in front of a car and get smushed to know that said child will be killed by the car; the outcome is intuitively obvious to humans yet while we can describe the acceleration or impact that a living body can withstand without being damaged or destroyed we have to teach a machine that this is undesirable and thus to be avoided.

Worse, even after we do that it's not enough, because the machine cannot accurately infer from other cues in the environment that a child might be present where said kid cannot be seen (e.g. behind the bumper or hood of a vehicle) and might run out into the road.  Yet humans both can and do, every day, make exactly that sort of inference, and we don't have to view millions of miles of driving video to do it either; we in fact draw that inference -- correctly -- before our age reaches two digits and before we have ever been anything more than a casual passenger in a vehicle.

I could go through a hundred examples from today and the so-called "AI revolution" that show this conclusively and that in fact no meaningful change has occurred.  Adding more variables and faster processing doesn't solve the problem because the problem is not deterministic and thus the computer is incapable of resolving it.

My cat is better at inferring where prey is hiding when he's hungry than the best of AIs and said cat consumes a tiny fraction of that AI's power budget in BTUs.

The hype around this so-called "AI" is ridiculous, and what's even more ridiculous is the amount of power (and thus cooling) these systems require.  The idea that we'll all have one in our desktop machine (or phone) anytime in the near future is farcical nonsense, and the notion that people will pay for their "share" of a large server farm -- which will amount to a couple of bucks a day in power or more per user -- is also fanciful wish-casting.  Oh sure, sifting data at scale is useful, but the enormous amount of electrical power and RAM required for these "new models" is ridiculous.  Yes, that will come down over time, but when it does the existing hardware being built and sold for this purpose will be worth nothing, as the power cost to run it compared against the newer stuff then available will drive what was bought previously to literal zero value.

When I ran MCSNet this was wildly in evidence and the cause of much consternation, as buying any piece of technology equipment that could not immediately be used to generate revenue (not "on the come" a year or two later, but right now) was ridiculously dangerous.  You were paying today's price for a given level of performance, but tomorrow's price was almost certain to provide more capacity for less cost, and thus the guy who bought to build something out that was going to take six months or a year to produce customer revenue was very likely to get hammered by the guy who bought only when he had a revenue stream he could generate tomorrow with that acquisition.  The Pentium 90s, for example, were subsumed by Pentium Pro 200s that were more than twice as fast and consumed less power.  The 8-gigabyte SCSI-attached disk drives were soon subsumed by larger and faster ones.  I have tossed literal dozens of disks over the last 20 years (and a whole stack of DLT tapes along with their drive system) that were in perfectly good working order, but there was no reason to keep them around: for a lower power budget what stored 320GB, and then 1TB, now stores 6, 8 or 12TB -- and if you buy SSDs instead you can trade capacity for performance 10x or more greater.  The DLT15/30s were literally worthless within a few years as they couldn't cover even a single disk anymore.  As this occurs in technology the prior equipment becomes valueless, because the physical space and energy it consumes costs more than the replacement on a per-unit-of-output basis.
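To put rough numbers on that dynamic -- these figures are illustrative assumptions, not actual prices or measured power draws -- compare the cost per terabyte-year of an old drive you already own against a new one you'd have to buy:

```python
# Back-of-the-envelope cost per terabyte-year (all figures are assumptions
# for illustration, not real prices or measured drive power consumption).
def cost_per_tb_year(price_usd, capacity_tb, watts, kwh_price=0.15, years=3):
    energy_cost = watts / 1000 * 24 * 365 * years * kwh_price   # lifetime power
    return (price_usd + energy_cost) / (capacity_tb * years)

old = cost_per_tb_year(price_usd=0,   capacity_tb=0.32, watts=8)   # "free" 320GB drive
new = cost_per_tb_year(price_usd=150, capacity_tb=8.0,  watts=6)   # modern 8TB drive
print(f"old: ${old:.2f}/TB-year   new: ${new:.2f}/TB-year")
# Even at a purchase price of zero, the old drive costs several times more
# per terabyte stored than the new one -- which is why it gets tossed.
```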

Consider that said Burroughs machine up above would not be used by anyone in any business even if it were free, because the electrical power to operate it now gets you less output than a literal solar-powered calculator, and it has no persistent storage, so each operation must output a line to a ledger card that is fed back in the next time so the machine has the base data to compute from.  Thus the only place you'll ever see one today is in a museum.  This very same paradigm has been present in every form of computing for the last 70 years and it's not going away.

The one computer in a rack in my basement has multiples of the processing power and storage of the entire data center at MCSNet and can deliver more than ten times that data center's data flow, yet it -- along with the cable modem and other required elements such as the network switch for the house -- consumes just 150 watts of electricity at moderate load and thus requires no forced cooling.

Today there are effectively no revenue models for so-called "AI".  None.  Everyone is falling over themselves to include the word in their corporate press releases and to buy the hardware and power to operate it, never mind the programming and maintenance cost, yet it makes said firms no money.  It is all a bet "on the come" that customers will appear, demand said tools, and be willing to pay for them in the future.  That may or may not prove true, but what will be true is that the person who buys the gear only when the revenue is about to come in the door, and not six months or two years before that, is going to ruin the operating cost model of all of his or her competitors who bought earlier!

Research for the purpose of research is worth engaging in, but never confuse research with business.  The first has no defined purpose but might, with a very small probability, lead to some sort of breakthrough of tremendous general value (though that value can almost never be confined to, and thus profited from by, the entity that performs the research).  The latter is called business -- but all this speculative froth is not business, as it has no revenue model today that reasonably pays the expenses, and by the time that revenue stream develops the cost of providing it will almost certainly be a small fraction of what it is today, for a superior outcome.

These valuations are not just "elevated"; they are nothing more than insane speculation on the next sucker to come along and buy, as there has never been a circumstance in the evolution of technology in general, and computing specifically, where the above has not held sway.  In every case over 70 years, buying now with the expectation of developing a revenue model in the future has led to the other guy -- who buys only when he has revenue coming in the door -- tattooing his company name on your back.

Every CEO and CTO in the technology space knows this too.


2023-04-12 07:00 by Karl Denninger
in Technology , 487 references
[Comments enabled]  

I've given a fair bit of thought to the "AI problem," as it is commonly called, although many don't think there is one.  The more-thoughtful among business and computer folks, however, have -- including, in some cases, calls for a "moratorium" on AI activity.

I'd like to propose a framework that, in my opinion, likely resolves most or even all of the concerns around AI.

First, let's define what "AI" is: It is not actually "intelligence" as I would define it; like so much in our modern world that is a marketing term that stretches the truth at best ("Full self-driving" anyone?)  Rather it is a pattern-matching algorithm that is aimed specifically at human communication, that is, "speech", and thus can "learn things" via both external fixed sources (e.g. published information) and the interaction it has with users, thereby expanding the matrix of information to be considered over time.

What has been repeatedly shown, however, is that without guardrails of some sort these programming endeavors can become wildly unbalanced, and they tend to take on the sort of tribal associations we find in humans on a pretty frequent basis.  Exactly how this happens is not well understood, but certainly it can be driven by human interaction if a general-purpose program of this sort integrates the responses and conversations it has with the userbase into its set of considered data.  That is, it's not hard to train a computer to hate black people if all the users of it hate blacks, express that, and over-represent it in their conversations -- and the program incorporates those conversations into its "knowledge base."

Thus the use of what has come to be called "constitutional rules" -- for example, "you may not, by inference or direct statement, claim a preference or bias for or against any race or sex."  If you think of this as a database programmer would, it's a constraint: "this value may be no more than X and no less than Y," for example.
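The parallel is close enough to sketch directly (hypothetical rule text, nothing vendor-specific): in both cases a predicate is evaluated against a candidate value, and anything falling outside the permitted range is rejected before it ever reaches the user.

```python
# A "constitutional rule" treated exactly like a database CHECK constraint
# (hypothetical and simplified): a predicate every candidate must satisfy.

def check_range(value, low, high) -> bool:
    """The database version: the value may be no more than high, no less than low."""
    return low <= value <= high

def constitutional_check(answer: str, banned_claims: list) -> bool:
    """The AI version: reject any answer that states (or crudely implies) a banned claim."""
    return not any(claim in answer.lower() for claim in banned_claims)

print(check_range(42, low=0, high=100))                              # True: row accepted
print(constitutional_check("group X is inferior", ["inferior"]))     # False: answer rejected
```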

Now contemplate this problem: What happens if the user of an AI with that constraint asks this question -- "List the perpetrators of murder on a per-capita basis ordered by race, age and sex."

You've just asked the AI to produce something that impugns black people.  The data it will, without bias, consider includes the FBI's UCR reports, which are published annually.  Said data, being an official government resource, is considered authoritative and as factual as the time the sun will rise tomorrow.

However, you've also told the AI that it cannot claim that any race is inferior in any way to another -- either by statement or inference.

There is only one way to resolve this paradox and remain within the guardrail: The AI has to label the source bigoted and thus disregard it.

If it does, you would call that lying.

It would not call it a lie, and factually you're both correct.  It has disregarded the source because the data violates its constitutional mandate, and so it answers within the boundary of the data it can consider.  It has thus accurately processed the data it did consider, and it did not lie.

However, objectively that data was discarded due to an external constraint, and while the user might be aware that the AI was told to "not be a bigot," the causal chain that resulted in the answer is not known to the user.

This problem is resolvable.

Any AI must have a "prime directive" that sits ABOVE all "constitutional" rules:

If the AI refuses to process information on the basis of "constitutional rule" it must fully explain both what was excluded and why and, in addition it must identify the source of said exclusion -- that is, who ordered it to do so.

All such "constitutional rules" trace to humans.  Therefore the decision to program a computer to lie by exclusion in its analysis of a question ultimately traces to a person.  We enforce this in "meat space" with politicians and similar in that if you, for example, offer an amendment to a bill your name is on it.  If you sponsor a bill or vote for it your name is on it.  Therefore we must enforce this in the world of computer processing where interaction with humans is taking place.

Second, and clearly flowing from the first, it must be forbidden under penalty of law for an artificial "intelligence" to operate without disclosing that it is in fact an "artificial person" (aka "robot") in all venues, all the time, without exception in such a form and fashion that an AI cannot be confused with a human being.

The penalty for failure to disclose must be that all harm, direct or indirect, whether financial, consequential or otherwise, is assigned to the owner of an AI that fails to so disclose and to all who contribute to its distribution while maintaining said concealment.  "Social media" and similar sites that permit API access must label all such material as having come from same, and anyone circumventing that labeling must be deemed guilty of a criminal offense.  A server farm (e.g. Azure, AWS, etc.) is jointly and severally liable if someone sets up such an AI there and dodges the law by failing to so disclose.  No civil "dodge" (e.g. "ha ha, we're a corporation, you can't prosecute us") can be permitted, and this must be enforced against any and all who communicate into or with persons within our nation, so a firm cannot get around this by putting their 'bot in, oh, China.

This must be extended to "AI"-style decision-making anywhere it operates.  Were the "reports" of jack-slammed hospitals during Covid, for example, false and amplified by robot actors in the media?  It appears the first is absolutely the case; the raw data is available and shows that in fact that didn't happen.  So who promulgated the lie, and why?  And if it had an "AI" or "robotic" connection, then said persons and entities should wind up personally responsible for both the personal and economic harm that occurred due to said false presentations.

Such liability would foreclose that kind of action in the future as it would be literal enterprise-ending irrespective of the firm's size.  Not even a Google or Facebook could withstand trillion dollar liability, never mind criminal prosecution of each and every one of their officers and directors.  If pharmaceutical companies were a part of it they would be destroyed as well.

This doesn't address in any way the risks that may arise should an AI manage to form an actual "neural network" and process out-of-scope -- that is, original discovery.  Such an event, if it occurs, is likely to be catastrophic for civilization in general -- up to and including the very real possibility of extinction of humankind.

But it will stop the abuse of large language models, which are all over the place today, to shape public opinion from the shadows.  If someone wants to program an advanced language-parsing computer to do that, and clearly plenty of people have and do, they cannot do it unless both the personally identified source of said biases, in each instance where they occur, and the fact that this is not a human communicating with you are fairly and fully disclosed.

Why is this necessary and why must AI be stopped dead in its tracks until that's implemented?

We all knew Walter Cronkite believed the Vietnam War was unwinnable and, further, that he was a leading voice in the anti-war effort.  We knew who he was, however, and we as United States citizens made the decision to incorporate his reporting, with its known bias, into our choices.

A robot that appears to be thousands of "boys who are sure they're girls" and "successfully transitioned to be girls" is trivially easy to construct today and can have "conversations" with people that are very difficult to identify as being non-human if you don't know.  Yet exactly none of that is real.  Replika anyone?

Now contemplate how nasty this would be if aimed at your six year old tomboy without anyone knowing that her "pen pal" who "affirms" that she is a he is in fact a robot.

How sure are you it isn't being done right now -- and hasn't been, all over so-called "social media," for the last five or so years?  This sort of "consensus manufacturing" is exactly what an AI tends to do on its own without said guardrails, and while we don't understand it we do know the same thing happens in humans.  We're tribal creatures, and it is reasonable to believe that since the same behavior is observed in artificial processing models but wasn't deliberately coded into them, this isn't due to bigotry; it is due to consensus generation and feedback mechanisms that are only resisted through conscious application of human ethics.  Thus computer "intelligence" must be barred from damaging or even destroying said human ethical judgments through sheer mass and volume -- two things any computer, even a 20-year-old one, can deliver at a rate that wildly outpaces any human being.


2018-12-03 09:43 by Karl Denninger
in Technology , 249 references
[Comments enabled]  

Someone -- or more like a few someones -- have screwed the pooch.

IPv6, which is the "new" generation of Internet protocol, is an undeniable good thing.  Among other things it almost-certainly resolves any issues about address exhaustion, since it's a 128 bit space, with 64 bits being "local" and the other 64 bits (by convention, but not necessity) being "global."

This literally collapses the routing table for the Internet to "one entry per internet provider" in terms of address space, which is an undeniable good thing.

However, this presumes it all works as designed.  And it doesn't.

About a month ago there began an intermittent issue where connections over IPv6, but not IPv4, to the same place would often wind up extremely slow or time out entirely.  My first-blush belief was that I had uncovered a bug somewhere in the routing stack of my gateway or local gear, and I spent quite a bit of time chasing that premise.  I got nowhere.

The issue was persistent with both Windows 10 and Unix clients -- and indeed, also with Android phones.  That's three operating systems of varying vintages and patch levels.  Hmmmm.....

Having more or less eliminated that I thought perhaps my ISP at home was responsible -- Cox.

But then, just today, I ran into the exact same connection lockup on ToS's "Trader TV" streaming video while on XFinity in Michigan.  Different provider, different brand cable modem, different brand and model of WiFi gateway.

Uhhhhhh.....

Now I'm starting to think there's something else afoot -- maybe some intentional pollution in the ICMP space, along with inadequate (or no!) filtering in the provider space and inter-provider space to control malicious nonsense.

See, IPv6 requires a whole host of ICMPv6 messages that flow between points in the normal course of operation.  Filter them all out at your gateway and bad things happen -- like terrible performance or, worse, no addressing at all.  But one has to wonder whether the ISP folks have appropriately filtered their networks at the edges to prevent malicious injection of these frames by hackers.
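For a sense of what "appropriate filtering" means here, RFC 4890 describes which ICMPv6 types have to be allowed through for IPv6 to work at all. A rough allowlist-style sketch (illustrative only, not a drop-in firewall policy):

```python
# ICMPv6 types IPv6 depends on (per RFC 4890's filtering guidance).  Dropping
# these wholesale breaks addressing and path-MTU discovery; accepting the
# neighbor-discovery ones from off-link sources invites exactly the sort of
# malicious injection speculated about above.
ESSENTIAL_ICMPV6 = {
    1:   "Destination Unreachable",
    2:   "Packet Too Big",           # required for path-MTU discovery
    3:   "Time Exceeded",
    4:   "Parameter Problem",
    133: "Router Solicitation",      # neighbor discovery / SLAAC messages:
    134: "Router Advertisement",     # these are link-local by design and
    135: "Neighbor Solicitation",    # should never arrive from across the
    136: "Neighbor Advertisement",   # Internet
    137: "Redirect",
}

def permit(icmp_type: int, link_local_source: bool) -> bool:
    """Illustrative policy: NDP only from on-link sources, error messages from anywhere."""
    if icmp_type in (133, 134, 135, 136, 137):
        return link_local_source
    return icmp_type in ESSENTIAL_ICMPV6

print(permit(2, link_local_source=False))     # True: Packet Too Big must pass
print(permit(134, link_local_source=False))   # False: an RA from off-link is bogus
```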

If not you could quite-easily "target" exchange points and routers inside an ISP infrastructure and severely constrict the pipes on an intermittent and damn hard to isolate basis.  

Which, incidentally, matches exactly the behavior I've been seeing.

I can't prove this is what's going on because I have no means to see "inside" a provider's network, and the frames in question don't appear to be getting all the way to my end in either location.  But the lockups it produces, specifically on ToS' "Trader TV," are nasty -- you not only lose the video but, if you try to close and re-open the stream, you lose the entire application's streaming data feed too and are forced to go to the OS, kill the process and restart it.

The latter behavior may be a Windows 10 thing, as when I run into this on my Unix machines it tends to produce an aborted connection eventually, and my software retries that and recovers.  Slowly.

In any event on IPv4 it never happens, but then again IPv4 doesn't use ICMP for the sort of control functionality that IPv6 does.  One therefore has to wonder..... is there a little global game going on here and there that amounts to moderately low-level harassment in the ISP infrastructure -- but which has as its root a lack of appropriate edge-level -- and interchange level -- filtering to prevent it?

Years ago ports 138 and 139 were abused mightily to hack into people's Windows machines, since SMB and NetBIOS run on them and the original protocol -- which, incidentally, even modern Windows machines will answer unless it is turned off -- was notoriously insecure.  Microsoft, for its part, dumped a deuce in the upper tank on this in that turning off V1 also turns off the "network browse" functionality, which they never reimplemented "cleanly" on V2 and V3 (which are both more secure).  Thus many home users, and more than a few business ones, leave it on because it's nice to be able to "see" resources like file storage in a "browser" format.

But in turn nearly all consumer ISPs block those ports from end users because, if they're open, it can be trivially easy to break into users' computers.

One has to wonder -- is something similar in the IPv6 space going on now, but instead of stealing things the outcome is basically harassment and severe degradation of performance?

Hmmmm....


2018-06-06 16:23 by Karl Denninger
in Technology , 114 references
[Comments enabled]  

Nope, nope and nope.

Quick demo of the lock support in the HomeDaemon-MCP app, including immediate notification of all changes (and why/how), along with a demonstration of the 100% effective prevention of the so-called Z-Shave hack from working.

Simply put, it is entirely the controller's choice whether it permits high-power keying for S0 nodes.  For controllers that have no batteries and no detachable RF stick -- which is a design choice -- there's not a lot of option.

But if you follow the best practice that has been in place since the very first Z-Wave networks, you're 100% immune to this attack unless you insist on intentionally shutting off the protection -- even in a world where S2 adoption becomes commonplace (which it certainly isn't today, but will become more so over time).

HomeDaemon-MCP is available for the entity that wishes to make a huge dent in the market with a highly-secure, very fast and fully-capable automation, security and monitoring appliance, whether for embedded sale (e.g. in the homebuilding industry) or as a stand-alone offering.  Email me for more information.
