Oh boy.
For more than forty years I've been a participant in the "better, faster, cheaper" paradigm of technology. It has never failed to occur; you need only look at the cost of disks (and their performance) over that period for examples. My first "Winchester" (hard disk) was a five-megabyte unit -- the physical drive itself was the size of a full-height 5-1/4" floppy drive, and it required a separate logic board about the size of an 8-1/2x11 sheet of paper to drive the interface. Oh, and it cost something like $2,000 (with the interface) too, which in early-80s money was a lot.
That was megabytes, folks.
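To put rough numbers on that curve, here's a back-of-the-envelope calculation. The 1980s figures are the ones above; the modern drive price is an assumption I'm plugging in purely for illustration, so treat the result as an order-of-magnitude sketch, not a precise figure:

```python
# Cost-per-megabyte, then vs. now. The 1980s numbers come from the text;
# the modern price is an ASSUMED street price for a 6TB drive.

EARLY_80S_PRICE = 2_000.0          # dollars: 5MB Winchester plus interface
EARLY_80S_CAPACITY_MB = 5.0

MODERN_PRICE = 100.0               # dollars: assumed price for a 6TB drive
MODERN_CAPACITY_MB = 6.0 * 1_000_000

then_per_mb = EARLY_80S_PRICE / EARLY_80S_CAPACITY_MB   # $400/MB
now_per_mb = MODERN_PRICE / MODERN_CAPACITY_MB          # ~$0.0000167/MB

print(f"1980s: ${then_per_mb:,.2f}/MB")
print(f"Today: ${now_per_mb:.7f}/MB")
print(f"Roughly {then_per_mb / now_per_mb:,.0f}x cheaper per megabyte")
# ~24,000,000x -- in nominal dollars, before even adjusting for inflation.
```

Seven-plus orders of magnitude on price alone, never mind the speed.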
In the 1990s, 8-gigabyte SCSI fixed disks were the biggest you could reasonably afford -- hundreds of dollars each, plus a $500 interface card in the computer to drive them. My Internet company had a whole bunch of them; they replaced 1, 2 and then 4GB drives.
Today I have discarded 2 and 3 terabyte drives as no longer useful. My "minimum" size for spinning storage -- that is, the size actually worth the room it takes up in the machine and the power it consumes -- is about 6TB, and in a world of SSDs, which of course we all now have and which are 100x faster for random access since there is no seek delay, spinning storage is really only for things you access relatively infrequently.
Roughly 25 years ago I moved to Florida. When I got there I implemented code to run my house; it ran on my "main" server in the utility space and worked very well. About ten years later I got tired of the idea that it required that sort of resource and rewrote the entire thing from zero, with the primary goal being tight and efficient code. It runs on a $25 computer, and not only does it do everything the old code did, it also handles secure streaming of camera data plus two-way audio, all while consuming only about 20% of that machine's CPU. Over the same span the network kept pace: when I wired the house in Florida, 100Mbps was the fastest reasonably affordable link; today my desktop has a 2.5Gbps link, as does my WiFi access point, and the server's main link to all of that clocks at 10Gbps.
While hardware tends to improve in "doubles," don't kid yourself that "step function" changes like this in software -- which come from thinking about the problem in a different way, often by one dude -- can't happen. Yes they can, and yes they do, and I've implemented a few of them personally. One of them was at MCSNet, when I re-implemented the entire customer-management system across one coffee-fueled sleepless weekend because the old system we were using was simply running out of capacity to reasonably handle the load demanded of it as the company expanded.
Are DeepSeek's claims real?
I don't know.
But if they are, then all the firms currently blowing tens of billions on huge data centers, gigawatts of power consumption and huge teams of developers are in serious trouble; they have bet on a bankrupt business model, and the billions worth of hardware bought in support of it has, under the paradigm DeepSeek has demonstrated, a resale and use value approximately equal to that of a used beer can, because the power and human resources required to run it lose money even if you could give the results away.
Time will tell, but if you think the business space -- where everything that has spoken the two letters "AI" has shot the moon in valuation over the last few years -- will not see a collapse far worse than 2000 if this proves up, you're certifiable. Even if those firms take the public-domain code and run with it, and they will if they become convinced it's real, from the standpoint of sunk cost and the zero-value expenditures made to date every one of these companies is in serious competitive and quite possibly financial trouble.
Place your bets, gentlemen.