The Landauer limit at 4.2 K is 4.019×10^-23 J (joules). So this is only a factor of 38x away from the Landauer limit.
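For anyone who wants to sanity-check that, a minimal sketch (taking the 1.4 zJ per-switch energy quoted for this chip; the exact ratio depends on which energy figure you plug in):

    import math

    k_B = 1.380649e-23      # Boltzmann constant, J/K
    T = 4.2                 # liquid-helium temperature, K

    landauer = k_B * T * math.log(2)   # minimum energy to erase one bit
    switching_energy = 1.4e-21         # ~1.4 zJ per device, per the article

    print(f"Landauer limit at {T} K: {landauer:.3e} J")                               # ~4.02e-23 J
    print(f"Switching energy / Landauer limit: {switching_energy / landauer:.0f}x")   # ~35x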
Admittedly, I haven't done much reading, but I see it is a linked page from Bremermann's Limit: https://en.wikipedia.org/wiki/Landauer%27s_principle
This applies to any irreversible computation.
IMO, the fact that it's only 38x the minimum is MIND BLOWING.
It's like if someone made a car that drives at 1/38th the speed of light.
28.4 million km per hour (i.e. 17.6 million miles per hour)
I wonder how much that speeding ticket would cost.
Disclaimer: Assuming the one-way speed of light is 300k km/s
Depends on the frame the officer is viewing you from, I suppose.
I once got one for going at about 3x the speed limit (very nice road in Brazil, broad daylight, nobody in sight, short run, and very unfortunate radar gun positioning). The policeman was impressed and joked that he would like to, but couldn't give me 3 speeding tickets instead of one.
It doesn't run at room temp. The difference is much larger than 38x. It's closer to 1000x.
No it's not. It's linearly dependent on temperature. The figure commonly stated is arbitrarily chosen to be at room temperature.
Note that the gates themselves used here are reversible, so the limit shouldn't apply to them. But the circuits built from them aren't reversible as far as I can see in the paper, so it would still apply to the overall computation.
At room temperature.
Unless the formula there is wrong, in which case my calculations would also be wrong.
Considering that "data centers alone consume 2% of the world's energy", I think it's worth it.
The trend seems to be that we get only a little bit of extra utility out of a lot of extra hardware performance.
When the developer upgrades their PC it's easier for them to not notice performance issues. This creates the situation where every few years you need to buy a new PC to do the things you always did.
"hardware giveth, and software taketh away"
Andy Grove and Bill Gates
Another reason is software upgrades, which maintain the general bloat and which are hard to control; buying new hardware is easier. That, however, is very noticeable.
On top of that, there's just "better" hardware: in a decade one can get a significantly better screen, more cores and memory, and faster storage, which makes large software tasks easier (video transcoding, big rebuilds of whole toolchains and apps, compute-hungry apps like ML...)
This is a frustrating part of recent laptops, but it doesn't have to be this way - my X230's keyboard is removable with a few screws.
I'm not sure if that's the case, but it may be we aren't looking for utility in the right places.
I have two points to comment on this matter.
Point 2: If the inefficiency arises out of more actual computation being done, that's a different story, and I AM TOTALLY A-OK with it. For instance, if Adobe Creative Suite uses a lot more CPU (and GPU) in general even though it's written in C++, that is likely because it's providing more functionality. I think even a 10% improvement in overall user experience and general functionality is worth increased computation. (For example, using ML to augment everything is wonderful, and we should be happy to expend more processing power for it.)
So like 2009 compared to 2021? Based on that, I'd say even more inefficient webshit.
And no, software efficiency isn't even the main factor. Not even close.
Just a few pointers: polling on peripherals instead of interrupts (i.e. USB vs. PS/2 and DIN) introducing input lag, software no longer running in ring-0 while being the sole process that owns all the hardware, concurrent processes and context switches, portability (and the required layers of abstraction and indirection), etc.
It's a bit cheap to blame developers while at the same time taking for granted that you can even do what you can do with modern hard- and software.
Everything comes at a price and even MenuetOS will have worse input lag and be less responsive than an Apple II, simply because you'll likely have a USB keyboard and mouse and an LCD monitor connected to it.
Big projects would pretty much kill it and the occasional crash was to be expected.
Sure, firing up VS6 on more modern hardware let it fly by comparison, but then again its features paled in comparison to those available with modern VS.
On the other hand, I don't use VS anymore, since VSCode is all I need and runs faster than VS6 back then (even on my 5 year old mid-range laptop), so no complaints there.
You could load and start an entire game on that thing (albeit from a ROM cartridge) in less time than a 2000-era PC took to just POST :D
So PCs were a regression in performance in that regard compared to 1980s home computers and micros.
Compared to my recent company laptop which needs at least 2 min. Even more before I can work productively. For Windows 10.
It's definitely not dependent on hardware. Not even on functionality (I do the same on both machines).
And you can't just make the walls reflective once the cold object gets smaller than the wavelength of the radiation. The colder the object, the longer that wavelength.
Source: I have a PhD in physics, where I used equipment cooled to 4 K.
I can see demand in areas like graphics. Imagine real-time raytracing at 8K at 100+ FPS with <10ms latency.
Just wait 10 years?
If you take those out, there is a very clear stagnation in that graph.
Look at the actual transistor sizes. In 2009 we were at 32 nm. We're now in 2021, so if transistor sizes had kept halving every two years we would be at 0.5 nm. Clearly, we are not anywhere close to that--we're off by a factor of 10, and that's only with the very latest and greatest manufacturing processes that almost no consumer chips use (not to mention that the 5nm process used by AMD is not the same as a 5nm process used by Intel). As the article itself notes:
> Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, below the pace predicted by Moore's law.
Of course transistor companies are happy to claim that they are secretly keeping pace, but in terms of commercially available microprocessors it is unquestionably false. Anyone using the "doubling every two years" approximation to decide how much more computing power is available now than 10 years ago, or how much more will be available 10 years in the future, is not going to arrive at correct figures.
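A quick back-of-the-envelope version of that feature-size argument (treating "doubling every two years" as halving the linear feature size, which is itself a simplification, and using the marketing node names):

    start_year, start_nm = 2009, 32    # roughly where mainstream processes were
    end_year = 2021

    halvings = (end_year - start_year) / 2
    projected_nm = start_nm / 2 ** halvings

    print(f"Projected feature size in {end_year}: {projected_nm} nm")   # 0.5 nm
    # Actual leading-edge nodes in 2021 are marketed as ~5 nm, i.e. off by ~10x.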
This can equate to a faster chip because you now can do more at once. However, we hit the frequency limits a while ago for silicon. Particularly, parasitic capacitance is a huge limiting factor. A capacitor will start to act like a short circuit the faster your clock is.
Moore's law has a little more life, although the rate seems to have slowed. However, at the end of the day it can't go on forever: you can only make something so small. One gets to a point where there are so few atoms that constructing something useful becomes impossible. Current transistors are finFETs because the third dimension gives them more atoms to reduce leakage current, compared to the relatively planar designs on older process nodes. However, these finFETs still take up less area on a die.
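To illustrate the parasitic-capacitance point: a capacitor's impedance falls as 1/(2πfC), so a fixed parasitic capacitance looks more and more like a short as the clock goes up. (The 1 fF below is just an illustrative guess, not a real process figure.)

    import math

    C_parasitic = 1e-15   # 1 fF, illustrative parasitic capacitance

    for f in (1e9, 10e9, 100e9):                       # 1, 10, 100 GHz
        X_c = 1 / (2 * math.pi * f * C_parasitic)      # capacitive reactance, ohms
        print(f"{f/1e9:>5.0f} GHz -> |Z| = {X_c/1e3:8.1f} kOhm")   # 159, 15.9, 1.6 kOhm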
UIs will have physically based rendering and interaction.
Before this happens, I recommend having your exit strategy for the industry, living off whatever profits you made working as a developer during the early 21st century.
What’s truly more of a threat is AI-aided programming, if that ever becomes a thing. Again, I’m not worried. The gap between telling an AI “do something that makes me $1 billion” and “write a function that has properties x/y/z” or “do this large refactor for me and we’ll work together on any ambiguous cases” is enormous. So you’ll always have a job - you’ll just be able to do things you couldn’t in the past. (It’s questionable whether an AI can be built that generates programs from vague/poorly defined specs from product, or even that generates those specs in the first place.)
As an obvious counter example to your theory, we have CPUs that are probably 10000x more powerful than in 1980 (actually more if you consider they have processing technologies that didn’t even exist back then like GPUs and SIMD). The software industry is far larger and devs make more individually.
Technically SIMD and GPUs existed back then, but in a much more immature form; today they are far more powerful, cheaper, and more widespread than what was available in the 80s.
In my own experience performance optimization is an important but infrequent part of my job. There are other skills an elite programmer brings to the table like the ability to build a mental model of a complex system and reason about it. If downward pressure on wages occurs I think it will be for another reason.
But architecting complex systems so that they are maintainable, scalable, and adaptable... there's not gonna be enough cheap computation to solve that problem and omit top talent for a long time.
If you gave me a processor that could run instructions instantly, my product's release would at best be brought forward 1-2 weeks.
The largest efforts in programming are to do with translating requirements into code. Efficiency is a part of that, and there are obviously problems where it dominates, but there are many other difficulties even when it isn't.
Once this can be done at the temperature of liquid nitrogen, that will be a true revolution. The difference in cost of producing liquid nitrogen and liquid helium is enormous.
Alternatively, such servers could be theoretically stored in the permanently shaded craters of the lunar South Pole, but at the cost of massive ping.
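For a sense of how massive, using the average Earth-Moon distance (the real figure varies over the orbit):

    EARTH_MOON_KM = 384_400       # average distance
    C_KM_PER_S = 299_792.458      # speed of light in vacuum

    one_way_s = EARTH_MOON_KM / C_KM_PER_S
    print(f"One-way light delay: {one_way_s:.2f} s")      # ~1.28 s
    print(f"Round-trip ping:     {2*one_way_s:.2f} s")     # ~2.56 s, before any processing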
Soon I realized I’d need the computers’ clocks to be as in-sync as possible, which in turn requires one-way latency calculations. I spent an hour diagramming with pencil and paper until I convinced myself it was mathematically impossible or I wasn’t clever enough to find the solution.
Looking back, I was more worried than I should’ve been about this (wired latencies are usually far less than 5ms, which for sound is ~1.5m) and less worried than I should’ve been about the fact that thunder isn’t anything close to a point source of sound.
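The back-of-the-envelope behind that 5 ms figure (using ~343 m/s for sound in air at room temperature; the ~1.5 m above is the same ballpark):

    speed_of_sound = 343.0   # m/s in air at ~20 C
    skew = 0.005             # 5 ms of clock skew / one-way latency

    position_error = speed_of_sound * skew
    print(f"Position error from {skew*1000:.0f} ms of skew: ~{position_error:.1f} m")   # ~1.7 m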
Latency caused by the speed of light is a tangible and impactful factor in our lives!
It's amazing that something so incomprehensible (3x10^8 m/s) can actually be experienced.
A quick Google search yields: $3.50 for 1 L of He vs $0.30 for 1 L of N2. So liquid helium is roughly 10 times more expensive.
Plus, liquid helium is produced as a byproduct of some natural gas extraction. If you needed volumes beyond that production, which seems likely if you wanted to switch the world's data centers to it, you'd be stuck condensing it from the atmosphere, which is far more expensive than collecting it from natural gas. I haven't done the math. I'm curious if someone else has.
Initial cost is high but running it should mostly just incur energy costs if it’s well designed.
I was able to find "1 gallon of liquid nitrogen costs just $0.5 in the storage tank". That would be about $0.13 per 1 L of N2.
Nitrogen won't get you below 10 K, tho. It's solid below 63 K (-210 °C).
You know things are getting expensive, when superconductors are rated "high temperature", when they can be cooled with LN2...
Helium is practically finite, as we can't get it from the atmosphere in significant amounts (I think fusion reactors may be a source in the future), and it's critically important for medical imaging (each MRI uses ~$35k/year worth) and research. You also can't really store it long term, which means there are limits to retrieval/recycling, too. I sincerely hope we won't start blowing it away for porn and Instagram.
On the moon you have no atmosphere, so you can't do it with radiators and fans; I guess you would have to make huge radiators which simply emit the heat away as infrared radiation?
Exactly. You can still transport the heat efficiently away from the computer using heat exchangers with some medium, but in the end radiators with a large enough surface area will be required.
Works well enough on the ISS, so I imagine it'd work just as well on the Moon.
Radiative heat loss scales with the fourth power of temperature. I don't know what temperature the ISS radiators are but suppose they are around 300K. Then I think the radiative surface to keep something cool at 10K would need to be 30^4, or 810000 times larger per unit heat loss. So realistically I think you would need some kind of wacky very low temperature refrigeration to raise the temperature at the radiator, and then maybe radiate the heat into the lunar surface.
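A minimal sketch of that scaling using the Stefan-Boltzmann law P = εσAT⁴ (treating both radiators as ideal black bodies and ignoring the background temperature they radiate into, which only makes the cold case worse):

    SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

    def radiated_power(area_m2, temp_k, emissivity=1.0):
        # Ideal radiated power from a surface at temp_k
        return emissivity * SIGMA * area_m2 * temp_k ** 4

    p_hot = radiated_power(1.0, 300)    # 1 m^2 radiator at ~300 K
    p_cold = radiated_power(1.0, 10)    # 1 m^2 radiator at 10 K

    print(f"300 K radiator: {p_hot:.0f} W/m^2")                    # ~459 W/m^2
    print(f"10 K radiator:  {p_cold:.2e} W/m^2")                   # ~5.7e-4 W/m^2
    print(f"Area ratio for equal power: {p_hot / p_cold:,.0f}x")   # (300/10)^4 = 810,000x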
And that's only if you use the soil for cooling, which is a non-renewable resource. If you use radiators, then you can put them on a satellite instead, with much lower ping.
Who volunteers as the original?
During stable price periods, the power/performance of cryptocurrency miners runs right up to the edge of profitability, so someone who can come in at 20% under that would have a SIGNIFICANT advantage.
> in principle, energy is not gained or lost from the system during the computing process
Landauer's principle (from Wikipedia):
> any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase in non-information-bearing degrees of freedom of the information-processing apparatus or its environment
Where is this information going, inside of the processor, if it's not turned into heat?
However, in order to read out the output of computation, or to clear your register to prepare for new computation, you do generate heat energy and that is Landauer's principle.
In other words, you can run a reversible computer back and forth and do as many computations as you want (imagine a perfect ball bouncing in a frictionless environment), as long as you don't read out the results of your computation.
A NOT gate is reversible, and you can create reversible versions of AND and OR by adding some wires to store the inputs.
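A toy version of that in code: a Toffoli (controlled-controlled-NOT) gate is its own inverse, and with the target bit initialized to 0 it computes AND while keeping the inputs around, so nothing is erased. (This is just the standard textbook construction, not anything specific to the paper.)

    def toffoli(a, b, c):
        # Reversible CCNOT: flips c iff a and b are both 1
        return a, b, c ^ (a & b)

    # AND without erasing anything: keep the inputs, put the result in a fresh 0 bit.
    a, b = 1, 1
    a, b, out = toffoli(a, b, 0)
    print(out)                                # 1 == a AND b

    # Applying the gate again undoes it -- the computation runs backwards.
    print(toffoli(a, b, out) == (1, 1, 0))    # True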
I'm having trouble interpreting what exactly that means though.
After reading the output, you run the entire computation backwards (it's reversible after all) until the only state in the system is the original input and initial state.
Then you can change the input to do another calculation.
If implemented "perfectly", heat is produced when changing the input and to read the output.
Reading the output will disturb it so the computation won't perfectly reverse in practice, however if the disturbance is small, most of the energy temporarily held in circuit is supposed to bounce back to the initial state, which can then be reset properly at an appropriately small cost.
Quantum computation is very similar to all of this. It too is reversible, and costs energy to set inputs, read outputs and clear accumulated noise/errors. The connection between reversible computation and quantum computation is quite deep.
Someone, somewhere, will adopt a Ting-type model where you pay for your compute per cycle, or per trillion cycles or whatever, with a small connection fee per month. It'll be broken down into some kind of easy-to-understand gibberish bullshit for the normies.
In short, it'll create another circle of Hell for everyone - at least initially.
The upfront costs of a cryocooler, spread out over the usable lifetime of the cryocooler (they're mechanical, they wear out), vastly exceed the cost of electricity you save by switching from CMOS to JJs. Yes, I did the math on this. And cryocoolers are not following Moore's Law. Incredibly, they're actually becoming slightly more expensive over time after accounting for inflation. There was a LANL report about this which I'm trying to find, will edit when I find it. The report speculated that it had to do with raw materials depletion.
All of the above I'm quite certain of. I suspect (but am in no way certain) that the energy expended to manufacture a cryocooler also vastly exceeds the energy saved over its expected lifetime as a result of its use. That's just conjecture however, but nobody ever seems to address that point.
 Including non-conductors... but you need a lot of voltage! <g>
This comment might get buried but I'd just like to mention a few things:
- Indeed, we took into account the additional energy cost of cooling in the "80x" advantage quoted in the article. This is based on a cryocooling efficiency of 1000 W at room temperature per watt dissipated at cryotemps (4.2 K). This 1000 W/W coefficient is commonly used in the superconductor electronics field. The switching energy of 1.4 zJ per device is quite close to the Landauer limit as mentioned in the comments, but this assumes a 4.2 K environment. With cryocooling, the 1000x factor brings it to 1.4 aJ per device. Still not bad compared to SOTA FinFETs (~80x advantage), and we believe we can go even lower with improvements in our technology as well as in cryocooling technology. The tables in Section VI of the published paper (open-access btw) go on to estimate what a supercomputer using our devices might look like using helium refrigeration systems commercially available today (which have an even more efficient ~400 W/W cooling efficiency). The conclusion: we may easily surpass the US Department of Energy's exascale computing initiative goal of 1 exaFLOPS within a 20-MW power budget, something that's been difficult using current tech (although HP/AMD's El Capitan may finally get there, we may be 1-2 orders of magnitude better assuming a similar architecture). (A quick numeric sanity check of these figures is sketched after this list.)
- Quantum computers require very very low temps (0.015 K for IBM vs the 9.3 K for niobium in our devices). With the surge in superconductor-based quantum computing research, we expect new developments in cryocooling tech which would be very helpful for us to reduce the "plug-in" power.
- Our circuits are adiabatic but they're not ideal devices hence we still dissipate a tiny bit of energy. We have ideas to reduce the energy even further through logically and physically reversible computation. The trade-off is more circuit area overhead and generation of "garbage" bits that we have to deal with.
- The study featured only a prototype microprocessor and the main goal was to demonstrate that these AQFP devices can indeed do computation (processing and storage). The experience of developing this chip helped reveal the practical challenges in scaling up, and our new research directions are aggressively targeting them.
- The circuits are also suitable for the "classical" portion of quantum computing as the controller electronics. The advantage here is we can do classical processing close to the quantum computer chip which can help reduce the cable clutter going in/out of the cryocooling system. The very low-energy dissipation makes it less likely to disturb the qubits as well.
- We also have ideas on how to use the devices to build artificial neurons for AI hardware, and how we can implement hashing accelerators for cryptoprocessing/blockchain. (all in the very early stages)
- Other superconductor electronics showed super fast 700+ GHz gates but the power consumption is through the roof even before taking into account cooling. There are other "SOTA" superconductor chips showing more Josephson junction devices on a chip... many of those are just really long shift-registers that don't do any meaningful computation (useful for yield evaluation though) and don't have the labyrinth of interconnects that a microprocessor has.
- There are many pieces to think about: physics, IC fabrication, analog/digital design, architecture, etc. to make this commercially viable. At the end of the day, we're still working on the tech and trying to improve it, and we hope this study is just the beginning of something exciting.
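A quick sanity check of the wall-plug numbers from the first bullet above, taking the 1.4 zJ per switch and the 1000 W/W and 400 W/W cooling overheads at face value (the CMOS figure at the end is just back-solved from the quoted ~80x advantage, not an independent number):

    switch_energy_4k = 1.4e-21    # J per device at 4.2 K, per the paper
    overhead_generic = 1000       # W at room temp per W removed at 4.2 K
    overhead_commercial = 400     # the more efficient commercial He systems mentioned

    wall_plug_generic = switch_energy_4k * overhead_generic         # 1.4e-18 J = 1.4 aJ
    wall_plug_commercial = switch_energy_4k * overhead_commercial   # ~0.56 aJ

    finfet_estimate = wall_plug_generic * 80    # back-solved from the ~80x advantage

    print(f"Wall-plug energy/switch (1000 W/W): {wall_plug_generic:.2e} J")
    print(f"Wall-plug energy/switch (400 W/W):  {wall_plug_commercial:.2e} J")
    print(f"Implied SOTA FinFET energy/switch:  {finfet_estimate:.2e} J")   # ~1.1e-16 J, i.e. ~0.1 fJ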
In practice it will need to interface to external memory in order to perform (more) useful work.
Would there be any problems fashioning memory cells out of Josephson junctions, so that the power savings can carry over to the system as a whole?
Modules that are a few kilobytes in size have already been tested.
Even taking cryogenic operation into account, memory of this type consumes 10-100x less power than CMOS technology at roughly the same clock speed 
This is a very active field of research and there's a plethora of different approaches.
My guess is that 20 years from now there could be three types of computing:
• cryogenic quantum-computing for specialised tasks
• cryogenic ultra-high-performance computing
• high temperature computing (traditional CMOS-based)
with the first two not being available to consumers or small companies. Maybe it's going to be like in the 1970s and early 1980s when mainframes ruled supreme and you would rent these machines or compute time thereon.
People are accustomed to "the cloud" already, so it's not really a regression going back to centralised computing for a bit.
(Haven't read the article, or have any expertise in this field, so I might be wrong)
But that heat has to go somewhere. When you have rooms full of them the power and cooling issues become key in a way that doesn't matter when it's just a PC in your room.
AQFP logic operates adiabatically which limits the clock rate to around 10 GHz in order to remain in the adiabatic regime. The SFQ logic families are non-adiabatic, which means they are capable of running at extremely fast clock rates as high as 770 GHz at the cost of much higher switching energy.
I'd think that should be enough to get quite a few layers, sure maybe some minimal space between layers for cooling, but radically less than the non-superconducting designs.
Edit: I assume a superconducting microprocessor would use a strategy similar to the AI monolith in . Just fuse off and route around errors on a contiguous wafer and distribute the computation to exploit the dark zones for heat dissipation.
The thickness of the required insulating barrier presents a hard lower limit to the structure size.
The actual value of that limit depends on the material used and the particular implementation of the Josephson junction, of which there seems to be quite a few.
So the limit depends on how thin the barrier can be made.
> The 2.5 GHz prototype uses 80 times less energy than its semiconductor counterpart, even accounting for cooling
> Since the MANA microprocessor requires liquid helium-level temperatures, it’s better suited for large-scale computing infrastructures like data centers and supercomputers, where cryogenic cooling systems could be used.
 - https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=929...
Can anyone give an expert opinion on the difficulty of cooling in space?
Surely due to the massively reduced power usage it would be easier to keep it cool relative to traditional compute
So no, it probably wouldn't be any easier in space.
I miss old school BioWare.
I just remember that that particular planet didn't have much of an atmosphere, so it wasn't good for dumping excess heat from ships.
A much more efficient method of heat removal would just be to have radiating fins. So you're still converting heat into photons, but you lose much less energy in the process, because the photons you're radiating have a higher entropy.
That would save money on the computing power as well as the mining, transportation of raw materials, refining, transportation of refined materials, manufacturing, transportation of finished goods and the whole retail chain.
Unless we achieve room-temperature superconducting processors, this will only benefit data centres, most of whose power is used to sell stuff. Does anyone actually think that the savings will be passed on to the consumer or that business won't immediately eat up the savings by using eighty times more processing power?
Hey, now we can do eighty times more marketing for the same price!