Microsoft is currently running an interesting set of hardware experiments. The company is taking a souped-up shipping container stuffed full of computer servers and submerging it in the ocean. The most recent round is taking place near Scotland’s Orkney Islands, and involves a total of 864 standard Microsoft data-center servers. Many people have impugned the rationality of the company that put Seattle on the high-tech map, but seriously – why is Microsoft doing this?
There are several reasons, but one of the most important is that it is far cheaper to keep computer servers cool when they’re on the seafloor. This cooling is not a trivial expense. Precise estimates vary, but currently about 5 percent of all energy consumption in the U.S. goes just to running computers – a huge cost to the economy as a whole. Moreover, all that energy used by those computers ultimately gets converted into heat. This results in a second cost: that of keeping the computers from melting.
I use a custom watercooling loop to keep my processor and videocard cool, but aside from size and scale, datacenters struggle with the exact same problem – computers generate a ton of heat, and that heat needs to go somewhere.
I’m sure that heat could be used more productively in the Orkneys. It’s not as if they need air conditioning half the year just to stay cool. They need heat for their buildings and for their water (people love warm water for bathing and showering, and heating water is expensive), and if they’ve got so much heat to spare, they could use it to keep their roads and pavements dry in the winter.
Instead, they choose to dump heat into the ocean (potentially harming the local sea life), and then use more energy for extra heat.
And there are data centres set up just to mine bitcoin. There’s far too much wrong with this world.
Bitcoin is such a ridiculous pile of idiocy…
I read about this a while back, and it’s an interesting exercise. Some aspects of it make sense, others don’t.
But I suspect there are huge efficiencies to be gained in cooling equipment inside traditional data centers without going underwater. I’m imagining a design where every cabinet has a “heatsink backplane” of sorts. And every server that goes into a rack is designed to “plug in” to that backplane in such a way that its heat-generating components interface (thermally) with the backplane to transfer heat away. Then a liquid cooling system could carry the heat away efficiently. It seems like this is bound to be MUCH more efficient than the traditional approach of cooling the entire volume of airspace inside the data center.
[Edit] Apparently this is already on the rise, but I was unaware:
https://www.datacenterknowledge.com/power-and-cooling/five-reasons-d…
I don’t know anything about the specifics of the designs being used. But it seems like the creation of standards for liquid cooling the data center (and adoption of those standards) would be necessary for this to become commonplace.
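For a rough sense of why moving heat into a liquid at the rack (as in the backplane idea above) should beat conditioning the whole room’s air, here’s a back-of-envelope comparison using textbook property values; nothing here comes from any particular vendor’s design:

# Back-of-envelope: how much heat a cubic metre of water can carry away
# versus a cubic metre of air, per degree of temperature rise.
rho_water, cp_water = 1000.0, 4186.0   # kg/m^3, J/(kg*K)
rho_air,   cp_air   = 1.2,    1005.0   # kg/m^3, J/(kg*K) at ~20 C

vol_heat_water = rho_water * cp_water  # J per m^3 per K
vol_heat_air   = rho_air   * cp_air

print(f"water: {vol_heat_water:.3g} J/(m^3*K)")
print(f"air:   {vol_heat_air:.3g} J/(m^3*K)")
print(f"ratio: {vol_heat_water / vol_heat_air:.0f}x")  # roughly 3500x

Per unit volume, water soaks up on the order of a few thousand times more heat than air for the same temperature rise, which is the intuition behind plumbing coolant straight to the hot components instead of blowing cold air at them.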
Considering that it seems that most of the major server providers are all designing their own systems anyway, standardization is less important.
Most of these companies operate at a scale that it’s efficient for them to design and build (have built) custom infrastructure without, necessarily, regards to standards.
Considering that they’re building data centers from the dirt up, they can have impact on all aspects of the data center.
A more interesting question regarding the Datacenter at the Bottom of the Sea is: what is its operational life span? Barring catastrophic failure (i.e. a leak), this system will get no maintenance (I assume). At some point, the internals will have failed enough that they yank the thing up, jet wash off the barnacles, and recycle the components. Like a battery, its life will fade over time, notably as boards and storage fail.
Just curious what the productive life span is plotted at. 5 years?
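Just to make the “fading battery” picture concrete, here’s a toy sketch; the failure rate and retirement threshold are pure assumptions on my part, not anything Microsoft has published:

# Rough sketch of how a sealed, zero-maintenance fleet of 864 servers decays
# over time. The annualized failure rate (AFR) and retirement threshold are
# made-up numbers for illustration.
servers = 864
afr = 0.05          # assumed: 5% of remaining servers fail each year
retire_at = 0.70    # assumed: pull the pod once fewer than 70% still work

year, alive = 0, servers
while alive / servers >= retire_at:
    year += 1
    alive = servers * (1 - afr) ** year
    print(f"year {year}: ~{alive:.0f} servers still running")
# With these made-up numbers the pod drops below 70% capacity around year 7.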
As an aside to Microsoft sinking their datacenter, anybody remember these?
https://en.wikipedia.org/wiki/Sun_Modular_Datacenter
In taking the Alan Turing path, we decided that our computing is logic based. Neural networks aren’t; computationally, they’re something fresh.
Because they are fancy resistors ™
The issue is that performance per watt in CPUs (vs GPUs) in particular seems to be flat in the last few years.
http://www.karlrupp.net/wp-content/uploads/2013/06/gflops-per-watt-…
As we have crammed more semiconductor components onto a wafer, the connecting wires have gotten smaller and therefore have higher resistance. This means they generate more heat for the same data activity. This is also why CPU clock speed has not kept up with Moore’s Law in recent years.
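To put some toy numbers on that: resistance scales as R = rho*L/A, and resistive heating as P = I^2*R, so shrinking the cross-section pushes both up. The values below are illustrative only, not real interconnect dimensions:

# Toy illustration: wire resistance R = rho * L / A, heat P = I^2 * R.
rho_cu = 1.7e-8     # resistivity of copper, ohm*m (bulk value, illustrative)
L = 1e-3            # 1 mm of wire
I = 1e-3            # 1 mA of signal current

for area in (1e-12, 0.5e-12, 0.25e-12):   # cross-section in m^2, shrinking
    R = rho_cu * L / area
    P = I**2 * R
    print(f"A = {area:.2e} m^2 -> R = {R:.1f} ohm, heat = {P*1e6:.1f} microW")
# Halving the cross-sectional area doubles the resistance, and with it the
# I^2*R heat for the same signal current.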
The brain’s computing per watt is off the scale at maybe 10^16 GFLOPS per watt vs. 10 for a conventional CPU, but with very different tech, and it’s hopeless at high-speed serial computation.
https://aiimpacts.org/brain-performance-in-flops/
https://hypertextbook.com/facts/2001/JacquelineLing.shtml
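Taking those two figures at face value (they’re very rough estimates from the links above), the gap in energy per unit of computation looks like this:

# 1 GFLOPS per watt = 1e9 floating-point operations per joule.
flops_per_joule_cpu   = 10   * 1e9    # ~10 GFLOPS per watt
flops_per_joule_brain = 1e16 * 1e9    # ~10^16 GFLOPS per watt (rough estimate)

workload = 1e18   # an arbitrary 10^18 floating-point operations

print(f"CPU:   {workload / flops_per_joule_cpu:.0e} J")    # ~1e8 J, roughly 28 kWh
print(f"brain: {workload / flops_per_joule_brain:.0e} J")  # ~1e-7 J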
We will need both a hardware and a programming paradigm shift to fix this, IMO. Meanwhile they should skip the Orkneys and head to Iceland for volcanoes and glaciers.
People have wanted that paradigm shift since forever. The alternatives aren’t working and in many cases are less efficient than the standard stored program computer/von Neumann design.
Why is essentially nobody talking about trying to find ways to recapture this energy?
I mean, there are some companies that are actually using computers as heaters (Cray did this with their old showroom building, IBM offers it for some of their big systems via HVAC integration, and I’m pretty sure there’s a company somewhere in Scandinavia that’s paying people to use their computers as space heaters), but nobody is talking about the possibility of doing anything else with all this energy other than heating or finding a way to get rid of it. It would be interesting to see how much even a 10% efficient energy recapture mechanism might save a big datacenter in terms of power consumption.
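As a back-of-envelope on that last point, assuming a 10 MW facility running around the clock and paying $0.10/kWh (all made-up but plausible numbers):

# Rough value of a 10%-efficient energy recapture scheme. The load, uptime,
# and electricity price are assumptions, not data from any real datacenter.
it_load_mw = 10.0          # assumed average IT load; essentially all becomes heat
recapture_efficiency = 0.10
price_per_kwh = 0.10       # assumed $/kWh

hours_per_year = 8760
recovered_mwh = it_load_mw * hours_per_year * recapture_efficiency
value = recovered_mwh * 1000 * price_per_kwh

print(f"recovered energy: {recovered_mwh:,.0f} MWh/year")   # 8,760 MWh
print(f"rough value:      ${value:,.0f}/year")               # ~$876,000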
It has to do with the laws of thermodynamics, and with source/sink/output entropy. You need a large temperature differential between your energy source and sink to efficiently pump out low-entropy energy such as electricity. Warm water from cooling chips, likely below 80C, is far too cold for this. To put it in perspective, it is probably cooler than the leftover condensed steam from a turbine in a power plant.
Conceivably, you could boil something like freon with the hot water, drive a generator with the expanding gas, and then cool it with the cold water to condense it again, but the efficiency would be extremely low. You would produce large quantities of tepid water from cooling the freon and just a little electrical power as useful output.
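To put a number on “far too cold”: the Carnot limit, eta = 1 - T_cold/T_hot with temperatures in kelvin, caps the best possible conversion efficiency, and a real low-grade-heat cycle only gets a fraction of that. The temperatures below are illustrative:

# Carnot limit for 80 C coolant rejecting heat to 10 C water.
t_hot  = 80 + 273.15    # warm water off the chips, K
t_cold = 10 + 273.15    # seawater / cooling water, K

eta_carnot = 1 - t_cold / t_hot
print(f"Carnot limit: {eta_carnot:.1%}")          # about 20%
print(f"realistic:    {0.4 * eta_carnot:.1%}")    # maybe 8% in a real low-grade-heat cycle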
However, for home heating, the output is high-entropy 20C air, and as you may intuitively know, the process in that case is quite efficient. Flowing 80C water through a radiator heats a home quite well.
Thermodynamics suck, but they are inescapable in this universe. Maybe in parallel universes, should they exist, they could be more favorable. Or not, I don’t hope to ever understand.