Linked by Thom Holwerda on Fri 5th Oct 2018 18:24 UTC
Hardware, Embedded Systems

Microsoft is currently running an interesting set of hardware experiments. The company is taking a souped-up shipping container stuffed full of computer servers and submerging it in the ocean. The most recent round is taking place near Scotland's Orkney Islands, and involves a total of 864 standard Microsoft data-center servers. Many people have impugned the rationality of the company that put Seattle on the high-tech map, but seriously - why is Microsoft doing this?

There are several reasons, but one of the most important is that it is far cheaper to keep computer servers cool when they're on the seafloor. This cooling is not a trivial expense. Precise estimates vary, but currently about 5 percent of all energy consumption in the U.S. goes just to running computers - a huge cost to the economy as a whole. Moreover, all that energy used by those computers ultimately gets converted into heat. This results in a second cost: that of keeping the computers from melting.

I use a custom watercooling loop to keep my processor and video card cool, and aside from the difference in size and scale, datacenters struggle with the exact same problem - computers generate a ton of heat, and that heat needs to go somewhere.

Wasteful
by No it isnt on Fri 5th Oct 2018 19:06 UTC
No it isnt
Member since:
2005-11-14

I'm sure that heat could be used more productively in the Orkneys. It's not as if they need air conditioning half the year just to stay cool. They need heat for their buildings and their water (people love warm water for bathing and showering, and heating water is expensive), and if they've got that much heat to spare, they could use it to keep their roads and pavements dry in the winter.

Instead, they choose to dump heat into the ocean (potentially harming the local sea life), and then use more energy for extra heat.

And there are data centres set up just to mine bitcoin. There's far too much wrong with this world.

Reply Score: 14

RE: Wasteful
by CaptainN- on Mon 8th Oct 2018 19:27 UTC in reply to "Wasteful"
CaptainN- Member since:
2005-07-07

Bitcoin is such a ridiculous pile of idiocy...

Reply Score: 3

monkey7m3
Member since:
2016-09-16

I read about this a while back, and it's an interesting exercise. Some aspects of it make sense, others don't.

But I suspect there are huge efficiencies to be gained in cooling equipment inside traditional data centers without going underwater. I'm imagining a design where every cabinet has a "heatsink backplane" of sorts. And every server that goes into a rack is designed to "plug in" to that backplane in such a way that its heat-generating components interface (thermally) with the backplane to transfer heat away. Then a liquid cooling system could carry the heat away efficiently. It seems like this is bound to be MUCH more efficient than the traditional approach of cooling the entire volume of airspace inside the data center.
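To give a rough sense of what "MUCH more efficient" could mean in practice, here is a back-of-envelope comparing cooling overhead at different PUE figures. The PUE values and the IT load are illustrative assumptions, not measurements of any particular facility:

# Back-of-envelope: total facility power for the same IT load at different
# PUE (power usage effectiveness) figures. The PUE values are illustrative
# assumptions: roughly 1.6 for conventional room-air cooling, roughly 1.1
# for direct liquid cooling; real facilities vary widely.

it_load_kw = 1_000  # assumed IT load of the data center

for label, pue in [("room-air cooling", 1.6), ("direct liquid cooling", 1.1)]:
    facility_kw = it_load_kw * pue
    overhead_kw = facility_kw - it_load_kw
    print(f"{label:22s} PUE {pue}: {facility_kw:,.0f} kW total, "
          f"{overhead_kw:,.0f} kW cooling/overhead")

Under those assumed numbers, moving from room-air cooling to direct liquid cooling would cut the non-IT overhead from 600 kW to 100 kW for the same workload.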

[Edit] Apparently this is already on the rise, but I was unaware:

https://www.datacenterknowledge.com/power-and-cooling/five-reasons-d...

I don't know anything about the specifics of the designs being used. But it seems like the creation of standards for liquid cooling the data center (and adoption of those standards) would be necessary for this to become commonplace.

Edited 2018-10-05 19:41 UTC

Reply Score: 3

whartung Member since:
2005-07-06

I don't know anything about the specifics of the designs being used. But it seems like the creation of standards for liquid cooling the data center (and adoption of those standards) would be necessary for this to become commonplace.

Considering that most of the major server providers seem to be designing their own systems anyway, standardization is less important.

Most of these companies operate at a scale where it's efficient for them to design and build (or have built) custom infrastructure without necessarily paying regard to standards.

Since they're building data centers from the dirt up, they can have an impact on all aspects of the data center.

A more interesting question regarding the Datacenter at the Bottom of the Sea is: what is its operational life span? Barring catastrophic failure (i.e. a leak), this system will receive no maintenance (I assume). At some point the internals will have failed enough that they yank the thing up, jet wash off the barnacles, and recycle the components. Like a battery, its life will fade over time, notably as boards and storage fail.

Just curious what the productive life span is plotted at. 5 years?
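As a rough illustration of how such a sealed, maintenance-free pod fades, here is a quick sketch. The 2% annualized failure rate is an assumption for illustration, not a Microsoft figure; only the 864-server count comes from the article:

# Rough sketch: fraction of sealed, maintenance-free servers still healthy
# after N years, assuming a constant annualized failure rate.
# The 2% rate is an assumption for illustration, not a published number.

def surviving_fraction(annual_failure_rate: float, years: int) -> float:
    """Probability a single server is still working after `years` years."""
    return (1.0 - annual_failure_rate) ** years

servers = 864          # servers in the Orkney deployment (from the article)
failure_rate = 0.02    # assumed 2% of servers fail per year

for year in range(1, 11):
    alive = servers * surviving_fraction(failure_rate, year)
    print(f"year {year:2d}: ~{alive:.0f} of {servers} servers still healthy")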

As an aside to Microsoft sinking their datacenter, anybody remember these?

https://en.wikipedia.org/wiki/Sun_Modular_Datacenter

Reply Score: 3

Architecture...
by dionicio on Sat 6th Oct 2018 00:14 UTC
RE: Architecture...
by dionicio on Sat 6th Oct 2018 00:19 UTC in reply to "Architecture..."
dionicio Member since:
2006-07-12

[Lovely Rachel]...

Reply Score: 0

Comment by p13.
by p13. on Sun 7th Oct 2018 06:57 UTC
p13.
Member since:
2005-07-10

Because they are fancy resistors (tm)

Reply Score: 2

Performance per watt
by Iapx432 on Sun 7th Oct 2018 16:29 UTC
Iapx432
Member since:
2017-09-30

The issue is that performance per watt in CPUs (vs GPUs) in particular seems to have been flat for the last few years.

http://www.karlrupp.net/wp-content/uploads/2013/06/gflops-per-watt-...

As we have crammed more semiconductor components onto a wafer, the connecting wires have gotten smaller and therefore have higher resistance. This means they generate more heat for the same data activity. This is also why CPU clock speed has not kept up with Moore's Law in recent years.

The brain's computing per watt is off the scale at maybe 10^16 GFLOPS per watt vs about 10 for a conventional CPU, but with very different tech, and it's hopeless at high-speed serial computation.

https://aiimpacts.org/brain-performance-in-flops/
https://hypertextbook.com/facts/2001/JacquelineLing.shtml
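To put that gap in perspective, here is a quick back-of-envelope. The CPU peak-throughput and TDP numbers are assumed round figures, not any specific part's specs, and the brain number is simply the estimate quoted above:

# Back-of-envelope: GFLOPS per watt for an illustrative CPU vs. the brain
# estimate quoted above. The CPU peak-GFLOPS and TDP values are assumed
# round numbers, not a specific part's datasheet figures.

cpu_peak_gflops = 1_000        # assumed peak throughput of a server CPU
cpu_tdp_watts = 100            # assumed thermal design power

cpu_gflops_per_watt = cpu_peak_gflops / cpu_tdp_watts
brain_gflops_per_watt = 1e16   # figure quoted in the comment above

print(f"CPU:   ~{cpu_gflops_per_watt:.0f} GFLOPS/W")
print(f"Brain: ~{brain_gflops_per_watt:.0e} GFLOPS/W (quoted estimate)")
print(f"Gap:   ~{brain_gflops_per_watt / cpu_gflops_per_watt:.0e}x")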

We will need both a hardware and a programming paradigm shift to fix this, IMO. Meanwhile they should skip the Orkneys and head to Iceland for volcanoes and glaciers.

Reply Score: 2

RE: Performance per watt
by Megol on Mon 8th Oct 2018 06:56 UTC in reply to "Performance per watt"
Megol Member since:
2011-04-11

The issue is that performance per watt in CPUs (vs GPUs) in particular seems to have been flat for the last few years.

http://www.karlrupp.net/wp-content/uploads/2013/06/gflops-per-watt-...

As we have crammed more semiconductor components onto a wafer, the connecting wires have gotten smaller and therefore have higher resistance. This means they generate more heat for the same data activity. This is also why CPU clock speed has not kept up with Moore's Law in recent years.

Increased resistance in conductors contributes, but it is generally a speed limiter - increased RC delay means information travels more slowly.

The real problems here are (among others):
Increased leakage currents (e.g. by quantum tunneling between adjacent conductors).
Decreased doping efficiency (a smaller number of atoms per feature makes the random dopant distribution hard to control).
Getting very close to the minimum efficient operating voltage, which has almost removed that tuning knob.

Things are still getting more efficient, but different methods have to be used than in the past. Aggressive clock-gating - removing clock signals from inactive parts - removes the dynamic power while keeping the static power (current leakage and the like). Aggressive power-gating - removing power from inactive parts - removes almost all static power too. Dynamic frequency scaling reduces dynamic power consumption in hardware that is active but not on the critical path. Mix in more advanced methods like "turbo", which can improve efficiency by increasing frequency and thus shortening the run time of something critical; more advanced versions even calculate thermal budgets per core and can clock down adjacent cores in order to boost the critical core to a higher frequency.

The easy tweaking knobs are gone but things are still improving.
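As a toy illustration of that thermal-budget "turbo" idea - not how any real power-management firmware works, and with all numbers invented - a governor can share a fixed package budget across cores, treat power-gated idle cores as free, and spend the leftover headroom boosting the busiest core:

# Toy sketch of a thermal-budget "turbo" governor: a fixed package power
# budget is shared across cores, idle cores are power-gated (modelled as
# ~0 W here), and the budget they free up is spent boosting the busiest
# core. All numbers are invented for illustration.

PACKAGE_BUDGET_W = 65.0       # assumed total package power budget
BASE_W_PER_CORE = 15.0        # assumed per-core power at base frequency
BOOST_W_PER_100MHZ = 1.5      # assumed extra watts per 100 MHz of boost
BASE_FREQ_MHZ = 3000
MAX_FREQ_MHZ = 4500

def plan_frequencies(loads):
    """Return a per-core frequency plan (MHz) given per-core load in 0.0..1.0."""
    active = [load > 0.05 for load in loads]
    # Power-gated (idle) cores draw ~0 W in this toy model.
    spent = sum(BASE_W_PER_CORE for is_on in active if is_on)
    headroom = max(0.0, PACKAGE_BUDGET_W - spent)

    freqs = [BASE_FREQ_MHZ if is_on else 0 for is_on in active]
    if any(active):
        # Spend the remaining thermal budget boosting the single busiest core.
        busiest = max(range(len(loads)), key=lambda i: loads[i])
        boost_mhz = min(MAX_FREQ_MHZ - BASE_FREQ_MHZ,
                        100.0 * headroom / BOOST_W_PER_100MHZ)
        freqs[busiest] = BASE_FREQ_MHZ + boost_mhz
    return freqs

# One busy core, three idle: the idle cores' budget is spent boosting core 0.
print(plan_frequencies([0.9, 0.0, 0.0, 0.0]))   # -> [4500.0, 0, 0, 0]
# All cores busy: little headroom left, so only a small boost for the busiest.
print(plan_frequencies([0.9, 0.8, 0.7, 0.6]))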


The brain's computing per watt is off the scale at maybe 10^16 GFLOPS per watt vs about 10 for a conventional CPU, but with very different tech, and it's hopeless at high-speed serial computation.

https://aiimpacts.org/brain-performance-in-flops/
https://hypertextbook.com/facts/2001/JacquelineLing.shtml

The brain is fuzzy and not directly comparable with a computer. One can make something close to a traditional computer but with higher efficiency by introducing imprecise calculations; in some cases one can even do that with a normal computer designed to allow reduced precision.
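A trivial way to see that precision trade-off on ordinary hardware is to sum the same data at different precisions. This only demonstrates the accuracy side of the trade-off, not an energy measurement; the data and sizes are arbitrary:

# Illustration of the reduced-precision idea above: summing the same data
# in float16 vs. float64. Lower precision needs less storage and bandwidth
# (and, on hardware with native low-precision units, less energy), at the
# cost of accumulated rounding error. This does not measure power.

import numpy as np

data = np.full(100_000, 0.1)

exact = data.astype(np.float64).sum()
approx = np.float16(0.0)
for x in data.astype(np.float16):
    approx = np.float16(approx + x)   # accumulate in 16-bit precision

print(f"float64 sum: {exact:.2f}")
print(f"float16 sum: {float(approx):.2f}  "
      f"(error: {abs(float(approx) - exact):.2f})")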

And don't forget this isn't really comparing a stored-program computer with a brain - it's a comparison of a brain with one specific implementation of a stored-program computer. There are other, much more efficient alternatives, like superconducting logic.


We will need both a hardware and a programming paradigm shift to fix this, IMO. Meanwhile they should skip the Orkneys and head to Iceland for volcanoes and glaciers.


People have wanted that paradigm shift since forever. The alternatives aren't working and in many cases are less efficient than the standard stored program computer/von Neumann design.

Reply Score: 3

The real question here...
by ahferroin7 on Mon 8th Oct 2018 12:19 UTC
ahferroin7
Member since:
2015-10-30

Is why is essentially nobody talking about trying to find ways to recapture this energy?

I mean, there are some companies that are actually using computers as heaters (Cray did this with their old showroom building, IBM offers it for some of their big systems via HVAC integration, and I'm pretty sure there's a company somewhere in Scandinavia that's paying people to use their computers as space heaters), but nobody is talking about the possibility of doing anything else with all this energy other than heating or finding a way to get rid of it. It would be interesting to see how much even a 10% efficient energy recapture mechanism might save a big datacenter in terms of power consumption.
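As a rough back-of-envelope for that 10% figure - with the facility size, electricity price, and hours of operation assumed purely for illustration:

# Back-of-envelope for the 10% recapture idea above. Facility size,
# electricity price, and hours of operation are assumed for illustration.

it_load_mw = 10.0            # assumed average IT load of a "big" datacenter
price_per_kwh = 0.10         # assumed electricity price in $/kWh
hours_per_year = 24 * 365
recapture_efficiency = 0.10  # the 10% figure from the comment

annual_kwh = it_load_mw * 1_000 * hours_per_year
recaptured_kwh = annual_kwh * recapture_efficiency
print(f"Energy drawn:     {annual_kwh:,.0f} kWh/year")
print(f"Recaptured (10%): {recaptured_kwh:,.0f} kWh/year "
      f"(~${recaptured_kwh * price_per_kwh:,.0f}/year at $0.10/kWh)")

Under those assumptions, even a 10% recapture would be worth on the order of a million dollars a year for a single large facility.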

Reply Score: 4

RE: The real question here...
by Lobotomik on Tue 9th Oct 2018 11:52 UTC in reply to "The real question here..."
Lobotomik Member since:
2006-01-03

It has to do with the laws of thermodynamics, and with source/sink/output entropy. You need a large temperature differential between your energy source and sink to efficiently pump out low-entropy energy such as electricity. Warm water from cooling chips, likely below 80C, is far too cold for this. To put it in perspective, it is probably cooler than the leftover condensed steam from a turbine in a power plant.

Conceivably, you could boil something like freon with the hot water, drive a generator with the expanding gas, and then condense it again with the cold water, but the efficiency would be extremely low. You would produce large quantities of tepid water from cooling the freon and just a little electrical power as useful output.
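The Carnot limit makes the point concrete. Taking roughly 80C coolant against a roughly 20C sink (illustrative temperatures), the theoretical ceiling is already low, and a real low-grade-heat cycle would fall well below it:

# Carnot upper bound on converting low-grade datacenter heat to electricity.
# Temperatures are illustrative: ~80 C coolant vs. ~20 C ambient/sea water.

t_hot_k = 80 + 273.15    # heat source: warm coolant water
t_cold_k = 20 + 273.15   # heat sink: ambient air or sea water

carnot_limit = 1 - t_cold_k / t_hot_k
print(f"Carnot limit: {carnot_limit:.1%}")   # ~17%; a real cycle gets far less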

However, for home heating, the output is high-entropy 20C air, and as you may intuitively know, the process in that case is quite efficient. Flowing 80C water through a radiator heats a home quite well.

Thermodynamics suck, but they are inescapable in this universe. Maybe in parallel universes, should they exist, they could be more favorable. Or not, I don't hope to ever understand.

Reply Score: 2