In an experiment that began in January, servers, networking gear and storage systems have been running in a simple shed without failure.
This experiment is giving David Filas, a data center engineer at the healthcare provider Trinity Health, the ammunition he needs to argue that IT equipment is a lot tougher than most think.
Through winter, spring and summer, the decommissioned systems have kept running despite big swings in temperature and humidity. In fact, their uptime has been better than what Google and Amazon have delivered so far this year.
Filas wants to convince IT administrators at his company, which runs 47 hospitals and other health care facilities, that it's OK to raise the temperature in data centers. But the IT staff has been reluctant to do so, he says.
The project was inspired by something Microsoft did a few years back. From November 2007 to June 2008, Microsoft employees ran five Hewlett-Packard servers in a tent and reported "zero failures or 100% uptime."
Filas is running his equipment in a generator shed at the healthcare firm's headquarters in Novi, Michigan, a suburb of Detroit.
A block heater on the generator provides some warmth, but otherwise "it's more or less exposed to the same temperature and humidity conditions as the outdoors," said Filas, who presented his work at the Afcom data center conference last week in Orlando.
The temperature inside the shed ranges from 31 degrees Fahrenheit to nearly 105 degrees. The relative humidity has ranged from nearly 8% to about 83%. But the door of the shed has been accidentally left open a few times, once when the temperature reached 5 below zero.
Filas even tossed sawdust in the shed to make a point about the ability of these systems to handle dust. The dust issue pops up when arguments are made for using outside air to cool data centers, he said.
"I'm trying to dispel the myth that the data center has to be a clean room because it doesn't; today's electronics are extremely resilient," said Filas.
The equipment that is running in this experiment was pulled out of production three to four years ago. There are about a dozen pieces of equipment in this test, including HP servers, Cisco switches, and an IBM disk array.
The plan had been to keep these systems running until January, but Filas said he may extend that and add some workloads to the systems to address a criticism that it isn't a true test. He is considering networking the equipment and putting it under a heavy load.
Filas said he didn't expect the systems to fail, but he was nonetheless surprised at how well the mechanical components have held up, the hard drives in particular.