October 21, 2014

Carousel Cuts Energy Use In Half With In-Row Cooling and Hot Aisle Containment

Along with helping our customers with all things IT, here at Carousel Industries we are focused on staying on the cutting edge and creating best practices within our own physical infrastructure and data center. Lately much of that job falls upon Derek Herard, a convergence technician in our Rhode Island headquarters.

After 10 years as a field technician, Derek moved inside Carousel’s offices about two years ago, doing everything from building cubicles to running cabling and building out the demo room at headquarters. Along with the rest of the Carousel support group, he helps maintain facilities such as HVAC and power and works on internal projects.

One Warm Server Room

One such project cropped up back in 2010, when it was getting mighty warm in the Carousel server room. “We were using the building HVAC system to cool the room, the rooftop unit that’s really made for providing comfort air for the offices,” Herard says. “In the server room it was on full blast all the time and it was never below 82 degrees.”

Even worse, the temperature would fluctuate dramatically, by about 10 degrees every minute. When the room warmed up, the HVAC would crank up, going full bore until the temperature hit the thermostat’s set point. But it would stay there only a short time before the heat coming off the servers drove it right back up again.

That constant on/off cycling is of course not good for the health of the HVAC system – nor for the power bill.

In-Row Cooling: A Cure for Overheated Data Centers

Around that time Carousel had just learned about in-row cooling systems from its partner APC by Schneider Electric. As the name implies, such systems place cooling units in the same rows of racks that house the IT equipment they are intended to cool.

“The cooling units are the same size as our data racks, taking up one 2-foot wide space in the rack,” Herard says. And they’re the same height as the server cabinets, about 7 feet tall.

The idea behind in-row cooling is to dramatically reduce the distance the cool air has to travel to reach the IT equipment, making it far more efficient than pushing air long distances through ducts.

++++++++++++++++++++++++++++++++++++++++
Best Practices:  Download the free whitepaper, “6 Keys to Saving Energy in Your Data Center”
++++++++++++++++++++++++++++++++++++++++

Figure 1: Temperature Fluctuations with In-Row Cooling

And it works. “Now the temperature is sustained wherever we want it at all times,” Herard says. He notes the building HVAC system is no longer used to cool the server room at all. What’s more, the APC NetBotz remote monitoring system enables Herard to keep tabs on the temperature and get an alert should it fall outside the prescribed range.

Figure 1 is a screen grab from the NetBotz system, showing the temperature in the enclosure at Rhode Island headquarters consistently hovering right around 75 degrees. Compare that to Figure 2, which shows a smaller server room at a Carousel site in Conn. that doesn’t yet have in-row cooling: it clearly shows the rapid 10-degree swings that used to plague the Rhode Island site.

Carousel actually has two in-row cooling systems, one for each of its two rows of racks. One of the systems, the APC ACRP101 DX, monitors temperatures and can control humidity as well. The other, the APC ACRD501 DX, has the same cooling capacity but no humidity control, since the other unit is sufficient to control humidity in the room.

Figure 2: Temperature Fluctuations Without In-Row Cooling

Hot Aisle Containment Adds Further Data Center Efficiency

Not quite a year later, Carousel added a hot aisle containment system to make the setup even more efficient. It’s a standard data center best practice to configure rows so that the fronts of the servers face each other, as do the rears. In a data center with multiple rows of racks, that creates one aisle – the “cool aisle” – where cool air can be pumped into two rows of servers at once. Similarly, the exhaust from two adjacent rows comes out the rear, creating a “hot aisle.”

A hot aisle containment system is basically a large metal hood that covers the hot aisle and captures the air so it doesn’t circulate back into the cool aisle. Rather, it gets pumped directly back into the cooling system. While it seems counterintuitive, it actually takes less energy to cool that warm air than it does to cool fresh, outside air.

From a cooling perspective, the hot aisle containment system basically cut the size of the server room in half, Herard says. “We’re only cooling the air that needs to be cooled for the servers, the intake air,” he notes.

And it shows in the energy bill, which Herard says has been cut in half since installation of the hot-aisle containment system.

Carousel has also replicated this approach in its on-demand labs data center. “The amount of equipment in the lab is constantly fluctuating up and down depending on the types of projects we are working on,” says Herard. “Utilizing the hot aisle containment and in-row cooling best practice in our labs has resulted in the same types of savings we’ve seen in our production data center.”

If cutting your data center energy bill sounds like a good idea, contact Carousel – we can help you identify ways to make it happen. And don’t forget to download the free whitepaper, “6 Keys to Saving Energy in Your Data Center.”

3 Tips for Dealing with Increasing Data Center Density

As companies continue to virtualize every server they can find, and replace older models with high-powered blade servers, they may find they are stretching the limits of their data center infrastructure.

As IT infrastructure within data centers grows ever more dense, companies need to do three things: “Plan, plan, plan.” So says Jonathan Caserta, Physical Infrastructure Solutions Architect with Carousel Industries. “Plan years down the road if you can,” he says, especially with respect to power capacity and cooling requirements.

Providing Proper Power to a High-Density Data Center

With respect to power, one big issue is staying far enough ahead of demand so that you don’t find yourself unable to supply power to a new bunch of servers or an entire rack, Caserta says.

The standard industry default configuration for data center rack density is 3 kW per rack, mainly because the maximum power you can squeeze out of a power distribution unit tied to a 30-amp, 120-volt circuit is 2.88 kW. As you move to higher-density racks, you may need to make the jump from 120-volt circuits to 208 volts, which will give you 4.9 kW per rack.

“If you’re moving toward high-density SANs and lots of virtualization, I’d definitely recommend looking at 208,” he says. In fact, as rack density creeps above 2 kW, it’s time to consider 208-volt circuits. Keep in mind that you’ll likely need an electrician to install the new circuits, so it’s not something you can do overnight – which is where the planning comes in.
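
To make the arithmetic concrete, here is a minimal Python sketch of the circuit math behind those figures. It assumes the standard 80 percent continuous-load derating (which is what the 2.88 kW figure implies); the function and example values are just for illustration.

    # Illustrative only: usable continuous power from a branch circuit feeding
    # a rack PDU, assuming the common 80% continuous-load derating implied by
    # the 2.88 kW figure above (30 A x 120 V x 0.8).
    def usable_kw(volts: float, amps: float, derating: float = 0.8) -> float:
        """Continuous power, in kW, a circuit can deliver to a rack."""
        return volts * amps * derating / 1000.0

    print(f"30 A @ 120 V: {usable_kw(120, 30):.2f} kW")  # ~2.88 kW
    print(f"30 A @ 208 V: {usable_kw(208, 30):.2f} kW")  # ~4.99 kW, the ~4.9 kW cited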

++++++++++++++++++++++++++++++++++++++++++++
Dig Deeper:  Download the free whitepaper, “6 Keys to Saving Energy in Your Data Center”
++++++++++++++++++++++++++++++++++++++++++++

Assessing Cooling Options for High Density Data Centers

As your data center draws more power, you will also need to increase cooling capacity, Caserta notes – essentially kilowatt for kilowatt, at a minimum. But simply increasing the capacity of a room air conditioner is not likely the most efficient way to handle a high-density data center. (Listen to this podcast with Jaime Davis, Director of Physical Infrastructure at Carousel, to learn about modeling your data center cooling environment with computational fluid dynamics.)

A better approach is to use cooling systems that direct cool air more closely at the load that needs cooling.  At the high end are rear door heat exchangers (RDHx), which use chilled water fed through coils to cool high-density racks.  Other so-called “close-coupled” cooling systems likewise seek to position the cool airflow closer to the IT load that needs cooling, as opposed to room-based systems that simply cool the entire room.  Examples include the CyberRow system from STULZ and the InRow Chilled Water system from APC by Schneider Electric.

Containment Systems Increase Data Center Cooling Efficiency

To help increase the efficiency of your cooling system, consider air containment systems. Most data centers today are configured in a hot aisle/cool aisle configuration, where the front rows of the data center racks face each other, as do the back rows. Cool air is taken in through the front (the cool aisle) while hot air is expelled out the back (the hot aisle).

Containment systems are intended to keep the hot and cool air from mixing together by trapping one or the other. Hot air systems contain the hot air while cool air containment systems do the same for cool air.

“Containment works really well in high-density applications because cooling units work better with hotter return air,” Caserta says. “It might seem counter-intuitive but it’s true. If you can increase the temperature of the return air, the cooling system works more efficiently.”

The key to each of these tips is planning ahead, so that you don’t run out of power or cooling capacity as you’re expanding your data center to meet business requirements. If you need guidance with your own plan, contact the experts at Carousel for help.

Advice for a Successful Unified Communications Implementation

If you missed the Enterprise Connect event at the end of March, you can still catch up on at least one presentation, from Ed Wadbrook, vice president of applications and collaborative solutions at Carousel Industries. Carousel has posted a video with Ed delivering an early version of his presentation, titled Technology in Search of a Customer: Critical Factors to Consider for Your UC Strategy.

As we discussed in our preview of the presentation, Wadbrook cautions against adopting any unified communications technology for its own sake. You first need to ensure that it meets your business objectives and requirements.

The promise of UC pivots on improving how individuals, groups and communities perform and interact across multi-vendor environments, Wadbrook says. UC becomes the fabric that enables business process and workflows to integrate multiple forms of communications into common user, administrator and developer experiences.

Know the Business, Understand the Requirements

When Carousel arrives on the scene at a customer site, it doesn’t ask what technology the company wants; rather, it talks about the business objectives the company is trying to reach. What follows is a discussion of the existing business model, processes and challenges, and a hard look at the various groups charged with delivering on the business objectives and how they are set up. Then you’ve got to consider your existing infrastructure and systems, including whether bring-your-own-device is a consideration (even if IT doesn’t want it to be). Finally, you define key performance indicators and success metrics, Wadbrook says.

Wadbrook gives the example of an insurance company that wants a tablet-based application that will enable its 100 wealth management experts to present in front of customers. Delivering such an application requires a series of steps, including procuring the devices and registering them with the corporate directory. They also need to be brought into compliance with corporate policy with respect to virus and malware protection, firewalls and the like.

Next is a discussion with the end users around what kind of applications they want and need. That will likely entail some sort of video application, and one that needs to work not only point-to-point with other tablets, but with the company’s room-based and desktop systems. Another issue is where in the world the applications will be used and how employees plan to connect.

“Now you start to see the physical demand on the infrastructure,” Wadbrook says. Say there’s an internal conference at the company and 100 of these tablet-toting execs show up, expecting to conduct videoconferences over the corporate network. The group may well overwhelm the wireless network and adversely affect performance for all other employees.
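
To see why, a rough capacity estimate helps. The Python sketch below is strictly back-of-envelope; the per-call bitrate and usable per-access-point throughput are assumptions for illustration, not figures from Wadbrook’s talk.

    import math

    # Back-of-envelope Wi-Fi sizing for the conference scenario above.
    # Both numbers below are illustrative assumptions, not from the talk.
    MBPS_PER_VIDEO_CALL = 1.5     # assumed bitrate of one tablet video call
    USABLE_MBPS_PER_AP = 40.0     # assumed real-world throughput per access point

    def access_points_needed(concurrent_callers: int) -> int:
        """Rough count of APs needed just to carry the video traffic."""
        demand_mbps = concurrent_callers * MBPS_PER_VIDEO_CALL
        return math.ceil(demand_mbps / USABLE_MBPS_PER_AP)

    print(access_points_needed(100))  # 150 Mbps of video -> 4 dedicated APs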

Implementing UC Requires a Solutions-based Approach

The point is, companies need to have a full understanding of how UC technology will be used in their company before they even think about which technologies or products they will use. That’s because UC touches so many applications, people and issues; if you take a product-specific approach, you’re likely to miss many of them.

The insurer that wanted to implement tablets is a case in point, Wadbrook says. “That $499 device could require thousands and thousands and thousands of dollars of infrastructure support,” he says.

To download the PowerPoint deck from the presentation, click here.

Carousel Partner APC Settles Debate Over Hot-aisle/Cold-aisle Containment

A debate has raged in data center circles for years now over whether hot aisle or cold aisle containment systems are the best bet in terms of improving data center cooling efficiency.  (For proof, see this 2008 article from SearchDataCenter.com.)

Well our friends at APC by Schneider Electric, a Carousel Industries partner, have come up with a definitive answer, as outlined in this blog post:

Here at APC Schneider Electric we did some analysis and found that, while both strategies offer energy savings, hot-aisle containment can provide 40% more savings. And while it can be difficult to retrofit an existing data center to support hot-aisle containment, making cold-aisle the only option, we can definitively say that hot-aisle containment should always be used for new data centers.

Defining hot aisle/cold aisle containment systems

Hot and cold aisle containment systems both stem from the common practice of hot aisle/cold aisle data center configuration, where each row of racks is positioned with its front facing the front of an adjacent row. That means the rears of the racks, where hot exhaust air is expelled, also face each other, creating a “hot aisle.” The aisle between the rack fronts is the “cold aisle.”

Over time, data center operators found the system wasn’t quite working as well as they’d hoped. As noted in the SearchDataCenter.com article:

Hot aisle/cold aisle “looks neat in drawings, but in practice it’s unmanageable,” said Mukesh Khattar, the energy director at Oracle Corp., in a presentation at the SVLG event. “Up to 40% of the air doesn’t do any work. Hot air goes over the tops of racks and around the rows.”

Containment systems were developed to keep the hot and cold air from mixing with each other. Some systems capture the hot air, others the cold – hence the debate over which works better.

Hot-aisle containment consumes 40% less cooling power

The APC by Schneider Electric paper makes a pretty convincing case that hot-aisle containment systems (HACS) work better than their cold-aisle counterparts (CACS).

The reason has to do with the “economizer” mode that is now common in air cooling systems. When outside temperatures allow, the systems use cool outside air to help keep the data center cool, easing the burden on air conditioning compressors and saving energy. As the APC by Schneider Electric blog states:

While both the CACS and HACS will save you money, our analysis shows that, at the same 75°F/24°C work environment, the HACS consumes 40% less cooling system energy than the CACS. The majority of these savings are attributed to the economizer hours…From our analysis, it’s clear that under practical work environment temperature constraints and temperate climates, hot-aisle containment provides significantly more economizer hours and lower PUE compared to cold-aisle containment. This is true regardless of the cooling architecture or heat rejection method used.

So thanks to our partners at APC by Schneider Electric for settling that score. If you need help with data center design, contact Carousel and we’ll put not only our best Physical Infrastructure team members on it, but we’ll bring to bear the smarts of our industry-leading partners, too. And if you’d like to read the full white paper with the complete HACS/CACS analysis, you’ll find it here.

Storage News Roundup: Storage is Getting Sexy

Go ahead, admit it. You never used to care much about storage. It was mundane, stodgy, boring technology compared to the wild world of networks, all abuzz with fiber optics and blazing-fast computers. It’s easy to have a newfound appreciation for storage given all the stuff we now have to store – even at home we’ve got terabyte external disks to back up all the music, videos and photos. And the types of storage solutions being called for in the enterprise environment – and that the market is delivering – definitely demand that we sit up and take notice. Consider these tidbits:

IBM Breaks New Ground with Flash Memory Storage

First, from Network World:

With an eye toward helping tomorrow’s data-deluged organizations, IBM researchers have created a super-fast storage system capable of scanning in 10 billion files in 43 minutes.

This system handily bested their previous system, demonstrated at Supercomputing 2007, which scanned 1 billion files in three hours.

Key to the increased performance was the use of speedy flash memory to store the metadata that the storage system uses to locate requested information. Traditionally, metadata repositories reside on disk, access to which slows operations.

The story goes on to say the system can read files at a rate of almost 5 gigabytes per second. So of course you’re probably thinking, “How long would it take to read my iTunes library?” If your library is 52 GB, it comes to a little more than 10 seconds. Think back to the last time you transferred that beast, and the process was most likely measured in hours. So put us down as “for” flash memory.
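
If you want to check our math, the estimate is a single division (the 52 GB library size is just the hypothetical figure above):

    # The estimate above: a 52 GB iTunes library read at ~5 GB/s.
    library_gb = 52
    read_rate_gb_per_s = 5
    print(f"{library_gb / read_rate_gb_per_s:.1f} seconds")  # 10.4 seconds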

Automated Tiered Storage, Meet Solid State Drives

Here’s another storage trend that can really help companies: automated tiered storage. Tiered storage is nothing new – you try to keep the data you need most on the fastest (and most expensive) storage tier while relegating the rest to lower-cost platforms. The trick has always been figuring out which data is used most often and taking the time to shift things around accordingly. That’s where automated tiered storage comes in. As this piece from TechTarget says:

Like caching, automated tiered storage improves data storage system performance as much as it attacks the cost of capacity. By moving “hot” data to faster storage devices (10K or 15K rpm disks or SSD), tiered storage systems can perform faster than similar devices without the expense of widely deploying these faster devices. Conversely, automated tiering can be more energy- and space-efficient because it moves “bulk” data to slower but larger-capacity drives.
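
For the curious, here is a toy Python sketch of the kind of policy the excerpt describes – promoting frequently accessed data to a fast tier and demoting cold data to cheap, high-capacity disk. The tier names and access-count thresholds are hypothetical, not taken from any particular product.

    from dataclasses import dataclass

    @dataclass
    class Extent:
        name: str
        accesses_last_day: int
        tier: str

    def retier(extents, promote_at=500, demote_at=50):
        """Move hot extents to SSD and cold extents to capacity disk."""
        for e in extents:
            if e.accesses_last_day >= promote_at:
                e.tier = "ssd"            # hot data: fast, expensive tier
            elif e.accesses_last_day <= demote_at:
                e.tier = "bulk_7200rpm"   # cold data: slow, cheap, dense tier
            # anything in between stays where it is
        return extents

    for e in retier([Extent("orders_db", 1200, "bulk_7200rpm"), Extent("2009_archive", 3, "ssd")]):
        print(e.name, "->", e.tier)   # orders_db -> ssd, 2009_archive -> bulk_7200rpm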

SSD stands for solid-state drive; these drives are also all the rage, with companies like eBay installing them in a big way, according to StorageBytes Now:

Solid state storage system manufacturer Nimbus announced Tuesday a deployment of more than 100 TB of solid state storage at eBay. The installation uses the latest version of the Nimbus Sustainable Storage system that provides very close integration with VMware, provides 10Gb/sec. iSCSI Ethernet connectivity and reduces VM provisioning time from 40 minutes to just three minutes.

There’s Big Money in Virtualization-optimized Storage

Which brings us to yet another storage trend: virtualization-optimized storage. Virtualization apparently can wreak havoc with storage systems, given all the storage I/O that can develop when you’ve got lots of virtual machines sharing the same physical server. As TechTarget reports, help is on the way:

Virtual server environments are an opportunity for innovation and new ideas, and startups are jumping into the fray. One such company, Tintri Inc., has developed a “VM-aware” storage system that combines SATA HDDs, NAND flash and inline data deduplication to meet the performance and flexibility needs of virtual servers. “Traditional storage systems manage LUNs, volumes or tiers, which have no intrinsic meaning for VMs,” said Tintri CEO Kieran Harty. “Tintri VMstore is managed in terms of VMs and virtual disks, and we were built from scratch to meet the demands of a VM environment.”

It seems the virtualization-optimized storage vendors are on to something, at least judging by the fate of one startup called IOTurbine. As StorageBytes Now reported last week:

The surprise of the day was the announcement that Fusion-io is acquiring VMware storage virtualization and caching software supplier IOTurbine, just weeks after coming out of stealth, for $95 million in cash and stock.

Acquired for $95 million just a few weeks after hitting the market. That is by no means mundane, stodgy or boring.

To learn more about how to take full advantage of modern storage trends and capabilities in your environment, contact Carousel today to arrange an assessment with one of our industry-leading storage engineers.

Summertime tips for keeping your data center cool

Ah, summertime. The kids are out of school, we take some much-needed vacation and enjoy the nice, warm weather – while hoping the data center doesn’t overheat.

When the temperatures hit the 90s, we get lots of calls into our Physical Infrastructure team about data center cooling systems that don’t seem to be doing the job. The problem usually comes down to one of two fundamental issues: a lack of maintenance or a cooling system that’s underpowered for the load it’s supposed to handle.

Maintenance is key to proper data center cooling

Just like a car needs periodic oil changes and tune-ups, your computer room air conditioning (CRAC) units need some attention, too. As seasons change, it’s easy for pollen and other contaminants to clog your outdoor heat exchanger, which can lead to unexpected downtime. Filters on the indoor units get clogged over time and, if not replaced, can lead to efficiency and other issues. At the very least, your CRAC units should be on a semi-annual maintenance schedule, but some firms perform maintenance as often as quarterly.

The same can be said for UPS systems. “Batteries tend to fail more when temperatures reach their extremes, which seems to happen more frequently lately,” says Jonathan Caserta, a Mechanical Engineer with Carousel Industries’ Physical Infrastructure division. “The magic number for batteries is 77 degrees F. If it goes up or down by 10 degrees from there, you start to adversely affect your efficiency, capacity and the life of the batteries.”
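
A monitoring check based on that window is straightforward. This minimal sketch simply flags readings outside 77 ±10 degrees F; how you collect the readings and raise alerts is up to your own tooling.

    IDEAL_F = 77.0       # Caserta's "magic number" for batteries
    TOLERANCE_F = 10.0   # beyond +/- 10 F, efficiency, capacity and battery life suffer

    def battery_temp_ok(temp_f: float) -> bool:
        """True if the battery room temperature is inside the recommended window."""
        return abs(temp_f - IDEAL_F) <= TOLERANCE_F

    for reading in (72.5, 88.1, 65.0):
        print(reading, "ok" if battery_temp_ok(reading) else "ALERT: outside 67-87 F")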

Measure load to ensure proper data center cooling capacity

If your CRAC units are up to snuff but the data center is still too warm, you may simply have too much load. “One hundred percent of all power consumed in a data center is spit out as heat, because energy can’t be destroyed,” Caserta says. A relatively simple way to figure out how much heat needs to be dissipated is to determine how much power your various data center equipment consumes. For every kilowatt’s worth of equipment on a server rack, for example, you’ll need the same amount of cooling capacity, again measured in kilowatts.

But don’t stop at just the servers; all your other electric equipment gives off heat, too – including UPSs, power distribution units and all other IT equipment such as networking hardware.

Examine the data center for cooling leaks and heat sources

In addition to equipment, you may have other sources heating up your data center. Say your data center shares a wall with a warehouse next door. Chances are the warehouse isn’t climate-controlled, so on a 95-degree day it may be close to that in the warehouse. If the walls are not insulated, some of that heat will seep through into your data center.

People also give off heat – approximately 100 watts per person, Caserta says – so factor in how many staff are in the data center at any given time.
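
Pulling the figures in this post together, a rough heat-load estimate is simple arithmetic: every kilowatt of electrical load becomes a kilowatt of heat, plus roughly 100 watts per person. The sketch below uses made-up input values purely for illustration; see the APC white paper linked below for the full method.

    def heat_load_kw(it_load_kw: float, other_electrical_kw: float, people: int) -> float:
        """Approximate heat, in kW, the cooling plant must remove."""
        return it_load_kw + other_electrical_kw + people * 0.1   # ~100 W per person

    # Example inputs (made up): 12 kW of servers, 2 kW of UPS/PDU/network gear, 3 people
    print(f"{heat_load_kw(12, 2, 3):.1f} kW of cooling needed")  # 14.3 kW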

If your data center has a dropped ceiling, look to see whether it’s sealed or if there’s a vapor barrier containing the space above it. If not, chances are humidity is leaking into the room from above.

Also make sure your data center isn’t connected to the building’s HVAC system, since the supply and return vents will provide another place for humidity to infiltrate.

For more details on estimating your data center’s heat load check out the white paper, “Calculating Total Cooling Requirements for Data Centers,” from our partner APC (PDF).

Carousel Industries is an APC Elite Partner certified in Data Centers, as well as a Certified Service Sales Partner.

Don’t Take Voice For Granted

One topic that got conspicuously little attention at Enterprise Connect was voice communications. “They are all taking voice for granted because it is not sexy anymore, which can be a mistake,” says Bob Harkins, VP of Carousel Industries. “When you look at the standard Unified Communications portfolio applications such as web/audio/video collaboration tools, the ability to escalate and change communication modes on the fly (IM to voice/video conference, voice to IM, etc.), even business process integration, the fact is voice communications remains a vital core element. There is a 100% expectation that when you establish voice communications, regardless of endpoint type, there is going to be a dial tone, and when you connect with someone, the quality is going to be perfect.”

That is not necessarily the case in today’s environment unless you plan appropriately. Harkins continues, “When you layer video, cloud computing, VOIP and general web traffic on top of these networks and you are asking all of these layers to work flawlessly, often in conjunction with one another, quality can suffer. Voice is still a killer app, and it is the one solution for which business people have a zero tolerance policy. It has to work 100% of the time.”

While the mechanics of ensuring this are well known, execution and planning are everything. Pervasive video adds another wrinkle to the quality of service discussion. So what can businesses do to make sure voice doesn’t get left behind as network resources get stretched? Harkins suggests:

* Review Quality of Service Across Applications. When VOIP was the new player in town, businesses prioritized voice packets to ensure high quality. As network traffic increases and video communications and cloud computing take hold, it is time to review your protocols and make sure voice still has top billing (a minimal packet-marking sketch follows this list).

* Stress Your Network. As you deploy more robust UC environments and your organization continues to adopt more video and media collaboration tools, you should put your network through a stress test. Pushing your network hard in off hours will provide insights into its capabilities and areas of concern.

* Analyze and Act On The Results. Unified Communications is all about providing the tools so employees can be optimally effective. Knowing your network’s limits will allow you to be proactive in extending capability and ensuring that your employees don’t think their video conference is an old dubbed Godzilla movie. Once that happens, you’ve lost them.
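
As a concrete footnote to the first tip, prioritizing voice starts with the packets being marked so the network can give them top billing. The Python sketch below tags a UDP socket with DSCP EF (46), the class conventionally used for voice; it assumes a Unix-like host, and your switches and routers still have to be configured to honor the marking.

    import socket

    DSCP_EF = 46                 # Expedited Forwarding: the marking conventionally used for voice
    TOS_VALUE = DSCP_EF << 2     # DSCP occupies the upper six bits of the old TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

    # Any datagram sent on this socket now carries the EF marking;
    # 192.0.2.10 is a placeholder (documentation) address.
    sock.sendto(b"rtp-payload-placeholder", ("192.0.2.10", 5004))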

To discuss assessing and stress testing your network with Carousel’s experts, Contact Us today.