The growth trajectory of the past seven years in high-tech, coupled with the ubiquity of cloud and COLO (co-location) solutions, has caused many companies to question and revisit their data center strategy.  Such was the case with a client who recently contacted us.  Their company had been growing rapidly for the past five years, with no emphasis on cost or optimization.  Up to this point, their objective was to grow top-line revenue while putting the development of a long-term cost containment strategy on the back burner.  However, the landscape suddenly changed: growth is now projected to be lower, and this has triggered new discussions on how they need to optimize their cost structure.

Since the start of 2016, we have seen a significant rise in data center related inquiries from companies voicing precisely the same concern, suggesting that this is becoming an all too common scenario. These companies now face increased pressure to reduce the COGS and OPEX associated with their computing infrastructure in order to improve their margins. In fact, according to Wired Real Estate Group Inc., 90% of firms in the US have overprovisioned their cloud and data center infrastructure by an average of 50%.  That overprovisioning ties up hard dollars that could otherwise contribute to the bottom line.

In this newsletter, we will touch on several recommendations to rein in your data center costs:

Establish a baseline (what do you already have?)

Whether your goal is to drive down cost in your current environment or to move some or all of your computing workload to a COLO or cloud service provider, we recommend performing a complete, detailed inventory audit of what you already have provisioned.  As Peter Drucker said, “You can’t manage what you can’t measure.”

Over time, unconstrained, hastily planned data center expansions create inefficiencies that increase cost. Underutilized servers are not only a waste of CAPEX; they also drive up floor and rack space costs, which are billed whether you fully utilize them or not.  Licenses or hardware purchased – but not used – create shelf-ware.  In our work, we have identified over 25 useful attributes to collect during the audit phase.  Part of our analysis includes a complete review of all of your vendor contracts.  Auto-renewal clauses, termination penalties, price increases, and non-optimal SLAs are just a few examples of areas where significant value can be attained.
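As an illustration, the audit data can be captured as simple per-server records and queried for consolidation candidates. The sketch below uses a handful of hypothetical attribute names and thresholds; it is not our full list of 25 attributes, nor any specific client's data.

```python
from dataclasses import dataclass

@dataclass
class ServerRecord:
    """One row of the inventory audit (attribute names are illustrative)."""
    hostname: str
    rack_units: int          # physical footprint
    avg_cpu_util: float      # 0.0-1.0, from monitoring data
    power_watts: float       # measured or nameplate draw
    under_maintenance: bool  # active support contract?
    contract_auto_renews: bool

def flag_candidates(inventory, cpu_threshold=0.10):
    """Flag servers whose average CPU utilization is below the threshold --
    first-pass candidates for consolidation, virtualization, or decommissioning."""
    return [s.hostname for s in inventory if s.avg_cpu_util < cpu_threshold]

fleet = [
    ServerRecord("db-01", 2, 0.62, 450.0, True, False),
    ServerRecord("legacy-07", 4, 0.03, 600.0, True, True),
]
print(flag_candidates(fleet))  # → ['legacy-07']
```

Even a basic roll-up like this makes it easy to cross-reference utilization against maintenance spend and auto-renewal exposure.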

Analyze the results (what is the data telling you?)

Once all of the relevant data center attributes have been collected, you need a method for synthesizing and analyzing the data to determine the overall health of your environment. You may find that you have too much capacity in equipment, space, and power in a data center cage or location that was equipped to solve a specific problem in the past, but your approach and/or your contract with your COLO provider was never updated to reflect your changing needs.  One client was able to realize a 40% reduction in their monthly COLO power bill alone by right-sizing their power commitment to meet their current needs.
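The right-sizing arithmetic behind that kind of savings is straightforward. The sketch below uses a hypothetical per-kW rate and headroom margin, not any provider's actual pricing:

```python
def colo_power_savings(contracted_kw, peak_draw_kw, headroom=0.2, rate_per_kw=150.0):
    """Estimate monthly savings from right-sizing a COLO power commitment.
    rate_per_kw and headroom are illustrative assumptions, not real pricing."""
    right_sized_kw = peak_draw_kw * (1 + headroom)  # keep a safety margin above peak
    if right_sized_kw >= contracted_kw:
        return 0.0  # already right-sized
    return (contracted_kw - right_sized_kw) * rate_per_kw

# A cage contracted at 100 kW but peaking at only 50 kW:
print(colo_power_savings(100, 50))  # → 6000.0 per month
```

In this hypothetical, the cage can be re-contracted at 60 kW (50 kW peak plus 20% headroom), a 40% cut in committed power.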

You may also have older systems that consume more space and power, and whose maintenance contracts become more expensive over time.  We helped a client eliminate over 1,000 servers that were vastly underutilized, resulting in millions of dollars of savings and avoiding the cost of building a new data center.  The irony is that data center management initially didn’t believe they had any excess capacity.  Another client was paying for “gold” support levels on all of their dev/test systems and paying premium dollars for SAN storage because they were charged for what was allocated rather than what was consumed.  As we worked with the vendor to gain visibility into actual storage usage, we discovered that our client was paying for more than double the storage capacity they were using.

Delaying equipment refresh, while potentially a good short-term strategy to reduce CAPEX, can result in ongoing increases in OPEX as maintenance, support, and the inherent power and space inefficiencies of older gear work against you.

Develop a plan (where are we going, and what is the best way to get there?)

Creating detailed forecast scenarios based on assumptions around business growth is essential to creating an optimized infrastructure environment.  Important “what if” scenario questions to consider include: How much new gear do we need to buy if our installed base continues to grow at the same pace?  How much can we re-deploy or virtualize vs. purchase new?  How do we avoid becoming further over-provisioned if business slows down?  When do we need to start planning in order to take advantage of our COLO renewal?  What are the cost implications if we move some or all of our computing workload to the Cloud?
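These "what if" questions lend themselves to a simple scenario model. The figures below (installed base, growth rates, redeployable units) are hypothetical placeholders to show the shape of the exercise:

```python
def forecast_capacity(installed, growth_rate, years, redeployable=0):
    """Project the installed base under a growth assumption and compute how
    many units must be net-new purchases after redeploying idle gear.
    All figures are illustrative."""
    projected = installed
    for _ in range(years):
        projected = round(projected * (1 + growth_rate))
    net_new = max(0, projected - installed - redeployable)
    return projected, net_new

# Three scenarios for a hypothetical 800-server fleet with 120 redeployable units:
for rate in (0.00, 0.10, 0.25):  # flat, moderate, and historical growth
    total, buy = forecast_capacity(installed=800, growth_rate=rate, years=3, redeployable=120)
    print(f"{rate:.0%} growth -> {total} servers, buy {buy} new")
```

Running each growth assumption side by side shows how quickly the purchase plan swings, and how much of it redeployment alone can absorb in a slowdown.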

The key is the development of a detailed plan, asset by asset. A partial list of key questions to consider:

(a) Do we still need it?
(b) Are the latest security patches installed, and if not, can they be?
(c) If an asset is underutilized, can we virtualize it?
(d) Are there any application implications for this server if we wish to make a change?
(e) Is the asset old enough that the best course of action is to decommission it?
(f) Is the power in the various racks and cages right-sized for our current and future needs?
(g) Is the maintenance contract on a server appropriate for its usage, or can it be downgraded with no loss in business function?
(h) Are we paying for unused floor and rack space?
(i) Is this server/application a candidate for the cloud?  If so, are there network and system performance considerations that need to be taken into account if we move this asset?

Execute and negotiate the plan

As you evaluate various options, you have five approaches that you can consider: (1) on-premise, (2) COLO, (3) managed service, (4) Cloud, and (5) a hybrid solution.  Each of these solutions carries with it advantages and disadvantages.  In the case of on-premise solutions, which is a fully in-sourced model, technology refresh costs and forecasting future requirements become challenges.  In a fully managed services model, obtaining full transparency into your service provider’s cost structure for hardware deployment, maintenance and support, and support staff becomes critical.  In the COLO business model, you will still be challenged with managing your own hardware, and contract renewal terms are often onerous, thereby limiting your options.  And finally, cloud solutions, while scalable and flexible, can carry significant risks ranging from complying with data privacy laws to cybersecurity to meeting SLAs, introducing challenges that you may never have faced before.  There are also multiple flavors of cloud solutions for you to choose from.  One cloud vendor offers 39 different on-demand virtual server configurations, and over 500 variations of reserved computing instances.
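Even choosing between on-demand and reserved instances from a single cloud vendor is a break-even calculation. The prices below are hypothetical placeholders, not any vendor's actual rates:

```python
def breakeven_hours(on_demand_hourly, reserved_upfront, reserved_hourly):
    """Annual usage (in hours) at which a reserved instance starts to beat
    on-demand pricing. All rates here are hypothetical."""
    return reserved_upfront / (on_demand_hourly - reserved_hourly)

# Hypothetical: $0.20/hr on-demand vs. $500 upfront + $0.08/hr reserved
hours = breakeven_hours(0.20, 500.0, 0.08)
print(round(hours))  # → 4167
```

Under these assumed rates, a workload running more than about 4,200 hours a year (roughly half-time) favors the reserved instance; an occasional dev/test workload does not. Multiply that decision across hundreds of instance configurations and the value of a systematic analysis becomes clear.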

Conclusion

Most data center ecosystems were built in a piecemeal way and are often misaligned with current needs.  Regardless of the topology and architecture of your infrastructure, a “data center diet” to eliminate the bloat can result in significant cost savings.  Find out what you have, compare it to what you need, and develop a plan that matches fiscal prudence with technical requirements.

Symphony Consulting can help your infrastructure become lean based on our experience in IT operations management, strategic sourcing, and lean manufacturing. Contact us to learn more at info@symphonyconsult.com.