Hybrid Clouds: No Easy Concoction
Every data center provisions its workloads for a worst-case scenario. IT managers put an application on a server with extra memory, CPU, and storage to make sure the application can meet its heaviest workload of the month, quarter, or year and grow with the business. This approach is so deeply ingrained in IT that, prior to virtualization, applications typically used 15% or less of available CPU and other resources. Storage might reach 30% utilization. Energy was cheap, spinning disks were desirable, and abundant CPU cycles were always kept close at hand.
In today’s economic climate, such compulsive overprovisioning and inefficiency are no longer acceptable. What if, instead, applications throughout the data center could run at closer to 90% utilization, with the workload spikes sent to cloud service providers (a process called “cloudbursting”)? What if 85% of data center space and capital expenses could be recouped, with a small portion of that savings allocated for the expense of sending those bursts of computing to the public cloud?
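The placement logic behind cloudbursting can be sketched in a few lines: fill on-premises capacity up to a target utilization and route only the overflow to a public cloud provider. This is a minimal illustration, not any vendor's actual scheduler; the names (`Workload`, `plan_placement`) and the capacity units are hypothetical.

```python
# Minimal cloudbursting sketch: keep on-premises servers near a target
# utilization and burst only the overflow workloads to a public cloud.
# All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    cpu_units: int  # CPU demand in arbitrary capacity units

def plan_placement(workloads, on_prem_capacity, target_utilization=0.90):
    """Fill on-prem capacity up to the target; send the rest to the cloud."""
    budget = int(on_prem_capacity * target_utilization)
    on_prem, cloud = [], []
    used = 0
    # Place the largest demands first, so short-lived spikes are what overflow.
    for w in sorted(workloads, key=lambda w: w.cpu_units, reverse=True):
        if used + w.cpu_units <= budget:
            on_prem.append(w.name)
            used += w.cpu_units
        else:
            cloud.append(w.name)
    return on_prem, cloud, used / on_prem_capacity

workloads = [Workload("erp", 40), Workload("web", 30),
             Workload("batch-spike", 25), Workload("reports", 15)]
on_prem, cloud, utilization = plan_placement(workloads, on_prem_capacity=100)
# The steady workloads stay on-prem at 85% utilization;
# the 25-unit spike bursts to the cloud provider.
```

In this toy run, the data center stays near its 90% target while paying the cloud provider only for the spike, which is the economic trade the article describes.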
Read more at InformationWeek.