
How to get self-service right in private clouds

Andrew Hillier is co-founder and CTO of CiRBA | Oct. 11, 2013
As organizations build out internal clouds, a major goal is to provide internal customers with self-service access to capacity. Although simple in concept, this fundamental change can create challenges in practice.

For dev/test workloads, a very dynamic model tends to be best, where users can access capacity rapidly and without much planning. To use a hotel analogy, this is similar to going on a road trip across the country, where the travelers simply stop at a roadside motel in whatever town they happen to be in, and no advance reservations are required (or possible). This usage model is typically the first one targeted by organizations building internal clouds, where self-service consoles enable immediate access to capacity with little or no planning.

For enterprise workloads the analogy is quite different, and it tends to resemble an important business trip, where planning is critical and hotels are reserved in advance to ensure there is a place to stay. There is also a considerable amount of thought given to the amenities of the hotel, since more business-class features may be required than a simple roadside motel offers, such as network access, meeting rooms and printers.

This is where a different self-service model is needed. Immediate access to capacity is far less important than reserving it in advance, and the requirements of the workloads must be assessed in detail against the capabilities of the hosting environments to ensure the workloads are routed to the right kind of capacity. From a self-service perspective, this is more like an online hotel reservation system, where end users can enter their specific requirements and dates, determine which hotels are best, and book space. In other words, self-service does not imply instant access, and internal clouds must also support proactive, detailed requests for more critical applications.
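To make that kind of request concrete, the following Python sketch (hypothetical names and numbers, not from any particular cloud product) models a reservation as resource requirements plus start and end dates, and checks whether an environment can honor it alongside the bookings it has already accepted:

from dataclasses import dataclass
from datetime import date

@dataclass
class Reservation:
    cpu_cores: int
    memory_gb: int
    start: date
    end: date

def can_accommodate(total_cores, total_memory_gb, booked, request):
    # Conservatively treat every booking that overlaps the requested window
    # as if it were concurrent with the request.
    overlapping = [r for r in booked
                   if r.start < request.end and request.start < r.end]
    used_cores = sum(r.cpu_cores for r in overlapping)
    used_memory = sum(r.memory_gb for r in overlapping)
    return (used_cores + request.cpu_cores <= total_cores
            and used_memory + request.memory_gb <= total_memory_gb)

# Example: reserve 16 cores and 64 GB for a go-live several months out.
request = Reservation(16, 64, date(2014, 3, 1), date(2014, 6, 1))
print(can_accommodate(128, 512, [], request))   # True on an empty calendar

The point is not the specific data model but that the request carries dates as well as requirements, so capacity can be committed for the future rather than handed out on the spot.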

Given this, there are three simple questions IT should ask to ensure they are building enterprise-class internal clouds:

* Are my lines of business able to reserve capacity ahead of time for critical application deployments?
Enterprise consumers are very concerned with managing risk, and want guarantees that capacity will be available, even if they are early in the planning process. Without the ability to reserve capacity, many will request VMs well in advance of the go-live date and simply sit on them. These VMs become time bombs that can go off at any moment, frustrating infrastructure capacity management and, ironically, causing potential capacity shortfalls that defeat the purpose of being proactive in the first place.

Some cloud front-ends claim they can reserve capacity, but do this by drawing down on a pool of allocated resources, not by analyzing actual utilization of infrastructure.  This leads to a false sense of security, and can be more dangerous than not having reservations at all.
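The difference is easy to see in a small, purely illustrative calculation. A pool-drawdown model subtracts allocations from an advertised pool; a utilization-based model subtracts measured peak demand from the physical capacity that actually backs it:

def headroom_by_allocation(pool_vcpus, allocated_vcpus):
    # What a drawdown model reports: the advertised pool minus what has been handed out.
    return pool_vcpus - allocated_vcpus

def headroom_by_utilization(physical_cores, peak_used_cores, safety_margin=0.2):
    # What analysis of the infrastructure reports: physical capacity minus
    # observed peak demand, less a margin for growth and failover.
    return physical_cores * (1 - safety_margin) - peak_used_cores

# A pool advertised as 200 vCPUs, backed by 100 physical cores (2:1 overcommit),
# with 120 vCPUs allocated and a measured peak demand of 90 cores.
print(headroom_by_allocation(200, 120))      # 80 vCPUs "available" on paper
print(headroom_by_utilization(100, 90.0))    # -10.0: the hosts have no safe headroom

A reservation granted against the paper headroom in that scenario is exactly the false sense of security described above.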

* Can operations groups scientifically route workloads to the right hosting environments?
There are many subtle and not-so-subtle requirements that must be met when hosting enterprise workloads. Applications may require specific licensed software, storage tiering, compliance levels, data protection, backup and snapshotting, redundancy, jurisdictional placement and other considerations. To accommodate this, organizations may have dozens of hosting environments spanning different geographies, platforms, configurations and cost levels. But the process of matching the two is still in the Stone Age in many organizations, and often relies on spreadsheets and gut feel. Moving to cloud operating models requires this process to be automated and extremely accurate.
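As a rough sketch of what automated routing could look like (the environment names, capabilities and costs below are hypothetical, and no specific implementation is prescribed), workloads declare required capabilities, hosting environments advertise what they provide, and the lowest-cost environment that satisfies every requirement is selected:

from dataclasses import dataclass

@dataclass
class HostingEnvironment:
    name: str
    capabilities: set     # e.g. {"pci-dss", "tier1-storage", "eu-jurisdiction"}
    cost_per_vm: float

@dataclass
class Workload:
    name: str
    requirements: set

def route(workload, environments):
    # Keep only environments whose capabilities cover every requirement,
    # then pick the cheapest; None means no environment qualifies.
    candidates = [e for e in environments if workload.requirements <= e.capabilities]
    return min(candidates, key=lambda e: e.cost_per_vm, default=None)

environments = [
    HostingEnvironment("standard", {"tier2-storage", "daily-backup"}, 1.0),
    HostingEnvironment("compliant", {"tier1-storage", "daily-backup",
                                     "pci-dss", "eu-jurisdiction"}, 2.5),
]
payments = Workload("payments-app", {"pci-dss", "tier1-storage"})
print(route(payments, environments).name)    # compliant

Real policies involve far more attributes than this, but encoding them explicitly is what turns spreadsheets and gut feel into something repeatable.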

 
