As organizations build out internal clouds, a major goal is to provide internal customers with self-service access to capacity. Although simple in concept, this fundamental change can create challenges in practice. To explore these challenges and identify solutions, it is useful to frame the conversation in terms of supply and demand.
Internal customers, such as lines of business and application teams, create workload demands that must be met through the supply of appropriate compute, storage and network resources. In this sense, self-service models for accessing compute and storage capacity are poised to revolutionize the demand side of the equation just as virtualization revolutionized the supply side over the last decade. But this will bring challenges.
On the supply side, it has taken considerable time for the management of back-end capacity to catch up to virtualization technologies. Although virtualization is not new, only now are more modern methods of managing and controlling capacity taking hold. By adopting analytics-based approaches to matching supply and demand, with workload placement as the primary focus, organizations are steadily improving at forecasting, procuring hardware efficiently, and eliminating the rampant overprovisioning of the past.
Unfortunately, the demand side of the equation is about to go through similar gyrations as self-service models send organizations into unfamiliar territory. Although demand management has always taken a back seat to supply-side capacity management in IT organizations, there was always at least some level of control over inbound applications and user demands, if only as a by-product of the complex processes and lengthy procurement cycles that tended to slow them down. Self-service eliminates this overhead, and although that is a very good thing overall, it also threatens to turn demand management into a wild west of unfettered end-user activity.
Understanding the true goals of self-service can help avoid this disruption. Self-service models should streamline demand management by disintermediating IT staff from the process of requesting capacity. But they should not be used as an excuse to bypass the processes, controls and rigorous planning that must go into deploying IT services.
Many view internal clouds, at least initially, as a sandbox for rapidly "spinning up" VMs, but the deployment of enterprise apps and critical business services requires considerable diligence and advance planning, and these requirements don't simply evaporate when clouds are involved. Just because users can rapidly access compute and storage resources doesn't mean they can throw caution to the wind.
So a bit of rethinking is needed to fully understand self-service and how it can be used safely and effectively. A good place to start is to categorize cloud use cases by the amount of rigor and planning that must accompany the workloads being deployed.