Other lessons he learned include examining service-level agreements (SLAs) carefully, because the ones he has run into don't actually promise much. "You can have a big outage and it's not far off the SLA," he says. If a provider offers 99% uptime, that equates to about 7-1/2 hours of downtime per month. "That's a day," he says.
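The arithmetic behind that "99% uptime" figure is worth checking for any SLA percentage. A minimal sketch (the function name and month-length constant are illustrative, not from the article):

```python
# Downtime a provider can incur per month while still meeting an uptime SLA.
HOURS_PER_MONTH = 730  # average month: 365.25 days / 12 months * 24 hours

def allowed_downtime_hours(uptime_pct: float) -> float:
    """Return hours of downtime per month still within the given uptime SLA."""
    return HOURS_PER_MONTH * (1 - uptime_pct / 100)

print(allowed_downtime_hours(99.0))  # ~7.3 hours, roughly one working day
print(allowed_downtime_hours(99.9))  # ~0.73 hours, about 44 minutes
```

A 99% guarantee thus permits roughly a full working day of outage every month, which is why he considers such SLAs weak.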
Regarding security, overall he is suspicious because he doesn't really get to examine it. "I think it's inherently insecure because I don't control it," he says.
Providers say, for example, that their network defenses are SAS 70 compliant, but he worries about threats from the providers' own employees. "Just like everyone else, their biggest threat is internal," he says.
Until reliable cloud security standards are established, he would avoid putting critical applications there unless he got to examine the provider's security. "I would pretty much have to know everything about what they do," he says.
Even then there are uncertainties. For instance, if data is housed in a particular data center, but the provider expands or data is replicated to another data center in the cloud provider's network, how will he know the second site is as secure?
Tier 3, the provider he used for a SQL virtual deployment, was good about explaining and documenting its security, he says, but it still wanted customers to take some responsibility. "Their stance was you need to take measures yourself," he says.
He says the IT department tries to be as flexible as possible to support projects, but the reality is that the costs of cloud services are difficult to project accurately. "It's really an unknown," he says. "If you use it for six months and it costs the same as buying physical hardware, then you have to switch."