

Bank scores with server virtualization

Gunjan Trivedi | Dec. 17, 2008
ICICI Bank's IT team, led by Vohra, has used virtualization to arrest an electronic infrastructure spill-over at its datacenters.

As the team pushed forward with its server virtualization effort, it had to pick its way through the numerous technical challenges that surfaced. High CPU and memory utilization led to frequent performance degradation, which was in turn compounded by network bottlenecks. The resource issue was addressed with dynamic memory and CPU allocation to avoid creating performance chokepoints, while patching and upgrading to newer versions helped overcome various technical limitations.

"You run into a choke and after some analysis you realize that the internal disks are not good enough or you need a higher I/O bandwidth pipe. Or you might find that the machine is running out of memory for no logical reason. The physical machines you're virtualizing may add up to only 32GB of RAM, while on the target machine you have 64GB. Since we were pioneers in implementing such a solution at this scale, there were no easy answers available, not even from our solution providers. We understood the theoretical concepts well, but we became experts by living through all the live classrooms," he recalls.

The Smaller They Are, the Rarer They Fall

Today, ICICI Bank runs about 40 virtual machines per server, with VMware virtualizing the environments of database servers running SQL instances; application servers such as WebSphere, Pramati and Oracle; and Web servers. Vohra explains that, as a strategy, the current implementation has been executed only on 8-CPU, dual-core, 64GB RAM servers, so that over-commitment of memory and CPU resources can be leveraged and VMware can scale up instead of scaling out, taking full advantage of the bank's licenses.
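The consolidation arithmetic behind this scale-up strategy can be sketched as follows. The host figures (16 cores, 64GB RAM, roughly 40 VMs per server) come from the article; the per-VM allocations are illustrative assumptions, not numbers from the bank:

```python
# Rough consolidation math for one scale-up host.
HOST_CORES = 8 * 2      # 8 CPUs, dual core (from the article)
HOST_RAM_GB = 64        # from the article
VMS_PER_HOST = 40       # from the article

# Assumed average allocation per VM (hypothetical, for illustration only):
VCPUS_PER_VM = 1
RAM_GB_PER_VM = 2

# Over-commit ratio = total resources promised to VMs / physical resources.
cpu_overcommit = (VMS_PER_HOST * VCPUS_PER_VM) / HOST_CORES
ram_overcommit = (VMS_PER_HOST * RAM_GB_PER_VM) / HOST_RAM_GB

print(f"CPU over-commit ratio: {cpu_overcommit:.2f}x")  # 2.50x
print(f"RAM over-commit ratio: {ram_overcommit:.2f}x")  # 1.25x
```

With these assumed VM sizes, the host promises more CPU and memory than it physically has, which is exactly the over-commitment feature the scale-up servers were chosen to exploit.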

To reduce the use of multiple network cards, the servers have been moved to the same subnet as the NAS storage. This way, the same network card can be virtualized and deployed. It also ensures that connectivity to the storage over iSCSI is consistent and does not involve too many hops.

"You can now over-commit resources. If I really needed 24 cores to do something spread across 30 applications, I can now give them two cores each. That is a total of 60 cores, but physically I have only 24," says Vohra. The logic is that not all the applications peak at the same time. Some of these systems allow you to over-commit resources beyond the boundaries of the physical box.
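Vohra's arithmetic can be worked through in a few lines. The 30 applications, two cores each, and 24 physical cores are from his quote; the fraction of applications peaking simultaneously is an illustrative assumption that shows why the bet pays off:

```python
# Over-commitment math from Vohra's example.
physical_cores = 24
apps = 30
vcpus_per_app = 2

committed = apps * vcpus_per_app            # virtual cores promised
overcommit_ratio = committed / physical_cores

print(committed)                            # 60
print(overcommit_ratio)                     # 2.5

# Assumption (not from the article): only ~30% of applications
# hit peak CPU demand at the same moment.
peak_fraction = 0.3
peak_demand = committed * peak_fraction     # 18.0 cores actually needed

# As long as simultaneous peak demand stays under the physical core
# count, the over-commitment never materializes as contention.
print(peak_demand <= physical_cores)        # True
```

The design bet, in other words, is statistical: 60 virtual cores are safe on 24 physical ones only while correlated peaks stay below the physical ceiling.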

The required disk space on the home server has been provisioned on the connected iSCSI and Fibre Channel-based storage to meet the requirements of the hosted VMs. I/O bottlenecks have been avoided by segregating storage connectivity onto different network interfaces, says Vohra. This requires separate network cards for individual storage connections.

The virtualization effort forced various processes to be re-examined and improved. It has translated into speedy provisioning that takes no more than two minutes, which has directly reduced the average downtime of the virtualized applications. Earlier, although the bank's IT team could provision five servers as standby for 30 servers, it took three hours to bring those servers up after a failure. Each server had to be manually configured, loaded and restored. And if the incident occurred at 2AM, it could take as much as five hours.
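The recovery-time figures in the paragraph above imply a dramatic improvement factor, which a quick calculation makes concrete. All the durations are from the article:

```python
# Recovery-time comparison, in minutes (figures from the article).
old_recovery_min = 3 * 60           # manual standby rebuild, business hours
old_recovery_2am_min = 5 * 60       # same rebuild after a 2AM incident
new_recovery_min = 2                # provisioning a virtualized server

print(old_recovery_min / new_recovery_min)      # 90.0  (90x faster)
print(old_recovery_2am_min / new_recovery_min)  # 150.0 (150x faster off-hours)
```

Even in the best case, virtualized provisioning cut recovery time by roughly two orders of magnitude.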

 

