Server and storage utilisation

 General description

Fig. 4.5/1: Typical utilisation values for different types of systems
In general, our experience shows that mainframe systems achieve very good peak-hour utilization. Unix systems are typically not as highly utilized, and Intel-based systems are typically very poorly utilized.
The peak-hour utilization is often used as the basis for the performance design of the system. Peak load, however, is usually sustained only for a short period; for the rest of the day or week the system is very poorly utilized.
I want to highlight that Intel systems often run at a utilization of only 2–5 %. This means that most of their performance capacity is unused almost all of the time.
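
The gap between the peak-hour design point and average utilization can be made concrete with a small sketch. The hourly load profile below is hypothetical, but the pattern (one busy peak hour, low load otherwise) is typical:

```python
# Hypothetical hourly CPU utilization samples for one day (percent of
# capacity): one busy peak hour, low load the rest of the day.
hourly_util = [3, 2, 2, 2, 3, 4, 6, 10, 25, 60, 85, 40,
               30, 25, 20, 15, 12, 10, 8, 6, 5, 4, 3, 3]

peak_hour = max(hourly_util)                   # the sizing/design point
average = sum(hourly_util) / len(hourly_util)  # what is actually used

print(f"peak-hour utilization: {peak_hour}%")
print(f"average utilization:   {average:.1f}%")
print(f"idle share of capacity: {100 - average:.1f}%")
```

Even with a peak hour of 85 %, the day's average lands around 16 %, so the system sized for that peak sits mostly idle.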

 General description

Fig. 4.5/2: Explosive growth of storage?
Storage is as underutilized (average is 20 percent to 25 percent) as servers are, and it has long been characterized by monolithic architectures that are vertically integrated: once you select a particular array, you are locked into that vendor's management tools and advanced features such as copy services. In a heterogeneous environment, you end up with multiple administrative tools and consoles, increasing complexity. You also end up with "puddles of data" that are not connected, making it extremely difficult to share information because there are many copies of a single piece of data (many of which are back-level). It is difficult to implement a rational ILM (information lifecycle management) strategy for compliance purposes, and there are multiple access control (security) mechanisms.
By introducing virtualization, you can insulate applications and users from the underlying physical infrastructure. You can create a single logical pool of information, or multiple virtual storage pools based on business rules and data classification. Physical storage arrays from multiple vendors and of multiple classes (enterprise versus midrange) can be pooled together, and a virtual storage pool can cross these physical boundaries.
Block virtualisation allows storage resources to be utilized more effectively; it is also used to improve availability and to support business continuity.
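
The core idea of block virtualisation can be sketched in a few lines: virtual disks are built from fixed-size extents drawn from a shared pool that spans physical arrays from different vendors. The class and vendor names below are illustrative, not any product's API:

```python
EXTENT_MB = 256  # fixed extent size, as used by typical block virtualisers

class Array:
    """A physical array contributing extents to the shared pool."""
    def __init__(self, name, capacity_mb):
        self.name = name
        self.free_extents = capacity_mb // EXTENT_MB

class StoragePool:
    """A logical pool that crosses physical (and vendor) boundaries."""
    def __init__(self, arrays):
        self.arrays = list(arrays)

    def allocate(self, size_mb):
        """Map a virtual disk's extents onto whichever arrays have space."""
        needed = -(-size_mb // EXTENT_MB)  # ceiling division
        mapping = []                       # (virtual extent no., array name)
        for arr in self.arrays:            # simple first-fit placement
            while needed and arr.free_extents:
                arr.free_extents -= 1
                mapping.append((len(mapping), arr.name))
                needed -= 1
        if needed:
            raise RuntimeError("pool exhausted")
        return mapping

# One virtual disk crossing two arrays from different vendors:
pool = StoragePool([Array("vendorA-enterprise", 1024),
                    Array("vendorB-midrange", 2048)])
vdisk = pool.allocate(1536)  # 6 extents: 4 from vendorA, 2 from vendorB
```

Because the application only sees the virtual disk, extents can later be migrated between arrays without the host noticing, which is what enables the availability and business-continuity benefits mentioned above.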
Fig.4.5/3: Storage virtualization
We see clients improve utilization of their storage assets by 30 or 40 points, going from 20 percent utilization, for example, up to as much as 60 percent utilization or more. At the same time, a single management console can be used to manage the virtual and physical storage pools, even if a client has arrays from multiple vendors.
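
A quick back-of-the-envelope calculation shows what a jump from 20 to 60 percent utilization means for installed capacity. The data volume is an illustrative number:

```python
data_tb = 100  # illustrative amount of stored data

# Raw (installed) capacity needed to hold that data at each utilization:
raw_at_20 = data_tb / 0.20  # 500 TB installed at 20% utilization
raw_at_60 = data_tb / 0.60  # ~167 TB installed at 60% utilization

print(f"raw capacity at 20% utilization: {raw_at_20:.0f} TB")
print(f"raw capacity at 60% utilization: {raw_at_60:.0f} TB")
print(f"reduction: {100 * (1 - raw_at_60 / raw_at_20):.0f}%")
```

Tripling utilization cuts the required installed capacity by roughly two thirds, which is where the equipment-cost effect in the next paragraph comes from.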
So the combination of server, storage and network virtualization has dramatic effects on total cost of ownership, reducing equipment costs, software costs and administrative costs. Just as importantly, it enables access through a shared infrastructure.
Fig.4.5/4: Storage virtualization
There is a perception that introducing virtualisation negatively impacts performance and security. Quite the opposite is true. All of our servers (z/i/p) come with virtualisation built in and 'always on'. Every performance benchmark set by System p (and it owns almost every benchmark) was set in a virtualized environment. From the storage perspective, IBM's SVC holds the TPC benchmark record in both scale-out and scale-up configurations, and many clients report performance improvements when moving to virtualized storage. By reducing the complexity of the infrastructure, reducing the number of systems, and pooling information to eliminate the islands of computing and puddles of data, virtualisation can also substantially improve security.