Data Centre Managers
Virtualization has, of course, changed the way server capacity is utilized, but it has plateaued in the face of the challenge of managing virtual environments that span compute, networking, storage and power. Instead of simplifying management, virtualization at the facility level is increasing complexity.
Data Centre Infrastructure Management (DCIM) should provide the visibility to address that complexity, but closed DCIM systems simply increase the size of the silo being managed rather than enabling true holistic management. Fortunately, DCIM platforms are increasingly using open APIs to facilitate integration with complementary software suites such as IT management and accounting. Through this integration, organizations can get the real-time visibility into resource utilization, available capacity, and costs required for informed decision making.
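The kind of integration an open API enables can be sketched as a thin client that reads utilization data from a DCIM platform and derives the capacity and cost figures decision makers need. Everything below is hypothetical: the endpoint, field names and values are illustrative stand-ins, not any real product's schema.

```python
import json

# Canned response from a hypothetical DCIM REST endpoint such as
# GET /api/v1/racks -- field names are illustrative only.
SAMPLE_RESPONSE = """
[
  {"rack": "A-01", "power_capacity_kw": 10.0, "power_draw_kw": 6.2},
  {"rack": "A-02", "power_capacity_kw": 10.0, "power_draw_kw": 2.1}
]
"""

def summarize(racks):
    """Return per-rack utilization and remaining headroom for capacity planning."""
    summary = []
    for r in racks:
        summary.append({
            "rack": r["rack"],
            "utilization_pct": round(100 * r["power_draw_kw"] / r["power_capacity_kw"], 1),
            "headroom_kw": round(r["power_capacity_kw"] - r["power_draw_kw"], 1),
        })
    return summary

if __name__ == "__main__":
    for row in summarize(json.loads(SAMPLE_RESPONSE)):
        print(row)
```

In practice the same summary could be joined against accounting data (cost per kWh, chargeback codes) pulled from a second system over its own API, which is precisely the cross-silo view a closed DCIM platform cannot provide.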
The final hurdle, hardware communication, is being addressed through the Redfish specification, now under the management of the Distributed Management Task Force (DMTF). The DMTF is an industry standards organization working to simplify the manageability of network-accessible technologies through open and collaborative efforts by leading technology companies, including HP, Dell, Intel, Emerson Network Power, Microsoft and VMware. Redfish is a common language for IT and infrastructure devices that will facilitate greater connectivity and communication across devices and systems without adding complexity.
Version 1.0 of the Redfish specification was released in August 2015, and its adoption will be aided by the broad industry support of the DMTF. It will take a number of years for the specification to reach critical mass in terms of installed devices, but organizations can begin to capitalize on the value of Redfish and position themselves for true software-defined management through DCIM systems with open APIs supported by strategic use of Redfish translation engines. These Redfish translators will accelerate the industry's ability to use the new specification to optimize operations.
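Redfish expresses that common language as a tree of JSON resources served over HTTPS from a root of /redfish/v1/, so any management tool that can parse JSON can read device telemetry. The sketch below extracts power draw from a sample chassis Power resource; the property names (PowerControl, PowerConsumedWatts) follow the published Redfish schema, but the payload itself is abridged and the values are invented for illustration.

```python
import json

# Abridged example of a Redfish chassis Power resource. Property names follow
# the DMTF Redfish schema; the values are made up for this illustration.
POWER_RESOURCE = """
{
  "@odata.id": "/redfish/v1/Chassis/1/Power",
  "Id": "Power",
  "PowerControl": [
    {
      "Name": "System Power Control",
      "PowerConsumedWatts": 344,
      "PowerCapacityWatts": 800
    }
  ]
}
"""

def consumed_watts(resource_json):
    """Sum PowerConsumedWatts across all PowerControl entries in a Power resource."""
    resource = json.loads(resource_json)
    return sum(pc.get("PowerConsumedWatts", 0) for pc in resource.get("PowerControl", []))

print(consumed_watts(POWER_RESOURCE))  # -> 344
```

A Redfish translation engine sits in front of devices that speak only an older protocol and presents this same JSON resource tree on their behalf, which is why translators let organizations start using the specification before every installed device supports it natively.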
With this foundation in place, an organization can create and maintain a map of data centre resources and their real-time operating parameters to achieve a new level of data-driven, real-time efficiency. IT and data centre staff can then focus all their energy on delivering what their customers (internal and external) need in the fastest, most efficient, and most secure way possible. The primary factors that currently consume them (geography, security, power, availability and connectivity) become non-factors in the open, DCIM-enabled data centre.
For data centre managers who are wrestling with how to identify and decommission ghost servers, or are deploying cloud-based applications simply because they can’t mobilize their own resources fast enough to meet organizational requirements, all of this may sound like marketing hype. It’s not. The core technologies and specifications are now in place to make this vision a reality, and the market—those who rely on the applications and processing data centres deliver—will demand nothing less.
To find out more about how we work with Enterprise IT partners to scale up our solutions, please contact us at email@example.com or call +44 (0)20 7193 5708