I am a Principal Systems Engineer with VMware and have been with the company for nearly eight years. During this time I have focused on customers in the Australia/New Zealand region, and my role has largely been to educate and inform customers on the evolution of traditional IT to IT as a Service. Over the past couple of years, VMware has broadened its lens. We have perfected the operating model around compute, and we now want to apply it more broadly.
As a technologist, when I first started to get visibility into network virtualization and storage virtualization, I thought it was very cool. But was it an incremental improvement for our customers that were already virtualized? After all, we have had virtual switches for years, and we have had abstracted storage since ESX 1.0. Over the past 12 months I have realized that this vision is much more disruptive and transformational than I first thought. So let me explain. — Michael Francis
Summary
As the speed of business continues to increase, it becomes ever more important that organizations are ‘software driven’ so they can act with agility. That agility gives organizations the means to embrace business moments.
Organizations can achieve this agility rapidly by replacing ‘hand-tooled’ infrastructure changes with fully automated change enabled by the Software-Defined Data Center.
Some Assertions
Gartner has a concept called Business Moments: moments in time when an organization can use technology to disrupt a marketplace, or to further amplify an existing disruption. The example Gartner gives is of last-minute travel applications and the ability of such an application to subscribe to the activity feeds of different forms of accommodation. Instead of presenting a list of options comprising only well-known hotel chains with rates and availability, the application may also give the availability and rates of holiday homes, or even bed and breakfast accommodations. Traditional hotel chains are now competing on the same playing field as holiday homes in the area.
The Business Moment was the injection of other forms of accommodation into the consumer's view. The company that developed the software created a business moment, differentiating its application from others by bringing an alternative model into a market.
These Business Moments demand agility: an opportunity opens and closes, and during that window brands can be altered, markets affected, and business models changed. The message from Gartner is that every company is a technology company, and companies that do not leverage technology to create business moments are at a disadvantage.
For many companies, agility is hampered by a limited ability to embrace change. Change that is heavily governed by humans is arguably mutually exclusive with agility.
Business Process Interfaces – An Inhibitor to Agility
Business process mapping and analysis has identified that 30% to 50% of employee activity consists of undocumented business process interface activities.[1]
Potentially 50% of human-resource-driven operational costs can be attributed to the interfaces between business processes. These interfacing processes are the tribal knowledge passed to new employees by other employees; they define the hand-over deliverables between business processes and how that hand-over is performed.
These interfacing activities can actually be misaligned with the aims of the business, because the employee who created them does not have a full view of the process's role in the larger business outcome. These undocumented activities are applied with variable outcomes, and consequently their deliverables vary in consistency.
These interfacing activities have a direct impact on the ability to adopt change. Change is simply a sequence of business processes that modifies another existing business process. If change is implemented using a process in which up to 50% of the activity can vary, then both the time to deliver the change and its outcome will vary.
To address this variability, IT organizations have adopted IT Service Management principles. These aim to govern the change process, but they do not necessarily attack the root cause of the variability: failure-prone business process interfaces. Arguably, the existence of change control boards is symptomatic of business process interfaces that are variable and prone to failure. Unfortunately, the governance applied to change often creates significant overhead on the adoption of change.
When we consider that agility requires change to be consumed in a timely manner with consistent outcomes, I think we can see how agility is hampered by change management.
Outcomes of Business Process Interface Failures
In my time in the IT industry I have lost count of the number of times a process failure has caused an unexpected outcome. The stereotypical example is the introduction of a new service, which requires a set of changes to be applied across the infrastructure: networking, storage, security services, monitoring, backup/recovery. The governance system drives the change through the environment, with tickets raised and closed by specialists responsible for the different elements. However, human operation of a business process creates inconsistency, and there is the potential for an element of the change to be applied incompletely or incorrectly. For example, the backup configuration for the new service could be applied incorrectly, creating the potential for unexpected data loss should the service fail.
The overhead from governing what should be repeatable change is not insignificant, driving many hours of reporting, meetings, testing, and refinement to support the required change. Gartner has identified that between 20% and 28% of projects that fail do so because of late delivery.
Infrastructure Change – Repeatable Change
Infrastructure as defined in the Oxford Dictionary is “the basic physical and organizational structures and facilities (e.g., buildings, roads, and power supplies) needed for the operation of a society or enterprise.”
In the context of IT Services, infrastructure is the combination of commodity elements that supports the instantiation of business model automation systems. The infrastructure alone does not deliver business outcomes; it enables them.
A commodity is a well-understood basic element with limited scope for change. Gold, for example, is a commodity; it can be melted down into different shapes and sizes and combined with other elements, but the options for changing the commodity itself are limited. In IT, storage can be considered a commodity because it provides persistence of information. It has a limited set of attributes that can be modified: availability, performance, recoverability, capacity, and security. The same attributes apply to compute and networking elements. Differentiation between vendors comes in the form of software services across all of these elements.
Because the function changes so little, the majority of change is repeated change. In networking, for instance, modifying a business service may require a change to the service's security posture, which in turn requires a new VLAN to be created and a standard Access Control List (ACL) applied. In many organizations today this involves significant human interaction, governed by an IT Service Management system to ensure change governance is in place. The actual change applied to the network will very likely follow the same process regardless of which business service drives it. It will likely involve the same element management tools; the only difference will be the values used in the process.
When we consider the operational processes involved in infrastructure change broadly, the vast majority are repeats of exactly the same business process, just with different metadata. We have vertical stacks of element managers and specialized skills performing the same task over and over.
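To make this concrete, here is a minimal sketch in Python of that VLAN-plus-ACL change expressed as one parameterized function. The element manager URL, endpoint paths, and payload fields are hypothetical stand-ins, not a real vendor API; the point is that only the metadata differs between invocations.

```python
# Minimal sketch: the same repeatable network change as one parameterized
# function. The REST endpoints and payloads are hypothetical stand-ins
# for a vendor element manager's API.
import requests

ELEMENT_MANAGER = "https://element-manager.example.com/api/v1"  # hypothetical

STANDARD_ACL = [
    {"action": "permit", "protocol": "tcp", "port": 443},
    {"action": "deny",   "protocol": "any", "port": "any"},
]

def apply_network_change(service_name: str, vlan_id: int, subnet: str) -> None:
    """Create a VLAN and apply the standard ACL for a business service."""
    requests.post(f"{ELEMENT_MANAGER}/vlans",
                  json={"id": vlan_id, "name": service_name, "subnet": subnet},
                  timeout=30).raise_for_status()
    requests.post(f"{ELEMENT_MANAGER}/vlans/{vlan_id}/acl",
                  json={"rules": STANDARD_ACL},
                  timeout=30).raise_for_status()

# The same process every time; only the metadata changes:
apply_network_change("payroll-portal", vlan_id=210, subnet="10.0.21.0/24")
apply_network_change("booking-engine", vlan_id=220, subnet="10.0.22.0/24")
```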
In summary, our method of infrastructure change creates variability of outcome, driven by a combination of undocumented business process activities and human variability in process execution. The result is a requirement for human-operated governance, designed to reduce variability by implementing control and review points throughout the change process. This in turn creates human resource overhead and increases the time required to implement change. Infrastructure change is, by its nature, mostly repeatable change, and it delivers limited direct business value. Organizations therefore expend disproportionate amounts of time and human resources implementing, and consequently governing, change that is repeatable.
We use machines to perform highly repetitive tasks because they can perform them faster and with less variation. Infrastructure change is an opportunity to replace human resources with machine resources. Commoditizing change in the infrastructure layer reduces or eliminates the need for human oversight and ensures consistency of change. We have already seen examples of this. Before virtualization existed on x86, moving an application from one physical system to another took weeks of effort in planning, testing, and implementing the change; depending on how well the humans planned and tested, the implementation either went well or failed. When x86 virtualization appeared, conservative organizations would follow a change control process for using vMotion to move an application from one physical system to another. Eventually the machine-implemented vMotion process became so trusted and proven that change control oversight was no longer required. This has evolved to the point where a machine decides whether an application should be moved to another physical system and then implements the change itself.
Replacing Human Interfaces with Machine-Driven Change
Now that we have discussed the reasons for reconsidering the way we implement change to infrastructure, let's consider the how.
The how has been the issue, complicated by a number of factors:
- A vertically oriented infrastructure management architecture. Vertical management stacks for Storage, Computing, Networking, and Networking Services.
- No single common machine language enabling broad integration across management domains.
- High levels of variation in infrastructure offerings: essentially varying combinations of the common attributes of storage, networking, and compute (availability, performance, capacity, recoverability, security), which diluted standardization.
Organizations have attempted to address these factors with combinations of orchestration and scripting technologies acting as adapters to the various vendor vertical stacks, with varying degrees of success. The challenge with this approach has been the variation in programmatic interfaces between vendor stacks, resulting in orchestration gaps, as well as the need to update and maintain skillsets and instances of various scripting languages as individual vendor element managers changed.
Depending on the size of the orchestration gaps, these initiatives delivered varying value. Further, many of them faced environments where a custom fit to solution needs was valued more than consistency of the delivered service and its delivery time, which made broad orchestration even more difficult to achieve.
Fundamentally, attempting to integrate elements that were not designed or built to be integrated reduces the probability of achieving broad machine-driven infrastructure change.
Step One of the how is a single software platform that spans networking, storage, and compute, breaking down the vertical management silos and introducing a horizontal layer that is tightly integrated by design. This approach is far more likely to succeed because the capabilities of the management interface we want to orchestrate do not vary between vendor vertical stacks.
Once this software layer is available across all of the infrastructure elements, vendors of storage, networking, and compute services need only integrate with that one layer. The software layer acts as the common machine language that enables one infrastructure element to communicate with another.
This common machine language is a set of common Application Programming Interfaces (APIs): well-defined rules that govern how software components communicate.
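As a rough illustration of what such a common interface might look like, the Python sketch below defines one contract that every infrastructure element implements, so an orchestrator can drive storage, networking, and compute the same way. All class and method names here are hypothetical illustrations, not an actual VMware API.

```python
# Sketch of a "common machine language": one well-defined interface that
# every infrastructure element implements. All names are hypothetical
# illustrations, not a real VMware API.
from abc import ABC, abstractmethod

class InfrastructureElement(ABC):
    """Common contract for any element plugged into the software fabric."""

    @abstractmethod
    def provision(self, spec: dict) -> str:
        """Create a resource from a declarative spec and return its ID."""

    @abstractmethod
    def modify(self, resource_id: str, spec: dict) -> None:
        """Apply a change to an existing resource."""

    @abstractmethod
    def decommission(self, resource_id: str) -> None:
        """Tear the resource down."""

class VendorStorageArray(InfrastructureElement):
    """A vendor integrates once, against the common interface."""

    def provision(self, spec: dict) -> str:
        print(f"Creating datastore: {spec}")
        return "datastore-42"

    def modify(self, resource_id: str, spec: dict) -> None:
        print(f"Reconfiguring {resource_id}: {spec}")

    def decommission(self, resource_id: str) -> None:
        print(f"Deleting {resource_id}")

# An orchestrator drives any element through the same contract:
array = VendorStorageArray()
ds = array.provision({"capacity_gb": 500, "tier": "gold"})
array.modify(ds, {"tier": "silver"})
```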
The Software-Defined Data Center implements this one software fabric over the infrastructure hardware, so that capabilities previously delivered in vendor-specific ways by hardware and vertical element managers are now instantiated in one way by the software layer. The software fabric that is the software-defined data center is a composition of virtualization software and specialized vendor software, communicating over APIs to enable event-driven, programmatic change across storage, networking, compute, security controls, and so on.
Orchestration on its own simply automates a process; it does not include the intelligence to initiate change based on external influences.
Step Two is embedding a Policy Engine into the software-defined data center. Going back to my example of automated vMotion: the capability to move an application, or applications, between physical systems using vMotion could be achieved with orchestration and scripting alone. The power of the capability is that the automation is triggered by a policy about system use.
This last point is key, because it differentiates the software-defined data center from a broad common orchestration layer. The software-defined data center includes a Policy Engine that consumes information from sources plugged into it and decides whether an orchestrated change is required because of external influences. For instance, the stereotypical automated scale-out of an application under increased load is made possible by the software-defined data center. Provisioning an additional application server requires appropriate storage to be assigned, storage services to be activated, network load-balancing services to be modified, IP address systems to be updated, network access control lists to be modified, backup configurations to be changed, and monitoring systems to accommodate the change.
Because this single software fabric has unified visibility across the infrastructure, it is also the right place to embed the Policy Engine, enabling the fabric to make change decisions without human intervention or oversight.
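The sketch below illustrates the distinction under stated assumptions: scale_out() on its own is plain orchestration, while the loop around it is the Policy Engine deciding when that change is required. The thresholds, metric source, and step names are all hypothetical.

```python
# Rough sketch of a Policy Engine sitting above orchestration.
# scale_out() is the orchestrated workflow; policy_loop() watches
# telemetry and decides *when* to run it. Thresholds, the metric
# source, and the step names are hypothetical.
import random
import time
from dataclasses import dataclass

@dataclass
class ScaleOutPolicy:
    max_cpu_percent: float = 80.0  # sustained load that triggers scale-out
    max_instances: int = 10        # the policy also bounds the change

def current_cpu_percent(service: str) -> float:
    """Stand-in for telemetry from sources plugged into the fabric."""
    return random.uniform(0.0, 100.0)

def scale_out(service: str) -> None:
    """The orchestrated change: each step listed above, as a stub."""
    for step in ("assign storage", "activate storage services",
                 "update load balancer", "allocate IP address",
                 "modify access control lists", "update backup configuration",
                 "register with monitoring"):
        print(f"[{service}] {step}")

def policy_loop(service: str, policy: ScaleOutPolicy) -> None:
    instances = 1
    for _ in range(5):  # bounded here for demonstration
        if (current_cpu_percent(service) > policy.max_cpu_percent
                and instances < policy.max_instances):
            scale_out(service)  # change happens with no human in the loop
            instances += 1
        time.sleep(1)  # re-evaluate the policy periodically

policy_loop("booking-engine", ScaleOutPolicy())
```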
This software layer must break down the vertical element-manager silos and provide a common set of API services across storage, compute, and networking to enable integration with specialist vendor capabilities. It decouples applications from the physical layer uniformly across the infrastructure while reproducing the services expected of the infrastructure layer.
The software-defined data center is this programmable software fabric and its foundation is VMware virtualization – vSphere.
Eating Our Own Dog Food – the Outcome
VMware presented a session at VMworld 2013 that discussed our internal IT organization’s experience with the software-defined data center. I think the talk makes a few points very clear:
- You will spend time identifying the repeatable processes and interfacing activities and translating them into automated, machine-driven processes. That time, however, pales in comparison with the time spent repeating the ineffective, inefficient manual processes that existed before.
- Adopting this model of infrastructure change will enable agility and a competitive edge.
Conclusion
As with cloud computing, the software-defined data center is not about the technology itself, though the technology is super cool. It is about the transformation it brings to the IT organization, enabling the associated business to adapt to changes in its business models and services. It brings the efficiencies of commoditized public cloud operations into the enterprise data center, and the agility it enables gives the organization the opportunity to embrace Business Moments.