
HPE Tapping the Potential of DevOps By @Dana_Gardner | @DevOpsSummit #DevOps

How HPE’s internal DevOps paved the way for speed in global software delivery

The next BriefingsDirect DevOps innovator case study discussion explores how Hewlett Packard Enterprise’s (HPE's) internal engineering and IT organizations are exploiting the values and benefits of DevOps methods and practices.

To help us better understand how DevOps significantly aids in the task of product and technology development, please welcome James Chen, Vice President of Product Development and Engineering at HPE. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: First tell us a little bit about the scale of the organization. Clearly HPE is a technology company, has a very large internal IT organization, perhaps one of the largest among the global 2000.

Chen: We have a pretty sizeable IT organization, as you can imagine. We support all the HPE products and solutions serving our customers. We have about 8,000 to 9,000 employees, and we have a pretty large landscape of applications, something like 2,500 enterprise-scale large applications.


We also have six data centers that host all the applications. So it's a pretty complicated infrastructure. DevOps means a lot to us because of the speed and agility that our customers are looking for, and that's why we embarked on our journey to DevOps.

Gardner: Tell us about that journey. How long has it been? How did you get started? Maybe you can offer how you define DevOps, because it is a bit of a loose topic in how people understand and define it.

Chen: We've been on the DevOps journey for the last couple of years. Parts of the organization, the developer teams, were already practicing different aspects of DevOps here and there. Someone was driving complete test automation. Someone was doing a kind of Continuous Integration and Continuous Delivery (CI/CD), but it never reached the scale that we believed would start impacting the overall enterprise application landscape.

Some months ago IT embarked on what we called a pilot program for DevOps. We wanted to be the ones doing DevOps in HPE, and the only way you can benefit from DevOps and understand its implications for the IT organization is just to go out and do it. So we picked some of the very complicated applications, believing that if we could do the pilot well, we would learn a lot as an organization, and it would be helpful to the future of the IT organization and deliver value to the business.

We also believed that our learning and experiences could help HPE's customers to be successful. We believe that every single IT shop is thinking about how they can go to DevOps and how they can increase speed and agility. But they all have to start somewhere. So our journey would be a good story to share with our peers.

Inception point

Gardner: Given that HPE has so many different products, hardware and software, what did you do to find the right inception point? You have a very large inventory of potential places and ways that you could start your DevOps journey. What turned out to be a good place to start that then allowed you to expand successfully?

Chen: We believed the easiest way was to start with some of the home-grown applications. We chose home-grown applications because it's a little bit easier, simply because you don't have the same scale of vendor/ISV dependencies to work with.

We decided to pick a handful of applications. Most of them are very complicated and some of them are very important. A good example is the OneNote application. This is the support automation application, which touches every device, every part that we ship to our customers. That application is essentially the collection point for performance data for all the devices in the customer data center, how we monitor them, and how we deal with them.

It's what I consider a very important, mission-critical, enterprise-scale application. That was one of the criteria: pick an application that is really complicated and most likely home-grown. The other criterion was to pick an application whose team had already practiced some of this, was ready to do something, and really wanted to embrace the new methodology and new ideas.

The reason behind that is that we didn't want to set up a separate DevOps team to pair with the existing developer team. Ideally, we wanted the existing developer team to go through that transformation. They became the transformation drivers, taking the old way of working into the new DevOps way. So that was the second criterion: the team, the people themselves, had to be motivated and ready for a change.

The third one was the application's scale and impact. We understood the risk and we understood the implications. The better the understanding you have, the easier it is to get buy-in from your business partners and your executive team. Those are the criteria we chose for going into DevOps.

Gardner: I'm really curious. Given this super important application for HPE, how is performance measured and managed across all of these deployments, applying DevOps methodology, and getting that team buy-in? What did it earn you? What’s the payoff? What did you see that made DevOps worthwhile?

Chen: With DevOps we captured three dimensions. One is collaboration. What I mean by collaboration is bringing operations into development and development into operations, so the operations and development teams work side-by-side. That's the new collaborative relationship.

The traditional way was for the developers to finish the product and then throw it over the wall to the operations guys. Then, when something goes wrong, everyone starts freaking out, asking who owns the issue.

The new way is a very close collaboration between the development team and operations. From the get-go, when we start to design a product or software application, we already have people who are running the operation. They run the support in the team by understanding the risk and the implications for the operation. So that’s one dimension, the collaboration.

The second piece is about automation. You want to figure out a way to automate end-to-end. That's very important. You asked a very good question about how to get buy-in from business partners, who ask, "You're going to do CI/CD. What is the implication if something goes wrong?"

Powerful weapon

Automation has become a very powerful weapon, because when you automate the development and deployment process, it becomes much easier to roll back when something goes wrong. Because you're making a small incremental change every time, the impact is much easier to understand. We believe the downtime is much less than with the traditional way of doing the process. That's the second dimension, automation.
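The rollback point can be illustrated with a minimal sketch. This is not HPE's actual tooling; all names here are hypothetical, and a real pipeline would swap traffic between deployed versions rather than manipulate a list, but the idea is the same: because each automated release is one small increment, backing it out is a single, mechanical step.

```python
# Minimal sketch of an automated deploy-with-rollback step.
# All names are hypothetical, for illustration only.

def deploy(versions, new_version, health_check):
    """Append new_version to the release history; roll back if unhealthy."""
    versions.append(new_version)
    if not health_check(new_version):
        versions.pop()        # small increment -> rollback is one step
        return versions[-1]   # previous known-good version stays live
    return new_version

# Example: a release that fails its health check is rolled back.
history = ["1.0", "1.1"]
live = deploy(history, "1.2", health_check=lambda v: v != "1.2")
assert live == "1.1"                  # previous release stays live
assert history == ["1.0", "1.1"]      # failed increment removed
```

Contrast this with a six-month release: the diff is so large that "roll back" means a second project, which is why small increments are what make automated rollback credible to business partners.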

The third one is codification. Codification means that everything is code. The old way was to define your infrastructure and have someone manually put all the infrastructure together to run an application. Those times are over.

Full DevOps means you drive everything from code: the configuration is easy to express, your infrastructure is provisioned based on that code, and it's ready to run an application.
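A tiny sketch of that codification idea, under stated assumptions: the spec fields and the `provision` function below are invented for illustration (a real shop would use an infrastructure-as-code tool), but they show the shift from hand-assembled infrastructure to an environment declared as data that a provisioning step can act on repeatably.

```python
# Sketch of "everything is code": the environment an application needs is
# declared as data, and provisioning reads that declaration instead of a
# person assembling infrastructure by hand. Fields are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class InfraSpec:
    app: str
    instances: int
    cpu_cores: int
    memory_gb: int

def provision(spec: InfraSpec) -> list:
    """Return the (simulated) hosts provisioned from the declared spec."""
    return [f"{spec.app}-node-{i}" for i in range(spec.instances)]

spec = InfraSpec(app="support-automation", instances=3, cpu_cores=4, memory_gb=16)
hosts = provision(spec)
assert hosts == ["support-automation-node-0",
                 "support-automation-node-1",
                 "support-automation-node-2"]
```

Because the spec is code, it can be versioned, reviewed, and reapplied identically in every environment, which is what makes the two-week release cadence described below sustainable.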

So DevOps consists of those three things. That's truly how we talk about it and how we understand DevOps: collaboration, codification, and automation.

Having said that, there are other implications for the organization, and those have a very profound impact on our IT organization. That's where we understand DevOps and use that kind of methodology. Our thinking is to take it to the stakeholders and customers, and show them the benefit that we're able to deliver for them. That's the reason we got buy-in and support from the get-go.


Gardner: Is speed the number one reason to do this, or is it quality or security? What is the biggest reward when you do this well?

Chen: Speed is probably the number one reason to go to DevOps. Of course the quality, high availability, and agility have significantly improved. But I would really focus on speed, because if you ask any business owner, business partner, or your customer today, the number one challenge for them is speed.

Earlier in our conversation, I mentioned automation. Traditionally, we did a release every six months, because it's so complicated, as you can imagine. We have products across storage, networking, and servers, hardware and software. If we made platform changes, covering all those customers, devices, and products required pretty much six months.

With a six-month cycle, products shipped to customers before the next release would not have the best support automation capability.

The performance of our service quality has a significant impact on customer satisfaction. Now we're talking about a release every two weeks. That's a significant improvement, and you can see customers are happy because now, with every product release, they have the automation capability within two weeks. You immediately get the best monitoring and proactive-care capability that we provide to our customers.

Bottom line

Gardner: I should think that also has an impact on the bottom line, because you're able to bring new features and functions to market, add more value to the products, and then charge more money for them. So it allows you to get the value of your organization into your bottom line faster as well.

Chen: Yes. For example, for any product or service that has a call-home capability, we want to deliver the support automation and proactively take care of it within two weeks. It's a huge advantage for us, because the competition typically takes a few days to a couple of weeks just to install everything.

That two-week cadence is probably the optimal timing for this kind of service scheme. Could we push it to one week, or a few days? It's possible, but the return on investment may not be there on day one.

For every application, when you make the call about DevOps, it's not about wanting to do it as fast as possible. You want to examine your business case and determine, "What's the sweet spot for us with DevOps?" In this particular case, looking at the feedback from customers and business partners, we believe two weeks is the right spot for us. That's significantly better than what we used to have, every six months.

