Autonomic Management Architectures for Cloud Platforms

Discussing differing approaches for managing cloud environments

The platform services segment of cloud is multi-faceted... to say the least. Lately, likely spurred on by announcements like IBM Workload Deployer and VMware Cloud Foundry, I have been thinking quite a bit about one of those facets: environment management. To be clear, I'm not talking about management tools for end-users, though that topic is worthy of many discussions. Rather, I'm talking about the autonomic management capabilities for deployed environments.

Put simply, I define autonomic management capabilities as anything that happens without the user having to explicitly tell the system to do it. The user may define policies or specify directives that shape the system's behavior, but when it comes time to actually take action, it happens as if it were magic. Cloud users, specifically platform services users, are steadily coming to expect a certain set of management actions, such as elasticity and health management, to be autonomic. Increasingly, we see platform service providers responding to these expectations to create more intelligent, self-aware platform management capabilities for the cloud.
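The policy-driven behavior described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any real provider's API: the user declares an elasticity policy once, and the platform evaluates it on every monitoring tick without further instruction. All names (`ElasticityPolicy`, `desired_instances`) are invented for illustration.

```python
# Hypothetical sketch of a user-declared elasticity policy. The user shapes
# behavior through the policy; the platform decides when to act on it.
from dataclasses import dataclass

@dataclass
class ElasticityPolicy:
    min_instances: int
    max_instances: int
    cpu_high: float  # scale out above this average CPU fraction
    cpu_low: float   # scale in below this average CPU fraction

def desired_instances(policy: ElasticityPolicy, current: int, avg_cpu: float) -> int:
    """Evaluated autonomically by the platform on each monitoring tick."""
    if avg_cpu > policy.cpu_high and current < policy.max_instances:
        return current + 1
    if avg_cpu < policy.cpu_low and current > policy.min_instances:
        return current - 1
    return current

policy = ElasticityPolicy(min_instances=2, max_instances=10, cpu_high=0.8, cpu_low=0.2)
print(desired_instances(policy, current=3, avg_cpu=0.9))  # 4 (scale out)
print(desired_instances(policy, current=3, avg_cpu=0.1))  # 2 (scale in)
```

The point is the division of labor: the user's only explicit act is defining the policy; the scaling decisions themselves "happen as if by magic."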

Now, it may be tempting to say that users would not need to know much about the way autonomic management techniques work. Beyond knowing what capabilities their platform provider offers and how to take advantage of those, the end-user can be blissfully unaware. That's the point, right? I agree up to a point. The user probably does not need to know much about the algorithms and inner workings that carry out the autonomic actions. However, I do think the user should be aware of the basic architectural approach used to deliver this kind of functionality. After all, the architectural approach has the potential to impact costs (in terms of resources used), and it certainly will impact the way you begin debugging system failures.

When it comes to architectural approaches for providing these self-aware management capabilities, a few different philosophies prevail in the current state of the art. First, there is the isolated management approach. In this case, separate processes, often running in separate virtual containers, manage one or many deployed environments. The main benefit of this approach is that the containers running the application environment do not compete for resources with the processes managing that environment. The management processes are completely separate: they observe from afar and take action as necessary. Of course, there are drawbacks to this approach as well. Chief among them is the fact that the management and workload components scale separately. As the workload components scale up, the management components will have to scale up as well (surely not at a 1:1 ratio, but there is an upper bound to what a single management process can manage). Additionally, one has to manage the availability of the management and workload components separately. All of these factors can increase resource usage and management overhead.
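A rough sketch of the isolated approach, with invented names and a toy health probe: one external manager process polls many environments from the outside, and the capacity cap hints at why the management tier itself must eventually scale.

```python
# Illustrative sketch only: an external manager observing workloads from afar.
class Environment:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def restart(self) -> None:
        self.healthy = True

class IsolatedManager:
    """Runs in its own container, fully separate from the workloads it manages."""
    def __init__(self, environments, capacity: int = 50):
        # A single manager can only watch so many environments; past this
        # bound the management tier itself has to scale out.
        if len(environments) > capacity:
            raise ValueError("manager at capacity; add another manager instance")
        self.environments = environments

    def tick(self):
        """One monitoring pass: observe from afar, act as necessary."""
        actions = []
        for env in self.environments:
            if not env.healthy:
                env.restart()
                actions.append(env.name)
        return actions

envs = [Environment("app-1"), Environment("app-2")]
envs[1].healthy = False
print(IsolatedManager(envs).tick())  # ['app-2']
```

Note that the manager consumes its own container's resources, so the workload never pays for monitoring, which is exactly the trade described above.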

Another architectural approach in this arena is the self-sustaining approach. Here, the system embeds management capabilities and processes into each deployed environment. This removes the need for an external observer, means that management capabilities scale with your deployments, and eliminates the need to manage the availability of separate components. All of this can reduce the overall management overhead. The main drawback is that the management processes can potentially compete with your application processes for resources. If the platform service solution you are using takes this approach to delivering management functionality, my advice is simple: do not assume anything. That means don't assume that management processes will adversely affect the performance of your application workload, and don't assume they will not. Test, test, test, test!
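As a hypothetical sketch of the self-sustaining approach, the management loop below lives inside the application process itself, here as a background thread. The names are invented; the point is that the manager shares the process (and its CPU and memory budget) with the workload.

```python
# Illustrative sketch only: management embedded in the deployed environment.
import threading
import time

class SelfManagedApp:
    def __init__(self):
        self.healthy = True
        self.restarts = 0
        self._stop = threading.Event()
        # The manager runs inside the app process, competing with the
        # workload for resources -- the key trade-off of this design.
        self._manager = threading.Thread(target=self._manage, daemon=True)

    def start(self) -> None:
        self._manager.start()

    def stop(self) -> None:
        self._stop.set()
        self._manager.join()

    def _manage(self) -> None:
        """Self-observation and self-healing; no external observer needed."""
        while not self._stop.is_set():
            if not self.healthy:
                self.healthy = True
                self.restarts += 1
            time.sleep(0.01)

app = SelfManagedApp()
app.start()
app.healthy = False
time.sleep(0.05)  # give the embedded manager a tick or two to react
app.stop()
print(app.healthy)  # True -- healed from within, no external party involved
```

Because every deployment carries its own manager, management capacity scales with the workload automatically, but so does the resource cost of that management.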

As always, there is a middle ground, a hybrid approach if you will. In this case, every deployment creates a running application environment with some amount of embedded management capability. The embedded management components can take some actions on their own, but rely on an external component for other actions or for raw information. This is quite popular in virtual environments, since it can be tough for processes running in a guest virtual machine to get details about overall resource usage on the underlying physical host. Instead, processes running in the virtual machine call out to an external component that has visibility into resource usage at the physical level. This approach strikes a compromise between the higher management overhead of the first approach and the dueling processes of the second. That said, it inherits some of the drawbacks of both.
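The hybrid split can be sketched as follows, again with purely illustrative names: the embedded agent decides what it can from inside the guest, but must ask an external component for the host-level view a virtual machine cannot see on its own.

```python
# Illustrative sketch only: embedded agent plus external host-level monitor.
class HostMonitor:
    """External component with visibility at the physical-host level."""
    def __init__(self, host_cpu: float):
        self.host_cpu = host_cpu

    def physical_cpu(self) -> float:
        return self.host_cpu

class EmbeddedAgent:
    """Runs inside the guest; local decisions are its own, host data is not."""
    def __init__(self, monitor: HostMonitor, local_cpu: float):
        self.monitor = monitor
        self.local_cpu = local_cpu

    def decide(self) -> str:
        # The agent can see its own in-guest resource usage directly...
        if self.local_cpu > 0.9:
            return "throttle-local"
        # ...but must call out to the external monitor for the physical view.
        if self.monitor.physical_cpu() > 0.8:
            return "request-migration"
        return "ok"

agent = EmbeddedAgent(HostMonitor(host_cpu=0.85), local_cpu=0.3)
print(agent.decide())  # request-migration
```

When debugging a system built this way, failures can originate on either side of that split, which is one reason the architectural approach matters to users.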

I am not necessarily advocating for one architectural approach over the others. Some approaches certainly fit certain scenarios better, but I do not see a silver bullet here. I simply think that users should be aware of what their particular choice of solution does in this respect and plan (and test!) accordingly.

More Stories By Dustin Amrhein

Dustin Amrhein joined IBM as a member of the development team for WebSphere Application Server. While in that position, he worked on the development of Web services infrastructure and Web services programming models. In his current role, Dustin is a technical specialist for cloud, mobile, and data grid technology in IBM's WebSphere portfolio. He blogs at http://dustinamrhein.ulitzer.com. You can follow him on Twitter at http://twitter.com/damrhein.
