Software-defined Datacenters Benefit Enterprises: Herrod

VMware CTO Steve Herrod on how the software-defined datacenter benefits enterprises

In advance of the VMworld conference in San Francisco, I recently sat down with Steve Herrod, Chief Technology Officer and Senior Vice President of Research & Development at VMware.

Our discussion hinges on the intriguing concept of the software-defined datacenter. We look at how some of the most important attributes of datacenter capability and performance now fall squarely within the domain of software.

As a top technology leader at VMware, Herrod has championed this vision of the software-defined datacenter, in which the next generation of foundational IT innovation is implemented largely in software, above the hardware.


VMware CTO Steve Herrod keynoting at 1st Cloud Expo in Silicon Valley in 2008

For example, those who are now building and managing datacenters are gaining heightened productivity, delivering far better performance, and enjoying greater ease in operations and management -- all thanks to innovations at the software-infrastructure level.

Join the discussion here and further explore how advances in datacenter technologies and architecture are -- to an unprecedented extent -- being driven primarily through software. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: We've heard a lot over the decades about improving IT capabilities and infrastructure management, but it seems that many times we peel back a layer of complexity and we get some benefits, and we find ourselves like the proverbial onion, back at yet another layer of complexity.

Complexity seems to be a recurring inhibitor. I wonder if this time we're actually at a point where something is significantly different. Are we really gaining ground against complexity at this point?

Herrod: It’s a great question, because complexity has long been associated with IT, and people rightly ask why we'll do it differently this time. I see two things happening right now that give us a great shot at this.

One is purely on expectations. All of the opportunities we have as consumers to work with cloud computing models have opened up our imagination as to what we should expect out of IT and computing datacenters, where we can sign up for things immediately, get things when we want them, and pay for what we use. All those great concepts have set our expectations differently.

A good shot

Simultaneously, a lot of changes on the technology side give us a good shot at implementing it. When you combine the technology we'll talk about with that loosened-up imagination of what can be, we're in a great spot to deliver the software-defined datacenter.

Gardner: You mentioned cloud and this notion that it’s a liberating influence. Is this coming from the technologists or from the business side? Is there a commingling on that concept quite yet?

Herrod: It’s funny. I see it coming from the business side, which is the expectation of an individual business unit launching a product. They now have alternatives to their own IT department. They could go sign up for some sort of compute service or software-as-a-service (SaaS) application. They have choices and alternatives to circumvent IT. That's an option they didn't have in the past.

Fundamentally, it comes down to each of us as individuals and our expectations. People are listening to this podcast when they want to, quickly downloading it. This also applies to signing up for email, watching movies, and buying an app on an app store. It's just expected now that you can do things far more agilely, far more quickly than you could in the past, and that's really the big difference.

Gardner: Tech users' expectations are rising based on what they encounter as consumers of technology. We see what the datacenters are capable of from the likes of Google and Facebook. Is it possible for enterprises to also project that sort of productivity and performance onto what they're doing, and, now that we've gone through an iteration of these vast datacenters, maybe to do it even better?

Herrod: I have a lot of friends at Facebook, Zynga, and Google, running the datacenters there, and what’s exciting for me is that they have built a fully software-defined datacenter. They're doing a lot of the things we are talking about here. But there are two unique things about their datacenters.

One is that they have hundreds or even thousands of PhDs who are running this infrastructure. Second, they're running it for a very specific type of application. To run on the Google datacenter, you write your applications a very specific way, which is great for them. But when you go into the business world, they don't have legions of people to run the infrastructure, and they also have a broad set of applications that they can’t possibly consider rewriting.

So in many ways, I see what we're doing as taking the lessons learned in those software-defined datacenters, but bringing them to the masses, and bringing them to companies to run all of their applications, without all of the people cost they might otherwise need.

Gardner: Let’s step back for some context. How did we get here? It seems that hardware has been sort of the cutting edge of productivity, when we think of Moore’s Law and we look at the way that storage, networks, and server architecture have come together to give us the speeds and feeds that have led to a lot of what we take for granted now. Let’s go through that a little bit and think about why we're at a point where that might not be the case anymore.

Herrod: I like to look at how we got to where we are. I think that's the key to understanding where we're likely to go from here.

History of IT decisions

We started VMware out of a university, where we could take the time to study history and look at what had happened. I liked looking at existing datacenters. You can look through the datacenter and see the history of IT decisions of the past.

It's traditionally been the case that a particular new need led the IT department to go out and buy the right infrastructure for that new need, whether it’s batch processing, client/server applications, or big web farms. But these individually made decisions ended up creating the silos that we all know about that exist all over datacenters.

They now have the group that manages the mainframe, the UNIX administration group, and the client PC group, and none of them is using common people or common tools as much as they certainly would like to. How we got to where we are was a series of isolated decisions, each the right thing at the right time, made without recognizing the opportunity to optimize across a broader set of the datacenter.

The whole concept of software-defined datacenters is looking holistically at all of the different resources you have and making them equally accessible to a lot of different application types.

Gardner: Earlier, I used the metaphor of an onion. You peel back complexity and you get more. But when it comes to the architecture of datacenters, it seems that the right comparison might be a snowball, which is layered on another layer, or it has been rolling and gathering as it goes, but not rationalized, not looked at holistically.

Are there some sorts of imperatives now that are driving people to do that? We talked about the cloud vision, but maybe it’s security, maybe it’s the economics, maybe it’s the energy issues, or maybe it's all those things together.

Herrod: It’s a little of each. First of all, I like the onion analogy, because it makes you cry, and I think that’s also key. But it’s a combination of requirements coming in at the same time that's really causing people to look at it.

Going back to the original discussion, it starts with the fact that there are choices now. Every single day you hear about a new case where a business unit or an employee is able to circumvent IT to scratch the itch they have for some particular type of technology, whether it's using Dropbox instead of the file servers that the company has, buying their own device and bringing it in, or just signing up for Amazon EC2, instead of using their local datacenter. These are all examples of them being able to go around IT.

But what often happens subsequently is that, when a security problem occurs or when you realize you're not in compliance, IT is left holding the bag. So we get an environment where user demand can be handled in other ways, but IT has to be able to compete with those alternatives.

We have to let IT be a service provider and be just as responsive, so that they can keep people from going around them. But they still need to be responsible to the business when it comes time to show that Sarbanes-Oxley (SOX) compliance is in place or to make sure that your customer records aren't leaked to everyone else on the Internet.

That unique balance between user choice and IT control is something we've all seen over the last several decades, and it's showing up again at an even larger scale.

New competition


Gardner: As you pointed out, Steve, IT isn’t just competing against itself. That is to say, maybe a 5 percent or 10 percent improvement over how well it did last year will be viewed as very progressive. But they're competing now against other datacenter architects. Maybe it's a SaaS provider, maybe it's a cloud provider, maybe it's a managed service provider (MSP) or a telco that's now offering additional services.

We're really up against this notion that if you don’t architect your datacenter with that holistic software-defined mentality, and someone else does that, you're in trouble.

Herrod: It’s a great point. There are rate cards now for the alternatives. You might pay 7 cents per hour for this, or "this much" per transaction. IT departments in general have not traditionally had a good way of, first, even knowing how much they cost, and second, optimizing to be competitive. So there's an awareness now of how much I'm spending and how long it takes, and those metrics are driving this.
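To make that rate-card comparison concrete, here is a rough back-of-the-envelope sketch in Python. The 7-cents-per-hour figure comes from the conversation above; the internal cost per VM is an assumed placeholder, since part of Herrod's point is that many IT shops don't yet know that number.

```python
# Rough, illustrative comparison only. The public cloud rate is the figure
# quoted in the discussion; the internal cost is an assumed placeholder.
PUBLIC_RATE_PER_VM_HOUR = 0.07      # dollars per VM-hour, from the rate card
HOURS_PER_MONTH = 730               # average hours in a month

ASSUMED_INTERNAL_COST_PER_VM_MONTH = 120.00   # hypothetical fully loaded cost

public_cost = PUBLIC_RATE_PER_VM_HOUR * HOURS_PER_MONTH
print(f"Public cloud:        ${public_cost:.2f} per VM-month")
print(f"Internal (assumed):  ${ASSUMED_INTERNAL_COST_PER_VM_MONTH:.2f} per VM-month")
```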

Gardner: Let’s revisit the context and the history here, looking at virtualization in particular. We've seen it extend beyond servers to data, storage, and also networking. Is this part of your vision of software-defined? Is it strictly virtualization, or does it encompass more? Help me understand how your thinking has progressed along these lines, particularly with regard to virtualization.

Herrod: We'll step back a little bit. VMware, over the last 13 years or so, has done a very good job of completely optimizing how servers are used in the datacenter. You can provision a new virtual machine (VM) in seconds. The cost has gone down by orders of magnitude. We've really done a good job on the compute and memory aspects of the datacenter.

But as you said, a couple of things have to happen from there. It's absolutely crucial to look at the breadth of things that are involved in the datacenter. We talk to customers now, and often they say, "Great, you've just lowered the cost and time taken to provision a new server. But when I put this in production, by the way, I care what LUN it ends up on, I have to look at what VLAN is there, and if it's in the right section of my firewall setup."

It might take seconds to provision a VM, but then it takes five days to get the rest of the solutions around it. So we see, first of all, the need to get the entire datacenter to be as flexible and fast moving as the pure server components are right now.
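As a thought experiment, the difference Herrod describes looks something like the sketch below: a single automated request that carries the storage, network, and firewall placement along with the compute, instead of leaving them as separate manual tickets. Every class and field name here is hypothetical and does not map to an actual VMware API.

```python
# Hypothetical sketch only: provisioning the full application context in one
# automated request, not just the VM. None of these names map to a real
# VMware (or other vendor) API; they illustrate end-to-end provisioning.
from dataclasses import dataclass

@dataclass
class ProvisionRequest:
    name: str
    cpus: int
    memory_gb: int
    storage_tier: str     # stand-in for "which LUN / storage class"
    network_segment: str  # stand-in for "which VLAN"
    firewall_zone: str    # stand-in for "which section of the firewall"

def provision(req: ProvisionRequest) -> dict:
    """Allocate compute, storage, network, and security placement as one step."""
    return {
        "vm": {"name": req.name, "cpus": req.cpus, "memory_gb": req.memory_gb},
        "disk": f"allocated from tier:{req.storage_tier}",
        "network": f"attached to segment:{req.network_segment}",
        "firewall": f"placed in zone:{req.firewall_zone}",
    }

print(provision(ProvisionRequest("web-01", 2, 8, "gold", "app-tier", "dmz")))
```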

Again, if you look at the last couple of years, I would rate the industry -- ourselves and others -- as moving forward quite well on the storage side of things. There are still some things to do for sure, but storage, for the most part, has gotten a good head start on being fully virtualized and automated.

The big buzz around the industry right now has been the recognition that the network is the huge remaining barrier to doing what you want in your datacenter. Plenty of startups and all kinds of folks are working on software-defined networking. In fact, that's where the term for the software-defined datacenter comes from, because once networking, the big remaining inhibitor, follows along, you'll be opened up to having a truly planned datacenter solution in place.

Now, we can break that down a little bit. It's important to talk about the technology piece of this. But when I say software-defined, I really look at three phases of how software comes in and morphs this existing hardware that you have.

The first step

The first step is to abstract away what people are trying to use from how it is being implemented. That's the core of what virtual even means, separating the logical from the physical. It gives you hardware independence. It enables basic mobility and all sorts of other good things.

The second phase is when you then pool all of these abstracted resources into what we call resource pools. Anyone who uses VMware software knows that we create these great clusters of computing horsepower and we allow vMotion and mobility within it.

But you need to think about that same notion of aggregation of resources at the storage and networking levels, so they become this great pool of horsepower that you can then dole out quite effectively. So after you've abstracted and pooled, the final phase is how you now automate the handling of this. This is where the real savings and speed come from.

Once you have pools of resources, when a new request comes in, you should be able to allocate storage, security, networking, and CPU very quickly. Likewise, when it goes away, you should be able to remove it and put it back into the pool.

That's a bit of a mouthful, but that's how I see the expansion. It first goes from just compute into storage, networking, security, and the other parts of the datacenter. Then simultaneously, you're abstracting each of these resources, pooling them, and then automating them.
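A minimal sketch of those three phases, assuming nothing beyond the description above: resources are abstracted into a generic pool type, pooled by kind, and an automation step allocates from every pool for a request and returns capacity when the workload goes away. The class and function names are invented purely for illustration.

```python
class ResourcePool:
    """Pool of one abstracted resource type (CPU, storage, network capacity)."""
    def __init__(self, kind: str, capacity: int):
        self.kind = kind
        self.capacity = capacity              # total units in the pool
        self.allocated: dict[str, int] = {}   # workload name -> units held

    def allocate(self, owner: str, units: int) -> bool:
        if sum(self.allocated.values()) + units > self.capacity:
            return False                      # pool exhausted
        self.allocated[owner] = self.allocated.get(owner, 0) + units
        return True

    def release(self, owner: str) -> None:
        self.allocated.pop(owner, None)       # capacity goes back into the pool

# Automation phase: one request draws from every pool at once, and tearing
# the workload down returns capacity to all of them.
pools = {kind: ResourcePool(kind, 100) for kind in ("cpu", "storage", "network")}

def deploy(app: str, needs: dict[str, int]) -> bool:
    if all(pools[kind].allocate(app, units) for kind, units in needs.items()):
        return True
    for pool in pools.values():               # roll back any partial allocation
        pool.release(app)
    return False

def retire(app: str) -> None:
    for pool in pools.values():
        pool.release(app)

print(deploy("analytics", {"cpu": 8, "storage": 20, "network": 2}))   # True
retire("analytics")                           # capacity returns to the pools
```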

Gardner: What's really fascinating to me are the benefits you get by abstracting to a virtualization and software-defined level -- the ability to implement with greater ease -- but that comes with underlying benefits around operations and management.

It seems to me that you can start to dial up and down, demonstrate elasticity at a far greater level, almost at that datacenter level, looking at the service-level agreements (SLAs) and the key performance indicators (KPIs) that you need to adhere to and defining your datacenter success through a business metric, like an SLA.

Does it ring true with you that we're talking about some real management and operational efficiencies, as well as implementation efficiencies?

Herrod: It does, Dana, and we talk about it a few different ways. The transformation of datacenters, as we got started, was all about cost savings and capital expense in financial terms: "Let's buy fewer servers." "Let's not build another datacenter."


But the second phase, and where most customers are today, is all about operational efficiency. Not only am I buying less hardware, but I can do things where I'm actually able to satisfy, as you said, the KPIs or the SLAs.

Doing even more


I can make sure that applications are up and running with the level of availability they expect, with less effort, with fewer people, and with easier tools. And when you go from capital expense savings to operational improvements, you impact the ability for IT to do even more.

To take that one level further, whenever I hear people talk about cloud computing -- and everyone talks about this with all sorts of different impressions in mind -- I think of cloud as simply being about more speed. You can do something more quickly. You can expand something more quickly. And that's what this third phase after capital and operational savings is about, that agility to move faster.

As a business's success is tied so closely to how well IT performs, the ability to move faster becomes a strategic weapon against your competitors. Very core to all this is how we can operate more efficiently while satisfying the specific needs of applications in this new datacenter.

Gardner: Another area that I hear about benefiting from this software-defined datacenter is the ability to better reduce and manage risk, particularly around security. You're no longer dealing with multiple parties, like the group overseeing UNIX, the group overseeing PCs, and the group handling the x86 architectures. Process cracks and security issues seem much more likely to crop up under those circumstances.

But when you have a more organized overview of management, operations, and architecture at a common level, you can instantiate best practices around security. Please address security as another fruit to be harvested from a software-defined datacenter.

Herrod: Security means a lot of different things, and it has been affected by a number of different aspects.

First of all, I agree that the more you can have a homogenous platform or a homogenous team working on something, the less variation in process you end up with, exactly as you said, Dana. That can allow you to be more efficient.

This is a replacement for the traditional world of ITIL, where they had to try to create some standard across very different back ends. That's a natural progression toward getting rid of some of the human errors that lead to problems.

A more foundational thing that I am excited about with the software-defined datacenter is how, rather than security being these physical concepts that are deployed across the datacenter today, you can really think of security logically as wrapping up your application. You can do some pretty interesting new things.

A quick segue on that -- the way most security works in datacenters today is through statically placed appliances, whether they're firewalls, intrusion detection, or something else. Then the onus is on you to place your application in the right part of the datacenter to get the right level of protection, and to hope it doesn't move out of that protection zone.

Follows the application

What we're able to deliver with the software-defined datacenter is a way that security is a trait associated with the application, and it essentially wraps and follows the application around. You've virtualized your firewall and you've built it into the fabric of how you're automating deployments. I see that as a way to change the game on how tight the security can be around an application, as well as making sure it's always around there when you deploy it.
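One way to picture "security wraps the application" is the toy model below: the policy is a property of the application object, so a migration carries it along rather than depending on which physical zone the workload lands in. This is a conceptual sketch with invented names, not how any particular product implements it.

```python
# Conceptual sketch: security as a trait of the application, not of a fixed
# physical zone. All class names are invented for illustration.
from dataclasses import dataclass

@dataclass
class SecurityPolicy:
    allowed_ports: list[int]
    intrusion_detection: bool = True

@dataclass
class Application:
    name: str
    policy: SecurityPolicy
    host: str = "datacenter-a"

    def migrate(self, new_host: str) -> None:
        # The policy travels with the application object, so the same
        # firewall/IDS settings apply wherever it is re-deployed.
        self.host = new_host

app = Application("billing", SecurityPolicy(allowed_ports=[443]))
app.migrate("datacenter-b")
print(app.host, app.policy.allowed_ports)   # policy is unchanged after the move
```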

Gardner: For end users the proof is in how they actually consume, relate to, and interact with the applications. Is there something about the applications specifically that the software-defined datacenter brings, a higher level of user productivity benefits? What's really going to be noticeable for the application level to end users?

Herrod: That's a great question. I'm an infrastructure guy, as are probably many people listening here, and it’s easy to forget that infrastructure is simply a means to an end. It's the way that you run applications that ultimately matters. So you have to look at what an application is and what its ideal state looks like. The idea of the software-defined datacenter is to optimize that application experience.

That very quickly translates into how quickly can I get my application from the time I want it until it's running. It dictates how often this application is up, what kind of scale it can handle as more people come in, and how secure it is. Ultimately, it's about the application. I believe the software-defined datacenter is the way to optimize that application experience for all the users.

Gardner: Steve, how about not just repaving cow paths in how we deploy existing types of applications? Is there something inherent in the software-defined datacenter that works to our advantage with innovative new types of applications?

These could be high-performance computing, big data and analytics, or mobile applications with location services folded into how they're served up, where there's a latency-sensitive component. Are there new types of apps that will benefit from this software-defined architecture?

Herrod: This is one of the most profound parts, if we get it right. I've been talking about whether we can collapse the silos that were created and get all of our existing apps onto this common platform. We're doing quite well on that. We are at a point where, depending on who you listen to, about 60 percent of all server applications are running virtual, which is pretty amazing. But that also means 40 percent aren't. So I spend a lot of time understanding why they might not be today.

Part of it is simply that, as businesses get more comfortable, their business-critical apps will get onto the platform, and that's working well. But there are applications emerging, as you talked about, where, if we're not careful, they'll create the next generation of silos that we'll be talking about 10 years from now.

I see this all the time. I'll visit a company that has a purely virtualized pool, but they have also created their grid for doing some sort of Monte Carlo simulations or high-performance computing. Or they have virtualized everything except for their unified communication environment, which has a special team and hardware allocated to it.

We spend quite a bit of time right now looking at the impediments to having those run on top of virtualization, which might be performance-related or something else. Then we go beyond the impediments to how we can make them even better when they run on top of the virtualized platform.

Great applications


Some of the really interesting things we're able to show now with our partners are things I would have never dreamed of as great candidates when we started the company. But we're able to satisfy very strict real-time requirements, which means we can run some great applications used in various sorts of stock trading, but also used in things like voice over IP (VoIP) or video conferencing.

Another big area that's liable to create the next round of silos, if we're not careful, is the big data and Hadoop world. Lots of customers are kicking the tires and creating special clusters and teams to work on that. But just recently, we've shown that the performance of Hadoop on top of vSphere, our virtualization platform, can be great.

We can even show that we can make it far easier to set up. We can make Hadoop more available, meaning it won’t crash as often. And we can even do things where we make it more elastic than it already is. It can suck up as many resources in the software-defined datacenter as it wants, when it needs them, but it can also give them all back when it's not using them.
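The elasticity point can be pictured with a toy scaling model like the one below: the cluster grows toward queued demand and shrinks back to a floor when idle, returning capacity to the shared pool. It is illustrative only and is not based on any real vSphere or Hadoop interface.

```python
# Toy model of the elasticity point: a virtualized Hadoop cluster that grows
# when work is queued and shrinks when idle, returning capacity to the shared
# pool. Purely illustrative; not based on any real vSphere or Hadoop API.

class ElasticCluster:
    def __init__(self, min_nodes: int = 2, max_nodes: int = 20):
        self.min_nodes = min_nodes   # footprint kept even when idle
        self.max_nodes = max_nodes   # cap so other workloads keep their share
        self.nodes = min_nodes

    def rebalance(self, queued_jobs: int) -> int:
        """Scale toward one node per queued job, within the allowed bounds."""
        self.nodes = max(self.min_nodes, min(self.max_nodes, queued_jobs))
        return self.nodes

cluster = ElasticCluster()
print(cluster.rebalance(queued_jobs=12))   # bursts up to 12 worker nodes
print(cluster.rebalance(queued_jobs=0))    # shrinks back, returning capacity
```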

It’s really exciting to look across all these apps. At this point, I don’t see a reason why we can't get almost any type of app that we're looking at today to fit into the software-defined datacenter model.

Gardner: That’s exciting, when we don’t leave stragglers or large portions of business functions cast off. It seems to me that we've reached the capability of mirroring the entire datacenter, whether it’s for purposes of business continuity or disaster recovery (DR), or backup and recovery. It gives us the choice of where to locate these resources, not at the individual server, virtual machine, or application level, but really to move the whole darn datacenter, if that’s important, without a penalty.

For our last blue-sky direction with this conversation, are we at the point where we have fungibility, if you will, of datacenters, or are we getting to that point in the near future, where we can decide at a moment’s notice where we're going to actually put our datacenter, almost location independent?

Herrod: It’s a ways out before we're just casually moving datacenters around, for sure. But I have seen some use cases today that show what's possible, and maybe I'll just give you a couple of examples.

DR has long been one of the real pains for IT to deal with. They have to replicate things across the country and keep two datacenters completely in sync: literally the same hardware, the same firmware layer, and everything else that goes into it.

Very rapidly, this notion of DR has been a driving reason for people to virtualize their datacenter. We have seen many cases now where you're able to fail over your entire datacenter, effectively copying the whole datacenter over to another one, keeping the logical constructs in place, but hosting it in a completely different area.

To get that right, your storage needs to be moved, your network identities need to be updated, and those are things that you can script and do in an automated way, once you've virtualized the whole datacenter.
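A loose sketch of the kind of scripted failover Herrod alludes to, where storage placement and network identity are updated as steps in one automated runbook. The function names are placeholders invented for this example, not calls from an actual recovery product.

```python
# Sketch of the scripted failover described above: storage placement and
# network identity are updated as steps in one automated runbook. The
# function names are placeholders, not calls from a real recovery product.

def remap_network_identity(ip: str, site: str) -> str:
    # Placeholder: in practice this step would update DNS entries, VLAN or
    # port-group bindings, and load-balancer membership for the recovered VM.
    return f"{ip} (re-registered at {site})"

def fail_over(app: dict, target_site: str) -> dict:
    recovered = dict(app)                              # keep the logical constructs
    lun = app["datastore"].split(":")[-1]
    recovered["datastore"] = f"{target_site}:{lun}"    # point at the replicated copy
    recovered["ip"] = remap_network_identity(app["ip"], target_site)
    recovered["site"] = target_site
    return recovered

protected = {"name": "erp", "site": "site-a", "datastore": "site-a:lun07", "ip": "10.0.4.12"}
print(fail_over(protected, "site-b"))
```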

Fun example


Another really fun example I see more and more now is, as mergers and acquisitions happen, we've seen several cases where one company buys another. They had both fully virtualized their datacenters, and they could put the datacenter from one company on a giant storage drive and begin to bring it up on the other side, once they copied it over.

So the entire datacenter isn't moved yet, but I think there are clear indications that once you separate where something runs, and how it runs, from what you're really after, it opens the door to a lot of different optimizations.

Gardner: We're coming up on the end of our time, but we also have the big annual VMworld show in San Francisco coming up toward the end of August. I know you can’t pre-announce anything, but perhaps you can give us some themes. We've talked about a lot of things here today, but are there any particular themes that we've hit on that you think will be more impactful or more important in terms of what we should expect at VMworld?

Herrod: It will be exciting as always. We have more than 20,000 people expected. What I'm doing here is talking about a vision and generalities of what's happening, but you can certainly imagine that what we will be showing there will be the realities -- the products that prove this, the partnerships that are in place that can help bring it forward, and even some use cases and some success stories.

So expect us to give more detail around this vision and to make it very real with announcements and demonstrations.

Gardner: Last question. If I'm a listener here today, I'm intrigued, and I want to start thinking about the datacenter at the software-defined level in order to generate some of the benefits we've been discussing and some of the vision we've been painting, what’s a good way to start? How do you begin this process? What are a few foundational directives or directions that you recommend?

Herrod: I think it can sound very, very disruptive to create a new software-defined datacenter, but one of the biggest things I've been excited about with this technology, versus others, is that there is a set of steps you go through where you get value along the way, while also marching toward where you ultimately end up.

So to customers who are doing this, presumably most of you have done some basic virtualization, but really you need to get to the point where you are leveraging the full automation and mobility that exists today.

Once you start doing that, you'll find that it obviously is showing you where things can head. But it also changes some of the processes you use at the company, some of the organizational structures that you have there, and you can start to pave the way for the overall datacenter to be virtualized, as you take some of these initial steps.

It’s actually very easy to get started, and you can realize benefits along the way. Your existing applications and hardware work. So that would be my real entreaty -- use what exists today and get your feet wet, as we deliver the next round heading forward.

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.


