Co-authored by Chris Wolf, VP and CTO of Global Field & Industry, and Mark Peek, Principal Engineer
Serverless computing is an increasingly popular way to create microservices without worrying about how functions are orchestrated or scaled. This frees developers to focus on what matters most to them: delivering value to their customers and business. Serverless provides a very simple development model: you write and execute functions without worrying about compiling code or managing the underlying infrastructure. Much of the intelligence and automation behind the Amazon Echo, for example, is backed by serverless functions. The Echo alone has led to millions of serverless functions executing on the AWS cloud today.
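To make the development model concrete, here is a minimal sketch of what a serverless function looks like to a developer. The `(event, context)` signature follows the common AWS Lambda convention; the greeting logic is purely illustrative.

```python
# A Lambda-style function: the platform invokes handler() with an
# event payload. There is no server, scaling, or orchestration code
# here -- that is all the platform's job.

def handler(event, context=None):
    """Return a greeting for an Echo-style voice command."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

The developer writes only this; the platform decides when, where, and how many copies of it run.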
Serverless or FaaS?
There was a recent debate in the Cloud Native Computing Foundation (CNCF) serverless working group about the difference between serverless and Functions as a Service (FaaS). On a public or managed cloud, you do not need to know which servers are running your workloads or worry about orchestration; this is the most typical use of the term serverless. On the other hand, you might have a private cloud, managed by your own business, that exposes a way to run functions; this is the more typical use of FaaS. While FaaS is more technically accurate, the term “serverless” has taken hold, and people tend to use it generically to cover both. In many ways the distinction depends on your role and perspective: to a developer it is always serverless, while to a cloud admin it is FaaS.
Serverless Computing Attributes
In general, the following attributes commonly describe serverless or FaaS:
- Easy way to implement microservices
- Auto-scaled
- Stateless
- Event driven
- Low cost (you pay only for the time the function takes to execute)
- No servers to provision or manage (public/managed cloud)
Let’s talk about these attributes. Microservices can be built in a number of ways, with containers or pods currently the most popular. Being able to simply write a function and let the infrastructure take care of orchestrating, scheduling, and auto-scaling the workload based on incoming events is extremely compelling to many organizations. It also provides resilience to sudden bursts of asynchronous events, since auto-scaling absorbs the inbound load. With services like AWS Lambda you get a truly serverless infrastructure that manages all of the backend orchestration. FaaS remains important for private clouds, however, given the same ease of implementing microservices.
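Statelessness is worth a closer look, because it is what makes auto-scaling safe: any invocation may land on a brand-new instance, so durable state must live outside the function. A minimal sketch, with a plain dict standing in for an external key-value service such as Redis or DynamoDB:

```python
# Stateless event handler: durable state lives in an external store,
# never in module-level variables, because each invocation may run on
# a fresh, short-lived instance. `store` is a stand-in for a real
# external key-value service.

def count_event(event, store):
    """Count events per device, keeping the count in the store."""
    key = event["device_id"]
    count = store.get(key, 0) + 1  # read-modify-write via the store
    store[key] = count
    return count

store = {}  # simulated external store
count_event({"device_id": "sensor-7"}, store)
count_event({"device_id": "sensor-7"}, store)
# store["sensor-7"] is now 2, no matter which instance ran each call
```

Because the function holds no state of its own, the platform can run a hundred copies in parallel or tear them all down between events.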
Serverless/FaaS on VMware Infrastructure
As we look at compute, there are traditional VMs/instances, containers, and now functions. Each of these has unique characteristics and continues to have real-world use cases. In applications designed with cloud-native architectures, we see serverless emerging as a design pattern. We are looking to support serverless on private clouds, both to enable this design pattern and to make functions portable across clouds.
In Tuesday’s VMworld keynote, we highlighted the common use cases for FaaS on VMware infrastructure. Over the past year, we have heard from our customers and the community at large that there are several use cases where VMware should empower FaaS and serverless to run at the edge or in the data center. Those include low latency requirements, privacy/control, and data locality.
Let’s start with latency. If you consider the earlier Amazon Echo example, it can take on average 2-3 seconds for Alexa to complete an action based on a voice command. That’s fine in your home, but it’s an eternity for many IT use cases. Consider an automated action that might be required based on a heat-sensor trigger in a manufacturing plant: three seconds could be the difference between a minor repair and a burned-out machine. Furthermore, manufacturing organizations see a tight correlation between automated actions and IoT sensor event triggers. Triggered actions need to happen in real time and are best enabled by functions that can execute as close to the sensors as possible.
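The heat-sensor scenario can be sketched as a single edge function: it receives the sensor event and decides, within the same invocation, whether the machine must be stopped. The threshold value and event field names below are hypothetical, chosen only to illustrate the shape of such a function.

```python
# Hypothetical edge function for the heat-sensor scenario. Running it
# close to the sensor keeps the trigger-to-action path well under the
# multi-second round trip of a distant cloud. The 95 C threshold is
# an illustrative value, not a real manufacturing spec.

SHUTDOWN_THRESHOLD_C = 95.0

def on_sensor_event(event):
    """Decide whether a heat reading requires an emergency shutdown."""
    temp = event["temperature_c"]
    if temp >= SHUTDOWN_THRESHOLD_C:
        return {"action": "shutdown", "machine": event["machine_id"]}
    return {"action": "none", "machine": event["machine_id"]}
```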
We have also heard about privacy and control concerns. That is especially true in the financial services vertical, where organizations are concerned about functions executing in the “cloud” with no visibility or control over where data is stored and processed. These organizations want full privacy and isolation of functions to go along with a full audit trail to satisfy compliance mandates.
Data locality is one other common concern. If I want to tightly integrate business intelligence and associated actions with data being mined from IoT sensors, in many use cases it is impossible to move that data to a cloud environment. For example, a typical IoT-enabled aircraft can generate more than 500 GB of data per flight; it’s easier to run analytics services on the plane in real time than to try to move the data. In addition, many manufacturing organizations expect their larger factories to generate more than 1 PB of sensor data per day by 2020, again requiring a range of services to be co-located in the factory.
Functions are designed to quickly complete a task. On vSphere or VMware Cloud Foundation, we can do this by running functions in ephemeral containers. The high level FaaS architecture that we described in the VMworld keynote is shown below.
The containers and their associated virtual infrastructure (e.g., storage, networks, firewalls, and load balancers) can be created on demand and destroyed as soon as the function execution completes. This provides stateless end-to-end function execution, making it far more difficult for malware to insert itself into the infrastructure. While vSphere technology such as Instant Clones allows sub-second VM/container creation, we have also done work on auto-scheduling and container warming to allow for near-instantaneous function execution on vSphere. Finally, our NSX technology can offer microsegmentation on a per-function basis: instead of securing and isolating VMs or containers, you can imagine a future where that isolation happens at the individual function level.
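The create-run-destroy lifecycle of an ephemeral execution environment can be sketched as follows. Provisioning is simulated here with a context manager; a real platform would clone a VM or container and wire up its network per invocation, which this toy code does not attempt.

```python
# Sketch of the ephemeral execution model: provision an isolated
# environment, run the function, and tear everything down afterward,
# leaving no long-lived state behind. The "container" here is a plain
# dict -- a simulation, not a real provisioning API.

from contextlib import contextmanager

@contextmanager
def ephemeral_container():
    container = {"id": "c-123", "alive": True}  # simulated provisioning
    try:
        yield container
    finally:
        container["alive"] = False  # destroyed when execution completes

def invoke(fn, event):
    """Run one function invocation inside a fresh ephemeral environment."""
    with ephemeral_container():
        return fn(event)

result = invoke(lambda e: e["x"] * 2, {"x": 21})  # result == 42
```

The key property is in the `finally` clause: the environment is destroyed unconditionally once the function returns, so nothing persists between invocations for an attacker to target.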
We’re Just Getting Started
There are numerous private FaaS projects and offerings that could operate in a private VMware-based environment, including OpenWhisk, Spring Cloud Function, and the Azure Functions Runtime, among others. It is still early days for serverless, and we are seeing an increasing number of open source projects emerging to implement serverless/FaaS. The work within the CNCF serverless working group is also uncovering the need for some commonality in functions, eventing, services, and function runtimes.
Serverless/FaaS is one example of a capability that began as a pure cloud service and is now finding a home in the data center and at the edge. There will be more. Functions interact with a variety of cloud services, and it’s only natural to imagine a future where a serverless/FaaS platform at the edge also includes a number of data and analytics services. Functions offer the possibility of modernizing a traditional app without having to rewrite it. There is significant potential in the innovation velocity that serverless/FaaS will usher in. VMware is doing research in this area and will be working with many of our partners to deliver a production-ready product for our customers.