The why and what of serverless technology

Serverless stack is quickly becoming a viable alternative for traditional cloud- or server-based deployments. But is serverless the right approach for your application backend infrastructure? This post aims to find out.

Serverless technology has come a long way since its inception and popularisation by AWS Lambda in 2014. These days, other cloud service providers also have Lambda equivalents, including Cloud Functions and Azure Functions on the Google Cloud Platform and Microsoft Azure platforms respectively. In this post, we are going to focus on AWS Lambdas since they are the most popular and widely used.

Specifically, we plan to cover the basics of serverless technologies and the problem serverless attempts to solve. In a follow-up post – Developing, deploying and monitoring Lambda functions – we will use the Node.js stack (since serverless architecture supports many different programming languages/runtimes) to write, deploy and monitor our Lambda functions fronted by an API gateway service on the AWS cloud.

What is serverless

The serverless architecture movement introduced a mental switch from the way we have traditionally built applications. In the traditional model, application backends are built and deployed to servers either on premises or in the cloud. While this approach worked, it posed some challenges in terms of scalability (especially auto-scaling), management costs, time, FTE resources and so on.

DevOps practitioners and cloud engineers have to man these servers, monitor their performance and ensure they are always up and running with minimal downtime. Wouldn’t it be nice to offload this sort of control to a machine instead, since handling workloads that require compute capacity is the true essence of the cloud? This is what AWS Lambdas brought to the table.

Although the serverless architecture solves lots of problems and challenges posed by the traditional client-server model, the aim isn't to completely replace or phase out those architectures, but rather to complement them. This is why serverless tech is suitable for specific kinds of applications and solutions listed in the AWS use case documentation resource.

Overall, since Lambda invocations are capped at a maximum timeout of 15 minutes per function (the default is just a few seconds), they are best suited to shorter, event-driven workloads. In the next section, let's look at the advantages of the serverless approach.

Why serverless

AWS Lambdas presented a new approach to building applications and deploying them to the cloud. In this setup, there is no need to spin up any clusters/instances or deploy applications to any physical servers as we have been used to. Instead, we write code or set up AWS services that can trigger functions which are then run based on the occurrence of certain events. An HTTP call would be a simple example.

This offers many benefits. Firstly, server management is no longer necessary, as all we have to do is write functions called Lambdas, deploy those functions and set triggers that run them for us. With this setup, the computing or logic tier is handled by Lambdas.

This means that we no longer have to care about any underlying infrastructure (AWS manages it for us), which is a good thing since infrastructure management comes with lots of challenges in terms of cost, scalability and FTE requirements.

On the business side, these effects trickle down and affect how operations management is handled moving forward, since setting up infrastructure is time-consuming. Businesses that understand the benefits of this approach can therefore go all-in on the core logic and business idea they are trying to implement by following a “business driven development approach” for their specific use case.

In essence, with the serverless architecture, businesses can bypass the IT layer of their organisation. If you want to learn more about this approach, do check out our earlier blog post here for a comprehensive discussion of cloud-native maturity levels.

Advantages of the serverless model: price and automatic scalability

Another win for serverless is that we only pay for the compute time (resources consumed) needed to run deployed Lambda functions. This is known as the pay-as-you-use model, which differs from traditional server deployments, where we pay for every second our applications are up and running on a server or cluster, whether or not they are serving traffic.
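The pay-as-you-use billing can be sketched as a back-of-the-envelope calculation. The rates below are illustrative, based on AWS's published per-GB-second and per-request pricing at the time of writing (and ignoring the free tier); always check the current price list for your region:

```javascript
// Rough Lambda cost estimate: you are billed per request plus per
// GB-second of compute (memory allocated × execution time).
const PRICE_PER_GB_SECOND = 0.0000166667; // USD, illustrative
const PRICE_PER_REQUEST = 0.2 / 1e6;      // USD, illustrative

function estimateMonthlyCost(invocations, avgDurationMs, memoryMb) {
  const gbSeconds =
    invocations * (avgDurationMs / 1000) * (memoryMb / 1024);
  return gbSeconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST;
}

// e.g. one million 200 ms invocations a month at 512 MB:
console.log(estimateMonthlyCost(1e6, 200, 512).toFixed(2)); // a few dollars
```

Note that the memory term is there because compute is billed as GB-seconds: the same duration at double the memory costs roughly double.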

The serverless approach therefore offers a cheaper alternative for applications with infrequent requests or irregular traffic patterns.

Lambda architecture is usually set up to avoid both overprovisioning and underprovisioning of resources, as Lambda functions scale with the load. Serverless also offers horizontal scalability to cater for times of peak demand. In a traditional web app deployment, DevOps practitioners would probably have to spin up multiple new instances to handle a sudden spike in traffic.

With serverless technology, our applications (in this case, functions) become easier to scale, as this is handled automatically by the cloud provider based on workload. So as developers, we can rest assured that our applications will continue to work as is, even at critical times or moments of higher demand.

While serverless has lots of use cases and benefits, it also comes with some challenges, chief among them cold starts. A cold start is the delay that occurs when a function is invoked for the first time, or after a long idle period, and responds more slowly than we would expect.

This startup latency results from Lambdas getting spun up in isolated containers. There is a wait time needed for the function invocation to check for the availability of already-running containers, and if none are currently available, a new container is spun up. This can cause delays and, sometimes, issues with function timeouts.

It is also worth noting that latency due to cold starts varies not only from one runtime or language to another, but also from one cloud service provider to another.

Comparing providers, AWS Lambdas offer lower startup latency but a significantly higher cost when handling high-volume traffic workloads than Google Cloud Functions. These comparisons are explored further in our earlier blog post on serverless FaaS computing costs.

Finally, Lambda functions have continued to evolve over the years, and cold-start wait times have been steadily reduced. For more information on AWS Lambda functions, please check out this guide.


In this post, we have covered the broad strokes of serverless architecture and why you should consider the approach for your next application deployment. Serverless has come a long way and keeps getting better.

Until recently, Lambda execution environments were allotted 512MB of ephemeral storage, which acts as temporary space for caching data and performing processing tasks. This has been a huge challenge for heavy workloads, including big-data applications and ETL processes requiring larger volumes of temporary storage for data processing.

The workaround was to save that data to S3 buckets and increase Lambda function memory for faster processing, which usually drove up costs. However, AWS has recently increased Lambda's ephemeral storage capacity (/tmp), which can now be configured up to 10GB, providing the means to support larger workloads such as running machine-learning models and ETL jobs for large data applications. More details are available in the AWS documentation.

In our next post, we are going to go into more detail about Lambda functions. Until then, cheers and thanks for stopping by.