The Top [N] Reasons Serverless Architecture Is the Best Fit for Your Web Application
[More sections coming soon. If you have suggestions, send them to: mailamankhalid at gmail]
In the last five years, more and more people have shifted to distributed models of software development. The wildly popular (and still relevant) monolithic approach has some obvious limitations, one of them being the ability to scale only vertically.
And the influx of cloud providers offering computation at affordable rates has enabled smaller companies to experiment with their stack. In earlier days, the distributed form of software development was limited to large companies and was implemented in the form of SOA.
Microservices may seem like an extension of SOA, and in a way they are. However, they have become much simpler to implement due to recent advances: tools like Docker and Kubernetes make handling large clusters much easier. But there is still a lot of DevOps work required to maintain them.
The increase in computational power has made it much easier for us to use virtualization. It is now possible to create a virtual machine on a computer, bring it up to a running state, make it execute something, and destroy it, all in a matter of seconds. This led to the birth of serverless architecture.
The purpose of this article is to discuss scenarios where implementing serverless architecture can be more efficient than traditional approaches. Some topics in this article assume the reader's application has a monolithic structure and is transitioning to a more distributed one; others describe modules that may need to be implemented from scratch to favor scalability.
When discussing the pricing of various IaaS and PaaS solutions, the most common ones that come to mind are Amazon's EC2, which is charged per hour; Heroku, which is charged monthly based on the number and type of dynos the application has; and finally Digital Ocean, which has a fixed monthly charge per droplet.
It can be argued that PaaS solutions like Google App Engine and Heroku are cheaper than IaaS, primarily because the resources allocated to you are managed by the provider and, behind the scenes, many users operate in the same compute space. However, in both cases you're also paying for idle time; this is not the case with serverless.
Lambda is Amazon's serverless solution, and it is charged based on the number of invocations and the execution time. How that works is: you allocate some memory to the Lambda function to be executed, and you are charged a fixed $0.0000002 per request plus a fee for the duration your function ran. The more memory your function has, the more it costs. Not to forget, the first 1 million requests each month are free.
Let's see how much difference it can make in price when compared to using Amazon's EC2. Both our Lambda function and EC2 instance will have 512 MB of memory, and the EC2 instance will require EBS as well.
Let's assume we execute our Lambda functions 100,000 times per month, with each function running for an average duration of 3 seconds, and that we have already used up our free 1M requests. Our EC2 instance is running Linux with 30 GB of disk space (the recommended minimum).
Lambda:
- Memory allocated: 0.5 GB
- Execution time: 3,000 ms × 100,000 invocations = 300,000 s
- Total compute: 0.50 GB × 300,000 s = 150,000 GB-s
- Compute charges: 150,000 GB-s × $0.0000166667 = $2.50
- Request charges: 100,000 requests × $0.0000002 = $0.02
- Lambda total: $2.52

EC2 (on-demand t3a.nano):
- Instance: 1 instance × $0.0047 × 730 hours in a month = $3.43
- EBS: 30 GB × $0.10 × 1 instance = $3.00
- EC2 total: $6.43
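The arithmetic above is easy to reproduce with a short script. The rates are the ones quoted in this comparison; they vary by region, so treat this as a sketch rather than a pricing calculator:

```python
# Rough cost comparison: Lambda vs. an on-demand t3a.nano EC2 instance,
# using the per-GB-second, per-request, per-hour, and EBS rates quoted above.

LAMBDA_GB_SECOND = 0.0000166667   # USD per GB-second of compute
LAMBDA_REQUEST = 0.0000002        # USD per request (after the free 1M)
EC2_HOURLY = 0.0047               # USD per hour, on-demand t3a.nano
EBS_GB_MONTH = 0.10               # USD per GB-month of EBS storage

def lambda_cost(requests, avg_seconds, memory_gb):
    compute = memory_gb * avg_seconds * requests * LAMBDA_GB_SECOND
    return compute + requests * LAMBDA_REQUEST

def ec2_cost(hours, ebs_gb):
    return hours * EC2_HOURLY + ebs_gb * EBS_GB_MONTH

print(f"Lambda: ${lambda_cost(100_000, 3, 0.5):.2f}")  # Lambda: $2.52
print(f"EC2:    ${ec2_cost(730, 30):.2f}")             # EC2:    $6.43
```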
The cost of the EC2 instance is more than double. Also keep in mind that 100,000 requests per month is overkill, and the instance selected is an on-demand t3a.nano, which has very low network performance.
You'd be surprised to know how much time is wasted in companies on shallow work: things that are just byproducts of cognitive tasks and do not contribute to the overall efficiency. For software developers, these tasks may include unforeseen bugs, server maintenance, unnecessary meetings, etc.
In fast-moving companies regardless of the choice of architecture, the technical debt grows over time. Factors like coupling make it much harder to maintain these apps, which in turn makes it a hassle to update these systems. As a result, more time is spent on this filler work than required, which is a blatant waste of the company’s money.
This problem is even bigger in today’s remote work culture because employers prefer hiring programmers on an hourly basis.
Using serverless and PaaS solutions can make one of these problems go away because everything about the infrastructure is managed by the provider.
Moreover, you get mature monitoring for your functions. It is one of the features that are necessary for large applications, and building it from scratch can feel like reinventing the wheel (like creating auth from scratch). Google Cloud Functions has Stackdriver, which offers features like tracing, logging, alerts, etc., which you can easily access through a dashboard.
Ready-to-use Lambda templates on AWS
Amazon's CloudWatch provides features similar to Google's Stackdriver: the logs of each of your functions are assigned separate log groups, which can be accessed through the CloudWatch dashboard. Lambda also has some predefined templates that you can use to create your serverless functions; however, you shouldn't use them if you're worried about vendor lock-in.
In monolithic apps made using language frameworks like Django, Express, Rails, etc., there are ready-to-use solutions to implement authentication and authorization; some popular ones that come to mind are Passport for Node.js, the authentication module of Django REST Framework, etc.
Handling user data this way makes the business logic and user credentials highly coupled. Even though this strategy may work initially, in the long run it's not favorable, as it can turn your code into a big ball of mud.
Separation of concerns is desirable for larger applications; more than that, when the user base is large or scattered, partitioning and replication of data are an absolute necessity. The first step towards making independent modules is creating an API gateway.
As you can see, the API gateway handles different aspects of the app like auth, sessions, routing, etc. I wouldn't recommend making this from scratch, as there are some good ready-to-use options like AWS API Gateway, Zuul, etc.
Now that the prerequisite is out of the way, let's go through some of the options that can be used to implement Identity and Access management.
If you're looking for a serverless-compatible, fully managed, highly customizable, and easy-to-use way to handle auth in your app, then look no further. By using AWS API Gateway and Lambda, you can create auth-enabled serverless applications.
To make this work, the client of the app makes several requests to different entities on AWS to get the desired result. However, for the sake of understanding, we can say that the client takes permission from Cognito to access resources and then communicates with them directly.
We won't get into the implementation details of Cognito but let's discuss a simple architecture using AWS Cognito, DynamoDB, API Gateway, and Lambda. Note that the only way the users of our app can send requests is through API Gateway.
The client, let's say a mobile app, communicates with Cognito to authenticate the user and gets back an Identity Token, a Refresh Token, and an Access Token. It then uses the Identity Token and Access Token to get temporary credentials to access a resource, which in our case is AWS API Gateway.
Different Lambda functions are mapped to the routes of our API Gateway, and they fetch the desired data. However, we don't want users to be able to access the admin APIs of our app, so we define authorization using Federated Identities, which extends Cognito's functionality by letting us add IAM roles to our users.
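With Cognito Federated Identities, the role checks happen inside AWS, but the idea behind role-based gating can be illustrated with a hand-rolled Lambda authorizer. This is a hypothetical sketch, not Cognito's implementation: it assumes the token has already been verified and its role claim extracted upstream, and simply maps that role to an allow/deny IAM policy for the requested route.

```python
# Illustrative sketch of role-based gating, similar in spirit to what
# Cognito Federated Identities does for you. The roles, paths, and event
# shape are hypothetical; in production you'd verify the JWT first.

ROLE_PERMISSIONS = {
    "admin": ["/users", "/admin"],   # admins can reach everything
    "user": ["/users"],              # regular users cannot hit /admin
}

def build_policy(effect, method_arn):
    """Return a minimal API Gateway authorizer response."""
    return {
        "principalId": "user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": method_arn,
            }],
        },
    }

def lambda_handler(event, context):
    role = event.get("role", "user")    # assume the claim was extracted upstream
    allowed = event["path"] in ROLE_PERMISSIONS.get(role, [])
    return build_policy("Allow" if allowed else "Deny", event["methodArn"])
```

Calling the handler with a non-admin role and an admin path yields a Deny policy, so API Gateway rejects the request before it ever reaches the backing Lambda function.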
Despite the aforementioned merits, implementing your app's authorization using Cognito can get you into a situation called vendor lock-in. This is not to say that it is not a powerful solution; it fits perfectly with many use cases, but it makes it a challenge to port your code to a different platform.
One of the prime motivations to adopt serverless architecture is the ability to do more with less. However, this purpose would be defeated if one spent the first two months of development re-creating mature authentication and authorization.
Auth0 is another service that manages auth for you, and it's optimized for serverless architecture. With Auth0 you have the option to host it yourself, but if you're taking that route, Keycloak is also a viable option.
Some popular IDaaS platforms
Just like Cognito, Auth0 as a platform offers many authentication and authorization methods that can be customized to your needs. It takes it a step further by letting you use integrations that extend its base functionality.
Social Sign in with Auth0
What this means is that you can link third-party apps with Auth0, enabling interaction from the app to Auth0 or vice versa. A good example of this would be using the GitHub Deployments integration to deploy rules, rules configs, connections, database connection scripts, etc.
A rule is a JS function that executes when a user of your app authenticates using Auth0. Think of rules as callbacks that you can customize based on your needs, e.g. fetching profile data after authenticating with Facebook.
With the help of GitHub deployments, you can store configuration details like database connection scripts in a repository of your choice, which Auth0 has access to. Everything on this repository will be automatically deployed to Auth0, every time you make a commit.
A good thing about serverless architectures is that they all focus on separation of concerns. Whether you're implementing your own authentication or using an IDaaS, taking this module away from your core business logic gives you great flexibility. Doing this might increase the app's complexity, but it makes your app highly scalable.
At every level of computing, there's a physical limit to what can be stored. Whether it is your storage blocks containing media files or the available RAM for storing in-memory data structures, at some point things need to be flushed to make room for the new.
However, a user-facing app can't just delete all the old data that is slowing it down and start afresh. For example, in hotels and supermarkets, video surveillance data may have to be stored for up to a month. This period is even longer in tech companies. To keep the system fat-free, it's important to archive data from time to time.
Making archives is a CPU-intensive process; depending on the size and type of data, it involves one or more of these things - compression, transfer bandwidth, sharding, etc. Carrying this out on the master server can cause your app to slow down, or worse, cause downtime.
When it comes to low-cost, durable, and highly available bulk storage, there are many options to choose from. The most popular ones are AWS Glacier and Azure Archive Storage. You can get up to 500 GB of storage for as low as $10 a month (assuming infrequent access). However, it can take up to 3 hours to retrieve data.
Using serverless functions to carry out these things can be beneficial in the following ways:
On-demand resource allocation - it's not uncommon to hit your CPU cap while compressing images and videos; the ability to bring up multiple instances of your serverless functions in that case can easily avoid this situation.
Clutter-free codebase - keeping this piece of code separate from the business logic is quite intuitive because it's an impure aspect of the app; it has nothing to do with the core logic, and besides, it gets executed only occasionally.
Consider this simple Lambda function that compresses text-based data in Python. Say a .sql file containing insert statements, schema, etc. could be very big in size; it might be a good idea to split it before queuing it for compression, and to use multipart upload.
# Compress using zlib, level 9
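A minimal sketch of that function might look like the following. The vault name and event payload shape are assumptions for illustration, and boto3's Glacier client is only imported when the upload actually runs, so the compression logic stays testable on its own:

```python
import json
import zlib

def upload_archive(vault_name, data):
    """Upload the compressed bytes to an AWS Glacier vault.

    boto3 is imported here so the rest of the module works without
    AWS credentials; this call only succeeds when run on AWS.
    """
    import boto3
    client = boto3.client("glacier")
    return client.upload_archive(vaultName=vault_name, body=data)

def compress(text):
    # Compress using zlib, level 9 (slowest, best compression ratio)
    return zlib.compress(text.encode("utf-8"), 9)

def lambda_handler(event, context):
    # The payload shape {"vault": "...", "data": "..."} is an assumption.
    # Remember the maximum request payload per invocation is 6 MB.
    compressed = compress(event["data"])
    response = upload_archive(event["vault"], compressed)
    return {
        "statusCode": 200,
        "body": json.dumps({"archiveId": response["archiveId"]}),
    }
```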
The above code has an upload_archive function meant to connect with Glacier and upload data; lambda_handler is the main function responsible for I/O. It receives the data passed to the Lambda function by you and returns the result. The maximum size of the payload per invocation is 6 MB; for more details on current limits, refer to this page.
Amid all this, it's important to note that going for cheaper configurations is not always the right approach with these kinds of operations. A low memory function may finish a task in 200 seconds, and the same job can be done in 12 seconds by a function with high memory. I'll elaborate on this later in the article.
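Since Lambda compute is billed per GB-second, a back-of-the-envelope calculation shows why. The memory sizes and durations below are assumed figures for illustration, not measurements:

```python
# Lambda compute is billed per GB-second, so a high-memory function that
# finishes much faster isn't necessarily more expensive.

GB_SECOND = 0.0000166667  # USD per GB-second

def compute_cost(memory_mb, seconds):
    return (memory_mb / 1024) * seconds * GB_SECOND

slow = compute_cost(128, 200)   # low memory, slow: 25 GB-s
fast = compute_cost(2048, 12)   # high memory, fast: 24 GB-s

print(f"128 MB for 200 s:  ${slow:.6f}")
print(f"2048 MB for 12 s:  ${fast:.6f}")
# The high-memory run finishes ~17x faster for roughly the same cost.
```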
As software developers, we have all been subject to the whims of the market. Requirements come and go; new modules get added to our apps, old ones are discarded. It is a constant effort to understand the requirements of each module, because no two features of the app get the same reception from users.
It is not always easy to make changes in a large codebase. Things like backward compatibility make it even more troublesome to keep track of things. For example, it is possible to lose data if you make updates on an old table record with a key that the new schema is not aware of.
Often development teams don't think past the reliability of their apps - as long as it works, it's fine. Few keep maintainability in mind, and even fewer focus on evolvability. Usually, development teams are working under the pressure of deadlines and budgets, so who is to blame?
By using serverless, you can save yourself the trouble of deploying the apps, so this energy can be better spent on things we wouldn't get to otherwise. As I discussed earlier, serverless enables software developers to invest their time building new things rather than wasting it on maintenance.
There's little you can do to change the external factors. It's very common for startups to roll out new versions of their mobile apps while older versions are still in use. This triggers a cascading effect of changes across your entire codebase: your API endpoints have to support different versions, the functions they map to have to be managed, their data transfer objects will be different, and so on.
In startups, rolling out new features involves great costs. Many great ideas never make it to production because the effort that would be put into making them surpassed their expected utility. Imagine what would have been lost if PUBG had never released a mobile version because it was too much trouble.
Fun fact: the web-swinging mechanic in 2004's Spider-Man 2 was created by a single developer named Jamie Fristrom. That feature alone made the game stand out, and it wasn't even a part of the development cycle.
It’ll never be easy to keep the codebase clean, while managing so many moving parts. Let’s see how serverless is optimized for handling more volatile modules of our applications.
In every big app, handling versioning is a complex topic that seriously affects how you design your apps. It triggers changes across various parts of the application. Some details, like managing backward compatibility and keeping different versions of code, are built into most language frameworks; however, this can still clutter the codebase.
Much like most infrastructure-related details, some aspects of versioning too are managed for you by providers. Consider AWS Lambda, for example: any service that invokes a Lambda function can specify which version it wants to execute through its configuration.
Each Lambda function that you create has a unique ARN that is added to the configuration of the event source. Using this approach will have you updating the ARN on the event source every time you want to change the version of the function.
Aliases pointing to different versions of a function
A better way to do this would be to use aliases. Just like functions, each alias has a unique ARN, and just like functions, they can be assigned to an event source. This way you'd never have to make changes to a client directly; simply updating the version number on the alias does the trick. In the above diagram, the three aliases Dev, Staging, and Prod point to two different versions of a function.
This makes it easier to keep up with the different versions of the core logic. One thing that I would like to note here is that not all serverless providers have this feature for example on Google Cloud Functions this is still done manually.
Depending on the API gateway you're using, there are quite a few ways in which you can implement routing in your app. As discussed earlier in the article, the API gateway is a very powerful component of your app that manages many aspects besides routing.
It's highly recommended to use a premade alternative here, because there is a wide range of open-source and SaaS providers for this. Here's a list of examples; each one is customized for various use cases, so pick the one best suited for your app.