I hope my previous blog gave you a good start at learning about serverless architecture. As promised, here is some more information on this topic to enrich your understanding of this implementation pattern. That’s right, in my opinion it is an implementation pattern, because if you dig deeper, it’s not as if there are no servers or backend systems. There will still be functions deployed as microservices, but isolated to a degree that makes them easy to scale quickly.
So what makes an application serverless?
There are essentially four features that help convert your traditional functions and services to serverless. These are:
Zero administration – One of the primary goals of the serverless-based microservices model is to give developers the freedom to deploy code without provisioning anything beforehand or managing anything afterward. No more dependency on the infrastructure/Ops teams. No concept of an instance, the OS running on that instance, or any instance-management overhead.
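To make this concrete, here is a minimal sketch of what "just deploy the code" looks like in practice. It follows the common AWS Lambda handler convention; the function and event names are illustrative, and other providers use slightly different signatures.

```python
import json

def handler(event, context):
    """Entry point the platform invokes on each request.

    There is no server, OS, or instance to manage: you upload this
    function and the provider runs it on demand.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The platform wires the trigger (an HTTP request, a queue message, a file upload) to `handler` for you; the event shape shown here is an assumption for illustration.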
Auto-scaling – Secondly, serverless-by-design lets your service providers manage the scaling challenges. There’s no need to fire alerts or write scripts to scale up and down. This allows for handling peaks and troughs in traffic, and also the occasional lulls on the weekends, without any manual intervention. Tech serenity, if you ask me!
Pay per use – The Function-as-a-Service model bills compute based on actual usage rather than pre-provisioned capacity. No more paying for idle time. The savings can be significant, in some cases up to 90% over a cloud virtual machine. That’s real business value right there.
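A back-of-the-envelope calculation shows where savings of that order come from. All the rates below are assumptions for illustration, not any provider's actual price list; the point is the structure of the comparison, idle-time billing versus per-invocation billing.

```python
# Always-on VM: billed a flat hourly rate whether busy or idle.
vm_hourly_rate = 0.05        # assumed $/hour for a small VM
hours_per_month = 730
vm_cost = vm_hourly_rate * hours_per_month

# Function: billed per invocation and per GB-second actually consumed.
invocations_per_month = 1_000_000
avg_duration_s = 0.2         # assumed average execution time
memory_gb = 0.128            # assumed memory allocation
gb_second_rate = 0.0000167   # assumed $ per GB-second
per_request_fee = 0.0000002  # assumed $ per request

fn_cost = (invocations_per_month * avg_duration_s * memory_gb * gb_second_rate
           + invocations_per_month * per_request_fee)

savings_pct = (1 - fn_cost / vm_cost) * 100
print(f"VM: ${vm_cost:.2f}/mo, functions: ${fn_cost:.2f}/mo, "
      f"savings about {savings_pct:.0f}%")
```

With these assumed numbers, a million short invocations a month cost well under a dollar, against roughly $36 for the idle-most-of-the-time VM. The math flips, of course, for workloads that run hot around the clock.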
Increased velocity – Driven by the quick-to-market needs of today’s business models, this whole concept of being serverless gives developers the proverbial ‘wings’ and enables them to deliver functions on the fly. This is great for the Agile model of proving a functional concept and then designing it further.
To summarize, the major advantages of serverless over a traditional cloud model are:
- Cheaper than traditional cloud – you only pay for the time your functions are actually running
- Scalable – supports your business spikes and lows more efficiently
- Lower human overhead – once it’s set up and configured, the system theoretically needs no human intervention. You still need to monitor the logs and take action when the system performs outside the set limits
- Focus on user experience – this is key for business. As discussed throughout this blog, the whole concept gives businesses the ability to be agile in a true sense, allows them to try out new things, fail quicker, and move on to business models that actually work. It’s a win-win for any business.
Is serverless the silver bullet?
Well, not exactly. As with any other new technology, it does come with its own challenges. Some of the obvious ones include:
- Vendor lock-in – At this point, the cloud service providers do not offer cross-compatibility. Why would they? It’s not in their best interest. So as a consumer, you are locked into the platform where you start. From the developer’s perspective, this is less of a problem, though when they have to integrate with services hosted on a different platform, they may have to negotiate challenges such as connectivity, protocols, and security.
- Learning curve – Serverless is a new concept and it does take some time for teams to ramp up to make the best out of it. Nevertheless, there are new developments happening all the time. For example, there are a Serverless Kit and a Serverless Platform being designed by some experts, which are supposed to ease all the challenges mentioned here.
- Tip: I’ll be covering the serverless kit in one of my upcoming blogs. Do keep an eye out for that!
But is it secure?
Absolutely, as long as you make it so.
As any architect will tell you, the details are in the design! As with any enterprise service, you need to treat security as a primary requirement of your application. You need to account for the fact that users will interact with this service directly, sometimes using their own credentials, so it is important to clearly limit which resources they have access to and what they can and cannot read or write.
It is highly likely that your service will interface with 3rd party services, and it is imperative to consider how access to those services is controlled. For example, with Firebase (a real-time streaming database), you write custom security rules that the database engine executes to determine which users can read or write different parts of the database. If you are writing your own microservices, then you need to ensure that your services check the authenticity of the security token passed with each request and verify the rights of the user.
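As a sketch of that last point, here is what checking a token’s authenticity can look like inside a function. This uses a simple HMAC-signed token built from the standard library to show the principle only; a real deployment would use an established standard such as JWT via a vetted library, and the secret and token format here are assumptions.

```python
import hashlib
import hmac
from typing import Optional

# Assumed shared secret, distributed to the service out of band.
SECRET = b"shared-secret"

def sign(user_id: str) -> str:
    """Issue a token of the form '<user_id>.<hex signature>'."""
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify(token: str) -> Optional[str]:
    """Return the user id if the token is authentic, else None."""
    try:
        user_id, sig = token.rsplit(".", 1)
    except ValueError:
        return None  # malformed token
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison guards against timing attacks.
    return user_id if hmac.compare_digest(sig, expected) else None
```

Every function would call something like `verify` before touching any resource, then check the recovered user id against what that user is actually allowed to read or write.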
I hope this blog helped you get a deeper understanding of serverless services. I’ll continue with our series in the next installment, covering a toolkit that helps reduce the burden of starting with this great platform. It has an obvious name of ‘Serverless Framework Toolkit.’ Until then, adios!
Check out other blogs from Arun.
Serverless Architectures Series (Part 1: What Is SA?) by Arun Chhatpar