Should Your Next Project Be Built as a Serverless App?
Going Serverless has been taking off recently. There are conferences popping up and it sounds very compelling. Yay or nay?
I wasn’t going to post this today because I wanted to take the next 2 weeks off from blogging for the holidays, but this morning I got 2 emails from people who are taking my Flask course. Their question was: “should I bother looking into Serverless, and is Flask dying?”
I think it’s an interesting topic, so here we are.
Comparing Serverless to Other Deploy Methods
Before we go into what Serverless is exactly, let’s compare it to how you may deploy your applications already, because it will help show where Serverless fits into the picture.
IaaS (Infrastructure as a Service)
Typically when you deploy your application, you’re responsible for setting up a server and configuring it to run your app.
Think about a place like DigitalOcean, Google Cloud, AWS, Azure or any other cloud provider. You would rent a server from them, they give you root SSH access and you’re free to do whatever you want (without violating their TOS).
You get to pick the server’s size (memory, CPU, etc.), and if you need to scale out to many servers, you’re responsible for setting all of that up using whatever tools they give you. Perhaps you’re even using Docker Swarm or Kubernetes.
The takeaway is you’re dealing with things at the server level. You could describe the above type of cloud hosting as IaaS (infrastructure as a service). They provide you the infrastructure, and you do the rest.
PaaS (Platform as a Service)
The next step up from that is PaaS (platform as a service), and this is where services like Heroku come into play. Instead of dealing with raw servers and having to wire everything up, they do it for you at a premium price.
But you’re still dealing with server-ish decisions, like figuring out how many dynos and workers you need to run.
Also, your applications are generally coded the same way as you would with an IaaS hosting provider. You would have a long running web application server that’s always on and is ready to handle traffic. You’re paying for these resources even if no one is visiting your site.
Serverless / FaaS (Functions as a Service)
With Serverless, you don’t even have to think about servers. All you have to worry about are executing functions. Serverless is commonly described as FaaS (Functions as a Service).
Servers are still running your code, but it’s fully abstracted away from you.
But, depending on where you decide to run these functions, you do still need to make infrastructure related decisions, such as how much memory your functions will use and how long they will run for.
For example, on AWS Lambda you get charged in 100ms increments of execution time, and functions cannot run longer than 15 minutes. The price per 100ms also goes up based on how much memory you allocate to your functions.
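To make the “all you worry about are functions” model concrete, here’s a minimal sketch of a Python function in the shape Lambda’s Python runtime expects. The `(event, context)` signature is Lambda’s convention; the payload fields (`name`) and the API Gateway-style response shape are made-up illustrations, not anything from a real project.

```python
import json

def handler(event, context):
    # "event" carries the request payload (e.g. from API Gateway);
    # "context" exposes runtime info such as remaining execution time.
    name = (event or {}).get("name", "world")
    # API Gateway proxy integrations expect a statusCode and a string body.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

That single function is the whole deployable unit; there’s no app server, worker pool, or process manager in your code at all.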
On the open source front, there’s OpenFaaS which is based on Docker / Kubernetes.
It’s a great choice if you want to develop Serverless-style applications but keep in mind you’re responsible for setting up your own servers to run it, so this kind of puts it into its own category of use cases.
Perhaps you could build your own AWS Lambda competitor out of it, or just give your development team a way to deploy their own Serverless apps internally while you, as an ops person, manage the infrastructure on an IaaS of your choosing.
As you can see, there’s a lot of movement in the Serverless world.
I’ll admit, I’m not a Serverless expert, nor have I built any real world applications with it but I have seen large scale Serverless deployments running in production. I also went to Serverless Conf 2016 in NY to meet up with some friends and check it out.
There are a few reasons why I haven’t jumped on board with Serverless. Keep in mind these are just my opinions, and maybe I’m flat out wrong, but here’s how I see it.
Complexity Is through the Roof
On one side of the spectrum, we have monolith applications deployed to 1 server. This can go a REALLY long way, and the complexity is very low. You just build your app like normal and deploy it to 1 server.
Of course there’s many things to learn along the way with the above set up but the mental model of how everything works isn’t too scary. I think an entry level developer could pick up how to do this without any issues.
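For reference, the “monolith on 1 server” model really is this small. Here’s a minimal Flask app as a sketch; the route and message are illustrative, and in production you’d run it behind something like gunicorn and nginx rather than the built-in dev server.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # One process, one codebase: every route lives in the same app.
    return jsonify(message="Hello from the monolith!")

if __name__ == "__main__":
    # Dev server only; use gunicorn/nginx (or similar) in production.
    app.run()
```

The entire mental model is “my app runs as one process on a box I control,” which is why an entry level developer can pick it up quickly.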
Then in the middle, we have distributed microservices. This is many services running across many servers. The complexity of microservices alone is very high. Suddenly you need to figure out how to get multiple programs to talk to each other and run together.
Factoring in distribution makes it even harder, because you have to deal with high availability, load balancing, zero down time updates and all of that fun stuff.
This is where you can quickly get into “devops hell”. You’re dealing with many deploy scripts, playbooks and container orchestration tools. Before you know it, you’re spending most of your time on that instead of developing your awesome application.
Then we have Serverless. Now we’re just shifting problems around here. Instead of dealing with distribution problems like before, we’re dealing with massive architectural issues.
Designing an entire application as individual functions is a non-trivial task. Each individual function is way easier to reason about, and from a development POV that’s a good thing, but the mental model of the overall application is more complex.
Simple things like “how do I write and test my code in development?” go unanswered without a lot of research. The entire programming model is shifted.
You are now responsible for wiring up tons of external services to work together. For example, in AWS land you might wire together API Gateway, DynamoDB, SNS and a few other services to connect all of your functions. Then on Google Cloud or Azure you’ll end up using their variants instead.
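Here’s a sketch of what one of those glued-together functions might look like: take an order from an API Gateway-style event, write it to a DynamoDB table, then publish a notification to an SNS topic. The service clients are passed in as parameters (and faked in tests) so the logic stays testable; in real code they’d be boto3 objects (a DynamoDB `Table` with `put_item`, an SNS client with `publish`), and the field names here are invented for illustration.

```python
import json

def process_order(event, orders_table, sns_client, topic_arn):
    # Parse the API Gateway-style event body into an order dict.
    order = json.loads(event["body"])
    # Persist the order (real code: a boto3 DynamoDB Table's put_item).
    orders_table.put_item(Item=order)
    # Notify downstream consumers (real code: boto3 SNS client's publish).
    sns_client.publish(TopicArn=topic_arn, Message=json.dumps(order))
    return {"statusCode": 201, "body": json.dumps({"id": order["id"]})}
```

Note that even this tiny function touches three managed services, which is exactly the wiring overhead described above, and it also hints at an answer to the “how do I test my code in development?” question: inject fakes for every external service.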
Since you’re coupling together so many different services, page load speeds are going to suffer. I’ve seen this first hand in large Serverless deployments. Load times are noticeably higher than on a traditionally hosted site, which means your visitors pay the price.
Vendor Lock in Is Pretty High
That leads us to vendor lock in. I’ll admit I think this argument is partly BS, because once you start to accumulate a lot of data, it’s really difficult to migrate to a different hosting provider no matter how you deploy.
Let’s be honest here, you’re not just going to switch between providers every couple of weeks because you can. No one does that in the real world.
Chances are you will stick with 1 provider until another provider makes an extremely compelling case to switch, and then you will spend a lot of effort migrating your data from 1 place to another (which is a huge undertaking and would affect Serverless too).
So that’s why I think vendor lock in is sort of a “meh” argument, but the fact is, with Serverless your vendor lock in level is much higher than with other deploy methods.
With a monolith / 1 server approach, I can jump between DigitalOcean, AWS or any other cloud provider in literally 5 minutes plus the amount of time it takes to export / import my data.
With the Serverless approach you need to learn the ins and outs of the new provider’s Serverless implementation details. Everything changes and it’s going to be a big deal.
You Are Encouraged to Make Dangerous Decisions
Since your application is now much harder to reason about and you depend on all of these platform specific services (all of which you have to pay for, by the way), you may find yourself reaching your hand into the SaaS app API cookie jar.
What I mean by that is, you may suddenly find yourself saying “man, I’ve spent 6 days trying to get authentication to work with Serverless and I can’t get it working”.
Then, you may decide to sign up for a service like Auth0. Now your business has a core dependency on Auth0 being around. If they change something on their end that you don’t like (out of your control), you’re screwed.
Even worse, what happens if after a few years they decide to triple their price, or shut down entirely?
This happens a lot in the tech world. Remember when Facebook shut down Parse?
Of course, you’re not forced to do this, but the whole “omg no servers” mantra really goes hand in hand with “omg let me offload this to someone else” mentality.
This can be very dangerous if your application depends on many different external SaaS businesses. I totally get depending on platform specific services (like S3), but a third party SaaS app provider is a totally different animal.
If you get trigger happy with using them, your costs are going to drastically increase along with your page load speeds. It’s also going to make your app less reliable simply because you’re making more external network calls and depending on someone else to keep their services running.
Is Serverless Worth Using Today?
I’ll leave that up to you to decide. As a developer and teacher, the last thing I want to do is talk you out of learning something new.
Personally I’m going to avoid Serverless for now. I think in the very long term (10+ years), some form of Serverless may become the norm, but I don’t think it will be anything like the current implementation of it.
But, I am thankful people are stepping up today to begin that journey.
Right now it’s the Serverless wild west out there. Every main cloud provider is trying to invent their own Serverless solution, and on top of that, there’s huge community fragmentation because everyone is trying to make their own frameworks on top of them.
I think traditional web frameworks like Flask and Rails being deployed to 1 or more self managed servers will be around for many years to come and I will be happily using them.
Have you gone Serverless? How did it go for you?