
Ask the Expert: 8 challenges teams face when doing serverless

Serverless is still a young branch of engineering, but since its dawn, it has ushered in a technological shift that brings high availability, auto-scaling and higher security to users right out of the box.

Add in the fact that you don’t need to manage any servers or containers, and you pay only for what you use, and you can understand why serverless is bringing huge savings and a faster path to market to those adopting it.

But, there’s no such thing as a free lunch and when you are building a serverless team, there will be challenges along the way.

In serverless, many things are different. New technology and new tools require time to learn. Architecture is different because the goal is to use as many cloud provided services as possible. Since the solution consists of many small services, more energy is spent on monitoring and analyzing how they work together.

Development is concentrated on the cloud, which means programmers must be able to use the cloud as their development sandbox. With serverless, developers are involved in building the whole infrastructure; they don’t just write code.

Here are some key mistakes that often crop up when getting to grips with serverless.

#1 Underestimating the learning curve

Like with any technology, there’s a learning curve with serverless. While it’s extremely simple to deploy one function, designing a complex system takes time. Your team needs to learn to use many new services, get to grips with their quirks, new design patterns, new tools and so on.


There are a lot of courses, books, blogs and other content available to help you learn the basics, but there aren’t as many resources for advanced stuff. One of the best is Production-Ready Serverless, an online course by AWS Hero Yan Cui. A lot of knowledge remains scattered between blogs and other online resources.

#2 Not embracing new architecture patterns

Serverless comes with new design patterns that you’ll have to learn, and you must fully embrace the event-driven approach. You build a solution from many small services, provided by the cloud provider wherever possible, and the system reacts to events that move data and requests from service to service.

The problem with new technologies is that we want to use them the same way as we used the old ones, but with serverless, that approach can give rise to some common design mistakes:

Building a serverless monolith

The main design guideline for serverless is to split the system into small parts. Don’t stash everything into one function; it’ll be hard to manage and will increase cold starts.

Not designing for idempotency

In a distributed system, errors happen, and you should anticipate them. It’s not that serverless is unreliable; on the contrary, error handling and retries are, in most cases, built into the core serverless services.

In the case of AWS, most services that trigger Lambda retry the call on failure. However, replaying the same code has consequences, which is why functions have to be idempotent. For example, if the same order is received twice, you shouldn’t end up shipping two products: you have to check whether the order has already been processed, both at the beginning of the function and just before finishing.
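The pattern above can be sketched as follows. This is a minimal, runnable illustration only: a real handler would record processed IDs in a persistent store such as DynamoDB with a conditional write, not in an in-memory set, and the event shape and function names are assumptions for the example.

```python
# In-memory stand-ins for a persistent idempotency store and the
# side effect we must not repeat (shipping).
processed_orders = set()
shipped = []

def handle_order(event):
    order_id = event["order_id"]
    # Check at the start: has this order already been processed?
    if order_id in processed_orders:
        return "skipped"
    # ... do the actual work (charge the card, ship the product) ...
    shipped.append(order_id)
    # Mark as processed just before finishing, so a retried
    # delivery of the same event becomes a no-op.
    processed_orders.add(order_id)
    return "shipped"

handle_order({"order_id": "A-1"})
result = handle_order({"order_id": "A-1"})  # a retry of the same event
```

Because the second call finds the order already recorded, only one product is shipped no matter how many times the trigger retries.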

Expecting strong consistency

Anyone who’s learned about SQL databases knows that strong consistency is drilled into you from day one. But in serverless, data flows from service to service, from function to function, and commonly ends its journey in a large NoSQL database, which by design prefers eventual consistency. Because each processing step takes time, you cannot have the immediate, transaction-backed consistency we used to have. The data is consistent in the end.

Read more about architecture mistakes in serverless here.

#3 Bumping heads with DevOps

When developing functions, you’d usually use tools like Serverless Framework and SAM (and many others). Their main features are:

  • Running and debugging functions locally
  • Local emulation of basic services
  • Packaging and deploying to the cloud environment

These tools are essential for developers, but they can sometimes cause conflict with DevOps teams that write scripts to bootstrap infrastructure as part of an “infrastructure as code” approach. DevOps teams usually use CloudFormation or Terraform when working with AWS. The newer AWS CDK is also getting a lot of attention.

Serverless Framework and SAM both generate CloudFormation templates, but these are auto-generated and not handwritten. As such, it might not be ideal from a DevOps point of view since it’s not fully customizable, and a DevOps team may not be happy that part of the infrastructure is built with different tools.

If the project is purely serverless, it makes sense to only use tools that are built for that. If that’s not an option, unfortunately, there’s no silver bullet, though you do have a few solutions you can try:

Deploying part of infrastructure with one tool and part with another
The main problem here is referencing resources that were built in another stack. If both parts are built with CloudFormation (which includes Serverless Framework and SAM), you can reference output variables.

In the case of Terraform, you have several options, for instance adding a small CloudFormation stack whose only job is to expose output variables. The Serverless Framework also lets you reference variables from a JSON file in S3, SSM Parameter Store, or Secrets Manager.
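As a sketch of what such references look like in a `serverless.yml` (the stack, output, and parameter names here are illustrative, not real resources):

```yaml
provider:
  name: aws
  runtime: python3.9

custom:
  # Reference an output of another CloudFormation stack
  # (works for stacks deployed by SAM or plain CloudFormation too):
  sharedBucket: ${cf:shared-infra-stack.DataBucketName}
  # Or read a value that a Terraform-managed pipeline has published
  # to SSM Parameter Store:
  dbHost: ${ssm:/myapp/db/host}
```

Either way, the serverless stack consumes values without owning the resources, so each team keeps deploying with its own tool.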

Using serverless tools only for development
However, this means configuring the infrastructure twice.

Not using serverless tools
For local development, you can manually call the code you’re developing or just call the core business logic. You can also build some simple tools yourself. This approach, of course, is very limiting.

#4 Overlooking observability

Observability means monitoring, logging, alerts, and distributed tracing. With serverless, errors can be much harder to resolve, so more energy needs to be spent on learning and using tools and services for serverless observability.

Don’t make the mistake of thinking you can professionally handle a serverless solution without a solid knowledge of those tools. There are also costs related to their use, which can easily, if you’re not careful, exceed the cost of the main infrastructure.

Serverless solutions are a mesh of different services connected together. When an error occurs, it’s not enough to look at the logs of that one service, as the cause can be somewhere further up the pipeline. Without analyzing the whole process flow, it’s impossible to identify the root cause of the problem. For that purpose, logs must share a common identifier called a correlation ID, which allows you to connect the different log entries that belong to the same flow.
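A minimal sketch of the idea: the first service in the flow generates a correlation ID and every downstream log line carries it (the field name `correlation_id` is a common convention, not an AWS requirement; real Lambda code would pass the ID along in the event payload or message attributes).

```python
import json
import uuid

records = []  # stand-in for a log aggregator such as CloudWatch Logs

def log(message, correlation_id):
    # Structured (JSON) log lines make the ID searchable later.
    records.append(json.dumps({
        "correlation_id": correlation_id,
        "message": message,
    }))

# The first function in the pipeline creates the ID...
corr_id = str(uuid.uuid4())
log("order received", corr_id)
# ...and forwards it, so functions further down log with the same ID.
log("order validated", corr_id)

# All entries for one flow can now be joined on the ID.
ids = {json.loads(r)["correlation_id"] for r in records}
```

Querying the aggregated logs by that single ID then reconstructs the whole flow across services.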


The next step is having a tool for distributed tracing. These tools allow you to analyze each request, how it passed through different services, how much time is spent in each part, and where the error occurred. Data is visually represented, making it extremely useful and near-indispensable in serverless.

For serverless on AWS, you must have a deep understanding of CloudWatch services, including Logs, Insights, and Alarms. Insights enables you to search through multiple logs with a special query language. For distributed tracing, you can use AWS X-Ray, which is an excellent tool.
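To give a flavor of that query language, here is an illustrative CloudWatch Logs Insights query that pulls the most recent error lines across the selected log groups:

```
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20
```

Filtering on a correlation ID instead of `/ERROR/` is how you pull together every entry belonging to one request flow.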

Amazon recently released CloudWatch ServiceLens, which bridges the gap between CloudWatch Logs and X-Ray. It enables you to visualize and analyze the health, performance, and availability of your applications in a single place. Giving you a complete view of your applications and their dependencies, ServiceLens also helps you find performance bottlenecks and isolate root causes of application issues.

Third-party services that are built for serverless can be very helpful: Dashbird, Datadog, Epsagon, Lumigo, New Relic, Serverless Framework, Thundra.

#5 Getting sucked in by the promise of shiny new tools

New technology does not solve all problems. Yes, it solves some, but it can often create new ones in the process. Every service has limitations that you should be aware of before jumping in. For example, AWS Lambda has a maximum memory of 3,008 MB, a timeout of 15 minutes, and a 512 MB limit on temporary storage in /tmp.

Cold starts prevent users from achieving consistently ultra-low latency in AWS Lambda. A cold start occurs when no warm function instance can immediately handle the request and a new container has to boot up. This can be mitigated with a feature named Provisioned Concurrency, which keeps a number of function instances initialized and ready at all times. This, of course, comes at additional cost.
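In the Serverless Framework this is a one-line setting per function (the function and handler names below are illustrative):

```yaml
functions:
  checkout:
    handler: handler.checkout
    # Keep 5 initialized instances ready at all times to avoid
    # cold starts on this latency-sensitive function.
    # Note: provisioned instances are billed even when idle.
    provisionedConcurrency: 5
```

Because you pay for the warm instances whether or not they are used, it’s worth enabling this only on the functions where latency actually matters.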

One of the more painful mistakes I’ve seen made in the rush to adopt new tools is using a NoSQL database for everything. NoSQL databases (for example, DynamoDB) are very popular, thanks in no small part to the fact that they’re cheap and incredibly scalable. However, they also have many limitations, most notably that they do not support complex queries.

Somebody with preexisting knowledge of SQL databases will find creating data models with NoSQL unintuitive and hard to grasp. The data model should be designed for fast reads, and for that reason data is commonly duplicated. The sacred rule of data normalization from SQL data modeling is discouraged here. Most importantly, access patterns must be well defined at the very start of the design phase.
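To make the contrast concrete, here is a runnable sketch of that style of modeling. A plain dict stands in for a DynamoDB table, the `CUST#`/`ORDER#` key prefixes follow the common single-table convention, and the duplicated customer name is deliberate: the access pattern "list a customer's orders with their name" must be answered by a single key lookup, not a join.

```python
table = {}  # stand-in for a NoSQL table, keyed by (partition key, sort key)

def put_order(customer_id, customer_name, order_id, total):
    # The customer's name is duplicated onto every order item on
    # purpose: normalization is traded away for fast reads.
    table[(f"CUST#{customer_id}", f"ORDER#{order_id}")] = {
        "customer_name": customer_name,
        "total": total,
    }

def orders_for_customer(customer_id):
    # "Query by partition key": one cheap read answers the whole
    # access pattern, with no join required.
    return [item for (pk, sk), item in table.items()
            if pk == f"CUST#{customer_id}"]

put_order("42", "Alice", "1001", 30)
put_order("42", "Alice", "1002", 15)
orders = orders_for_customer("42")
```

The cost of this design is exactly what the paragraph above warns about: if Alice changes her name, every duplicated copy must be updated, and any access pattern not planned for up front may require restructuring the keys.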

The same goes for transactions: because of eventual consistency you’ll rarely use them, and those that are available are also very limited.

Remember, newer tools aren’t always automatically the best choice; if you don’t need high scalability, it could be much simpler to use a SQL database.

Read more about serverless limitations here.

#6 Undervaluing the important role of developers

Developers should be included in the whole design process. Serverless is sometimes marked as a “NoOps” solution, meaning you do not need DevOps engineers to configure and manage infrastructure.

Of course, that is not entirely true. However, developers do and should take a bigger role in developing infrastructure. For a smaller project, they can manage everything by themselves; this means they need more knowledge and they should take more responsibility.

In addition, this can also mean resistance from DevOps engineers whose role is diminishing or sometimes completely removed.

#7 Not giving developers a cloud environment

The philosophy of serverless is to develop less and to rely on cloud services as much as you can. This approach tends to be cheaper, makes it faster to build, and the end result requires significantly less maintenance. But developing using cloud services also makes it hard—if not impossible—to emulate everything on the local machine.

That means that developers should have their own cloud environment so they can deploy directly from the local machine. A new cloud environment is cheap when using serverless as you only pay for what you use. If you use an “infrastructure as code” approach (and you should!), a cloud environment is also very easy to create.

With a cloud environment, you still have development, test, stage, and of course, production deployable by CI/CD like always. But in addition to that, you also have extra environments that developers can use individually to deploy from the local machine.

Here are some common approaches to providing developers with a cloud sandbox:

  • Environment per feature branch
  • Environment per developer
  • Multiple development environments per team, shared by developers on a case-by-case basis

You can, of course, combine any of those approaches. The important part is deployment from the local machine, simply because it’s faster. Needless to say, you need a fast internet connection.
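With the Serverless Framework, any of these approaches boils down to parameterizing the stage name (the stage values below are illustrative):

```yaml
# serverless.yml
provider:
  name: aws
  # Default to "dev", but let each deploy override the stage:
  stage: ${opt:stage, 'dev'}
```

A developer then deploys an isolated copy of the whole stack from the local machine with, for example, `serverless deploy --stage dev-alice`, while CI/CD keeps deploying `test`, `stage`, and `production` as usual.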

That said, it’s not always possible to have a lot of environments. That might be because you depend on other parts of the infrastructure that aren’t serverless and therefore aren’t easy to replicate, or because you rely on a large database that you can’t keep unlimited copies of. The solution may be to share part of the infrastructure.

#8 Trying to go it alone

The most frustrating mistakes are often those that are born from a lack of knowledge early on. A few years ago, as serverless was just finding its feet, it was difficult to get up to speed as there wasn’t much knowledge, experience, or best practice available to be shared.

Those days are over: aside from a large number of courses, blogs, and other learning materials, you can get professional consulting from many experts, such as Yan Cui and Jeremy Daly, and from companies like Serverless Guru and Trek10. Serverless enthusiasts are obsessed with the impact that serverless can have on building IT systems, and most of them, including me, would gladly offer you free advice.


Many of these serverless devotees also produce a wealth of online content that you should definitely check out. Jeremy Daly puts out an amazing weekly newsletter, and Yan Cui writes a lot of extremely high-quality articles.

If you’re not sure about your design, you could benefit from getting a professional to check it out. Getting someone with experience under their belt involved in your project could save you an enormous amount of time, money, and energy.

Escaping a world of uncertainty and knowledge gaps is easier with expert help. You can also turn to a recruitment agency that specializes in placing AWS experts: Jefferson Frank.

About the author

Marko - Serverless Life

Marko is an AWS-certified full stack developer and serverless advocate with more than 15 years’ experience in the industry. He currently works for the Swedish company tretton37 (1337), a custom software and knowledge-based company home to talented craftsmen across multiple international offices. Tretton37 believes in high-quality software that helps clients achieve their most ambitious goals. Marko also runs his own successful blog.
