What is Serverless Security?

Serverless security requires a paradigm shift in how organizations view application security. In addition to building security around the application itself with Next-Generation Firewalls, organizations must also build security around the individual functions within applications hosted by third-party cloud providers. This additional layer of security ensures proper application hardening and least-privilege access control, so that each function does no more and no less than what it is designed to do, helping organizations improve their security posture and maintain compliance.



What is Serverless Computing?

Serverless computing refers to a cloud-computing model in which the cloud provider runs the servers and dynamically manages the allocation of machine resources. AWS Lambda, Google Cloud Functions, and Azure Functions are popular serverless platforms used to build applications.
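
As a concrete illustration, here is a minimal sketch of what a serverless function can look like, written as a Python AWS Lambda handler for an API-style event. The event fields and handler name are illustrative; the provider provisions the runtime, invokes the function on demand, and scales it automatically.

    import json

    def handler(event, context):
        # Minimal AWS Lambda handler: the cloud provider provisions the runtime,
        # invokes this function per request, and scales it automatically.
        params = event.get("queryStringParameters") or {}
        name = params.get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"Hello, {name}"}),
        }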


A serverless architecture provides the benefit of automated, nearly infinite scaling. Very little stands between developers and deployed code, which speeds time to market and makes it easier to maintain and test individual functions. Finally, pricing is based on the actual application resources consumed, meaning you pay only for what you use, which typically results in lower costs.


Serverless represents an additional shift of responsibilities from the customer to the cloud provider. With no infrastructure to manage, operational overhead decreases significantly.


Shifting infrastructure management to your cloud provider enables you to focus on developing solutions that serve your organization and customers. It helps you maintain focus on your unique competitive advantages, and it frequently results in cost savings, not just on compute but also from shifting staff toward development.

How Does Serverless Improve Security?

Here are some key points:


  1. Cloud Providers Handle Operating System, Runtime Security, and Patching. When deploying serverless applications, you cede control over most of the stack to your cloud provider, which in turn provides services such as key management. You no longer own OS hardening, admin rights, SSH, or segmentation. AWS, Microsoft, and Google have strong track records of keeping their parts of the stack patched and secured, so handing them a larger portion of the stack generally improves security at that layer.
  2. Stateless/Ephemeral. The ephemeral, stateless nature of serverless compute also makes attackers’ lives harder. Serverless functions such as AWS Lambda run for a few seconds and then terminate, and the underlying containers are recycled. Because functions come and go and retain no state, the risk of long-term, persistent attacks is reduced.
  3. Visibility into Serverless Applications – The Benefit. The fact that serverless applications are structured as a large number of small functions in the cloud provides an excellent opportunity for security. Application security tools often go to great lengths to analyze and instrument a packaged application just to observe or filter its internal flow; with serverless, that flow is already broken into individual functions and triggers exposed by the platform, which makes it far easier to observe.
  4. Smaller Microservices = The Ability to Construct Suitable, Minimal Roles for Each Function. Moving to smaller microservices enables more fine-grained IAM. You can apply a security policy to each of these small pieces, which can significantly reduce your attack surface.


If any function within a container needs read access to S3, every function in that container shares that privilege. With AWS Lambda, you can apply privileges to individual functions and restrict them to the smallest scope necessary. If there is a vulnerability in one of your functions, an attacker gains access only to the limited capabilities of that function, not to the broad set of permissions granted to an entire container.
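
To make this concrete, here is a minimal sketch of attaching a narrowly scoped inline policy to a single function's execution role using boto3. The role name, bucket, and prefix are placeholders; in practice your deployment tooling (CloudFormation, SAM, Terraform, and so on) would usually manage this rather than an ad hoc script.

    import json
    import boto3

    # Hypothetical execution role for one Lambda function; replace with your own.
    ROLE_NAME = "order-export-function-role"

    # Scope the policy to the one action and prefix this function actually needs,
    # rather than granting broad S3 access to everything that shares a container.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/exports/*",
        }],
    }

    iam = boto3.client("iam")
    iam.put_role_policy(
        RoleName=ROLE_NAME,
        PolicyName="least-privilege-s3-read",
        PolicyDocument=json.dumps(policy),
    )

The design point is that the policy grants a single action on a single prefix, so compromising this one function cannot be leveraged into broader access.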

What Are Serverless Security Challenges?

With the changing structure of serverless applications, some new challenges arise.


  1. Security Visibility Becomes More Difficult. The total amount of information and the number of resources both increase with serverless, which makes it harder to make sense of all the data. With a billion events in your logs every day, extracting real intelligence from the mountains of data, and achieving true observability, is challenging.
  2. Protocols, Vectors, and Attack Points Have Multiplied. Every function, and every protocol it uses, is a potential point of attack, which requires distinct approaches to securing Google Cloud Functions, Azure Functions, and AWS Lambda.
  3. More Resources = More Permissions to Manage. More resources means more permissions to manage, creating challenges in determining the right permissions for all of these interactions. Automated tooling can detect configuration risks and automatically generate least-privilege function permissions.
  4. Observability into Serverless Apps – The Challenge. Serverless apps use different services from various cloud providers, across multiple versions and regions. To understand your attack surface and potential risks, you need a comprehensive view of your entire serverless ecosystem. As your app grows, this security-focused view can be increasingly challenging to build and maintain; a minimal inventory sketch follows this list.
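
The sketch below shows one simple way to start building that view: enumerating every Lambda function, its runtime, and its execution role across regions. It assumes boto3 credentials are configured, and the region list is a placeholder for the regions your applications actually use.

    import boto3

    # Regions to inventory; adjust to match your deployments.
    REGIONS = ["us-east-1", "eu-west-1"]

    def list_serverless_inventory():
        # Build a simple cross-region view of functions, runtimes, and roles,
        # a starting point for the comprehensive view discussed above.
        inventory = []
        for region in REGIONS:
            client = boto3.client("lambda", region_name=region)
            paginator = client.get_paginator("list_functions")
            for page in paginator.paginate():
                for fn in page["Functions"]:
                    inventory.append({
                        "region": region,
                        "name": fn["FunctionName"],
                        "runtime": fn.get("Runtime", "n/a"),
                        "role": fn["Role"],
                    })
        return inventory

    if __name__ == "__main__":
        for item in list_serverless_inventory():
            print(item)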

Where to Deploy Serverless Security?

With serverless applications, there is nowhere to place classic security controls such as a WAF, firewall, or IDS. Building walls between attackers and resources is not simple, for several reasons.


  1. While faster, more frequent deployments can be very positive, the velocity of serverless can raise new challenges in configuring security.
  2. Security tools could add processing time, and that time is multiplied across every request. Fortunately, most serverless security best practices do not require any additional processing time.
  3. Erosion of the Perimeter. Traditional applications had a clear boundary: the outside and the inside were distinct, and security could be applied at the perimeter. While it was never ideal for security to remain exclusively at the perimeter, it was at least possible to build a wall.


Serverless applications are more porous and fine-grained. Comprising dozens or hundreds of functions, a serverless application is a collection of tiny microservices, each with its own policies, role, API, audit trail, and so on. This changes the attack surface: instead of a small number of entry points with lots of functionality hidden behind each one, there are now many entry points, each with a small part of the app behind it. Defending your application now requires thinking about every entry point.


Various events can trigger functions (a minimal handler sketch follows the list), such as:

  • Cloud storage events (e.g. AWS S3, Azure Blob storage, Google Cloud Storage)
  • Stream data processing (e.g. AWS Kinesis)
  • Database changes (e.g. AWS DynamoDB, Azure CosmosDB)
  • Code modifications (e.g. AWS CodeCommit)
  • Notifications (e.g., SMS, Emails, IoT)
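
For example, here is a minimal sketch of a Python Lambda handler invoked by an S3 object-created event. The record structure follows the standard S3 event format; the .csv filter is an illustrative guard, included to show that each trigger is itself an entry point whose input should be validated.

    import urllib.parse

    def handler(event, context):
        # Triggered by an S3 "ObjectCreated" event: each record describes one
        # uploaded object. The entry point is part of the attack surface, so
        # validate the input before acting on it.
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            if not key.endswith(".csv"):
                # Ignore unexpected object types rather than processing blindly.
                continue
            print(f"Processing s3://{bucket}/{key}")
        return {"status": "done"}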

What are Serverless Security Threats?

While the motivations of attackers remain the same, the tactics they will use with serverless applications must change. Following are some of the serverless security threats unique to this new application architecture.

1. The Threat of Over-Privileged Functions

With serverless applications, you have the opportunity to apply privileges to individual functions and to restrict those privileges to the smallest scope necessary. This enables you to significantly reduce your attack surface and mitigate the impact of any attack.


Unfortunately, recent research from Check Point found that the vast majority of developers are not taking advantage of this opportunity. Our research discovered that 98 percent of functions in serverless applications are at risk, with 16 percent considered “serious.” Additionally, most of these functions are provisioned with more permissions than they require, permissions that could be removed to improve the security of the function and the application.


When analyzing functions, Check Point assigns a risk score to each function based on the posture weaknesses discovered, factoring in not only the nature of each weakness but also the context in which it occurs. After scanning tens of thousands of functions in live applications, we found that most serverless applications are simply not being deployed as securely as they need to be to minimize risk. The biggest security posture issues Check Point uncovered involve unnecessary permissions; the remainder involve vulnerable code and configurations.
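
As a rough, do-it-yourself illustration of spotting unnecessary permissions (not how Check Point computes its risk score), the sketch below flags inline policy statements on a function's execution role that use wildcard actions. It assumes boto3 credentials, and it inspects only inline policies, not attached managed policies.

    import boto3

    def wildcard_findings(function_name, region="us-east-1"):
        # Flag inline policy statements with "*" in their actions, a crude
        # proxy for the over-privileged functions described above.
        lam = boto3.client("lambda", region_name=region)
        iam = boto3.client("iam")
        role_arn = lam.get_function_configuration(FunctionName=function_name)["Role"]
        role_name = role_arn.split("/")[-1]

        findings = []
        for policy_name in iam.list_role_policies(RoleName=role_name)["PolicyNames"]:
            doc = iam.get_role_policy(RoleName=role_name, PolicyName=policy_name)["PolicyDocument"]
            statements = doc["Statement"]
            if isinstance(statements, dict):
                statements = [statements]
            for stmt in statements:
                actions = stmt.get("Action", [])
                actions = [actions] if isinstance(actions, str) else actions
                if any("*" in action for action in actions):
                    findings.append((policy_name, actions))
        return findings

A purpose-built tool would also evaluate attached managed policies, resource constraints, and the context in which each permission is actually used.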

2. The Groundhog Day Attack

The fact that serverless functions are ephemeral and short-lived makes it more difficult for attackers to persist in your applications long term; indeed, this is one of the many security advantages of serverless. However, simply because this makes life more difficult for attackers does not mean they will stop attacking; they will just change their strategy.


The short duration of serverless functions means that serverless security threats may change shape. Attackers may construct a much shorter attack that, for example, steals just a few credit card numbers. This single round of the attack then repeats continuously, in what we refer to as the “Groundhog Day” attack.

3. Poisoning the Well

Despite the short lifespans of cloud-native resources, attackers can still find ways to gain long-term persistence in your app. One way attackers can circumvent the ephemeral nature of serverless applications is through an upstream attack, or “Poisoning the Well.”


Cloud-native applications tend to comprise many modules and libraries, with code drawn from a variety of third-party sources. Attackers work to slip malicious code into common projects. Then, once the well is poisoned, the malicious code running inside your cloud apps can call home, receive instructions, and wreak havoc.
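
One simple, generic guard against a poisoned upstream (by no means a complete supply-chain defense) is to pin a vetted third-party artifact to a known digest and refuse to deploy anything that no longer matches it. The sketch below illustrates the idea; the path and digest are placeholders.

    import hashlib

    # Digest recorded when the third-party artifact was first reviewed (placeholder value).
    EXPECTED_SHA256 = "0" * 64

    def verify_artifact(path: str) -> bool:
        # Refuse to deploy a bundled dependency whose contents no longer match
        # the digest you vetted when it was first brought into the project.
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest == EXPECTED_SHA256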

4. Increased Time for Serverless Security Configuration

While this is not precisely a security “threat,” it is more of a challenge and a possible hindrance to your efforts to secure your serverless architecture.


Serverless conveys the benefit of increased application development velocity. Unfortunately, the traditional approach to security, in which developers write code and package workloads and security operations then puts controls around those workloads, simply will not work for serverless.


If developers must wait on security to open ports, IAM roles, or security groups for them, the benefit of increased velocity quickly erodes. Too often, the solution is to remove SecOps from the equation, which is itself a risk.


On the other hand, configuring permissions for the myriad serverless resources and the interactions between them is a time-consuming task. In addition, spending developers’ time on that security configuration can quickly get expensive, and it is not the best use of their time. Leveraging automation, such as the CloudGuard Platform, can increase serverless security without devoting excessive amounts of developer time.

5. Increased Time for Security Processing

Another benefit of serverless is that you pay only for what you actually consume, which can result in reduced costs. Nevertheless, paying for precisely what you use means that any increase in processing time also increases costs.


Placing excessive application security processing inside your app adds extra work to your functions, which can increase costs. While adding processing time for the sake of security is a wise investment, it requires careful implementation to avoid excessive, unnecessary cost increases.
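
One simple way to keep this overhead under control is to measure it. The sketch below wraps a hypothetical validation step in a timing decorator; the function names are illustrative, and in practice you would send the measurement to your monitoring tooling rather than print it.

    import functools
    import time

    def timed(fn):
        # Measure how much wall-clock time a security step adds per invocation;
        # in a pay-per-use model this overhead is multiplied across every request.
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                print(f"{fn.__name__} took {elapsed_ms:.2f} ms")
        return wrapper

    @timed
    def validate_input(event):
        # Hypothetical validation step; keep it lightweight to control cost.
        return isinstance(event, dict) and "body" in event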


As with the increased time for serverless security configuration discussed above, this is not exactly a threat, but rather a challenge you will have to tackle while securing your serverless architecture.
