The microservices architecture decouples applications into multiple independent services that communicate with each other over defined interfaces. It is a popular approach for deploying cloud-native applications, as it allows you to scale each piece individually—without affecting other components.
Traditional monolithic applications are often difficult to operate. This is because their logical components operate as a single unit, with just one service providing your application’s entire functionality—from authentication to customer payments. Separating these elements into two microservices allows for more efficient resource utilization when many users are logging in but relatively few payments are being made. This enables you to scale up your authentication service without changing the payment platform.
While operating your application with microservices brings increased flexibility, it also comes with unique security requirements: Multiple components mean multiple assets to protect. But securing your microservices doesn’t have to be difficult, if you take a proactive approach based on best practices.
The importance of microservice security
Microservices are self-contained applications that collectively assemble your system’s user-facing functionality. Without microservices, systems are implemented as a single monolithic component that is responsible for every feature. With a microservices architecture, however, you deploy components individually, make them responsible for particular subsystems, and introduce interfaced networking so services can contact each other. Your web application then calls your authentication service, payments platform, and other components using the APIs those microservices provide.
Multiple entrypoints
The microservices model creates multiple possible entrypoints to your system, each of which could expose vulnerabilities. More routes for attackers to exploit means a larger attack surface. Securing your system, therefore, requires protecting the individual microservices and their interconnections. Failing to do so would mean an attack against one route (e.g., your web API or authentication service) could permit lateral movement to other, unrelated services.
Mass scale
Another microservices security challenge is the scale that distributed systems can reach. In reality, most systems will have additional services beyond just the web API, authentication provider, and payment platform mentioned previously. To truly reap the benefits of a microservices architecture, you’ll need to split your system into many more services than that.
Today, it’s not uncommon for systems to comprise hundreds of individual components, each deployed as a standalone microservice. Having so many moving parts means greater opportunity for errors. It is therefore imperative that each service be properly secured.
How to secure microservices
When it comes to securing a microservice, there are four fundamental areas to consider:
- Access to microservices: Microservices should be isolated from external networks, unless specifically intended for end-user access. Even if your API, website, and CDN need to be open, public users shouldn’t be able to reach your payment service or database host directly.
- Access between microservices: Blocking access between microservices is just as important, as this lessens the blast radius if one service is compromised. Services that don’t call each other should be isolated from one another at the network level. That way, if an attacker does gain access to your authentication layer, they won’t be able to reach your payment system.
- Microservice container contents: Any efforts spent hardening your microservices architecture will be futile if the workloads inside your containers harbor vulnerabilities. Regularly auditing package lists, using automated dependency scanning tools, and keeping up to date with container security best practices will help to mitigate the risk of container compromise.
- Deployment environment: The security of your microservices is dependent on your overall security hygiene and how secure your deployment environment is. Mission-critical workloads should only be deployed to robust clouds that meet the security and compliance standards you expect. Don’t overlook your own responsibility either: Protecting your accounts with multi-factor authentication, setting up role-based access control, and regularly rotating keys and certificates are also key with the microservices model.
There’s no single way to harden your microservices or to guarantee your apps are protected. But focusing on the four principles discussed below will make it easier to spot potential vulnerabilities early on.
1. Control access to microservices (north-south)
In most cases, the microservices you create will be internal-facing ones that power specific components of your system. These shouldn’t be accessible outside your Kubernetes cluster or cloud environment. Blocking access to all but essential public services will reduce the likelihood of vulnerabilities in service APIs being found and exploited.
This layer of protection, referred to as “north-south” security, secures the perimeter of your services by creating a barrier that separates your system from the public network. While effective north-south protection measures prevent attackers from penetrating your system, they shouldn’t be your only line of defense.
Using API gateways
API gateways are a reliable means for implementing north-south security, with the gateway placed between your users and the backend services it protects.
Using a gateway correctly requires configuring your networking so that the gateway fields all external requests. The software will then evaluate each request using policies that you set. These policies will determine whether access should be granted and then route the traffic to the correct backend service.
A reverse proxy that forwards traffic to whitelisted endpoints is the simplest type of gateway. You can use popular software such as NGINX to set one up. Kubernetes ingress controllers also count as gateways, because they automatically route traffic to the specific services you choose.
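As a minimal sketch, here’s what a gateway route might look like as a Kubernetes Ingress; the hostname, service name, port, and ingress class are hypothetical placeholders you’d adapt to your own cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-gateway
spec:
  ingressClassName: nginx   # assumes an NGINX-based ingress controller is installed
  rules:
    # Only the public API gets a route; internal services receive no Ingress rule at all.
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service   # hypothetical public-facing service
                port:
                  number: 8080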
Placing all your services behind a consistent gateway provides centralized monitoring and clear visibility into publicly accessible endpoints. Not using a gateway is risky, since you might unintentionally expose services that should be private.
Implementing rate limiting
Rate limiting defends against misuse of your services. Attackers may attempt to overwhelm your infrastructure with brute-force methods, such as credential-guessing attacks against common endpoints like /auth and /admin. Similarly, denial-of-service attacks occur when malicious actors send an overwhelming number of requests, preventing legitimate traffic from being handled.
Rate limiting provides protection against both attack methods by tracking the number of requests made by each client IP address. Sending too many requests in a defined period of time will cause subsequent requests from that IP to be dropped. The overhead of performing the rate-limit check is far lower than the cost of letting every request reach its destination service unimpeded.
API gateways and rate limiting thus complement each other. Applying rate limiting within your API gateway guarantees it will be applied globally, before traffic hits your microservice endpoints. Rate limiting can also be directly incorporated into specific services for fine-grained configuration and enhanced interservice protection.
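For instance, if your gateway is the ingress-nginx controller, a per-client limit can be declared with an annotation. In this sketch (with illustrative names and an arbitrary limit value), clients exceeding roughly 10 requests per second have their excess requests rejected at the gateway:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-gateway
  annotations:
    # ingress-nginx annotation: cap each client IP at ~10 requests per second
    nginx.ingress.kubernetes.io/limit-rps: "10"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080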
2. Control internal communications between microservices (east-west)
While north-south security secures your perimeter, the east-west plane deals with traffic flowing between your services. Each component should be isolated, without the ability to connect with or discover other services, even when they are deployed adjacently.
You can create exceptions to facilitate your application’s legitimate interservice communications by designating the components that can call a service (e.g., an invoice generator that makes requests to your payment layer). This “blocked-by-default,” “enabled-on-demand” model forces you to be intentional when opening up interfaces.
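In Kubernetes, one way to realize this blocked-by-default baseline is a deny-all network policy (network policies are covered in more detail below). As a minimal sketch, this blocks all ingress and egress for every pod in its namespace, forcing you to add explicit allow rules for each legitimate connection:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}   # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress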
Allowing unrelated services to access each other opens pathways for threat actors to move through your system. Permitting only the bare minimum number of connections to each component, therefore, will slow attackers down and mitigate any damages. If malicious actors do manage to compromise a low-risk system, they shouldn’t have the opportunity to stage threats against more sensitive ones.
East-west security measures relate to how your microservices are individually isolated and connected. These protections make it harder for an attack against one service to spread to other services. East-west also helps you achieve a robust zero-trust security model by acknowledging the potential fallibility of the API gateway, as well as the risks that individual services pose to each other.
Protecting services with authorization
Service-level authorization lets you implement specific access control policies for each application in your stack. This can take several different forms, including both centralized and decentralized policy management approaches.
Decentralized vs. centralized authorization
Decentralized authorization incorporates policy decisions and enforcement into the code of your microservices. Each service defines the rules, attributes, and enforcement checks it needs to verify whether a particular request is authorized to proceed. This approach is ideal when your services have highly specific authorization requirements.
Centralized authorization models place your policies and their evaluation routines within a separate repository that microservices can interact with. The code within your microservice communicates with the authorization system using an API it provides. The microservice will need to supply the user’s identity and any attributes relevant to the target endpoint. The authorization provider assesses the information and produces an “accept” or “reject” response.
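The exchange might look something like this hypothetical request and response (the endpoint and field names are purely illustrative, not any specific product’s API):

POST /v1/authorize HTTP/1.1
Content-Type: application/json

{"subject": "invoice-generator", "action": "create-charge", "resource": "payments"}

HTTP/1.1 200 OK
Content-Type: application/json

{"decision": "accept"}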
Centralization is not as flexible, but it can be easier to set up and maintain because there’s one central place to define authorization policies, implement their enforcement routines, and register user associations. This works well with identity federation standards such as OAuth and SAML.
Authorization tokens
Microservices usually authenticate to each other by including a signed authorization token in their HTTP requests. Each token should include the identity of the calling service and the permissions it has been granted. Standards such as JSON Web Tokens (JWTs) allow the recipient service to verify whether the token is genuine.
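As one illustration: if you run a service mesh such as Istio (discussed below), JWT verification can be enforced declaratively rather than in application code. This sketch assumes an Istio installation and a hypothetical identity provider; it tells the mesh to validate tokens on requests to pods labeled component=payment:

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: payment-jwt
spec:
  selector:
    matchLabels:
      component: payment
  jwtRules:
    # Hypothetical identity provider; tokens are verified against its published signing keys
    - issuer: "https://auth.example.com"
      jwksUri: "https://auth.example.com/.well-known/jwks.json"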
Tokens can be difficult to deploy across many services, however. Mutual TLS (mTLS) is an alternative method that works at the transport level. With mTLS, each microservice is assigned its own public/private key pair, which it uses to authenticate to other services. This keeps network communications confidential while providing built-in mutual authentication on which authorization decisions can be based. However, you will still need to issue certificates to each service before mTLS connections can be established.
Enable transport-level security
Communications between services must also be secured at the network level. TLS can be used to encrypt your cluster’s traffic, prevent eavesdropping, and verify the identity of callers.
TLS can be enabled for microservices deployed to Kubernetes by ensuring all traffic flows through Services that are protected by your own certificates. The cert-manager operator is the easiest way to automate TLS configuration in your cluster. It lets you provision new certificates by creating Certificate objects. Certificate is a custom resource definition included with the operator.
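A Certificate object looks roughly like this; the resource names, DNS name, and referenced Issuer are placeholders you’d adapt to your own cluster:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: payment-tls
spec:
  secretName: payment-tls   # cert-manager stores the issued key pair in this Secret
  dnsNames:
    - payment.example.svc.cluster.local   # hypothetical in-cluster service DNS name
  issuerRef:
    name: internal-ca   # assumes an Issuer you have already configured
    kind: Issuer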
Dedicated service meshes like Istio make it even easier to network many different services securely. These complement container orchestrators such as Kubernetes, offering improved support for traffic management, authorization, and interservice security.
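With Istio, for example, strict mTLS can be enforced for an entire namespace with a single resource. In this sketch, production is a placeholder namespace whose workloads will refuse any plain-text connections:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT   # only mutual-TLS connections are accepted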
Implementing Kubernetes network policies
Network policies are a dedicated Kubernetes mechanism for implementing east-west security. You can set per-pod criteria that define which other pods are allowed to communicate with the target.
This simple policy stipulates that pods with the component=payment label can only receive traffic from, and send traffic to, pods with the component=api label:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payment-access
spec:
  podSelector:
    matchLabels:
      component: payment
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              component: api
  egress:
    - to:
        - podSelector:
            matchLabels:
              component: api
The ingress and egress fields set up separate behavior for inbound and outbound traffic, respectively. Network policies enable fine-tuned control of traffic flows. This next example permits ingress traffic only from a specific IP address range:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payment-access
spec:
  podSelector:
    matchLabels:
      component: payment
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/16   # example range; substitute your own CIDR
Network policies are therefore one of the primary ways of implementing east-west security in Kubernetes clusters.
3. Audit your microservices before deployment
Security is only as good as the weakest link in the chain. A backdoor in a package used by one of your services could be exploited to allow movement into other services.
Thoroughly auditing your containers before you deploy them into your environment will help mitigate these risks. In addition, using hardened base images or assembling your own from scratch will help to ensure there is nothing dangerous lurking within.
Automated security testing techniques like dynamic, static, and interactive application security testing (DAST, SAST, and IAST) can be used to detect possible flaws in your code. Likewise, vulnerability scanners can help to identify redundant and outdated packages in your container images. Modern scanners cover both the dependencies used by your code and the OS libraries installed with system package managers.
Adopting a secure-by-design mindset means that developers, operators, and project managers alike must prioritize security and incorporate design changes to address any weaknesses. In keeping with this approach, you should harden your microservices as you create them.
Microservices are dynamic and complex, and they combine to form large-scale systems. It is therefore important to plan ahead and establish strong container-level security as a first line of defense.
Scanning container images
Third-party packages included as software dependencies may be outdated or harbor zero-day vulnerabilities and CVEs. You can identify these risks before you deploy by scanning your container images with tools such as Trivy:
$ trivy image ubuntu:latest
The command produces a list of vulnerabilities found in the image’s OS packages and source-code dependencies:
ubuntu:latest (ubuntu 22.04)
============================
Total: 16 (UNKNOWN: 0, LOW: 12, MEDIUM: 4, HIGH: 0, CRITICAL: 0)
┌──────────────┬────────────────┬──────────┬──────────────────────────┬───────────────┬──────────────────────────────────────────────────────────────┐
│ Library │ Vulnerability │ Severity │ Installed Version │ Fixed Version │ Title │
├──────────────┼────────────────┼──────────┼──────────────────────────┼───────────────┼──────────────────────────────────────────────────────────────┤
│ bash │ CVE-2022-3715 │ MEDIUM │ 5.1-6ubuntu1 │ │ bash: a heap-buffer-overflow in valid_parameter_transform │
│ │ │ │ │ │ https://avd.aquasec.com/nvd/cve-2022-3715 │
├──────────────┼────────────────┼──────────┼──────────────────────────┼───────────────┼──────────────────────────────────────────────────────────────┤
│ coreutils │ CVE-2016-2781 │ LOW │ 8.32-4.1ubuntu1 │ │ coreutils: Non-privileged session can escape to the parent │
│ │ │ │ │ │ session in chroot │
│ │ │ │ │ │ https://avd.aquasec.com/nvd/cve-2016-2781 │
├──────────────┼────────────────┤ ├──────────────────────────┼───────────────┼──────────────────────────────────────────────────────────────┤
│ gpgv │ CVE-2022-3219 │ │ 2.2.27-3ubuntu2.1 │ │ gnupg: denial of service issue (resource consumption) using │
│ │ │ │ │ │ compressed packets │
...
Not every finding will necessarily be applicable to the context your services are deployed in. But addressing as many problems as possible before deployment—by updating affected packages or selecting safer alternatives—will reduce the risk of compromised code reaching production.
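You can also make scanning a gate in your CI pipeline. For example, the following invocation (the image name is illustrative) exits with a non-zero status whenever critical vulnerabilities are found, failing the build before the image can ship:

$ trivy image --severity CRITICAL --exit-code 1 registry.example.com/payment-service:1.0.0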
Integrating security by design
Security is shifted left by making it an integral part of your strategy—from design to code and operations. As project managers scope out services and developers start to build them, both teams must anticipate new security risks before they’re introduced. When you’re running hundreds of loosely coupled services, the legacy model of security as an afterthought simply won’t cut it.
Risk-based security is another effective strategy for navigating the current threat environment. Automated scans and regular developer reviews can help to identify risks. Data from new risks is then combined with older insights to determine priorities. This allows for faster remediation of critical threats and reduces alert fatigue.
Developers also play a pivotal role in microservices security. Following secure coding practices helps to prevent security flaws and reduces the time to remediation. Developer education and training, what-if analysis, and a comprehensive security test suite will also help to ensure a more secure software development lifecycle.
4. Harden your cloud environment
Hardening the environment that hosts your deployment is the final element of microservices security. Basic cloud security hygiene measures (e.g., limiting user privileges and regularly rotating access tokens) are vital, but there are also specific best practices for distributed systems like Kubernetes.
Setting up Kubernetes RBAC
The Kubernetes RBAC system should be used to configure access for each user and service account in your cluster. Restricting accounts to the bare minimum privileges their functions demand improves your security posture: it mitigates the risk if credentials are lost or stolen, and it provides protection should service account tokens be compromised through breaches of third-party systems they’ve been distributed to.
RBAC is configured by creating Role objects that each permit a set of actions in your cluster. Roles are bound to users and service accounts by RoleBinding objects. Role and RoleBinding apply to namespaced resources; ClusterRole and ClusterRoleBinding are their cluster-scoped equivalents.
Below is a sample role that permits getting, listing, and creating pods:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: demo-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "create"]
Grant the role to a user called demo-user by adding the following RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-role-binding
subjects:
  - kind: User
    name: demo-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: demo-role
  apiGroup: rbac.authorization.k8s.io
The demo-user user is now permitted to run these commands:
kubectl run nginx --image nginx:latest
kubectl get pod demo-pod
kubectl get pods
Other commands, such as kubectl delete pod demo-pod, will result in an authorization error. This prevents users and service accounts from carrying out actions they have no legitimate reason to perform.
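You can verify that the binding behaves as intended without logging in as the user by combining kubectl auth can-i with impersonation:

$ kubectl auth can-i create pods --as demo-user
yes
$ kubectl auth can-i delete pods --as demo-user
no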
Encrypting secrets
The standard Kubernetes distribution doesn’t encrypt Secrets data at rest. Instead, sensitive passwords, API keys, and certificates stored within Secret objects are kept unencrypted (merely base64-encoded) in the cluster’s etcd instance, potentially allowing easy exfiltration by attackers.
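You can see this for yourself on a self-managed cluster by reading a Secret straight out of etcd; in this sketch, the Secret name is hypothetical, and the TLS flags needed to connect to your etcd endpoint are omitted for brevity:

$ ETCDCTL_API=3 etcdctl get /registry/secrets/default/demo-secret | hexdump -C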
You can set up encryption for this data by configuring the Kubernetes API server to use one of the available encryption providers. This relies on an EncryptionConfiguration object that defines the encryption standard and secret key to use.
First, create your EncryptionConfiguration manifest. This example uses the aesgcm provider:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aesgcm:
          keys:
            - name: secret-key
              secret: <YOUR_SECRET_HERE>
To generate your secret, run the following command:
$ head -c 32 /dev/urandom | base64
Next, modify the manifest of your kube-apiserver Pod so that it uses your new encryption configuration on startup. For clusters managed by kubeadm, the Pod’s manifest is located at /etc/kubernetes/manifests/kube-apiserver.yaml. Find the spec.containers.command field and append the --encryption-provider-config flag, referencing the path to your EncryptionConfiguration manifest:
...
spec:
  containers:
    ...
    - command:
        - kube-apiserver
        ...
        - --encryption-provider-config=/path/to/encryptionconfiguration.yaml
...
Restart the API server after making this change. Newly stored secrets will now be encrypted, but existing values will not be affected. You can encrypt secrets you’ve already created by iterating over them and updating their values with the following kubectl command:
$ kubectl get secrets --all-namespaces -o json | kubectl replace -f -
Scanning your cloud
Cloud security scanners offer a convenient way to detect misconfigurations and security weaknesses. Regularly using tools such as Amazon Inspector, Oracle OCI’s vulnerability scanner, or Google Cloud’s embedded scanner can alert you to vulnerabilities in your deployments and your management infrastructure.
While even a clean scan doesn’t guarantee full protection, these tools can help you efficiently uncover improvement opportunities. For example, Amazon Inspector continually scans your AWS workloads for known vulnerabilities, probing your virtual machines, containers, networking rules, and other assets to identify threats in real time. It then issues a security risk score that keeps you informed of your security posture. The other cloud providers offer similar capabilities in their own tools, giving you quick and accurate results without manual intervention.
Secure microservices and deploy with confidence
Microservices are an essential part of modern cloud-native development. But with so many components forming an intricate web of connections, securing your microservices can be a daunting task. A single vulnerability in your service networking, container packages, or cloud environment could be exploited to chain together a bigger attack.
While an effective microservices architecture can facilitate development and deployment, even the best implementations can contain security weaknesses. But implementing the techniques covered in this series to harden your environment will allow you to deploy services with confidence.
The Vulcan Cyber® risk management platform enables you to prioritize and remediate vulnerabilities seamlessly, so you can manage risk at scale. Get your free trial today, and start owning your risk.
FAQs
Are microservices more secure?
While the size and complexity of an application determine its risk profile, oftentimes a microservices architecture is more secure than a monolithic application. Since microservices are decoupled, vulnerabilities are often limited to a particular component rather than impacting the overall application. While there are clear benefits, microservices can also introduce new security challenges, such as increased complexity, inter-service communication vulnerabilities, and the need for more rigorous testing and monitoring. Therefore, to maximize the security benefits of microservices, it is important to design and implement them with security in mind, and to use appropriate security measures and protocols to protect against potential threats.
When should you not use microservices?
It’s important to carefully consider the requirements of the application and the capabilities of the development team before deciding whether to use microservices architecture. While microservices can offer significant benefits in the right circumstances, it is not a one-size-fits-all solution and may not be appropriate for every application.
Here are some scenarios where microservices may not be suitable:
- Simple applications: Microservices architecture may be too complex for simple applications that don’t require a high level of scalability or modularity. In such cases, a monolithic architecture may be sufficient.
- Tight coupling between components: If the components of an application are tightly coupled and dependent on each other, it may be difficult to separate them into microservices without causing major changes to the application’s design.
- Limited resources: Implementing microservices architecture requires additional resources, including infrastructure, development, and operational resources. If resources are limited, it may not be feasible to implement and manage microservices.
- Low-traffic applications: If an application does not receive a high volume of traffic, the benefits of microservices, such as scalability and fault tolerance, may not be necessary.
- Organizational constraints: Implementing microservices architecture requires significant organizational changes, including changes to development, testing, deployment, and operations processes. If an organization is not prepared to make these changes, it may not be suitable for microservices.