Continuation from the previous post. 1. Waterfilling: Balancing The Tor Network With Maximum Diversity. This paper proposes a new Tor circuit path selection algorithm that makes larger nodes carry middle/relay traffic more often and smaller nodes serve as exits more often. Apparently the talk included an abridged history of Tor's path selection:

2003: uniform at random
2004: introduce bandwidth weighting for performance
2005: add guards, based on "helper nodes"
2010: add bandwidth weights that map node capacity into the probability of use in each position (guard, middle, exit)

The main goal of the new algorithm is to make very large Tor servers (which are at higher risk of being used in a traffic correlation attack because they serve a higher percentage of Tor clients) carry more relay traffic and less guard or exit traffic. If you are going to correlate a Tor user's traffic, the most likely way to do it is to observe the traffic at the guard and at the exit, and if you had a limited budget, why not target the biggest servers? By making many smaller nodes handle more exit traffic, it becomes harder for an adversary to pull off an attack like this; they would have to monitor a lot more systems in (hopefully) geographically disparate and difficult-to-access locations.
The annual Privacy Enhancing Technologies Symposium (PETS) 2017 is a privacy nerd's dream and has always been on my list to attend. Unfortunately, I did not make it out to Minnesota this year, but all the papers are readily available online, so yay, open access! These are my notes on some of the interesting research presented this year, based on the papers that were released and the live tweets Nick Mathewson was posting during the event.
Summary This blog post is going to show you how to go from exploiting a single container to gaining root on the entire cluster and all of its nodes. This is caused by a flaw in the way Kubernetes manages containers by default. I'm doing a lot more container work at my day job – looking at container breakouts, container infrastructure reviews, and orchestration technologies. I've been involved in a few Kubernetes reviews and talked with others in the company about it, and there's one vulnerability that seems to make it into almost every report, and yet no one thinks it's as important as the security folks do. So I want to start a dialog.
If you're like me and want to stand up a quick server that can respond on all ports, here's a quick way to do it. You'll need a ton of memory to pull this off, so set up your machine or VM accordingly. This works for nginx, but you'll have to go through some of the same steps for other services. Linux Ulimits Check the current ulimits, hard limits, and soft limits for your current account:

ulimit -n
ulimit -Hn
ulimit -Sn
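Since those commands only show you where the limits currently sit, here is a rough sketch of raising them and generating a per-port listen directive for nginx; the file paths and values are assumptions on my part, not from the original setup:

```sh
# /etc/security/limits.conf -- raise the open-file limits (values are illustrative):
#   *    soft    nofile    1048576
#   *    hard    nofile    1048576

# Generate one "listen" directive per TCP port and include the file inside an
# nginx server{} block. Every listening socket costs memory, hence the RAM warning.
for p in $(seq 1 65535); do
  echo "listen ${p};"
done > /etc/nginx/all-ports.conf
```

nginx's worker_rlimit_nofile directive can also be raised so the worker processes are actually allowed to use the higher file-descriptor limit.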
UPDATE: The source repository for all this code is hosted here: https://github.com/antitree/bsidesroc2017ctf Check out the previous 1, 2, 3 and 4 for the other CTF challenges. Rebound Attack I admit this was the most complex one, which is why it was worth 500. The idea is that I wanted you to exploit yourself in very specific ways. This adapts a research project from years ago where I fingerprinted people based on the DNS requests they make. Here's how it works:
Check out the previous 1, 2, and 3 for the other CTF challenges. Hop Till You Drop The original plan for this one was to show how you can set up an exit node to allow single-hop circuits – in other words, instead of creating a full 3-hop circuit through Tor, you use the exit node as your one and only proxy. This is normally forbidden unless you allow it both at the exit and on the client.
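For reference, a sketch of the torrc options that controlled this at the time; both sides had to opt in, and these options have since been removed from Tor, so treat this as historical:

```
# On the exit relay's torrc: advertise willingness to be used as a one-hop proxy
AllowSingleHopExits 1

# On the client's torrc: permit one-hop circuits and stop excluding relays that allow them
AllowSingleHopCircuits 1
ExcludeSingleHopRelays 0
```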
This is a continuation of the previous posts talking about BSidesROC onion-related CTF challenges. Port of Onion (PoO) I don't think anyone got this one, mostly because I think they expected it to take too long. Here's the clue:

Sail with me on a 3 hour cruise
A storm hits us hard but we must not lose
Take any port in a storm
Just to get some place warm
There's only one there; which do you choose?

bsidesrocxehooxr.onion

Most picked up that my terrible poem was hinting that they had to guess which port the service was running on. I thought this would be an interesting challenge: either adapt a port scanner to use a SOCKS proxy or script it yourself. I think people assumed they'd have to scan all 65535 ports, but it was hosted on 1080/TCP, which is above 1024, I know, but I was hoping it was a common enough port that a scanner would hit it.
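One way to script it, sketched here with torsocks and netcat; the port range and timeout are arbitrary choices on my part, and you need a local tor daemon running for torsocks to talk to:

```sh
#!/bin/sh
# Walk a range of ports on the onion service through Tor's SOCKS proxy.
# torsocks forces nc's connection (and the .onion resolution) through tor.
ONION="bsidesrocxehooxr.onion"
for p in $(seq 1 1100); do
  if torsocks nc -z -w 5 "$ONION" "$p" 2>/dev/null; then
    echo "open: $p"
  fi
done
```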
This is a continuation of the previous post talking about BSidesROC onion-related CTF challenges. Double Ontonion One team figured this one out. The point of this challenge is to illustrate a common problem with onion services. Basically, if you don't configure the web server correctly, there are cases where an onion service might leak additional information about the host. For example, if you were hosting an onion web service on the same server as another web service, you could sometimes replace the Host header with something like "localhost", with crushing results.
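A rough illustration of the trick, going through the local tor SOCKS port; example.onion is a placeholder rather than the real challenge address, and whether the second request lands on a different virtual host depends entirely on the server configuration:

```sh
# Fetch the onion service normally, then again with an overridden Host header.
curl --socks5-hostname 127.0.0.1:9050 http://example.onion/
curl --socks5-hostname 127.0.0.1:9050 -H "Host: localhost" http://example.onion/
```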
Now that BSidesROC is over and the CTF is closed, I can share some of the details about the Onions CTF category that I made. I think the feedback was that a lot of the challenges were either too hard or were straightforward but took too long to do. Setup Each of the services in the Onions category used a vanity BSidesROC onion address. This was thanks to my friend, who threw some GPU cycles at generating keys for addresses that either start with or end with "bsidesroc".
Our little hacker conference that usually draws about 400 people is happening again on 4-21 and 4-22. If you want the song and dance about all the things we have planned, you can check out the website. Here I want to cover the internal changes. Volunteers We're getting old. What can I tell you. The longer you run something like BSidesROC (and Interlock and 2600, for that matter), the more likely your core people are to develop different priorities and interests. I think that if any organization wants to keep itself going, it should plan to phase out its core organizers. I've always had this plan for Interlock Rochester and for BSidesROC. This year we can really see those changes. People aren't able to put in the time they once were, and BSidesROC isn't a priority. And that's ok. Others pick up the slack, and I actually think this year has been running the smoothest… dare I say… ever.