Archive for the 'Tor' Category

Run a private tor network using Docker

Jul 01 2016 Published by under Tor

I’ve made a scalable way of building a fully private, functioning tor network using Docker. Why give any back story? If it’s useful to you, then here you go:

Source: https://github.com/antitree/private-tor-network

Docker Hub: https://hub.docker.com/r/antitree/private-tor/

Setup

All you really need to do is clone the git repo, build the image (or download it from Docker Hub), and then spin up a network to your liking. What’s nice about this is that you can use the docker-compose scale command to build any size network that you want. Eventually, with the next version of Docker, you’ll be able to scale across multiple hosting providers, but the current RC is too sketchy to invest any time in.

Anyways, here’s how to spin up a 24-node network: 15 exits, 5 relays, 1 client, and 3 dir authorities.
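Roughly, it looks like this (the service names below are from memory and may not match the repo exactly; check docker-compose.yml for the real ones):

    # Bring up the base network, then scale each service to taste
    docker-compose up -d
    docker-compose scale exit=15 relay=5 client=1 da=3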

Using It

To use this, there is a port listening on 9050 (you can change this in the docker-compose.yml file). Point your browser at the Docker host running your containers and use it as a SOCKS5 proxy, the same way you would connect to a local Tor instance, and your traffic will go through the private network.
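A quick sanity check with curl looks something like this (replace dockerhost with the address of your Docker host; just a sketch):

    # Route a request through the private network's SOCKS5 port
    curl --socks5-hostname dockerhost:9050 http://example.com/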

If you aren’t sure whether it’s working, check out the logs.
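Standard Docker commands work here (nothing specific to this image), for example:

    # All of the containers at once, or just a single one
    docker-compose logs -f
    docker logs -f <container_name>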

Or, if you want more interesting information, try setting up a depictor container that will give you data about the directory authorities.

Now what?

To answer the “now what?” question, this is a clean way of getting a tor network running so you can do your research, learn about how it works, modify configurations, run some third-party tor software… whatever. If you’re not into this, there is always Shadow and Chutney, or just manually configuring hosts/processes yourself.

Updates to Raspberry Bridge

Sep 21 2014 Published by under privacy,Raspberry Pi,Tor

I’ve updated the Raspberry Bridge build to 1.5.1Beta to update a few things and address a couple issues. The main changes are:

  • updated Tor to latest stable release
  • updated obfsproxy
  • updated OS including some security patches

Download Torrent: http://rbb.antitree.com/torrents/RBBlatest.torrent

More info: http://rbb.antitree.com/

Meek Protocol

Sep 07 2014 Published by under Censorship,Tor

The Meek protocol has recently been getting a lot of attention since the Tor Project made a few blog posts about it. Meek is a censorship evasion protocol that uses a tactic called “domain fronting” to evade DPI-based censorship. The idea is that by using a CDN such as Google, Akamai, or Cloudflare, you can proxy connections (leaning on the TLS SNI extension) so that if an adversary wanted to block or drop your connection, they would need to block connections to the entire CDN, like Google; mutually assured destruction. The goal is a way of connecting to the Tor network that is unblockable even by nation-state adversaries.

SNI and Domain Fronting

SNI is a TLS extension that’s been around for about nine years and has been implemented in all modern browsers at this point. It’s the TLS version of virtual hosting, where you send a request to one server and inside is a request for another host. Similar to virtual hosting’s Host headers, SNI provides a host name inside its extension during the client hello request:
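(A rough command-line illustration of the idea, using the hostnames from this post; Google won’t actually front www.antitree.com, so don’t expect a useful response.)

    # TLS SNI names www.google.com, but the Host header inside the
    # encrypted session asks for a different host entirely
    curl -sv https://www.google.com/ -H 'Host: www.antitree.com'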

This would be a request to https://www.google.com, but the server receiving this request would look up the record for www.antitree.com to see if it was fronted, and forward the request to that host.

You can try this using the actual Meek server that Tor uses:
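(The appspot.com reflector name here is the one meek’s documentation used at the time; it may have changed since.)

    # Direct to the App Engine reflector, or fronted through www.google.com
    curl -s https://meek-reflect.appspot.com/
    curl -s https://www.google.com/ -H 'Host: meek-reflect.appspot.com'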

You should get a response of “I’m just a happy little web server.”, which is meek-server’s default response.

In terms of Internet censorship, the idea of using SNI to proxy a request through a CDN is called Domain Fronting and, AFAIK, is currently only implemented by the Meek protocol. (That being said, the idea can apply to just about any other protocol or tool. I’ve seen other projects use Meek or something like it.) What Meek provides is a way of using Domain Fronting to create a tunnel for any protocol that needs to be proxied.

Tor and Meek

The Meek Protocol was designed by some of the people involved with the Tor Project as one of the pluggable transports and is currently used to send the entire Tor protocol over a Meek tunnel. It does this using a little bit of infrastructure:

  • meek-client: This is what a client will use to initiate a tunnel over the Meek protocol
  • meek-server: corresponding server portion that will funnel requests and responses back over the Meek tunnel
  • web reflector: In its current form, this takes an SNI request, sees that it is a Meek request, and redirects it to the meek-server. This also makes sure that the tunnel is still running using polling requests.
  • CDN: the important cloud service that will be fronting the domain. The most common example is Google App Engine (appspot.com).
  • Meek Browser Plugin: In order to make a meek-client request look like a standard SNI request (same TLS extensions) that your browser would make, a browser plugin is used.

Here’s a diagram of it all wrapped together:

[Diagram: the Meek components above wrapped together]

This is how a request is made to a Tor bridge node that’s running the meek-server software. Right now, if you download the latest Alpha release of the Tor Browser Bundle, this is how you can optionally connect using Meek.

Polling

You might notice that HTTP (by design) doesn’t maintain the kind of state needed to keep a connection open for as long as you want to tunnel your Tor traffic, so the Meek protocol has to compensate. It does this by implementing a polling method where a POST request is sent from the client to the server at a specified (algorithmically determined) interval. This is the main way that data is delivered once the connection has been established. If the server has something to send, it’s done in the POST response body; otherwise the message is still sent with a 0-byte body.
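As a rough illustration of the pattern (this is not the real meek-client, just a sketch; hostnames and filenames are placeholders):

    # POST any pending data to the front; the response body carries
    # whatever the server has queued for the client
    while true; do
      curl -s -X POST https://www.google.com/ \
           -H 'Host: meek-reflect.appspot.com' \
           --data-binary @outgoing.bin -o incoming.bin
      sleep 0.1   # the real client varies this interval
    done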

Success Rate

You might notice that there are a few extra hops in your circuit, and it’s true that there is a decent amount of overhead, but for those in China, Iran, Egypt, or the ever-expanding list of other nations implementing DPI-based blocking as well as active probing, this is the difference between being able to use Tor and not. The benefit here is that someone watching the connection only sees that a client IP made an HTTPS connection to a server IP owned by Google or Akamai; the SNI “server_name” in the client hello names the front domain, while the Host header that names the real destination travels inside the encrypted tunnel. To the censor, the connection is indistinguishable from a request to, say, YouTube or Google.

As of now, there do not seem to be a lot of users (compared to all Tor users) connecting over the Meek bridge, but it does seem to be increasing in popularity.

[Graph: bridge users by pluggable transport (userstats-bridge-transport); an updated graph is available from Tor Metrics]

Attacks

While no known attacks exist (besides an adversary blocking the entire CDN), there are some potential weaknesses being reviewed. One of the interesting ones: if an adversary is able to inject a RST packet into the connection, the tunnel collapses and does not re-establish itself. This is unlike a normal HTTP/S request, which would just re-issue the request and not care. This may be a way of fingerprinting the connections over time, but there would be a fairly large cost to other connections in order to perform an attack like this. The other attack of note is traffic correlation based on the polling interval. If the polling interval were static at, for example, 50ms, it would be fairly easy to define a pattern for the meek protocol over time. Of course that’s not the case in the current implementation, as the polling interval dynamically changes. The other attacks and mitigations can be found on the Tor wiki page.

Resources:

https://trac.torproject.org/projects/tor/wiki/doc/meek – main wiki page documenting how to use Tor with Meek

https://trac.torproject.org/projects/tor/wiki/doc/AChildsGardenOfPluggableTransports#meek – in depth explanation of the protocol compared to a standard Tor connection

Raspberry Bridge Project

Jul 13 2014 Published by under Hardware,Raspberry Pi,Tor

Over at rbb.antitree.com, you’ll see the details of a new project of mine: To build a Raspberry Pi environment to make it easy for anyone to run a Tor Bridge node. The goal here has been to release an RBP image that is minimalist (in terms of storage consumption as well as resource consumption) and provides the necessary tools to run and maintain a Tor Bridge Node on a Raspberry Pi.

Bridges

As a reminder, a bridge node is a type of Tor node (like relay, exit, entry) that provides a way of evading censorship to join the Tor network. This is done by secretly hosting bridges that are not shared with the public, so there’s no way for a censoring tool to merely block all Tor nodes. On top of that, an obfuscated bridge is one that further defends against various fingerprinting attacks on the Tor protocol. With an obfuscated bridge, communications from the client to the bridge appear to be benign traffic rather than Tor traffic.

Challenge Installing Tor

It’s odd how less-than-simple the process of running a relay on a Pi is. If you want to run a relay on a RBP, some sites will merely say install Raspbian and run apt-get install tor. The problem with this is that the Debian repos are very far behind the latest version of Tor (at least one major revision behind). The logical conclusion would be to use the Tor Project’s Debian repos then. The problem here is that there are no repos for the Raspberry Pi’s ARM architecture. One solution was to use something similar to the Launchpad PPA hosting that lets you run a simple repo to deliver a .deb package, but Launchpad does not support the ARM architecture (and doesn’t seem to plan to do so in the near future).

So the result is I’ve built a github repo that hosts the Tor .deb packages for the latest stable release. It’s not pretty, but it does the job and I know that it will work well. That was the first piece of the puzzle.
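Installing from one of those hosted packages then comes down to plain dpkg (the filename pattern below is hypothetical):

    # Fetch the .deb for your release first, then install it and
    # pull in any missing dependencies
    dpkg -i tor_*_armhf.deb || apt-get -f install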

Host Hardening

The Raspberry Pi images out there are designed for people that want to learn programming in Scratch and play with GPIO pins for some kind of maker project. They’re not ideal for providing a secure operating environment. So I built a Debian-based image from the ground up, with the latest packages and only the required packages. I’ve customized the image to not log anything across reboots (mounting /var/log as a tmpfs). You can read most of the design of the OS here.
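The tmpfs trick mentioned above is roughly a one-line fstab entry (the size here is arbitrary):

    # Keep /var/log in RAM so nothing persists across reboots
    echo 'tmpfs /var/log tmpfs defaults,noatime,nosuid,size=30m 0 0' >> /etc/fstab
    mount /var/log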

I’ve also secured SSH (which many of the Raspberry Pi images don’t do) by autogenerating SSH keys the first time it boots. The alternative is to ship an image that has the same SSH keys on every device, allowing MITM attacks. Again, these images are designed for makers.
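The first-boot key regeneration is roughly this (a sketch run once from an init script, not the exact script in the image):

    # Throw away any baked-in host keys and generate fresh, unique ones
    rm -f /etc/ssh/ssh_host_*
    dpkg-reconfigure openssh-server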

Torpi-config

The part I spent the most time on, and is hopefully the most useful, is that I took the structure of the raspi-config tool that ships with Raspbian and converted it into a Tor configuration tool. This gives you a text-based wizard that guides users through configuring Tor, keeping obfsproxy up to date, and performing basic systems administration on the device.

[Screenshot: the torpi-config wizard]

Roadmap

It’s fully functional but there are a lot of things I’d like to improve upon. I’ve released it to solicit feedback and see how much more effort is necessary to get it where I want. Here are some of the other items on the roadmap:

  • Add the ability to update Tor to the latest stable release over github (securely)
  • Improve torpi-config to cover other use cases like configuring WiFi or a hidden service
  • Print out the specific ports that need to be forwarded through the router for the obfuscated bridge
  • Clean up some of the OS configuration stuff

 

 

XKeyScore

Jul 05 2014 Published by under Intelligence,OSINT,privacy,Tor

If you’re like me, you’re probably getting inundated with posts about how the latest revelations show that the NSA specifically tracks Tor users and the privacy conscious. I wanted to provide some perspective on how XKeyscore fits into an overall surveillance system before we jump out of our collective pants. As I’ve written about before, the Intelligence Lifecycle (something that the NSA and the other Five Eyes know all too well) consists more-or-less of these key phases: Identify, Collect, Process, Analyze, and Disseminate. Some of us are a bit up-in-arms about Tor users specifically being targeted by the NSA, and while that’s a pretty safe conclusion, I don’t think it takes into account what the full system is really doing.

XKeyscore is part of the “Collect” and “Process” phases of the lifecycle; in this case they are collecting your habits and correlating them to an IP address. Greenwald and others will show evidence that the NSA’s goal is to, as they say, “collect it all”, but this isn’t a literal turn of phrase. It’s true there is a broad collection net, but the NSA is not collecting everything about you. At least not yet. As of right now, the NSA’s collection initiatives lean more towards collecting quantifiable properties which have the highest reward and the lowest storage cost. That’s not as sexy of a phrase to repeat throughout your book tour though.


The conclusion may be (and it’s an obvious one) that what you’re seeing of XKeyscore is a tiny fraction of the overall picture. Yes, they are paying attention to people that are privacy conscious; yes, they are targeting Tor users; yes, they are paying attention to people that visit the Tor web page. But as the name implies, this may contribute to an overall “score” used to make conclusions about whether you are a high-value target or not. What other online habits do you have that they may be paying attention to? Do you have a reddit account subscribed to /r/anarchy or some other subreddit they would consider extremist? Tor users aren’t that special, but this section of the code is a great way to get people nervous.

As someone who has worked on a collection and analysis engine at one time, I can say that one of the first steps during the collection process is tagging useful information and automatically removing useless information. In this case, tagging Tor users and dropping cat videos. It appears that XKeyscore is using a whitelist of properties for what they consider suspicious activity, which would then be passed on to the “Analysis” phase to help make automated conclusions. The analysis phase is where you get to make predictive conclusions about the properties you have collected so far.

[Diagram: the intelligence lifecycle, with XKeyscore in the Collect and Process phases]

Take the fact that your IP address uses Tor. Add it to a list of extremist subreddits you visit. Multiply it by the number of times you searched for the phrase “how to make a bomb” and now you’re thinking of what the analytics engine of the NSA would look like.

My point is this: if you were the NSA, why wouldn’t you target the privacy aware? People doing “suspicious” (for some definition of the word) activities are going to use the same tools that a “normal” (some other definition) person would. We don’t have a good understanding of what happens to the information after it’s been gathered. We know that XKeyscore will log IPs that have visited sites of interest or performed searches for “extremist” things like privacy tools. We know that there have been cases where someone’s online activities have been used in court cases. But we can’t connect the dots. XKeyscore is just the collection/processing phase, and the analytic phase is what’s more important. I think the people of the Tor Project have a pretty decent perspective on this. Their responses have generally just re-iterated that this is exactly the threat model they’ve always planned for and they will keep working on ways to improve and protect their users.

 

 

Private Tor Network: Chutney

Apr 15 2014 Published by under Tor

Chutney is a tool designed to let you build your own private Tor network instance in minutes. It does so by using configuration templates of directory authorities, bridges, relays, clients, and a variety of combinations in between. It comes with a few examples to get you started. Its bundled “basic” network, for example, is a ten-node network made up of three directory authorities, five relays, and two clients.
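Configuring it is a single command (this is chutney’s standard invocation; I’m assuming the bundled networks/basic template here):

    # Configure a network from one of the bundled templates
    ./chutney configure networks/basic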

Each node gets its own torrc file, its own logs, and, in the case of the relays and authorities, its own private keys. Starting them all up is as easy as
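(again, the standard chutney commands, with a status check thrown in):

    ./chutney start networks/basic
    ./chutney status networks/basic   # verify that every node actually launched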


So bam! You’ve created a Tor network in less than a few minutes. One item to note: the configuration options in the latest version of Chutney use directives like “TestingV3AuthInitialVotingInterval”, which require you to have a current version of Tor. In my case, I compiled Tor from source to make sure it included these options.

Building custom torrc files

Chutney’s first job during configuration is to build a bunch of torrc files to your specification. If you tell it you’d like to have a bridge as one of your nodes, it will use the bridge template on file to create a custom instance of a bridge. It will automatically name it with a four-digit hexadecimal value, set the custom configuration options you’ve dictated, and place it into its own directory under net/nodes/.

The very useful part here is that it builds an environment that is already designed to interconnect. When you create a client, that client is configured to use the directory authorities that you’ve already built, and those directory authorities are configured to serve information about the relays you’ve built. The real value here is that if you were to do this manually, you’d be left with the time-consuming process of configuring each of these files on your own. Doable, but prone to human error.

Running multiple instances of Tor

Chutney builds a private Tor network instance by executing separate Tor processes on the same box. Each node is configured to be hosted on a specific port, so for a ten-node network, you can expect 20 or more ports in use by the processes.

Use case

Chutney is perfect for quickly spinning up a private Tor network to test the latest exploit, learn how the Tor software works, or just muck around with various configurations. Compared to Shadow, which focuses more on simulating network events at a large scale, Chutney is more of a real-world emulator. One weakness is that there is not a one-to-one relationship between the Tor Network’s threat model and a Chutney-configured one. For example, the entire network is running on a single box, so exploiting one instance of Tor would most likely result in the entire network being compromised. That being said, depending on your needs, it offers a great way of building a test environment at home with limited processing power. It’s still a rough-cut tool and it sounds like NickM is looking for contributors.

 

Cables Communication: An anonymous, secure, message exchange

Nov 17 2013 Published by under Crypto,privacy,Tor

Last night at Interlock Rochester, someone did a lightning talk on Liberte Linux — one of those anonymity Linux distros similar to TAILS and the like. Everything seemed pretty standard for an anonymity machine: all traffic tunneled over Tor using iptables, only certain software able to be installed, full disk encryption, memory wiping. But one thing stuck out: this service called “Cables.”

Cables Communication:

Cables (or Cable, I really don’t know) is designed by a person who goes by the name Maxim Kammerer. He is also the creator of Liberte Linux. Its purpose is to let user A communicate with user B in an E-mail-like way, but with anonymity and security in mind. Before this, Tor users would use services like TorMail, until that was taken down. I don’t know if that was the inspiration for this new service, but it seems like it’s an attempt to fill that hole.


Overview:

Here’s a very simplified functional overview:

  1. The user generates a Cables certificate, which is an 8192-bit RSA X.509 certificate
  2. The fingerprint for that certificate is now that user’s username (like “antitree” is the username of “[email protected]”)
  3. A Tor hidden service is created and that is the domain of the address you’re sending to (e.g. xevdbqqblahblahlt.onion)
  4. This “Cables Address” is given to the trusted party you’d like to anonymously communicate with
  5. You set up a mail client like Claws to handle communications and send messages to those addresses
  6. Your email is saved into the Cables “queue” and is ready to be sent
  7. Cables then looks for the recipient’s Cables service and lets it know it has a new message
  8. Once a handshake is complete, the message is sent to the user

So that’s a userland understanding of it, but I’m glossing over a lot of important stuff like how Cables talks to another Cables service, what kind of encryption is being used, key exchange things… You know, the important stuff.

Cables messages are CMS

There are two important parts to understand about Cables: the encrypted message format, and the way that it communicates information. Cables uses an implementation of the Cryptographic Message Syntax (CMS), which is an IETF standard for secure message communications. You can read more about how CMS is supposed to work here. In short, Cables messages are the CMS format implemented with X.509-based key management. My take-away from this is “Good – it’s not designing new crypto.”
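If you haven’t played with CMS before, a generic openssl example gives the flavor of it (this is plain openssl, not Cables’ actual tooling; filenames are placeholders):

    # Encrypt a message to a recipient's X.509 certificate,
    # then decrypt it with the matching private key
    openssl cms -encrypt -in message.txt -out message.cms recipient-cert.pem
    openssl cms -decrypt -in message.cms -inkey recipient-key.pem -out message.txt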

Communication Protocol

Although email is the most common example, Cables is not stuck on one way in which you communicate these messages – it’s transport-independent. From what I see, it could be used to securely share files with another user, as an instant message service, or to exchange information over some new communication protocol. This is why it fits nicely with Tor (and I2P) as a transport.

All communications are done via HTTP GET requests. This is a big part of understanding how this works. Cables comes with wrappers to help it feel more like a mail protocol, but all transmissions communicate over HTTP. For example, if you look at the “cable-send” command in the /home/anon/bin path, you’ll notice it’s just a bash script wrapper for the “send” command. But that’s ALSO a bash script to interpret the mail message format and save it into a file path.

HTTP Server

This HTTP service is facilitated by the appropriately named “daemon” process, which is the service shared via your .onion address. This service runs on local port 9080 and is served up on port 80 of the hidden service. So if you visit your hidden service address on that port, you receive a 200 OK response. But if you give it a URL like the ones listed below, you can actually access files on the file system; there is a one-to-one translation between the request paths and the files saved in the path /


This web service responds to every request, but only certain requests deliver content. Here are the standard ones:

  • /{Username}/
    • certs/ca.pem
    • certs/verify.pem
    • queue/{message_id}
    • rqueue/{message_id}.key
    • request/

Most of these just serve up files sitting on the file system. /request/ initiates the service requests and starts the transfer.
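So, hypothetically, fetching another user’s CA certificate looks something like this (the onion address and username are made up; torsocks just routes curl over Tor):

    # Over the hidden service (port 80), or locally against the daemon on 9080
    torsocks curl -s http://xevdbqqblahblahlt.onion/antitree/certs/ca.pem
    curl -s http://127.0.0.1:9080/antitree/certs/ca.pem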

Daemon push?

If you didn’t think so already, this is where it starts to get odd. This daemon regularly looks inside the queue folder (using inotify) for new messages to send. This is done on a random basis so that new messages are not sent out at the exact same time, each time. This is an attempt to prevent traffic fingerprinting — an attack where someone is able to sniff your traffic and, based on the traffic schedule, predict that you’re using Cables. In other words, when you go to send a secure message, what you’re doing is taking a message, encrypting it into the designated CMS specification, and plopping it in a special place on your hard drive. Then the daemon service looks in that path and decides if you’re trying to send something.

Ok Crypto

I don’t yet have a grasp on the crypto beyond understanding that it generates a certificate authority on first run which helps generate the rest of the keys used to sign, verify, and encrypt messages. It uses an implementation of Diffie-Hellman for ephemeral keys and then there’s some magic in between the rest. 🙂 My first take on this is that its weakest points are not the cryptography, but the communication protocol.

Impressions So Far

If I’m scoping this project for potential attack vectors, I’d predict that there’s going to be something exploitable in the HTTP service implementation. That’s me being a mean security guy, but I feel that’s going to be the lowest hanging fruit. This is mitigated by the fact that Tor hidden services are not something you can discover, so even if there was an attack, the attacker would need to know your hidden service, and probably your username.

Although I keep mentioning that it’s aimed at being an email replacement, there’s one major difference, which is that instead of having a dedicated SMTP server, messages are sent directly to the recipient, which means that user must have their box up and running. The default timeout for messages to fail is 7 days.

I think so far that CMS is a good way to go, but using GET/push-style HTTP for transmission might prove to be its eventual downfall. I’m not poo-pooing the project, but there are some challenging hurdles that it’s aiming to leap.

Malicious Exit Nodes: Judge Dredd or Anarchy?

Mar 29 2013 Published by under privacy,Tor

InspecTor was a .onion page that kept track of bad exit nodes on the network. And it did a pretty good job. It looked for things like:

  • SSL Stripping: Replacing HTTPS links with HTTP
  • JavaScript injection
  • iFrame injection
  • Exit nodes that have no exit policy (black holes)

Those are the easy to quantify bad properties. We can compare the results of connecting to a bad Exit Node and a good one and diff the results. These are some of the grey areas it also tries to look for:

  • Warning about similar nodes in the same netblock
  • Watch for similar named nodes spinning up hundreds of instances
  • Look at the names of the nodes and conclude that they’re bad (e.g. NSAFortMeade)

The worst-case scenario for a service like this is that, first, they’re wrong and kick off a perfectly good exit node. Second, they make users use custom routes to evade the bad nodes. Doing so means that your network traffic has a fingerprint: “he’s the guy that never uses Iranian exits”, for example.

And that’s kind of what happened with InspecTor – now celebrating the anniversary of its retirement a year ago. He went Judge Dredd on Tor and started making broad conclusions on which nodes were evil. For example, he said that NSAFortMeade is obviously an exit node owned by the NSA, presumably to catch the traffic of Americans (because they can’t do that already?). Other conclusions stated that a family of Tor nodes were from Washington DC. One of them was malicious, so the conclusion was that it was probably the Government keeping an eye on us.

Tor’s Controls

What does Tor have as a control mechanism if they do somehow come across a bad exit node? The protocol has a “bad-exit” flag so that the authorities can let Tor users know that an exit node should be avoided. That flag is set by The Tor Project admins as far as I know, and you have to be blatantly offensive to cause this to happen. Here is the _total_ list of nodes that are blocked today:

agitator agitator.towiski.de [188.40.77.107] Directory Server Guard Server
Unnamed vz14796.eurodir.ru [46.30.42.154] Exit Server Guard Server
Unnamed vz14794.eurodir.ru [46.30.42.152] Exit Server Tor 0.2.3.25 on Linux
Unnamed vz14795.eurodir.ru [46.30.42.153] Exit Server Guard Server

http://torstatus.blutmagie.de/index.php?SR=FBadExit&SO=Desc

This says that there are four bad nodes (one’s a bad directory server) on the network right now. I think most people would agree that is a bit low. You can take a look at this link for a complete list of the nodes they’ve blocked in the past. You should notice that a bad-exit flag doesn’t kick them off the network, it just tells the client to never use them as an exit. So these nodes can stay online as long as they want but they’ll never be used.

The Point

The point is not to just say everything sucks: how Tor isn’t doing a good job at monitoring exit nodes, or how InspecTor was doing too good of a job for its own good. It’s to highlight the real-world problem in Tor. Unlike the sexy theoretical attacks we like to wrap our heads around, like global adversaries correlating your traffic back to an individual IP by statistically analyzing your web history patterns, the most likely thing to happen to you is that some douche nuckle is running dsniff and ulogd. And the point is also to highlight a need for a replacement of Snakes On A Tor. You can tell by its name, it’s a bit outdated. That is something actively being worked on, but it may be a while before something reliable comes out of it.

 

How Tor does DNS: The Breaking Bad Way

Feb 22 2013 Published by under privacy,Tor

Let me start by answering the short version of the question: Tor usually performs DNS requests using the exit node’s DNS server. Because Tor is TCP-only, it can normally only handle TCP-based DNS requests. Hidden services, though, are very different and rely on Hidden Service Directory Servers that do not use DNS at all. Read on if you don’t believe me or want more information.

Here’s a reference from an old mailing list entry:

Section 6.2 of the tor-spec.txt[5] outlines the method for connecting to a specific host by name. Specifically, the Tor client creates a RELAY_BEGIN cell that includes the DNS host name. This is transported to the edge of a given circuit. The exit node at the end of the circuit does all of the heavy lifting, it performs the name resolution directly with the exit node’s system resolver. …For the purposes of DNS, it’s important to note that a node does not need to be marked as an exit in the network consensus to perform resolution services on behalf of a client. Any node that doesn’t have an exit policy of ‘reject *:*’ may be used for DNS resolution purposes. [1]

Pudding for the Proof:


Don’t believe me? Let’s test it out. If I run an exit node and then try to use it for a circuit, my DNS requests should go through it, right? I’ve spun up an exit node named “BrianCranston” and I’ll set up a client (who I’m calling Aaron Paul) to only use this box as its exit node. You can do this by adding the following to your TORRC file:
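(a sketch; the nickname matches the exit node in this example)

    # Pin all circuits to the BrianCranston exit
    ExitNodes BrianCranston
    StrictNodes 1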

And on my exit node, I do a tcpdump of all traffic on port 53. On my client I start looking for BrianCranston’s websites at briancranston.com and briancranston.org. Let’s see what it looks like:
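(The capture on the exit node is just standard tcpdump; the interface name will vary.)

    # On the exit node: watch DNS queries leaving the box
    tcpdump -n -i eth0 port 53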

[tcpdump output from the exit node showing the query for bRiancRAnsTON.org]

You’ll notice that Tor is being a little cheeky with the way it resolves DNS records: briancranston.org turns into bRiancRAnsTON.org. I don’t know if this is just a nice way to let exit node operators know which hosts are being resolved by Tor or what. The reason for the odd casing in the DNS response is actually thanks to Dan Kaminsky. Back in 2008, when exploiting DNS flaws was all the rage, one of the remediations for a DNS cache poisoning attack was to randomize the casing of a DNS request. This was called the “0x20 hack” because if you look at an ASCII table of letters, you’ll notice that the difference between the letter A and the letter a is 0x20. For example, “a” is 0x61 and “A” is 0x41. The point here is that if someone wanted to attempt to pull off a cache poisoning attack, they’d have to brute force the possible case combinations. I’m told that modern browsers have this function built in now, but back in 2008, Tor was on the bleeding edge, and it’s stayed in because it really doesn’t matter if a DNS query is 0x20’d multiple times. Thanks to Isis for pointing this out.

UDP:

Billy Mays here. Tor is a TCP-only network, so what happens when you need to use UDP services like DNS? The answer is pretty simple: it just doesn’t do it. Colin Mulliner came up with a solution to this, which was to relay UDP-based DNS requests using a tool he wrote called TTDNS. (If you’ve ever used TAILS, this is what it uses.) In short, it takes a UDP-based DNS query, converts it to TCP, sends it out over Tor, and converts the reply back to UDP once it’s been received.

Tor doesn’t natively support UDP-based DNS queries, and Tor also only does two types of DNS queries: A records and PTR records. It skips around needing to use CNAMEs by converting them to A records, but officially, those are the only two supported.

Tools:

There are a couple of other items to note related to DNS. One is that there is a built-in tool called “tor-resolve.” Guess what it does… make DNS queries over the Tor network. This is useful for command-line scripts that are trying to resolve a host.
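For example (assuming a local Tor client listening on the default SOCKS port; the IP below is just an example):

    # Forward lookup over Tor
    tor-resolve www.torproject.org
    # Reverse (PTR) lookup of an address
    tor-resolve -x 8.8.8.8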

The other is a torrc option that will open up a port to provide DNS resolution over Tor. Once enabled, you can use the local host as a DNS resolver on the port you specify. Again, this is how TAILS handles DNS resolution.
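That option is DNSPort. A minimal sketch (the port number here is arbitrary):

    # In torrc: DNSPort 5353
    # Then query the local resolver, which answers over Tor:
    dig @127.0.0.1 -p 5353 torproject.org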

What about Hidden services?

This is fine and dandy for resolving google.com, but what about a hidden service with a .onion address? Connecting to google.com goes out an exit node, but connecting to a .onion address never leaves the Tor network. In fact, Tor doesn’t use DNS to resolve .onion addresses at all. Here’s how that works.

The names generated for .onion addresses are not just random values unique to your host; they are a base32-encoded hash of the public key associated with your hidden service. When you create a hidden service, you generate a private/public key pair. This .onion address, the port it’s listening on, some other useful classifiers, and a list of “introduction points” are published to Tor hidden service directory nodes. The introduction points are locations on the Tor network where a client can initiate a connection to the hidden service.

So, following our Breaking Bad theme, if Brian (Cranston) and Aaron (Paul) wanted to exchange a secret web page that keeps track of all the meth they’ve sold, this is what the flow looks like:

  1. Brian modifies the TORRC to offer a service on an IP address and port (127.0.0.1:443); see the torrc sketch after this list
  2. Brian creates a keypair for the service and the .onion address is saved (briancranston.onion)
  3. Brian’s Tor client sends a RELAY_COMMAND_ESTABLISH_INTRO to start creating introduction points
  4. Brian’s client sends the descriptors (introduction points, port, etc.) to the Hidden Service Directory Servers
  5. Brian then sends Aaron his .onion address (briancranston.onion)
  6. Aaron’s client checks the Hidden Service Directory Server to see if the address exists
  7. Aaron’s Tor client makes a circuit to one of the introduction points
  8. Aaron connects to the introduction point and tells it about his rendezvous point.
  9. This rendezvous point is passed to Brian
  10. Brian connects to Aaron’s rendezvous point
  11. The rendezvous point lets Aaron know that Brian’s service has been forwarded at that point
  12. Aaron finally makes a connection to Brian’s service
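For step 1, the torrc entries look roughly like this (the directory path and ports are illustrative):

    # Serve a local HTTPS service as a hidden service
    HiddenServiceDir /var/lib/tor/briancranston_service/
    HiddenServicePort 443 127.0.0.1:443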

EDIT: Modified to differentiate between “rendezvous” and “introduction points” in the steps. Thanks to Isis for pointing that out.

So in short, hidden services are resolved using Hidden Service Directory Servers and the Tor client. There is currently no way (AFAIK) to manually resolve onion addresses on their own. That means, if you’re trying to connect to a hidden service using a script, you’ll have to properly tunnel the requests through Tor. That’ll be for another day.

If you need more information, check out these links:

http://archives.seul.org/or/talk/Jul-2010/msg00007.html – old mailing list message about DNS. A bit outdated but very useful

https://gitweb.torproject.org/torspec.git?a=blob_plain;hb=HEAD;f=rend-spec.txt – discusses the rendezvous protocol specification that is the basis of hidden services.

Using the Good Of Panopticlick For Evil

Jan 25 2013 Published by under OPSEC,privacy,Tor

Browser fingerprinting tactics, like the ones demonstrated in Panopticlick, have been used by marketing and website analytics types for years. It’s how they track a user’s activities across domains. Just include their piece of JavaScript at the bottom of your page and poof, you’re able to track visitors in a variety of ways.

I don’t care much about using this technology for marketing, but I do care about using this type of activity for operational security purposes. Imagine using this technique as a counter-intelligence tactic. You don’t want to prevent someone from accessing information, but you do want to know who is doing it, especially if they have ill intentions in mind. IP addresses are adorable but hardly reliable when it comes to anyone that knows how to use a proxy, so using a fingerprinting application like Panopticlick, we can see who is visiting the site no matter what their location appears to be.

Here’s a simple way of using Panopticlick’s JavaScript for your own purposes to gather fingerprint information about your browser. I’ll leave it up to you to figure out what you can do with this.

“More Worser”

Panopticlick’s information-gathering techniques are very similar to (read: the same as) BrowserSpy’s, except that Panopticlick correlates the results to a dataset. If you really wanted to do all the browser fingerprinting without any of the reporting, you can take a look at the BrowserSpy code.

I also worked on a technique years ago that attempted to verify your IP address using DNS. This was a pretty good technique, especially for third-party plugins like Flash and Java, which were inconsistent about using proxies correctly. For more information about using DNS to extract an IP address and further gather information about a user, check out HD Moore’s now-decommissioned Decloak project.

 
