Archive for the 'privacy' Category

Browser fingerprinting attack and defense with PhantomJS

May 18 2015 Published by under Censorship,Intelligence,privacy

PhantomJS is a headless browser that, when combined with Selenium, turns into a powerful, scriptable tool for scraping or automated web testing, even in JavaScript-heavy applications. We’ve known for a long time that browsers are being fingerprinted and used to identify individual visits to a website. This technology is a common feature of web analytics tools: they want to know as much as possible about their users, so why not collect identifying information?

Attack (or active defense)

The scenario here is that you, as a privacy-conscious Internet user, have taken various steps to hide your IP, maybe using Tor or a VPN service, and you’ve changed the default User-Agent string in your browser. But by using that same browser to visit similar pages across different IPs, the web site can track your activities even when your IP changes. Say, for instance, you go on Reddit and you have the same 5 subreddits that you look at. You switch IPs all the time, but because your browser is so individualistic, they can track your visits across multiple sessions.

Lots of interesting companies are jumping on this, not only for web analytics but from the security point of view. The now Juniper-owned company, Mykonos, built its business around this idea. It would fingerprint individual users, and if one of them launched an attack, they’d be able to track them across multiple sessions or IPs by fingerprinting those browsers. They call this an active defense tactic because they are actively collecting information about you and defending the web application.

The best proof-of-concepts I know of are BrowserSpy.dk and the EFF’s Panopticlick project. These sites show what kind of passive information can be collected from your browser and used to connect you to an individual browsing session.

Defense

The defense to these fingerprinting attacks is, in a lot of cases, to disable JavaScript. But as the Tor Project accepts, disabling JavaScript is itself a fingerprintable property. The Tor Browser has been working on this problem for years; it’s a difficult game. If you look through BrowserSpy’s library of examples, there are common and tough-to-fight POCs. One is to read the fonts installed on your computer. If you’ve ever installed that custom cute font, it suddenly makes your browser exponentially more identifiable. One of my favorites is the screen resolution. This doesn’t refer to the window size, which is a separate property; it means the resolution of your monitor or screen. Unfortunately, in a standard browser there’s no way to control this beyond running your system at a different resolution. You might say this isn’t that big of a deal because you’re running at 1920×1080, but think about mobile devices, which have model-specific resolutions that could tell an attacker the exact make and model of your phone.

PhantomJS

There’s no fix. But like all fix-less things, it’s fun to at least try. I’ve used PhantomJS in the past for automating interactions with web applications. You can write scripts for Selenium to automate all kinds of stuff like visiting a web page, clicking a button, and taking a screenshot of the result. Security Bods (as they’re calling them now) have been using it for years.

Creating a simple web page screen scraper is as easy as a few lines of Python. This ends up being pretty nice, especially when your friends send you all kinds of malicious stuff to see if you’ll click it. 🙂 This is very simple in Selenium, but I wanted to attempt to not look so script-y. The example below is how you would change the user agent string using Selenium:
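A minimal sketch (the UA string is just an example; any common browser string will do):

```python
def make_phantom(user_agent=None):
    """Return a PhantomJS webdriver that reports the given user agent."""
    from selenium import webdriver  # pip install selenium

    if user_agent is None:
        # Example value only; pick any common browser string to blend in.
        user_agent = ("Mozilla/5.0 (Windows NT 6.1; rv:31.0) "
                      "Gecko/20100101 Firefox/31.0")
    # PhantomJS exposes its page settings as "phantomjs.page.settings.*"
    # desired capabilities; userAgent is one of them.
    caps = dict(webdriver.DesiredCapabilities.PHANTOMJS)
    caps["phantomjs.page.settings.userAgent"] = user_agent
    return webdriver.PhantomJS(desired_capabilities=caps)

# Usage (needs the phantomjs binary on your PATH):
#   driver = make_phantom()
#   driver.get("https://www.reddit.com/")
#   driver.save_screenshot("result.png")
#   driver.quit()
```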

Playing around with this started bringing up questions like: since PhantomJS doesn’t in fact have a screen, what would my screen resolution be? The answer is 1024×768.
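You can ask the browser yourself: `execute_script` returns the value of a JavaScript expression, so a one-line helper (hypothetical name) does it:

```python
def screen_resolution(driver):
    # screen.width/height describe the monitor the browser thinks it is
    # attached to -- not the window size, which is a separate property.
    return driver.execute_script("return screen.width + 'x' + screen.height;")

# Against a stock PhantomJS driver this comes back as "1024x768".
```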

This arbitrarily assigned value is pretty great. That means we can replace this value with something else. It should be noted that even though you set this value to something different, it doesn’t affect the size of your window. To defend against being “Actively Defended” against, you can change the PhantomJS code and recompile.

This will pick from a few extra screen resolutions every time a new webdriver browser is created. You can test it back at BrowserSpy.

Old:

New:

And so on…
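If you’d rather not patch and recompile, a best-effort alternative is to shadow the screen object with injected JavaScript. This only fools scripts that read `screen.*` after the injection runs, so it’s a partial defense at best:

```python
def spoof_screen(driver, width=1920, height=1080):
    """Redefine window.screen's getters with injected JavaScript."""
    # Fingerprinting code that ran before this injection has already
    # seen the real values; anything that reads screen.* afterwards
    # gets the spoofed ones.
    js = (
        "Object.defineProperty(window.screen, 'width',"
        " {get: function() { return %d; }});"
        "Object.defineProperty(window.screen, 'height',"
        " {get: function() { return %d; }});"
    ) % (width, height)
    driver.execute_script(js)
```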

And we’ve now spoofed a single fingerprintable value; only another few thousand to go. In the end, is this better than scripting something like Firefox? Unknown. But the offer still stands: if someone at Juniper wants to provide me with a demo, I’ll provide free feedback on how well it stands up to edge cases like me.


Updates to Raspberry Bridge

Sep 21 2014 Published by under privacy,Raspberry Pi,Tor

I’ve updated the Raspberry Bridge build to 1.5.1Beta to update a few things and address a couple issues. The main changes are:

  • updated Tor to latest stable release
  • updated obfsproxy
  • updated OS including some security patches

Download Torrent: http://rbb.antitree.com/torrents/RBBlatest.torrent

More info: http://rbb.antitree.com/

XKeyScore

Jul 05 2014 Published by under Intelligence,OSINT,privacy,Tor

If you’re like me, you’re probably getting inundated with posts about how the latest revelations show that the NSA specifically tracks Tor users and the privacy conscious. I wanted to provide some perspective on how XKeyscore fits into an overall surveillance system before jumping out of our collective pants. As I’ve written about before, the Intelligence Lifecycle (something that the NSA and other Five Eyes know all too well) consists more-or-less of these key phases: Identify, Collect, Process, Analyze, and Disseminate. Some of us are a bit up-in-arms about Tor users specifically being targeted by the NSA, and while that’s a pretty safe conclusion, I don’t think it takes into account what the full system is really doing.

XKeyscore is part of the “Collect” and “Process” phases of the life cycle, where in this case they are collecting your habits and correlating them to an IP address. Greenwald and others will show evidence that the NSA’s goal is, as they say, to “collect it all,” but this isn’t meant literally. It’s true there is a broad collection net, but the NSA is not collecting everything about you. At least not yet. As of right now, the NSA’s collection initiatives lean more towards collecting quantifiable properties which have the highest reward and the lowest storage cost. That’s not as sexy a phrase to repeat throughout your book tour, though.


The conclusion may be (and it’s an obvious one) that what you’re seeing of XKeyscore is a tiny fraction of the overall picture. Yes, they are paying attention to people that are privacy conscious; yes, they are targeting Tor users; yes, they are paying attention to people that visit the Tor web page. But as the name implies, this may contribute to an overall “score” used to conclude whether you are a high-value target or not. What other online habits do you have that they may be paying attention to? Do you have a Reddit account subscribed to /r/anarchy or some other subreddit they would consider extremist? Tor users aren’t that special, but this section of the code is a great way to get people nervous.

As someone who has worked on a collection and analysis engine at one time, I can say that one of the first steps during the collection process is tagging useful information and automatically removing useless information. In this case, tagging Tor users and dropping cat videos. It appears that XKeyscore is using a whitelist of properties matching what they consider suspicious activity, which would then be passed on to the “Analysis” phase to help make automated conclusions. The analysis phase is where you get to make predictive conclusions about the properties you have collected so far.

[Figure: XKeyscore’s place in the intelligence lifecycle]

Take the fact that your IP address uses Tor. Add it to a list of extremist subreddits you visit. Multiply it by the number of times you searched for the phrase “how to make a bomb” and now you’re thinking of what the analytics engine of the NSA would look like.

My point is this: if you were the NSA, why wouldn’t you target the privacy aware? People doing “suspicious” (for some definition of the word) activities are going to use the same tools that a “normal” (some other definition) person would. We don’t have a good understanding of what happens to the information after it’s been gathered. We know that XKeyscore will log IPs that have visited sites of interest or performed searches for “extremist” things like privacy tools. We know that there have been cases where someone’s online activities have been used in court cases. But we can’t connect the dots. XKeyscore is just the collection/processing phase, and the analysis phase is what’s more important. I think the people of the Tor Project have a pretty decent perspective on this. Their responses have generally just re-iterated that this is exactly the threat model they’ve always planned for and they will keep working on ways to improve and protect their users.


JTRIG and Private Intel Agencies

Feb 25 2014 Published by under Intelligence,privacy

Last year, I was bitten by the idea of intel as a research project. I presented at BSidesDetroit on the topic of corporate espionage and the contrast between HUMINT and TECHINT. My Defcon Skytalk was titled “Bringing Intelligence Back To The Hacker Community,” and I did a GRRCon talk on the capabilities and structure of a normal private intelligence campaign. The research had a side-effect of replacing a generally apathetic outlook on the topic with a more specific abhorrence toward the intelligence community as a whole, specifically private intelligence groups working under the auspices of the U.S. government and other nation states.

Of course, this research project came at an opportune time, with the recent NSA revelations substantiating many of the claims, and the recent articles about the JTRIG program have really hit home.

https://firstlook.org/theintercept/2014/02/24/jtrig-manipulation/

https://firstlook.org/theintercept/document/2014/02/24/art-deception-training-new-generation-online-covert-operations/

http://investigations.nbcnews.com/_news/2014/01/27/22469304-snowden-docs-reveal-british-spies-snooped-on-youtube-and-facebook?lite

http://www.nbcnews.com/news/investigations/war-anonymous-british-spies-attacked-hackers-snowden-docs-show-n21361

http://www.nbcnews.com/news/investigations/snowden-docs-british-spies-used-sex-dirty-tricks-n23091

Each of these articles, released by Glenn Greenwald and NBC News, references JTRIG, a program designed to manipulate the hearts and minds of Internet users, targeting individuals, organizations, and in some cases just general ideas, with the goal of destroying them. Programs like SQUEAKY DOLPHIN, for example, were designed to analyze the social networking patterns of all users, be it YouTube or Blogger or Facebook. We can all agree that the use-case of this capability has some positive implications, like infiltrating Al-Qaeda training forums or the like.

Not even mad

There’s definitely part of me that can sit back and just say “I’m not even mad. That’s amazing” from a purely technological standpoint. Part of my presentations on the subject of OSINT came to the conclusion that small-time intel groups pale in comparison to well-funded private organizations like HBGary and Palantir. I talked about how HBGary would pull stunts on forums and IRC, specifically targeting ideas and individuals that they were hired to attack (protesters for a large company, in one example). JTRIG and the other programs above are examples that this is not just HBGary or Palantir but the entire intelligence community.

I even found myself falling down the rationalization stairs, convincing myself that this is what’s expected. They’re the U.S., and they’ve realized that the Internet is powerful, and they want to use it as a weapon. In fact, the U.S. recognized a gap in its capabilities to collect information on the Internet in a paper from 1998, which first defined the problem of the “Intelligence Gap”: the increasing ratio of the number of collection sources to actionable intel. And you can see my disillusionment in my presentations. The Defcon Skytalks version of “Bringing Intelligence Back To The Hacker Community” was generally a fun, optimistic look at intelligence capabilities and even a structure for collection and analysis, whereas the GRRCon talk generally had a conclusion of “Yeah that was nice, but you are all fucked in comparison to private intel groups.”

*Shrug*; infosec.

The most depressing result that all of this new information has had on the public and the Information Security community is… nothing. Either the infosec professionals I’ve talked to lately have withdrawn themselves from the situation out of hopelessness, they’ve generally become jaded, or they actually work in the intelligence community. I’ve heard the tongue-in-cheek comments of “Well, it’s good for us,” in that it’s our job to now provide security solutions against the new, reasonable threat of a global adversary. I know people who are now signing up to become military intelligence operatives, seeing the career path of working for the government and then leaving for a private-sector, high-paying intelligence career. People have even admitted to me that the government has called them up and asked to snatch their ideas, saying it would pay them millions of dollars. And how can you blame them? Morality, ethics, and not-being-a-dick-ism are difficult to maintain when faced with piles of money. Maybe this is where Infosec and Hacker will further fracture off. Maybe I’m just being naive.

Cables Communication: An anonymous, secure, message exchange

Nov 17 2013 Published by under Crypto,privacy,Tor

Last night at Interlock Rochester, someone did a lightning talk on Liberte Linux, one of those anonymity Linux distros similar to TAILS and the like. Everything seemed pretty standard for an anonymity machine: all traffic tunneled over Tor using iptables, only certain software able to be installed, full disk encryption, memory wiping. But one thing stuck out: this service called “Cables.”

Cables Communication:

Cables (or Cable, I really don’t know) is designed by a person who goes by the name Maxim Kammerer, who is also the creator of Liberte Linux. Its purpose is to let user A communicate with user B, in an E-mail-like way, but with anonymity and security in mind. Before this, Tor users would use services like TorMail, until that was taken down. I don’t know if that was the inspiration for this new service, but it seems like it’s an attempt to fill that hole.


Overview:

Here’s a very simplified functional overview:

  1. The user generates a Cables certificate, which is an 8192-bit RSA X.509 certificate
  2. The fingerprint of that certificate is now that user’s username (like “antitree” is the username of “[email protected]”)
  3. A Tor hidden service is created, and that is the domain of the address you’re sending to (e.g. xevdbqqblahblahlt.onion)
  4. This “Cables Address” is given to the trusted party you’d like to anonymously communicate with
  5. You set up a mail client like Claws to handle sending messages to these addresses
  6. Your email is saved into the Cables “queue” and is ready to be sent
  7. Cables then looks for the recipient’s Cables service and lets it know it has a new message
  8. Once a handshake is complete, the message is sent to the user

So that’s a userland understanding of it, but I’m glossing over a lot of important stuff like how Cables talks to another Cables service, what kind of encryption is being used, key exchange things… You know, the important stuff.

Cables messages are CMS

There are two important parts to understand about Cables: the encrypted message format, and the way that it communicates information. Cables uses an implementation of the Cryptographic Message Syntax (CMS), which is an IETF standard for secure message communications. You can read more about how CMS is supposed to work here. In short, Cables messages are the CMS format implemented with X.509-based key management. My take-away from this is “Good – it’s not designing new crypto.”

Communication Protocol

Although email is the most common example, Cables is not stuck on one way in which you communicate these messages – it’s transport independent. From what I see, it could be used to securely share files with another user, as an instant message service, or to exchange information over some new communication protocol. This is why it fits nicely using Tor (or I2P) as a transport.

All communications are done via HTTP GET requests. This is a big part of understanding how this works. Cables comes with wrappers to help it feel more like a mail protocol, but all transmissions communicate over HTTP. For example, if you look at the “cable-send” command in the /home/anon/bin path, you’ll notice it’s just a bash script wrapper for the “send” command. But that’s ALSO a bash script, one that interprets the mail message format and saves it into a file path.

HTTP Server

This HTTP service is facilitated by the appropriately named “daemon” process, which is the service shared via your .onion address. This service runs on local port 9080 and is served up on port 80 of the hidden service. So if you visit your hidden service address on that port, you receive a 200 OK response. But if you give it a URL like the one below, you can actually access files on the file system. There is a 1-to-1 translation between the files saved in the path /


This web service responds to every request, but only certain requests deliver content. Here are the standard ones:

  • /{Username}/
    • certs/ca.pem
    • certs/verify.pem
    • queue/{message_id}
    • rqueue/{message_id}.key
    • request/

Most of these just serve up files sitting on the file system. /request/ initiates the service requests and starts the transfer.

Daemon push?

If you didn’t think so already, this is where it starts to get odd. This daemon regularly looks inside the queue folder (using inotify) for new messages to send. This is done on a random basis so that new messages are not sent out at the exact same time each time. This is an attempt to prevent traffic fingerprinting — an attack where someone is able to sniff your traffic and, based on the traffic schedule, predict that you’re using Cables. In other words, when you go to send a secure message, what you’re doing is taking a message, encrypting it into the designated CMS specification, and plopping it in a special place on your hard drive. Then the daemon service looks in that path and decides if you’re trying to send something.

Ok Crypto

I don’t yet have a grasp on the crypto beyond understanding that it generates a certificate authority on first run, which helps generate the rest of the keys used to sign, verify, and encrypt messages. It uses an implementation of Diffie-Hellman for ephemeral keys, and then there’s some magic in between the rest. 🙂 My first take on this is that its weakest points are not the cryptography but the communication protocol.

Impressions So Far

If I’m scoping this project for potential attack vectors, I’d predict that there’s going to be something exploitable in the HTTP service implementation. That’s me being a mean security guy, but I feel that’s going to have the lowest hanging fruit. This is mitigated by the fact that Tor hidden services are not something you can discover, so even if there were an attack, the attacker would need to know your hidden service, and probably your username.

Although I keep mentioning that it’s aimed at being an email replacement, there’s one major difference: instead of having a dedicated SMTP server, messages are sent directly to the recipient, which means that user must have their box up and running. The default timeout for messages to fail is 7 days.

I think so far that CMS is a good way to go, but using GET/push-style HTTP for transmission might prove to be its eventual downfall. I’m not poo-pooing the project, but there are some challenging hurdles that it’s aiming to leap.

Malicious Exit Nodes: Judge Dredd or Anarchy?

Mar 29 2013 Published by under privacy,Tor

InspecTor was a .onion page that kept track of bad exit nodes on the network. And it did a pretty good job. It looked for things like:

  • SSL Stripping: Replacing HTTPS links with HTTP
  • JavaScript injection
  • iFrame injection
  • Exit nodes that have no exit policy (black holes)

Those are the easy-to-quantify bad properties. We can compare the results of connecting through a bad exit node and a good one and diff the results. These are some of the grey areas it also tried to look for:

  • Warning about similar nodes in the same netblock
  • Watch for similar named nodes spinning up hundreds of instances
  • Look at the names of the nodes and conclude that they’re bad (e.g. NSAFortMeade)

The worst case scenario for a service like this is that, first, they’re wrong and kick off a perfectly good exit node. Second, they make users use custom routes to evade the bad nodes. Doing so means that your network traffic has a fingerprint. “He’s the guy that never uses Iranian exits,” for example.

And that’s kind of what happened with InspecTor – now celebrating the anniversary of its retirement a year ago. He went Judge Dredd on Tor and started making broad conclusions about which nodes were evil. For example, he said that NSAFortMeade is obviously an exit node owned by the NSA, presumably to catch the traffic of Americans (because they can’t do that already?). Other conclusions stated that a family of Tor nodes were from Washington DC. One of them was malicious, so the conclusion was that it was probably the Government keeping an eye on us.

Tor’s Controls

What does Tor have as a control mechanism if they do somehow come across a bad exit node? The protocol has a “bad-exit” flag so that the authorities can let Tor users know that an exit node should be avoided. That flag is set by The Tor Project admins as far as I know, and you have to be blatantly offensive to cause this to happen. Here is the _total_ list of nodes that are blocked today:

agitator agitator.towiski.de [188.40.77.107] Directory Server Guard Server
Unnamed vz14796.eurodir.ru [46.30.42.154] Exit Server Guard Server
Unnamed vz14794.eurodir.ru [46.30.42.152] Exit Server Tor 0.2.3.25 on Linux
Unnamed vz14795.eurodir.ru [46.30.42.153] Exit Server Guard Server

http://torstatus.blutmagie.de/index.php?SR=FBadExit&SO=Desc

This says that there are four bad nodes (one’s a bad directory server) on the network right now. I think most people would agree that is a bit low. You can take a look at this link for a complete list of the nodes they’ve blocked in the past. You should notice that a bad-exit flag doesn’t kick them off the network; it just tells the client to never use them as an exit. So these nodes can stay online as long as they want, but they’ll never be used.

The Point

The point is not to just say everything sucks: how Tor isn’t doing a good job at monitoring for exit nodes, or how InspecTor was doing too good of a job for its own good. It’s to highlight the real-world problem in Tor. Unlike the sexy theoretical attacks we like to wrap our heads around, like global adversaries correlating your traffic back to an individual IP by statistically analyzing your web history patterns, the most likely thing to happen to you is that some douche nuckle is running dsniff and ulogd. And the point is also to highlight a need for a replacement for Snakes On A Tor. You can tell by its name that it’s a bit outdated. That is something actively being worked on, but it may be a while before something reliable comes out of it.


How Tor does DNS: The Breaking Bad Way

Feb 22 2013 Published by under privacy,Tor

Let me start by answering the short version of the question: Tor usually performs DNS requests using the exit node’s DNS server. Because Tor is TCP-only, it can normally only handle TCP-based DNS requests. Hidden services, though, are very different and rely on hidden service directory servers that do not use DNS at all. Read on if you don’t believe me or want more information.

Here’s a reference from an old mailing list entry:

Section 6.2 of the tor-spec.txt[5] outlines the method for connecting to a specific host by name. Specifically, the Tor client creates a RELAY_BEGIN cell that includes the DNS host name. This is transported to the edge of a given circuit. The exit node at the end of the circuit does all of the heavy lifting, it performs the name resolution directly with the exit node’s system resolver. …For the purposes of DNS, it’s important to note that a node does not need to be marked as an exit in the network consensus to perform resolution services on behalf of a client. Any node that doesn’t have an exit policy of ‘reject *:*’ may be used for DNS resolution purposes. [1]

Pudding for the Proof:


Don’t believe me? Let’s test it out. If I run an exit node and then try to use it for a circuit, my DNS requests should go through it, right? I’ve spun up an exit node named “BrianCranston” and I’ll set up a client (who I’m calling Aaron Paul) to only use this box as its exit node. You can do this by adding the following to your TORRC file:
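Something like this pins every circuit to that one exit (`StrictNodes` tells Tor to fail rather than fall back to another exit; older Tor versions called it `StrictExitNodes`):

```
ExitNodes BrianCranston
StrictNodes 1
```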

And on my exit node, I do a tcpdump of all traffic on port 53. On my client, I start looking for BrianCranston’s websites at briancranston.com and briancranston.org. Let’s see what it looks like:

[Screenshot: tcpdump on the exit node showing the mixed-case DNS queries]

You’ll notice that Tor is being a little cheeky with the way it resolves DNS records: briancranston.org turns into bRiancRAnsTON.org. I didn’t know if this was just a nice way to let exit node operators know which hosts are being resolved by Tor, but the reason for the odd casing is actually thanks to Dan Kaminsky. Back in 2008, when exploiting DNS flaws was all the rage, one of the remediations for a DNS cache poisoning attack was to randomize the casing of a DNS request. This was called the “0x20 hack” because if you look at an ASCII table, you’ll notice that the difference between the letter A and the letter a is 0x20. For example, “a” is 0x61 and “A” is 0x41. The point here is that if someone wanted to attempt to pull off a cache poisoning attack, they’d have to brute force the possible case combinations. I’m told that modern browsers have this function built in now, but back in 2008 Tor was on the bleeding edge, and it’s stayed in because it really doesn’t matter if a DNS query is 0x20’d multiple times. Thanks to Isis for pointing this out.
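The bit-flip is easy to see in code. A rough sketch of the query-side randomization (illustrative, not Tor’s actual implementation):

```python
import random

def encode_0x20(name):
    # Each ASCII letter's case is one bit: 'A' is 0x41, 'a' is 0x61, so
    # XOR-ing with 0x20 flips between them. Randomize it per letter;
    # digits and dots pass through untouched.
    return "".join(
        chr(ord(c) ^ 0x20) if c.isalpha() and random.random() < 0.5 else c
        for c in name
    )

# A resolver that echoes the exact casing back proves it saw our query;
# an attacker racing to forge a reply has to guess every letter's case.
```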

UDP:

Billy Mays here. Tor is a TCP-only network, so what happens when you need to use UDP services like DNS? The answer is pretty simple: it just doesn’t do it. Colin Mulliner came up with a solution to this, which was to relay UDP-based DNS requests using a tool he wrote called TTDNS. (If you’ve ever used TAILS, this is what it uses.) In short, it takes a UDP-based DNS query, converts it to TCP, sends it out over Tor, and converts the reply back to UDP once it’s been received.
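The framing trick a relay like that depends on is tiny: DNS over TCP is the same wire format as over UDP, just prefixed with a two-byte message length (RFC 1035, section 4.2.2). Roughly:

```python
import struct

def dns_udp_to_tcp(udp_payload):
    # TCP DNS messages carry their length as a 16-bit big-endian
    # prefix in front of the ordinary DNS message bytes.
    return struct.pack("!H", len(udp_payload)) + udp_payload

def dns_tcp_to_udp(tcp_payload):
    # Strip the length prefix off a reply before handing it back over UDP.
    (length,) = struct.unpack("!H", tcp_payload[:2])
    return tcp_payload[2:2 + length]
```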

Tor doesn’t natively support UDP-based DNS queries, and Tor also only does two types of DNS queries: A records and PTR records. It skips around needing CNAME by converting those to A records, but officially, those are the only two supported.

Tools:

There are a couple of other items to note related to DNS. One is that there is a built-in tool called “tor-resolve.” Guess what it does… it makes DNS queries over the Tor network. This is useful for command-line scripts that are trying to resolve a host.

The other is a TORRC option that will open up a port to provide DNS resolution over Tor. Once enabled, you can use the local host as a DNS resolver on the port you specify. Again, this is how TAILS handles DNS resolution.
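Assuming a current Tor, the TORRC lines look something like this (5353 is an arbitrary local port; AutomapHostsOnResolve is optional but handy for .onion names):

```
DNSPort 5353
AutomapHostsOnResolve 1
```

Point any stub resolver at 127.0.0.1 on that port and the lookups go out over Tor.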

What about Hidden services?

This is fine and dandy for resolving google.com, but what about a hidden service with a .onion address? Connecting to google.com goes out an exit node, but connecting to a .onion address never leaves the Tor network. In fact, Tor doesn’t use DNS to resolve .onion addresses at all. Here’s how that works.

The names generated for .onion addresses are not just random values unique to your host; they are a base32-encoded version of the public key associated with your hidden service. When you create a hidden service, you generate a private/public key pair. This .onion address, the port it’s listening on, some other useful classifiers, and a list of “introduction points” are published to Tor hidden service directory nodes. The introduction points are locations on the Tor network where a client can initiate a connection to the hidden service.
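Concretely, for the current hidden service design, the address is the base32 encoding of the first 80 bits of a SHA-1 over the service’s DER-encoded public key. A sketch:

```python
import base64
import hashlib

def onion_address(pubkey_der):
    # .onion name = base32(first 10 bytes of SHA-1(DER public key)),
    # lowercased, plus the ".onion" suffix. 10 bytes = 80 bits, which
    # base32 encodes to exactly the familiar 16 characters.
    digest = hashlib.sha1(pubkey_der).digest()
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"
```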

So, following our Breaking Bad theme, if Brian (Cranston) and Aaron (Paul) wanted to exchange a secret web page that keeps track of all the meth they’ve sold, this is what the flow looks like:

  1. Brian modifies the TORRC to offer a service on an IP address and port (127.0.0.1:443)
  2. Brian creates a keypair for the service and the .onion address is saved (briancranston.onion)
  3. Brian’s Tor client sends a RELAY_COMMAND_ESTABLISH_INTRO to start creating introduction points
  4. Brian’s client sends the descriptors (introduction points, port, etc) to the Hidden Service Directory Servers
  5. Brian then sends Aaron his .onion address (briancranston.onion)
  6. Aaron’s client checks the Hidden Service Directory Server to see if the address exists
  7. Aaron’s Tor client makes a circuit to one of the introduction points
  8. Aaron connects to the introduction point and tells it about his rendezvous point
  9. This rendezvous point is passed to Brian
  10. Brian connects to Aaron’s rendezvous point
  11. The rendezvous point lets Aaron know that Brian’s service has been forwarded at that point
  12. Aaron finally makes a connection to Brian’s service

EDIT: Modified to differentiate between “rendezvous points” and “introduction points” in the steps. Thanks to Isis for pointing that out.

So in short, hidden services are resolved using hidden service directory servers and the Tor client. There currently is no way (AFAIK) to manually just resolve onion addresses. That means, if you’re trying to connect to a hidden service using a script, you’ll have to properly tunnel the requests through Tor. That’ll be for another day.
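A sketch of that tunneling with the SocksiPy/PySocks module (the onion name is the made-up one from the steps above, and Tor’s SOCKS port is assumed at its default of 9050):

```python
def onion_socket(onion_host, port=80, tor_port=9050):
    """Open a TCP socket to a hidden service through Tor's SOCKS proxy."""
    import socks  # SocksiPy / PySocks: pip install PySocks

    s = socks.socksocket()
    # rdns=True makes the SOCKS proxy (Tor) resolve the name itself, so
    # the .onion address never touches your local resolver.
    s.set_proxy(socks.SOCKS5, "127.0.0.1", tor_port, rdns=True)
    s.connect((onion_host, port))
    return s

# Usage (with a Tor client running locally):
#   s = onion_socket("briancranston.onion")
#   s.sendall(b"GET / HTTP/1.0\r\n\r\n")
#   print(s.recv(4096))
```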

If you need more information, check out these links:

http://archives.seul.org/or/talk/Jul-2010/msg00007.html – an old mailing list message about DNS. A bit outdated but very useful

https://gitweb.torproject.org/torspec.git?a=blob_plain;hb=HEAD;f=rend-spec.txt – discusses the rendezvous protocol specification that is the basis of hidden services.

Instastalk: Using the Instagram API to track users locations

Jan 27 2013 Published by under lulz,OSINT,privacy,Python

Quick blog post — thought it would be funny to make an Instagram script that downloads all the locations attached to a user account. You can find the details on how to use it on Github. Pretty straightforward:

You’ll need to sign up for the Instagram API, which you can do here: http://instagram.com/developer/

And you can find your friend’s InstagramID using this handy tool here: http://jelled.com/instagram/lookup-user-id

Download the code from Github here: https://github.com/antitree/instastalk
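Under the hood it boils down to one call to the v1 media endpoint, whose items may carry a location dict. A simplified sketch, not the actual instastalk code:

```python
import json
from urllib.request import urlopen

MEDIA_URL = ("https://api.instagram.com/v1/users/%s/media/recent/"
             "?access_token=%s")

def extract_locations(payload):
    # Keep only the media items that were tagged with a location.
    return [m["location"] for m in payload.get("data", [])
            if m.get("location")]

def recent_locations(user_id, access_token):
    payload = json.load(urlopen(MEDIA_URL % (user_id, access_token)))
    return extract_locations(payload)
```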

Here’s me keeping track of Berticus:

[Screenshot: instastalk output for Berticus]


Using the Good Of Panopticlick For Evil

Jan 25 2013 Published by under OPSEC,privacy,Tor

Browser fingerprinting tactics, like the ones demonstrated in Panopticlick, have been used by marketing and website analytics types for years. It’s how they track a user’s activities across domains. Just include their piece of JavaScript at the bottom of your page and poof, you’re able to track visitors in a variety of ways.

I don’t care much about using this technology for marketing, but I do care about using this type of activity for operational security purposes. Imagine using this technique as a counter-intelligence tactic. You don’t want to prevent someone from accessing information, but you do want to know who is doing it, especially if they have ill intentions in mind. IP addresses are adorable but hardly reliable against anyone who knows how to use a proxy, so by using a fingerprinting application like Panopticlick, we can see who is visiting the site no matter where their location appears to be.

Here’s a simple way of using Panopticlick’s JavaScript for your own purposes to gather fingerprint information about visiting browsers. I’ll leave it up to you to figure out what you can do with this.
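One way to experiment with this idea from the scraping side: drive a browser with Selenium and pull the same passive attributes Panopticlick reads, then reduce them to a single identifier. A minimal sketch; the JavaScript attribute list is my own illustrative subset, not Panopticlick’s actual code, and the PhantomJS usage at the bottom is shown but not executed here:

```python
import hashlib
import json

# JavaScript that collects a few of the same passive attributes Panopticlick
# reads. Run it in any Selenium-driven browser via execute_script().
FINGERPRINT_JS = """
return {
    userAgent: navigator.userAgent,
    language: navigator.language,
    platform: navigator.platform,
    screen: screen.width + 'x' + screen.height + 'x' + screen.colorDepth,
    timezoneOffset: new Date().getTimezoneOffset(),
    plugins: Array.prototype.map.call(navigator.plugins || [],
                                      function (p) { return p.name; })
};
"""

def fingerprint_hash(attributes):
    """Reduce the collected attributes to a stable identifier."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()

# Hypothetical usage with Selenium + PhantomJS (not run here):
# from selenium import webdriver
# driver = webdriver.PhantomJS()
# driver.get("http://example.com/")
# print(fingerprint_hash(driver.execute_script(FINGERPRINT_JS)))
```

Hashing a canonical JSON dump means two visits with identical attributes map to the same identifier, which is all a tracker needs to link sessions across IP changes.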

“More Worser”

Panopticlick’s information gathering techniques are very similar (read: the same) to BrowserSpy’s, except that they correlate the results against a dataset. If you really wanted to do all the browser fingerprinting without any of the reporting, you can take a look at the BrowserSpy code.

Years ago I also worked on a technique that attempts to verify your IP address using DNS. This was a pretty good technique, especially for third-party plugins like Flash and Java, which were inconsistent when it came to using proxies correctly. For more information about using DNS to extract an IP address and further gather information about a user, check out HD Moore’s now-decommissioned Decloak project.
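The core of that DNS trick is easy to sketch, though the interesting part happens in your DNS server’s logs. A rough sketch of the hostname side, assuming you control a domain you can log authoritative queries for (`dnscanary.example.com` here is hypothetical); this is my own illustration of the idea, not Decloak’s code:

```python
import uuid

# Hypothetical zone whose authoritative DNS query logs you control.
CANARY_DOMAIN = "dnscanary.example.com"

def make_canary_hostname(visitor_id):
    """Generate a unique hostname tied to one visitor.

    Embed a resource (an image, or a Flash/Java applet as Decloak did) that
    forces the client to resolve this name. A plugin that ignores the
    browser's proxy settings resolves it directly, exposing the visitor's
    real resolver/IP in the DNS logs.
    """
    token = uuid.uuid4().hex
    return "{}.{}.{}".format(token, visitor_id, CANARY_DOMAIN), token

def match_query_to_visitor(queried_name):
    """Given a hostname seen in the DNS logs, recover the visitor id."""
    parts = queried_name.split(".")
    if queried_name.endswith(CANARY_DOMAIN) and len(parts) >= 5:
        return parts[1]
    return None
```

Because each visitor gets a unique name, a single query in the logs ties a real resolver IP back to one browsing session, no matter what the web server saw.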


Panopticlick, Tor, Hello Again

Jan 22 2013 Published by under OSINT,privacy,Tor

Panopticlick is a project run by the EFF that highlights the privacy concerns related to being able to fingerprint your browser. It suddenly popped back up in /r/netsec like it was a new project. The site works by showing you the results of a full-fledged browser fingerprinting tool, letting you compare how similar or dissimilar you are to other visitors. This is done in a variety of ways: by looking at the user agent, screen resolution, fonts installed, plugins installed, versions of those plugins, and much more. You can read the Panopticlick whitepaper if you want to understand more about how it works.

Hipster Tor: Privacy before it was cool

The issue was discussed years ago at Defcon XV, where I first got interested in the project. They identified browser fingerprinting as a concern that needed to be addressed in Tor. Their answer at the time was to use something they had just released called “TorButton.” TorButton, back in the day, was a Firefox plugin that, when enabled, changed all the settings in your Firefox browser to stop leaking private information like the kinds Panopticlick checks.

TorButton’s developer (Mike Perry) soon realized that this was a losing battle with Firefox, which was trying to compete with sexy new browsers by adding all kinds of automatic, privacy-blind features like live bookmarks. These would constantly query your bookmarks for updated content with no reliable way of being forwarded through a SOCKS proxy and anonymized, making them a major concern. This led to the advent of the Tor Browser Bundle, which is a forked version of Firefox compiled specifically with privacy in mind, and the recommended way of using Tor today.

Panopticlick v. Tor

Back to Panopticlick: Tor’s Browser bundle (along with integrated TorButton) tries to defend you against this type of attack. It changes the user agent to the most common one at the time, disables JavaScript completely, spoofs your timezone, and more. Take a look at the comparison between the Tor Browser bundle, Chrome, and Chrome for Android:

Browser Characteristic         Tor        Windows 7 Chrome   Android Chrome
User Agent                     78.88      1489.11            36249.45
HTTP_ACCEPT Headers            31.66      12.76              12.76
Browser Plugin Details         25.89      2646146            25.89
Time Zone                      21.63      11.04              11.04
Screen Size and Color Depth    46.78      46.78              7714.9
System Fonts                   8.5        2646146            8.5
Are Cookies Enabled?           1.34       1.34               1.34
Limited supercookie test       8.91       2                  2

Numbers are based on “1 in x visitors has the same value as your browser”; higher means more unique.
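To get a feel for what those “1 in x” numbers mean, you can convert each one into bits of identifying information: bits = log2(x). A quick sketch using a few values from the table above; note that bits from different attributes only add up if the attributes are statistically independent, which in practice they usually aren’t:

```python
import math

def surprisal_bits(one_in_x):
    """Convert "1 in x browsers share this value" into bits of
    identifying information: bits = log2(x)."""
    return math.log2(one_in_x)

# A few values from the table above:
tor_user_agent = surprisal_bits(78.88)       # ~6.3 bits (Tor's shared UA)
chrome_user_agent = surprisal_bits(1489.11)  # ~10.5 bits
chrome_fonts = surprisal_bits(2646146)       # ~21.3 bits: effectively unique
```

This is why the Tor Browser’s shared, common values matter: a handful of low-bit attributes keeps you in a large anonymity set, while one high-bit attribute like a unique font list identifies you outright.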

Feel safer? Don’t.

The EFF’s project has been really good at increasing public understanding of the risks of browser-fingerprinting-style attacks, but risks definitely remain. One of the nastier ones, which has yet to be fully addressed, had only been theorized until last year: someone watching a user’s traffic can fingerprint their online activities. A presentation at last year’s 28C3 highlighted this issue. In it, they discussed how a user will usually visit the same group of websites pretty consistently: Reddit, Google News, Wikipedia. Those activities can be used as a fingerprint for your online identity. Tor is coming up with an answer to this with its Pluggable Transports initiative, which allows Tor users to customize their traffic footprint using plugins.

My next post will highlight how to use Panopticlick for some operational security measures. 🙂