Crack: Google Authentication Services are Vulnerable

There is a vulnerability in the way the Google authentication service works. Whenever you log in to any of Google's online services (Gmail, Orkut, Groups, Docs, YouTube, Calendar, etc.), you are redirected to an authentication server, which checks the username and password you entered and redirects you back to the requested service (Gmail, YouTube, etc.), setting the session variables.

Now, if you are able to grab the URL used to set the session variables, you can log in as the user to whom that URL belongs from any machine on the Internet (it need not be a machine on the same subnet), without entering that user's username or password.

Proxy servers in organizations can be used to exploit this vulnerability. Squid is the most popular proxy server in use. In its default configuration, Squid strips the query terms from a URL before logging it, so the vulnerability can't be exploited. But if you turn off the stripping mechanism by adding the line shown below, Squid will log the complete URL.

strip_query_terms off

So, after turning the stripping mechanism off, the log will contain URLs that look like this:

http://www.google.co.in/accounts/SetSID?ssdc=1&sidt=Q5UrfB0BAAA%3D.oHVGErODzffQ%2Bms%2FOKfk53g5naReDKehRNHOBsmJlBu3VTNXjF03SbgX%2FVEEhmImhR4mlu5IAAjM%2BdbuXvMMSIb0oU8IGCYpnLcSNkbCIrG%2BQnm81YmX5%2Brcrq7U6Qx65%2F1yaQ2NzgmKD94jg0Iw13iXDen3qD5qn6L%2FhmmYWwTrcOeuTzGbO%2BAehpjEU3mrWapRafaq3b4kxyigJ68s8QrGQqZTINNE%2Bs%2BoIkZWmGt5kNzoT8fkVAsWJeu3CKFkxj4oVMngeDvpwb1nyFpsJCltOzmAr46fTxVJSpvQdx0%3D.BMLtjUdIDCcuszktZSvYzA%3D%3D&continue=http%3A%2F%2Fwww.orkut.com%2FRedirLogin.aspx%3Fmsg%3D0%26ts%3D1226148773097%3A1226148773386%3A1226148774868%26auth%3DDQAAAIcAAAC1pPE1QT4chKgrU4B3oyKZrQRkEVPtYlclpESQoXV_d9x9gdoe75Z0hfJ_22Pn5tVMR7j-uV5YCps3NB48L0bFlDeX-4PGHVT6Loztp_ru3tAy_gxDa9_YAEbz4d9CO4wD2VTKtzax9zvpGgrnJVZQfoWPkkIomUmxDtVGoH7g3fA3UjS0vdBJ2PJtgFMElso

Replace .co.in with the TLD specific to your country. If you paste this URL into any browser, it will log you in directly, and you can do whatever you want with that account. Remember that all such URLs remain valid for only two minutes. So, if you use the URL after two minutes, it will lead nowhere.

At the time of writing this post, Orkut, Google Docs, Google Calendar, Google Books and YouTube are vulnerable.

So, make sure your Squid has the stripping mechanism turned on and your Squid server is properly firewalled.
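
If you want to state the safe setting explicitly rather than rely on the default, the same directive from above can simply be flipped back in squid.conf:

# squid.conf -- the default behaviour: log URLs with query terms stripped
strip_query_terms on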

You can watch the video proof for Orkut on Blip.tv and YouTube.

 

Humour: Funny Apache Logs

The other day I was debugging my Drupal installation and had a look at the Apache error logs. This is what I found 😀

[root@gofedora html]# tail -f /var/log/httpd/error_log
[Fri Nov 28 21:00:16 2008] [warn] long lost child came home! (pid 23229)
[Fri Nov 28 21:00:16 2008] [warn] long lost child came home! (pid 23230)
[Fri Nov 28 21:00:16 2008] [warn] long lost child came home! (pid 23231)
[Fri Nov 28 21:00:16 2008] [warn] long lost child came home! (pid 23232)
[Fri Nov 28 21:00:16 2008] [warn] long lost child came home! (pid 23233)
[Fri Nov 28 21:00:16 2008] [warn] long lost child came home! (pid 23234)
[Fri Nov 28 21:00:16 2008] [warn] long lost child came home! (pid 23235)
[Fri Nov 28 21:00:16 2008] [warn] long lost child came home! (pid 23236)
[Fri Nov 28 21:00:16 2008] [warn] long lost child came home! (pid 23237)
[Fri Nov 28 21:00:16 2008] [warn] long lost child came home! (pid 23238)

It reminded me of the famous Indian saying, “Kumbh ke mele mein khoya wapis aa gaya” (roughly, “the one lost at the Kumbh fair has come back home”).

And Apache logs it as a warning. Your long lost child has come home. You gotta run 😛

 

IntelligentMirror: RPM and DEB Caching Improved (0.5)

After spending a lot of time on youtube cache, I am now trying to devote some time to updating intelligentmirror with the features and enhancements that youtube cache already enjoys. In that direction, here is version 0.5 of intelligentmirror.

Improvements

  • Added a max_parallel_downloads option to control the maximum number of threads fetching from upstream to cache packages.
  • Fine-grained control over logging via the max_logfile_size and max_logfile_backups options (see the configuration sketch after this list).
  • Added a setup script to help you install intelligentmirror. No need to execute commands one by one for installation. Just run
 [root@localhost]# python setup.py install [ENTER]
  • Added an update script (update-im). So, in case you decide to change the locations for caching rpm/deb packages, just run
 [root@localhost]# update-im [ENTER]

OR

 [root@localhost]# /usr/sbin/update-im [ENTER]
  • Added a download scheduler, similar to the one in youtube cache, to facilitate download queueing when there is a large number of requests.
  • More informative logging.
  • cache.log is no longer flooded with XMLRPC logs and Python tracebacks.
  • Added extensive exception handling throughout the program.
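
For illustration, the new knobs might sit in the intelligentmirror configuration file roughly like this. This is a hypothetical sketch: the option names come from the list above, but the values and the exact file layout are assumptions, so consult the shipped configuration file for the real format.

 # hypothetical intelligentmirror config snippet -- values are illustrative
 max_parallel_downloads = 5       # cap on concurrent upstream fetches
 max_logfile_size = 10485760      # rotate the log after ~10 MB
 max_logfile_backups = 5          # keep at most five rotated logs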

Availability

  1. RPMs for Fedora/Red Hat/CentOS
  2. Source RPMs for Fedora/Red Hat/CentOS
  3. Source tarballs

Installation and Configuration

The INSTALL and README files should help you through the installation and configuration process.

In case you have questions, ask them here in the comments. Suggestions for improvement are welcome 🙂

 

How To: Configure Caching Nameserver (named)

Mission

To configure a caching nameserver on a local machine that cascades to another, previously configured and functional nameserver (which may or may not be caching; it will generally be your ISP's nameserver or the one provided by your organization).

Advantage

  • Drastically reduces the delay in domain name resolution, as requests for frequently accessed websites are served from the cache.

Working

  • named gets a request for domain resolution.
  • It checks whether the request can be satisfied from the cache. If the answer is in the cache and not stale, the request is satisfied from the cache itself, saving a lot of time 🙂
  • If the request can't be satisfied from the cache, named queries the first parent. If the parent replies with an answer, named caches the response, and subsequent requests for the same domain name are satisfied from the cache.
  • In case the first parent fails to reply, named queries the second parent, and so on.

(This description of the working is my understanding of caching-nameserver, based on analyzing its traffic with Wireshark; caching-nameserver may not behave exactly as explained above. The sketch below illustrates the same lookup order.)
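
To make the lookup order concrete, here is a minimal Python sketch of the logic described above. It is a conceptual illustration, not named's actual implementation: the cache is a plain dictionary, and query() is a stand-in for a real DNS query.

import time

cache = {}  # name -> (address, expiry); stand-in for named's in-memory cache

def resolve(name, parents, query):
    # query(parent, name) is a stand-in for a real DNS query; it should
    # return an (address, ttl) tuple, or None on failure/timeout.
    entry = cache.get(name)
    if entry and entry[1] > time.time():      # answer cached and not stale
        return entry[0]                       # serve from cache
    for parent in parents:                    # otherwise try parents in order
        answer = query(parent, name)
        if answer is not None:
            address, ttl = answer
            cache[name] = (address, time.time() + ttl)
            return address
    return None                               # all parents failed to reply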

How to install

named is present by default on most systems under the package name ‘caching-nameserver‘. If it's not present on your system, install it using

[root@localhost ~]# yum install caching-nameserver [ENTER]
# If that doesn't work try this
[root@localhost ~]# yum install bind [ENTER]
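
To confirm that the package landed, you can query the RPM database (a standard rpm command, nothing specific to this setup):

[root@localhost ~]# rpm -q caching-nameserver [ENTER]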

How to configure

The main configuration file for named resides at /var/named/chroot/etc/named.caching-nameserver.conf, which is also soft-linked from /etc/named.caching-nameserver.conf. The named configuration file supports C/C++-style comments.

For a caching nameserver that cascades to another nameserver, there is not much to configure. You only need to set up the “options” block. Below is a configuration file for a machine with IP address 172.17.8.64 cascading to two nameservers, 192.168.36.204 and 192.168.36.210. The inline comments explain what each option does.

options {
  // Set the port to 53 which is standard port for DNS.
  // Add the IP addresses on which named will listen, separated by semicolons.
  // It'll be your own IP address.
  listen-on port 53 {127.0.0.1; 172.17.8.64;};
  // These are defaults. Leave them as they are.
  directory   "/var/named";
  dump-file   "/var/named/data/cache_dump.db";
  statistics-file "/var/named/data/named_stats.txt";
  memstatistics-file "/var/named/data/named_mem_stats.txt";
  // The machines which are allowed to query this nameserver.
  // Normally you'll allow only your own machine, but you can allow other machines too.
  // The addresses should be separated by semicolons. To allow the network 172.16.31.0/24,
  // the line would be
  // allow-query {localhost; 172.16.31.0/24; };
  // Don't forget the semi-colons.
  allow-query     { localhost; 172.17.8.64; };
  recursion yes;
  // The parent nameservers. List all the nameservers which you can query.
  forwarders { 192.168.36.204; 192.168.36.210; };
  forward first;
};
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
zone "." IN {
  type hint;
  file "named.ca";
};
include "/etc/named.rfc1912.zones";

Start caching-nameserver

Now start the caching-nameserver using the following command

[root@localhost ~]# service named start [ENTER]

OR

[root@localhost ~]# /etc/init.d/named start [ENTER]

To make named start every time you reboot your machine, use the following command

[root@localhost ~]# chkconfig named on [ENTER]

Using caching-nameserver

To use your caching-nameserver, open the /etc/resolv.conf file and add the following line

nameserver 127.0.0.1

Comment out all the other lines in the file, so that it finally looks like this

; generated by /sbin/dhclient-script
#search wlan.iiit.ac.in
#nameserver 192.168.36.204
#nameserver 192.168.36.210
nameserver 127.0.0.1

Now your system will use your own nameserver (in caching mode) for resolving all domain names. To test your nameserver, use the following command

[root@localhost ~]# dig fedora.co.in [ENTER]

If you run that command a second time, the resolution time will be around 2-3 milliseconds, while the first time it would be around 400-700 milliseconds.

Example

Below are two subsequent runs of dig for fedora.co.in. Notice the query time.

[root@bordeaux SPECS]# dig fedora.co.in
; <<>> DiG 9.4.2rc1 <<>> fedora.co.in
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7839
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;fedora.co.in.                  IN      A
;; ANSWER SECTION:
fedora.co.in.           83629   IN      A       72.249.126.241
;; AUTHORITY SECTION:
fedora.co.in.           79709   IN      NS      ns.fedora.co.in.
;; ADDITIONAL SECTION:
ns.fedora.co.in.        79709   IN      A       72.249.126.241
;; Query time: 531 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Nov 19 18:04:47 2008
;; MSG SIZE  rcvd: 79
[root@bordeaux SPECS]# dig fedora.co.in
; <<>> DiG 9.4.2rc1 <<>> fedora.co.in
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64233
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;fedora.co.in.                  IN      A
;; ANSWER SECTION:
fedora.co.in.           83625   IN      A       72.249.126.241
;; AUTHORITY SECTION:
fedora.co.in.           79705   IN      NS      ns.fedora.co.in.
;; ADDITIONAL SECTION:
ns.fedora.co.in.        79705   IN      A       72.249.126.241
;; Query time: 1 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Nov 19 18:04:51 2008
;; MSG SIZE  rcvd: 79
[root@bordeaux SPECS]#
 

IntelligentMirror Gets Even More Intelligent (1.0.1)

Warning: This version of IntelligentMirror is compatible only with squid-2.7 as of now. It is NOT compatible even with squid-3.0.

IntelligentMirror Version 1.0.1

I have been following squid development regularly (at least the parts I am interested in), and squid-2.7 introduces a new directive known as StoreUrlRewrite (storeurl_rewrite_program). Using this directive, you can instruct squid to cache URL A (http://abc.com/foo/bar/version/crap.rpm) as URL B (http://proxy.fedora.co.in/intelligentmirror/crap.rpm). In simple words, you can direct squid to cache any URL as any other URL without any extra effort.
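
For reference, hooking a rewriter into squid-2.7 looks roughly like this. The helper path and acl name below are illustrative; the configure file shipped with intelligentmirror describes the real setup.

# squid.conf (squid-2.7) -- illustrative snippet; helper path and acl name are placeholders
acl store_rewrite_list urlpath_regex \.rpm$ \.deb$
storeurl_rewrite_program /usr/sbin/intelligentmirror
storeurl_access allow store_rewrite_list
storeurl_access deny all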

So keeping the above directive in mind, I have worked out a different version of intelligentmirror especially for squid-2.7.

IntelligentMirror : Old method of operation

  1. IntelligentMirror gets a client request for a URL.
  2. Check: if the URL is not for an RPM or a metadata file
    • Then it's none of our business.
    • Let proxy handle it the normal way.
    • Done and exit.
  3. Check: if RPM/metadata is available in cache
    • Stream the RPM/metadata from cache.
    • Done and exit.
  4. Check: if RPM/metadata is not available in cache
    • Download in parallel for caching in some dir and stream.
    • Done and exit.

IntelligentMirror : New method of operation

  1. IntelligentMirror gets a client request for a URL.
  2. Check: if request for rpm
    1. Direct squid to cache the request as http://<same_host_all_the_time>/intelligentmirror/<rpmname>.rpm
  3. Check: if request for deb
    1. Direct squid to cache the request as http://<same_host_all_the_time>/intelligentmirror/<debname>.deb
  4. Done and exit.

So your squid will see every request for an rpm package as a request for http://<same_host_all_the_time>/intelligentmirror/<rpmname>.rpm. Consequently, if you happen to request the same rpm from a different mirror, it will still be served from the cache 🙂 A minimal sketch of such a rewriter helper follows.
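
To illustrate the idea, here is a minimal sketch of a store-URL rewriter helper in Python. This is not the actual intelligentmirror code: the internal hostname is a placeholder, and a real squid-2.7 helper line carries extra fields (client address, ident, method) that a production helper would handle more carefully.

#!/usr/bin/env python
# Minimal store-url rewriter sketch -- not the actual intelligentmirror code.
# squid-2.7 feeds one request per line on stdin (the URL is the first field)
# and expects the store URL (or the URL unchanged) on stdout.
import sys

INTERNAL_HOST = "http://same.host.all.the.time/intelligentmirror/"  # placeholder

def rewrite(url):
    filename = url.rsplit("/", 1)[-1]
    if filename.endswith((".rpm", ".deb")):
        return INTERNAL_HOST + filename   # one cache key for every mirror
    return url                            # leave everything else untouched

while True:
    line = sys.stdin.readline()
    if not line:
        break                             # squid closed the pipe; exit
    fields = line.split()
    if not fields:
        continue
    sys.stdout.write(rewrite(fields[0]) + "\n")
    sys.stdout.flush()                    # helpers must not buffer replies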

Improvements

  1. No need to check whether the URL supplied by squid is for an rpm or not, because storeurl_rewrite_program has an ACL attached which invokes intelligentmirror only for URLs ending in .rpm.
  2. No need to check whether the URL is already cached. No need to worry about the directory where the packages are stored. No human intervention is needed to maintain the cache. Almighty squid is doing everything for us.
  3. No need to worry if the target package has changed because of re-signing or anything else, because squid will take care of that for you.
  4. No need to actually download the package in parallel for caching, because squid is already doing that.
  5. No need to worry about the hashing algorithms and storage optimizations for the cached content.

Availability

  1. RPM for Fedora/Red Hat
  2. Source RPM for Fedora/Red Hat
  3. Source Tarball

Install and Configure

The install and configure files should be enough to guide you through the installation if you choose the tarball route. Otherwise, you can always install from the RPM at the link above.

Note 1: You have to configure your squid to use intelligentmirror as a plugin even if you install via RPM. Check the configure file at the link above.

Note 2: StoreUrlRewrite will probably be available in squid-3.1.