Catching up!

A lot has changed since the last blog post (more than three years ago). I was happily running a successful business around Videocache until Google decided to push HTTPS really hard and enforced SSL even for video content. That rendered Videocache completely useless, as YouTube video caching was its unique selling point. Though people are still using it for other websites (whatever is still supported and not yet on HTTPS), I personally didn’t find that good enough to keep selling. To add to the trouble, Mozilla and friends announced that there would be free certificates for everyone (Let’s Encrypt). That took away whatever motivation was left to keep working on Videocache. I decided to open source Videocache, and the source is now available on GitHub. If you have better ideas, or you want to make things work by forging certificates and the like, fork it and give it a shot.

After all that mess with Videocache, I am left with a contract job which is not working out well. So, the learning hat is back on and I am trying to catch up with the tech world. I didn’t really want to get into the whole JavaScript framework mess, because there is no clear winner and there are too many of them, but it looks rather unavoidable, or Unflushable (if you remember Jeff from Coupling). I have been trying to make simple apps with AngularJS. There is little documentation and you have to dig really deep at times, but you can still make things work if you persist. Once you spend some quality time with it, you may actually start liking AngularJS. So, I did give it some time and implemented an AngularJS frontend for Pixomatix-Api (another learning project I am working on), and I think I sort of like it now.

In 2015, if you are a web developer, you must know how APIs work and you should be able to consume them. So, to learn how to expose APIs and version them properly, I fired up a small project, Pixomatix. Being a Rails developer, you get really obsessed with it and try to implement everything using Rails. Even when you want an API with 2-3 endpoints, you tend to make the horrible mistake of doing it in Rails. This kept bugging me, and a few weeks later, I decided to freshen up my Sinatra memories. But working with Sinatra is not all that easy, especially if you are used to all the niceties of Rails. I dug up my attempt at implementing Videocache in Ruby, and extracted a few tasks and configurations I had automated a long time ago. I ended up working a lot more on it and packaged it into a template app with almost all the essential stuff. Though I need to document it a little more, the app has everything needed to expose a versioned API via Sinatra.

On the other hand, I tried to use the devise gem for authentication in Pixomatix. It was all good for integration with standard web apps and APIs, but it sort of failed me when I tried to version the API. Devise turned out to be a black hole when I tried to dig deeper to make things work. I tried a few other gems which supported token authentication, but they were also no good for versioning. Generally, you may not need to version the authentication part of your API, but what if you do! Since this was just a learning exercise, I was hell-bent on implementing it. So, I just reinvented the wheel and coded basic authentication (including token authentication) for the API.

That’s it for this post. I am looking forward to posting regularly about the new stuff I learn.

 

My New Book on Squid Proxy Server (A Beginner’s Guide)

I have not blogged in a long time, mainly because I was busy authoring a book, Squid Proxy Server 3.1: Beginner’s Guide, for Packt Publishing. The book is an introductory guide to Squid (especially the new features in the Squid-3 series), covering the basics as well as the in-depth details advanced users need. The book focuses on learning by doing and provides example scenarios for the concepts discussed throughout. Access control configuration, reverse proxying, interception proxying, authentication and other features are discussed in detail, with examples.


 

Crack: Google Authentication Services are Vulnerable

There is a vulnerability in the way the Google authentication service works. Whenever you log in to any of Google’s online services like GMail, Orkut, Groups, Docs, YouTube, Calendar, etc., you are redirected to an authentication server, which validates the entered username and password and redirects you back to the requested service (GMail, YouTube, etc.), setting the session variables.

Now, if you are able to grab the URL used to set the session variables, you can log in as the user to whom that URL belongs from any machine on the Internet (it need not be a machine on the same subnet), without entering the user’s username and password.

Proxy servers in organizations can be used to exploit this vulnerability. Squid is the most popular proxy server in use. In the default configuration, Squid strips the query terms of a URL before logging, so this vulnerability can’t be exploited. But if you turn off the stripping mechanism by adding the line shown below, Squid will log the complete URL.

strip_query_terms off

So, after turning the stripping mechanism off, the log will contain URLs which look like this:

http://www.google.co.in/accounts/SetSID?ssdc=1&sidt=Q5UrfB0BAAA%3D.oHVGErODzffQ%2Bms%2FOKfk53g5naReDKehRNHOBsmJlBu3VTNXjF03SbgX%2FVEEhmImhR4mlu5IAAjM%2BdbuXvMMSIb0oU8IGCYpnLcSNkbCIrG%2BQnm81YmX5%2Brcrq7U6Qx65%2F1yaQ2NzgmKD94jg0Iw13iXDen3qD5qn6L%2FhmmYWwTrcOeuTzGbO%2BAehpjEU3mrWapRafaq3b4kxyigJ68s8QrGQqZTINNE%2Bs%2BoIkZWmGt5kNzoT8fkVAsWJeu3CKFkxj4oVMngeDvpwb1nyFpsJCltOzmAr46fTxVJSpvQdx0%3D.BMLtjUdIDCcuszktZSvYzA%3D%3D&continue=http%3A%2F%2Fwww.orkut.com%2FRedirLogin.aspx%3Fmsg%3D0%26ts%3D1226148773097%3A1226148773386%3A1226148774868%26auth%3DDQAAAIcAAAC1pPE1QT4chKgrU4B3oyKZrQRkEVPtYlclpESQoXV_d9x9gdoe75Z0hfJ_22Pn5tVMR7j-uV5YCps3NB48L0bFlDeX-4PGHVT6Loztp_ru3tAy_gxDa9_YAEbz4d9CO4wD2VTKtzax9zvpGgrnJVZQfoWPkkIomUmxDtVGoH7g3fA3UjS0vdBJ2PJtgFMElso

Replace .co.in with the TLD specific to your country. If you paste this URL into any browser, it will log you in directly and you can do whatever you want with that account. Remember that such URLs remain valid for only two minutes; if you use the URL after two minutes, it will lead nowhere.

At the time of writing this post, Orkut, Google Docs, Google Calendar, Google Books and YouTube are vulnerable.

So, make sure your squid has the stripping mechanism turned on and your squid server is properly firewalled.
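Stripping is squid’s default behaviour, so a fresh configuration is already safe; if you want to be explicit, the corresponding line is:

strip_query_terms on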

You can watch the video proof for Orkut on Blip.tv or YouTube.

 

IntelligentMirror: RPM and DEB Caching Improved (0.5)

After spending a lot of time on youtube cache, I am now trying to devote some time to updating intelligentmirror with the features and enhancements that youtube cache already enjoys. In that direction, here is version 0.5 of intelligentmirror.

Improvements

  • Added a max_parallel_downloads option to control the maximum number of threads fetching from upstream to cache packages (see the sketch after this list).
  • Fine-grained control over logging via the max_logfile_size and max_logfile_backups options.
  • Added a setup script to help you install intelligentmirror. No need to execute commands one by one for installation. Just run
 [root@localhost]# python setup.py install [ENTER]
  • Added an update script (update-im). So, in case you decide to change the locations for caching rpm/deb packages, just run
 [root@localhost]# update-im [ENTER]

OR

 [root@localhost]# /usr/sbin/update-im [ENTER]
  • A download scheduler similar to the one in youtube cache has been added to facilitate download queuing when there is a large number of requests.
  • More informative logging.
  • cache.log is not flooded anymore with XMLRPC logs and Python tracebacks.
  • Added extensive exception handling throughout the program.
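For the curious, below is a minimal sketch of how such a bounded download scheduler can work. This is only an illustration, not intelligentmirror’s actual code; the names (MAX_PARALLEL_DOWNLOADS, the queue, the cache path) are made up for the example.

import queue
import threading
import urllib.request

MAX_PARALLEL_DOWNLOADS = 5      # plays the role of the max_parallel_downloads option
download_queue = queue.Queue()  # requests beyond the limit simply wait here

def worker():
    while True:
        url, dest = download_queue.get()
        try:
            # fetch the package from upstream into the cache directory
            urllib.request.urlretrieve(url, dest)
        except Exception as exc:
            print('download failed: %s (%s)' % (url, exc))  # real code would log this
        finally:
            download_queue.task_done()

# only MAX_PARALLEL_DOWNLOADS threads ever fetch at the same time
for _ in range(MAX_PARALLEL_DOWNLOADS):
    threading.Thread(target=worker, daemon=True).start()

download_queue.put(('http://example.com/repo/crap.rpm', '/var/spool/intelligentmirror/crap.rpm'))
download_queue.join()  # wait until every queued download has finished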

Availability

  1. RPMs for Fedora/Red Hat/CentOS
  2. Source RPMs for Fedora/Red Hat/CentOS
  3. Source tarballs

Installation and Configuration

The INSTALL and README files should guide you through the installation and configuration process.

In case you have questions, ask them here in the comments. Suggestions for improvement are welcome 🙂

 

IntelligentMirror Gets Even More Intelligent (1.0.1)

Warning: This version of IntelligentMirror is compatible only with squid-2.7 as of now. It is NOT compatible even with squid-3.0.

IntelligentMirror Version 1.0.1

I have been following squid development regularly (at least the parts I am interested in), and squid-2.7 introduces a new directive known as StoreUrlRewrite (storeurl_rewrite_program). Using this directive, you can instruct squid to cache URL A (http://abc.com/foo/bar/version/crap.rpm) as URL B (http://proxy.fedora.co.in/intelligentmirror/crap.rpm). In simple words, you can direct squid to cache any URL as any other URL without any extra effort.
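To give an idea of the wiring, here is a sketch of the relevant squid.conf lines (the directive names are from squid-2.7; the helper path is hypothetical):

acl store_rewrite_list urlpath_regex \.rpm$ \.deb$
storeurl_access allow store_rewrite_list
storeurl_access deny all
storeurl_rewrite_program /usr/sbin/intelligentmirror
storeurl_rewrite_children 5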

So, keeping the above directive in mind, I have worked out a different version of intelligentmirror especially for squid-2.7.

IntelligentMirror: Old method of operation

  1. IntelligentMirror gets a client request for a URL.
  2. Check: if the URL is not for an RPM or a metadata file
    • Then it’s none of our business.
    • Let the proxy handle it the normal way.
    • Done and exit.
  3. Check: if the RPM/metadata is available in the cache
    • Stream the RPM/metadata from the cache.
    • Done and exit.
  4. Check: if the RPM/metadata is not available in the cache
    • Download it in parallel for caching in some directory and stream it.
    • Done and exit.

IntelligentMirror: New method of operation

  1. IntelligentMirror gets a client request for a URL.
  2. Check: if the request is for an rpm
    1. Direct squid to cache the request as http://<same_host_all_the_time>/intelligentmirror/<rpmname>.rpm
  3. Check: if the request is for a deb
    1. Direct squid to cache the request as http://<same_host_all_the_time>/intelligentmirror/<debname>.deb
  4. Done and exit.

So your squid will see every request for an rpm package as a request for http://<same_host_all_the_time>/intelligentmirror/<rpmname>.rpm. So, if you happen to request the same rpm from a different mirror, it’ll still be served from the cache 🙂
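To make the new method concrete, here is a minimal sketch of what such a store-URL rewrite helper can look like, assuming squid’s rewrite helper protocol (one request per line on stdin with the URL as the first token; the rewritten store URL, or a blank line for “no change”, on stdout). This is an illustration, not intelligentmirror’s actual source; the fixed hostname below stands in for <same_host_all_the_time>.

#!/usr/bin/env python
import sys

STORE_HOST = 'http://proxy.fedora.co.in/intelligentmirror/'  # any fixed host works

def rewrite(url):
    # map every .rpm/.deb URL to a canonical store URL based on the package name
    if url.endswith('.rpm') or url.endswith('.deb'):
        return STORE_HOST + url.rsplit('/', 1)[-1]
    return ''  # a blank reply tells squid to cache the URL unchanged

while True:
    line = sys.stdin.readline()
    if not line:
        break                           # squid closed the pipe; shut down
    url = line.split(' ')[0].strip()    # first token of the request line is the URL
    sys.stdout.write(rewrite(url) + '\n')
    sys.stdout.flush()                  # squid expects an immediate, unbuffered reply

The fixed hostname never has to resolve; it only serves as a stable cache key inside squid.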

Improvements

  1. No need to check whether the URL supplied by squid is for an rpm or not, because storeurl_rewrite_program has an ACL control attached which will invoke intelligentmirror only for URLs ending in .rpm.
  2. No need to check whether the URL is already cached or not. No need to worry about the directory where the packages are stored. No human intervention is needed to maintain the cache. Almighty squid is doing everything for us.
  3. No need to worry if the target package has changed because of re-signing or whatever, because squid will handle that for you.
  4. No need to actually download the package in parallel for caching, because squid is already doing that.
  5. No need to worry about hashing algorithms and storage optimizations for the cached content.

Availability

  1. RPM for Fedora/Red Hat
  2. Source RPM for Fedora/Red Hat
  3. Source Tarball

Install and Configure

The install and configure files should be enough to guide you through the installation if you choose the tarball way. Otherwise, you can always install from the RPM linked above.

Note 1: You have to configure your squid to use intelligentmirror as a plugin even if you install via RPM. Check the configure file at the above link.

Note 2: StoreUrlRewrite will probably be available in squid-3.1.