Jason’s musings

I just need a little time

Archive for the ‘Computing’ Category

SSH magic

I’m currently bedridden at home with a nasty dose of the flu, and therefore have to use ssh to connect to my machine in the lab.

That machine, however, is behind a firewall, which requires going through an intermediary SSH server. So typically you’d do:

CLIENT>  ssh GATEWAY
GATEWAY> ssh DESTINATION

While this is satisfactory for most uses, it doesn’t make copying files any easier; my usual course of action was to copy the file to the gateway machine and then copy it from there to my laptop.
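
For what it’s worth, that two-step copy looks something like this (the file name and user are made up for illustration):

```shell
# Hop 1: lab machine -> gateway (run on the lab machine)
scp results.tar me@GATEWAY:

# Hop 2: gateway -> laptop (run on the laptop)
scp me@GATEWAY:results.tar .
```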

Today I got a little tired of that and decided to fix it.

This excerpt from “SSH: The Definitive Guide” proposes that you either use a forced command in the gateway’s authorized_keys file to connect on to the destination, or create a manual tunnel like so:
# ~/.ssh/config on client C
host S
hostname localhost
port 2001


# Execute on client C
$ ssh -L2001:S:22 G

# Execute on client C in a different shell
$ ssh S

This works well enough, but it’s annoying that it requires creating the tunnel by hand. As for the authorized_keys command method, it would require creating a separate client key for each connection through the gateway, so that the gateway knows which machine to connect on to. (Doesn’t matter if you don’t understand what I mean.) Too much effort.
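
For reference, the forced-command variant would look roughly like this; the key file names are hypothetical, and the point is that the gateway picks the next hop based on which key the client presented:

```shell
# ~/.ssh/authorized_keys on GATEWAY: a per-destination key whose
# forced command opens the onward connection instead of a shell:
#   command="ssh DESTINATION" ssh-rsa AAAA...dest-key...

# On the client, connect with that dedicated key; the gateway
# then runs "ssh DESTINATION" for you.
ssh -i ~/.ssh/id_destination GATEWAY
```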

Instead I started thinking about using ssh_config to create the tunnel automatically before the connection:

~/.ssh/config:

HOST DESTINATION
hostname GATEWAY
ControlPath=none
LocalForward 2002 DESTINATION:22
PermitLocalCommand yes
LocalCommand ssh -p 2002 localhost

However, this didn’t work, and I never figured out why it couldn’t get past the key exchange.

Another solution I considered was ProxyCommand, which I already use to connect to machines through HTTPS and SOCKS proxies. So I wrote a script that makes an ssh tunnel through the gateway and connects to the remote machine using the handy ‘nc’ utility.

~/.ssh/config:

HOST [DESTINATION]
ProxyCommand ~/bin/sc 2002 %h %p [GATEWAY]

~/bin/sc:

#!/bin/sh
# $1 == local port  $2 == far away host  $3 == far away host port  $4 == gateway host
# (For the ControlPath trick below to work, ControlMaster needs to be
# enabled for the gateway, e.g. "ControlMaster auto" in ~/.ssh/config.)
ssh -oControlPath=~/.$2$4 -f -N -L$1:$2:$3 $4
nc localhost $1

This always reuses the same tunnel to forward connections to the ssh server on the destination machine, saving resources. There was, however, the issue that ssh would try to create the tunnel afresh on each invocation; the ControlPath was a neat solution to avoid that (if a tunnel already exists, it silently does nothing). The only problem with this technique is that I couldn’t come up with a good way to garbage collect the unused tunnels. Perhaps a file-locking-based technique using a counter (similar to reference counting) would work, but it would be a much bigger project than I wanted.
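
One stopgap short of real garbage collection would be an explicit teardown script; assuming a master process is actually holding the control socket open (which requires ControlMaster to be enabled), stock OpenSSH can ask it to exit:

```shell
#!/bin/sh
# ~/bin/sc-close (a hypothetical companion to ~/bin/sc)
# $1 == far away host  $2 == gateway host
# Ask the master behind the control socket to exit, which also
# tears down the forwarded port it was holding open.
ssh -oControlPath=~/.$1$2 -O exit $2
```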

I realised that using nc on the gateway machine would also work:

~/.ssh/config

HOST [DESTINATION]
ProxyCommand ssh [GATEWAY] nc %h %p

However, this unsatisfactorily left nc processes running on the gateway machine, probably because connection-shutdown messages aren’t propagated properly somewhere along the chain.

Both of these solutions were nice in that they allowed the client to connect directly to the ssh server on the destination, enabling the use of ssh keys and forwarded ports, and preventing man-in-the-middle attacks. However, the nc utility is not exactly meant for a production setup. A proper solution to the whole problem would involve modifying the source code of ssh to allow specifying a gateway in some fashion.

Update: Looking at the Git tips page I noticed that the following works:

Host *.foo.internal
     ProxyCommand ssh gateway.foo.com exec nc %h %p

I really should have realized that would work. This is the perfect solution.
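
With a stanza like that in place, everything layered on top of ssh goes through the gateway transparently; the host name below is just an example matching the pattern:

```shell
# The ProxyCommand kicks in automatically for any *.foo.internal
# host, so plain ssh and scp both work end to end:
ssh box1.foo.internal
scp report.pdf box1.foo.internal:
```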

Written by jasonmc

April 16, 2008 at 12:09 am

Posted in Computing

I was there

I attended this great talk at the Computer History Museum in Mountain View, CA while I was in San Francisco in March.

Written by jasonmc

April 14, 2008 at 12:41 am

Posted in Computing

To be on a cruddy campus network

What’s been bugging me lately?

The disappointing residence network setup in TCD. They decided to use NAC (Network Access Control) technology to ensure “endpoint compliance” (a manufacturer buzzword). What does that mean? It means that computers wanting to connect to the network are placed in a quarantined VLAN and must run software that checks them for patches, anti-virus, etc. Then, on every boot, you are in another quarantined VLAN and must authenticate with your username and password, which switches you to a more open VLAN (although it is still a private network). Periodically, you are also placed on yet another VLAN and must ‘remediate’ using the software, which again checks for patches and anti-virus: almost the same thing as registering.

In theory, like all things, it sounds like a great idea: enabling people to connect themselves to the network. In practice there are many problems with the system, and last weekend I had no net access because of it.

Let’s begin with registration. The software ‘works’ on Windows, Mac, and Linux. Personally, I run a Linux box on the network, and the software is actually just a script which contains within it a gzipped binary! Once the script is run, it deletes itself! The binary is i386, which cuts out all other users of Linux systems (even amd64, if they don’t have 32-bit glibc). Last week I had to remediate using the script, so I thought I’d run it in a Knoppix virtual machine, as I definitely do not want to run random binaries on my authenticated-software-only Ubuntu box. I decided to do a packet trace of what it does:

GET /remediation/common/SMARegistration.jsp?regMethod=LDAP&uid=[intentionally blanked]&defaultUrl=http://NESSUS1:8080/remediation/Success.html&hw_ip=&mac=[intentionally blanked]&hw_desc=eth1&hw_ip=&mac=[intentionally blanked]&hw_desc=eth0&&hw_name=localhost&os=Linux%202%2E6%2E15-27-amd64-k8%20%231%20SMP%20PREEMPT%20Sat%20Sep%2016%2001%3A57%3A42%20UTC%202006%20x86_64&deviceDesc=Linux+Client&serverIP=tcd-rem.org HTTP/1.0

User-Agent: BSC Agent

Yes, that’s right, all it does is send an HTTP GET request! The server then sends back a status HTML page, which is displayed in a web browser.

This means that anyone could simply send a well-formed request and be registered/remediated, even if their box is a malware-infested Windows machine. In my opinion, this is not security. Sure, perhaps users with viruses will never discover this, but that’s still security by obscurity.
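
To make that concrete: something along these lines, with made-up values for the blanked-out parameters, should be all it takes to ‘remediate’ a box by hand (a sketch based on the trace above, not something I’ve actually run):

```shell
# Replay the agent's GET request with whatever identity we like.
# The uid, mac and OS strings here are illustrative, not real.
curl -A "BSC Agent" \
  "http://NESSUS1:8080/remediation/common/SMARegistration.jsp?regMethod=LDAP&uid=jbloggs&mac=00:11:22:33:44:55&os=Linux&deviceDesc=Linux+Client"
```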

My own problem was that the MAC address the script sent back differed from the one I registered with, which seemed to confuse the crapware that the system runs. Thankfully, the computing service people are very friendly and the problem got solved.

There are many more complaints out there about NAC technology being a network-management tool rather than a security system.

The vendor our people went with is Bradford Networks (take a look at the other universities that use them; not exactly world class). I don’t want to know how much it cost. Strangely, I think they might be ripping off Cisco, as the software they provide is called CSA.exe/CSA.sh, which is suspiciously similar to Cisco Security Agent.

OK, so other than it not being secure and being troublesome in fringe cases, what is wrong with the system for the average user? Well, the authorization system is a big pain in the butt: every time you boot your computer, you must sign in and then wait almost a minute for the VLAN to be switched and a new IP address to be assigned. It would be interesting to calculate the collective amount of time wasted waiting to connect to the network. That is not what computing is about.

Some other complaints about computing in college:

  • Requiring XP Professional, but not even using Active Directory (do they have some sinister deal with Microsoft?)
  • Switching from well-performing open source products to expensive one-box appliance solutions; for instance, the proxy server went from Squid to a Blue Coat Systems proxy
  • Sending email off to Microsoft to be scanned for spam, again expensive
  • Tiny email quotas of 60MB (compare with Google Mail’s ~3000MB, and it’s a free service)
  • Restrictive firewall rules, yet they still allow all UDP traffic inbound
  • Being very restrictive about who can have a website under the trinity subdomain. Compare with, say, http://www-tech.mit.edu (although this might be related to awful Irish libel laws)
  • Not evaluating Vista (even though it came out last November), and requesting that manufacturers supply laptops to students with XP

So, what is the major problem here?

If Trinity seriously considers itself a world-class university, then it needs to start acting like one in terms of internal infrastructure. Spending lots of money on third-party solutions just doesn’t cut it. I don’t know how good the IS Services people are at their jobs, but what I definitely think is needed is to divert the money into bringing in some top-notch network engineers, and to act a little more like the bigger universities do.

Update: They now plan to roll the service out to the wireless network (which is already based on LEAP anyway!) to enable lusers to connect without attending a ‘clinic’ (their jargon; the helpdesk happens to be a member of the HELPDESK Institute, whatever the fuck that is). This is even more shit, as often around campus one just wants to flip open the laptop to check something real quick. Bastards.

Incidentally, LEAP is an insecure bitch. I’d use another system, but they only use Cisco, and Cisco use LEAP, so I’m forced to potentially release my username and password every time I connect. What fucking good is NAC in that scenario anyway? Stupid fucks.

I used quite a lot of swearwords there, but I have to show my contempt for the situation. If an attacker got my username and password, it would basically allow them to completely steal my identity, within and without college.

Written by jasonmc

February 5, 2007 at 11:59 pm

Posted in Computing

Fun with lisp

Niall and I were recently looking at support for functions that return functions in different languages (our test being a function that takes a function and returns its derivative function).

Here is the Python he was talking about (not using lambda would make it a lot more readable):

def D(f):
    return lambda x: (f(x + 0.0004) - f(x)) / 0.0004

D(lambda x: x**2)(3)

Since I’ve been learning lisp recently, from Graham’s ‘ANSI Common Lisp’, I decided to have a go at it in Lisp too. Currently I don’t know of any way to avoid using funcall.

(defun D (f)
  #'(lambda (x)
      (/ (- (funcall f (+ x 0.0001)) (funcall f x))
         0.0001)))

(funcall (D (lambda (y) (* y y))) 3)

Update: I only noticed the section on closures in the book 2 days after spending time doing this. D’oh.

Written by jasonmc

January 24, 2007 at 11:31 pm

Posted in Computing, Programming

Python implementation of “The Generation of Optimal Code for Arithmetic Expressions”

We recently covered the paper “The Generation of Optimal Code for Arithmetic Expressions” in compiler design class, so I thought I’d implement the first two algorithms described in it in Python.

The functions one and two are named after the procedures as the author calls them in the paper.

Written by jasonmc

December 3, 2006 at 1:24 am

Posted in Computing

TextMate Blogging

This post was sent using TextMate, the awesome OS X editor! I am behind a proxy server (actually using Authoxy), so I had to hack one of the Ruby files that the Blogging bundle uses. In blogging.rb I used the following statement:

 @client ||= MetaWeblogClient.new(@host, @path, 80, 'localhost', 8080)

If you need authentication, the next two parameters in that call are the user and password. I might look into making this configurable in the UI or, better, have it retrieve the proxy settings from System Preferences. However, I notice that the Blogging bundle doesn’t even store weblog passwords in the keychain.

Written by jasonmc

November 4, 2006 at 9:31 pm

Posted in Computing

The week of mostly fixing compilation errors

My last post was complaining about QCA2 (it wasn’t qmake’s fault!), which turned out not to be such a big problem after all. The linker (when compiling the tools) was looking for the qca library in the system directory, which of course contained QCA1. I solved it very ungracefully by temporarily overwriting the system qca lib with a symlink to the qca2 lib. That worked. Now it even works without that hack, which I *think* is because the lib is in /usr/local now, although I can’t remember if that was the reason…

I went on to compile the latest development version of Psi from the source repository, which of course had problems… but thankfully it only required a ‘darcs pull’ (the cvs update equivalent) the next day to solve them. There’s some nice stuff in the latest Psi.

Then I went on to compile various bits of Justin’s code. There were multiple compile errors, but nothing too serious. Irisnet is a proof-of-concept tool he wrote that does _presence._tcp service announcement, and Ambrosia is a proof-of-concept Jabber server that uses Iris. Iris is very powerful.

I also learned more about Qt this week; the whole signals-and-slots paradigm is pretty nifty! I’ve definitely learned quite a bit from all the compile errors I’ve been getting too, both general Makefile stuff and Qt specifics.

Oh, I got the Google surprise! Hopefully nobody who doesn’t already know will read this!

Google surprise

And this is where I work, and will be for only one more week…:

Where I work

Written by jasonmc

June 20, 2006 at 12:28 am