Software, technology, sysadmin war stories, and more.
Wednesday, June 8, 2011

A half-baked idea for proxying SSH connections

Web hosting companies tend to have a lot of computers. All of those computers have login details, and for the most part, they are visible to anyone who can work support tickets for those machines. The fact that someone meaning to do evil deeds could grab them en masse has not gone unnoticed, but solutions still seem to be rare. Here's one I came up with over lunch one afternoon as a proof of concept, though it never saw actual use.

First we need to state some assumptions to set up the problem. There are thousands of machines and each one has a magic account on it which is used by support personnel. We do not want the humans who are working on these machines to have the passwords to the machine, whether for that account or for root itself. This means logging in with some other method and in this case it means SSH keys and sudo for root access (sudo password issue to be resolved separately).

Note that SSH keys are just as scary as passwords if compromised, so there's another assumption: the people who are logging in don't get to see the private keys. They just get to use them to get connected. Oh, and for the sake of this exercise, assume we want "ssh username@some.hostname" to keep working as it does now -- there are in-browser helper scripts we don't want to break.

What a mess, right? Well, I came up with an approach. First, grab a swath of RFC 1918 space big enough to have one address per machine you care to cover, and come up with a mapping for each host. Then route that entire block on your internal network to a host running Linux. On that host, do some iptables magic with -j REDIRECT to a local port. Then you run a hacked-up sshd on that port.
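The plumbing for that might look something like this. All the specifics here are made-up values for illustration: 10.42.0.0/16 as the magic block, 192.168.1.50 as the proxy box, and 2200 as the port the hacked-up sshd listens on.

```shell
# On your internal routers: aim the entire magic block at the proxy host.
ip route add 10.42.0.0/16 via 192.168.1.50

# On the proxy host: anything arriving for the magic block on port 22
# gets bounced to the hacked-up sshd on local port 2200.
iptables -t nat -A PREROUTING -d 10.42.0.0/16 -p tcp --dport 22 \
    -j REDIRECT --to-ports 2200
```

The nice part about REDIRECT (as opposed to a plain DNAT) is that the kernel remembers the original destination for you, which is what the next step relies on.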

The special sshd does getsockopt(SOL_IP, SO_ORIGINAL_DST, ...) to find out the magic IP address you used to get there, then it stores that in the environment. Then it kicks off that user's "shell", which is just a special-purpose program which looks in the environment for that value. It looks up the real host name for that IP, finds the key, then execs ssh with the right key file and target IP address. Done.
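A minimal sketch of that recovery step, assuming Linux's netfilter headers are available. The `get_original_dst()` and `map_to_target()` names are my own placeholders, and the two-entry table stands in for whatever real datastore would hold the magic-IP-to-hostname mapping.

```c
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <linux/netfilter_ipv4.h>   /* SO_ORIGINAL_DST */

/* Ask the kernel which "magic" IP the client actually connected to,
 * before iptables REDIRECTed it to our local port. */
int get_original_dst(int client_fd, char *ip_out, size_t ip_len) {
    struct sockaddr_in orig;
    socklen_t len = sizeof orig;
    if (getsockopt(client_fd, SOL_IP, SO_ORIGINAL_DST, &orig, &len) < 0)
        return -1;
    snprintf(ip_out, ip_len, "%s", inet_ntoa(orig.sin_addr));
    return 0;
}

/* Placeholder lookup: in real life this would consult whatever
 * datastore maps magic RFC 1918 addresses to customer hosts. */
const char *map_to_target(const char *magic_ip) {
    if (strcmp(magic_ip, "10.42.0.1") == 0) return "cust1.example.com";
    if (strcmp(magic_ip, "10.42.0.2") == 0) return "cust2.example.com";
    return NULL;
}
```

From there, the "shell" program reads the stored address out of the environment, does the lookup, and execs ssh with the right -i keyfile and target.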

Is this a hack? Sure. It's a huge one. It does manage to avoid key or cert management for all of your users, since when you want to kick one out, you just disable them on the bastion/proxy hosts. It also means a minimum amount of change on your customer machines: they just need something dropped into authorized_keys, and their existing sshd will handle that just fine.
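That per-machine change could be as small as one line. This is a hypothetical entry (the key material is elided), with an option list that also shuts off forwarding, in the spirit of providing terminal access and nothing else:

```
no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA... support-proxy
```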

It should also be pointed out that this breaks stuff like scp or sftp (or running commands directly), but the assumption is that you are purposely providing nothing but interactive terminal access. It also gives you the opportunity to do something else interesting: logging.

Consider this: instead of having the "shell" replace itself via exec(), have it open a couple of pipes, then fork(), and exec() in the child. Then it just needs to throw characters back and forth. Those characters can also be sent somewhere else for logging. This lets you keep tabs on what's happening on your customer machines, and who's doing it.
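Here's a skeleton of that tee loop in C, handling just the child-to-us direction. A real session wrapper would relay stdin the same way and probably want a pty rather than plain pipes; `tee_stream()` and `run_logged()` are made-up names for this sketch.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Copy bytes from in_fd to out_fd until EOF, mirroring them to log_fd. */
static void tee_stream(int in_fd, int out_fd, int log_fd) {
    char buf[4096];
    ssize_t n;
    while ((n = read(in_fd, buf, sizeof buf)) > 0) {
        (void) write(out_fd, buf, n);
        (void) write(log_fd, buf, n);
    }
}

/* Run argv[] as a child behind a pipe; relay its output to our stdout
 * while teeing a copy to log_fd.  The other direction (our stdin to the
 * child) would get the same treatment in a full version. */
int run_logged(char *const argv[], int log_fd) {
    int out_pipe[2];
    if (pipe(out_pipe) < 0)
        return -1;
    pid_t pid = fork();
    if (pid == 0) {                     /* child: become the real command */
        dup2(out_pipe[1], STDOUT_FILENO);
        close(out_pipe[0]);
        close(out_pipe[1]);
        execvp(argv[0], argv);
        _exit(127);                     /* exec failed */
    }
    close(out_pipe[1]);
    tee_stream(out_pipe[0], STDOUT_FILENO, log_fd);
    close(out_pipe[0]);
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Point log_fd at a file, a pipe to a collector, whatever -- the relay doesn't care where the copies go.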

There are far better ways to handle support issues, but that's a story for another time.