Tunneling through a gateway server via SSH
Some of the servers I manage are hidden behind a corporate firewall and hence not accessible from the open internet. In order to access them from my office at home (and hence be able to run my Ansible playbooks as-is), I need to be able to tunnel through to the servers using SSH. Since it isn’t immediately obvious how to do this, I thought I’d write up the process I use.
In the situation I’m describing here, I’m fortunate enough to have access to a gateway server which then allows me to tunnel through to the server on the other side of the firewall. The situation looks something like this:
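Roughly sketched (using the placeholder hostnames from the rest of this post):

```
laptop (home) ---ssh---> gateway server ---ssh---> destination server
                         (publicly reachable)      (behind the firewall)
```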
Although one could access the destination server by ssh-ing into the gateway
server and then ssh-ing into the destination server, the goal here is to
actually run ssh
commands locally as if one had direct access to the
destination server. In particular, we want to run an Ansible playbook for
the destination server, and for that we need a direct ssh
connection to the
destination server. Therefore, we need to build a tunnel from the local
laptop system via the gateway server to the destination server.
We need to create the tunnel in two parts: first we make a tunnel from
localhost
(my laptop in this case) to the gateway server, then we build a
tunnel from the gateway server to the destination server. Doing this will
allow us to run ssh
commands on localhost
as if we had direct access to
the destination server; the trick is to make the network packets go through
the tunnel. The inspiration for this solution came from an answer
to the Stack Overflow question about creating an SSH tunnel via multiple
hops.
Please note that there is more than one way to achieve this; the solution presented here is the one I prefer to use because the connections between hosts are secured.
Going about things the long way
Let’s go through the steps in detail so that we can see how all of the individual parts work.
Open a terminal session specifically for the tunnel (the ssh
process and
connection for the tunnel need to stay open for the period of time that
you’re using it) and open the tunnel from localhost
to the gateway server:
$ ssh -A -L 1234:localhost:1234 <username>@gateway-server.example.com
The -A
flag ensures that the authentication agent connection is forwarded
to the remote host. This will allow connections from localhost
to the
destination server to occur without needing to enter a password for each
connection attempt.
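Before relying on agent forwarding, it's worth a quick sanity check that your key is actually loaded in the local agent, since `-A` can only forward keys the agent holds:

```shell
# List the keys currently held by the local ssh-agent; if none are
# loaded, agent forwarding won't help and ssh will fall back to
# prompting for credentials.
ssh-add -l || echo "no keys loaded -- run ssh-add first"
```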
The -L 1234:localhost:1234
part of the command connects the local port
1234 on localhost
to port 1234 on the gateway server. We use a port number above 1023 here
because such ports are unprivileged and hence able to be used by “normal”
users. The “L” in the -L
flag stands for local port forwarding: connections to the given local port
are forwarded to the specified host and port on the remote side. There’s
also a -R
flag, which sets up forwarding in the opposite direction (a port on the
remote host is forwarded back to the local machine).
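As an aside, here's what a remote forward might look like. This command isn't needed for our setup; it's just an illustration of the opposite direction:

```shell
# Remote forward: the gateway listens on its own port 2222 and forwards
# connections back to port 22 on the machine that ran this command.
ssh -R 2222:localhost:22 <username>@gateway-server.example.com
```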
Because my username on the gateway server differs from the one I
use on my laptop, I also have to specify my username when connecting to the
gateway server (hence the <username>@
bit).
Running this command will open a login session on the gateway server; now we’re ready to create the tunnel from the gateway server to the destination server.
Note that you should have already copied your SSH public key to the remote
server(s) by using ssh-copy-id
, e.g.:
$ ssh-copy-id <username>@gateway-server.example.com
We now create a tunnel from port 1234 on the gateway server to port 22 on
the destination server in order to complete the tunnel from localhost
to
the destination server:
$ ssh -A -L 1234:localhost:22 <username>@destination-server.example.com
Running this command will open a shell on the destination server. We’re now
linking port 1234 on the gateway server (which is now our localhost
because we’ve already logged into the server) to port 22 on the destination
server. The main advantage of setting up these two tunnels is that the
connection is secured the whole way (apart from the fact that anyone on the
gateway server could use the connection from port 1234 to port 22 on the
destination server, which might be problematic depending upon the use case;
see the comment in an SSH tunnel via multiple
hops).
A diagram should hopefully make the situation clearer:
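Roughly, the two chained tunnels look like this:

```
laptop                      gateway server              destination server
port 1234 ---(tunnel 1)---> port 1234 ---(tunnel 2)---> port 22 (sshd)
```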
Now we can connect directly to the destination server from our laptop at
home by ssh-ing to port 1234 on localhost
on our laptop. Open a new
terminal window (we need the tunnel session to stay open in the previous
terminal window) and enter:
$ ssh -p 1234 <username>@localhost
and voilà! We can log in to the destination server directly from the laptop.
This connection works because all packets sent to port 1234 on our local machine are forwarded over the tunnel to the gateway server and then forwarded automatically over the open tunnel from the gateway server to the destination server. Yay!
Note that it might be necessary to specify a username for the connection if
your username on the destination server differs from the one you use
on your local system. The <username>
specified in the above command is
the username for access to the destination server.
Now that we know the details of how to create a tunnel via a gateway server, can we streamline the process a bit? It turns out that yes, we can!
Chaining the tunnel command
The first question that comes to mind is: do we really need to open the first
shell session on the gateway server? The answer to this is no: one can
chain the ssh
calls like so:
$ ssh -A -L 1234:localhost:1234 <username>@gateway-server.example.com \
ssh -A -L 1234:localhost:22 <username>@destination-server.example.com
where I’ve split the command over two lines to aid readability.
Although this will still log you in to the remote server, you’ll find that you won’t have a shell, mainly because of this error:
Pseudo-terminal will not be allocated because stdin is not a terminal.
This issue can be fixed by using the -tt
option to force assignment of a
tty to the connections (the tip for this solution was found, as usual, on
Stack Overflow: Pseudo-terminal will not be allocated because stdin
is not a terminal):
$ ssh -A -tt -L 1234:localhost:1234 <username>@gateway-server.example.com \
ssh -A -tt -L 1234:localhost:22 <username>@destination-server.example.com
The next question that comes to mind is: do we really need to open a shell
session when building the tunnel? And the answer to this question is also
no: by using the -N
flag, we can avoid running a command and hence avoid
creation of a login shell (which also means we don’t need the -tt
option
anymore):
$ ssh -A -L 1234:localhost:1234 <username>@gateway-server.example.com \
ssh -A -N -L 1234:localhost:22 <username>@destination-server.example.com
Note that we only need the -N
option for the final ssh
command
(i.e. from the gateway server to the destination server); we can’t use it
in the first ssh
command (from the local system to the gateway server) anyway, because that
one still has a command to run: the second ssh.
Note also that there won’t be any output from the terminal to let you know that the tunnel has been created. Nevertheless, you can just run
$ ssh -p 1234 <username>@localhost
and see if you can log in to the destination server from your local system.
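If you'd like a quicker check than attempting a full login, you can test whether anything is listening on the local end of the tunnel. A small sketch (assuming the iproute2 ss tool is available; lsof or netstat would work just as well):

```shell
# check_tunnel_port PORT: report whether anything is listening on the
# given local TCP port (i.e. whether the local end of the tunnel is up).
check_tunnel_port() {
    port="$1"
    if ss -tln 2>/dev/null | grep -q ":${port} "; then
        echo "port ${port}: listening"
    else
        echo "port ${port}: not listening"
    fi
}

check_tunnel_port 1234
```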
Can we simplify this further? Yes, a bit more, as it turns out.
Setting up the tunnel configuration in the SSH config file
It’s possible to move some of the command line arguments into your
~/.ssh/config
file. For instance, the hostname, username, agent forwarding
and local port forwarding can all be specified like so:
Host gateway-tunnel
Hostname gateway-server.example.com
User <username>
ForwardAgent yes
LocalForward 1234 localhost:1234
where I’ve given the command an alias of gateway-tunnel
, which one would
run like this:
$ ssh gateway-tunnel
But this only gets us to the gateway. Let’s get to the destination server
in one go by using the ProxyCommand
option:
Host destination-tunnel
Hostname destination-server.example.com
ProxyCommand ssh <gateway-username>@gateway-server.example.com -W %h:%p
User <destination-username>
ForwardAgent yes
LocalForward 1234 localhost:22
Now we can run
$ ssh destination-tunnel
and we’ve opened the tunnel to our destination server, which we can access via port 1234 on our local system as before:
$ ssh -p 1234 <username>@localhost
However, we can go one step further, because as of OpenSSH 7.3, there now
exists the ProxyJump
option (which can be specified via -J
on the command
line). Therefore, one could also do this to open the tunnel:
$ ssh -L 1234:localhost:22 -J <gateway-username>@gateway-server.example.com \
<destination-username>@destination-server.example.com
which can be made into an alias in ~/.ssh/config
like so:
Host jump-tunnel
Hostname destination-server.example.com
ProxyJump <gateway-username>@gateway-server.example.com
User <destination-username>
ForwardAgent yes
LocalForward 1234 localhost:22
and hence we can open the tunnel with:
$ ssh jump-tunnel
Note that we’re back to executing a shell and logging in to the destination
server. Can we avoid that with an option in the ~/.ssh/config
? It turns
out that that’s not directly possible. However, we can go one step better
and (mis)use the RemoteCommand
option:
Host jump-tunnel
Hostname destination-server.example.com
ProxyJump <gateway-username>@gateway-server.example.com
User <destination-username>
ForwardAgent yes
LocalForward 1234 localhost:22
RemoteCommand echo "Tunneling through gateway-server.example.com; use Ctrl-C to terminate"; sleep infinity
When running the alias ssh jump-tunnel
, we’ll now get the output
Tunneling through gateway-server.example.com; use Ctrl-C to terminate
to let us know that a tunnel has been created and how to terminate it once we no longer need it. The hint for this solution came from a comment on an answer to the Stack Overflow question Setup -N parameter in SSH Config File. I find this quite nice: we now get feedback that the tunnel has been set up and is running.
Running Ansible playbooks through the tunnel
Now it’s possible to do what I set out to do in the beginning: run my Ansible playbooks on the destination server without requiring a direct connection to it.
I open the tunnel in one terminal:
$ ssh jump-tunnel
Tunneling through gateway-server.example.com; use Ctrl-C to terminate
and ensure that I have an SSH alias for our destination server in my
~/.ssh/config
:
Host dest-server-local
Hostname localhost
User <username>
Port 1234
and in a separate terminal session, I run the Ansible playbook:
$ ansible-playbook destination-server-config.yml --ask-become-pass \
--limit dest-server-local --diff --check
where I’m using the hostname dest-server-local
rather than the full
hostname destination-server.example.com
because we can’t get to it
directly; we can only access it via the tunnel, hence we use the name of the
SSH alias.
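For the --limit to match, the Ansible inventory also needs to contain a host called dest-server-local. A minimal sketch, assuming an INI-style inventory (the group name here is just an example):

```
[destination_servers]
dest-server-local
```

Since the SSH alias in ~/.ssh/config supplies the real hostname, port and username, the inventory entry itself can stay this bare.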
And that works as I’d hoped. Nice!
Well now, this is embarrassing!
I first stumbled across the ProxyJump
option while writing up this blog
post and decided to describe its use as it might come in handy for
someone. Well, it turns out that ProxyJump
completely removes the need
for the complicated tunnel setup! Oops! If we make an SSH alias for the
jump connection:
Host dest-server-local
Hostname destination-server.example.com
ProxyJump <gateway-username>@gateway-server.example.com
User <destination-username>
ForwardAgent yes
then we have the SSH connection required for the Ansible playbook to run directly via the jump connection! This is much simpler! We don’t need to open a tunnel in a separate window; we can just run the playbook directly:
$ ansible-playbook destination-server-config.yml --ask-become-pass \
--limit dest-server-local --diff --check
Wrapping up
When using SSH, just like in Perl, there’s more than one way to do it! It’s good to know about how to create tunnels between various servers if one needs to simulate direct access even if the destination is behind a firewall. It’s also good to know that sometimes there’s a much simpler solution available.
Support
If you liked this post and want to see more like this, please buy me a coffee!