SSH audit and secure settings

So, there’s a tool called ssh-audit, which is like the SSL Labs of SSH. The first run against some servers showed a whole bunch of “fails” due to the use of weak key exchange algorithms, host key algorithms and MACs (Message Authentication Code algorithms).

After a bit of fiddling around, you can get a much more secure setup using the config below:

KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1

HostKeyAlgorithms ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-ed25519

MACs umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com

There are still a couple of warnings around the use of SHA1 and a potentially weak modulus size with SHA256, but it’s a lot better than the default configuration.

After adding the lines to the config file, you can test the config with:

sshd -t

then restart sshd and voilà! You should have a much more secure SSH server.

I put the above into a handful of tasks in an Ansible playbook:

- name: Ensure SSH settings are in config file
  tags: ['ssh-audit','ssh']
  become: true
  blockinfile:
    path: /etc/ssh/sshd_config
    block: |
      KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
      HostKeyAlgorithms ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-ed25519
      MACs umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com

- name: Verify settings are not going to break SSHd
  tags: ['ssh-audit','ssh']
  become: true
  command: sshd -t

- name: Restart SSHd
  tags: ['ssh-audit','ssh']
  become: true
  systemd:
    name: sshd
    state: restarted

- name: Run the ssh-audit against the server
  tags: ['ssh-audit','ssh']
  connection: local
  shell: "./ssh-audit.py -n -b -l warn {{ ansible_ssh_host }}"
  register: sshauditoutput

- name: Output the ssh-audit results
  tags: ['ssh-audit','ssh']
  debug:
    msg: "{{ sshauditoutput.stdout_lines }}"

How to print out SSH key fingerprints

This is useful for comparing your local keys against the fingerprints shown by GitHub or reported by an SSH client.

To print out a key’s fingerprint in the default format (SHA256 on modern OpenSSH):

ssh-keygen -l -f ~/.ssh/foobar.id_rsa

To print out keys in MD5 format:

ssh-keygen -l -E md5 -f ~/.ssh/bazbarn.id_rsa

Remember to change the filename as required.
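As a self-contained sketch, you can try both forms on a throwaway key rather than one of your real ones (the temporary file name here is generated by mktemp, not an assumption about your setup):

```shell
# Generate a disposable RSA key and print its fingerprint both ways.
key=$(mktemp -u)
ssh-keygen -t rsa -b 2048 -N '' -q -f "$key"
ssh-keygen -l -f "$key.pub"           # default hash (SHA256 on modern OpenSSH)
ssh-keygen -l -E md5 -f "$key.pub"    # MD5 colon-hex, the older style
rm -f "$key" "$key.pub"
```

Both commands read the same public key; only the hash used for display changes.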

Get the SSH fingerprint of an SSH server

If you’ve ever tried to connect to a new server over SSH, you’ll have seen a message similar to the following:

# ssh iridium
The authenticity of host '[foo]' can't be established.
RSA key fingerprint is a2:b9:c5:d3:e5:fc:a6:b3:c7:da:e1:f0:ac:b9:c9:d5.
Are you sure you want to continue connecting (yes/no)?

Then you may have wondered, “Well, what *is* the fingerprint of my server supposed to be?”. To authenticate the host, you should run the command below (at SSH server install time, or over a channel you already trust) to get your host’s SSH fingerprint:

# ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub
2048 a2:b9:c5:d3:e5:fc:a6:b3:c7:da:e1:f0:ac:b9:c9:d5 root@foo (RSA)

You should then be able to compare the two fingerprints to determine whether the server you’re connecting to is in fact the one you’re trying to connect to and isn’t some sort of honeypot.
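You can also compute the fingerprint from the client side, as a command fragment: ssh-keyscan fetches the host’s public key over the network, and ssh-keygen fingerprints it from stdin (the `-f -` stdin form needs a reasonably modern OpenSSH; the hostname is the example from above, and the result is only as trustworthy as the network path between you and the server):

```shell
ssh-keyscan -t rsa foo 2>/dev/null | ssh-keygen -lf -
```

Run on a trusted network, this gives you the same fingerprint to compare against the server-side output.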

Command to delete a particular host from known_hosts

Occasionally (especially in the cloud world, where instances are cattle), the SSH fingerprint for a host changes. When this happens, you will see a warning.

If the warning is expected, the usual remedy is to delete the offending key from your “known_hosts” file (typically found at ~/.ssh/known_hosts). However, when you need to do this across a bunch of machines and don’t know which line the host will be on on each one, the following command might be useful:

sed -i -e '/\[webserver-03\.example\.com\]:2222/d' ~/.ssh/known_hosts

It deletes any line matching the host “[webserver-03.example.com]:2222” in the default “known_hosts” file. The brackets and dots are escaped because sed would otherwise treat them as regular-expression metacharacters (an unescaped [ starts a character class, which would match the wrong lines).
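As a self-contained sketch, here is the deletion in action against a made-up known_hosts file (hostnames and key blobs are fake):

```shell
# Build a throwaway known_hosts with two fake entries, delete one entry
# by pattern, then show what's left.
khosts=$(mktemp)
cat > "$khosts" <<'EOF'
[webserver-03.example.com]:2222 ssh-ed25519 AAAAfakekeyone
webserver-04.example.com ssh-ed25519 AAAAfakekeytwo
EOF
# Escape [ ] and . so sed matches them literally:
sed -i -e '/\[webserver-03\.example\.com\]:2222/d' "$khosts"
cat "$khosts"    # only the webserver-04 line remains
rm -f "$khosts"
```

Note that `sed -i` with no suffix is GNU sed; BSD/macOS sed wants `-i ''`. Modern OpenSSH should also do the same job for the real file with `ssh-keygen -R '[webserver-03.example.com]:2222'`.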

Setting up a VPS – Part 1 – Hosting, SSH Security and ntp

Got a VPS from an outfit here in NZ called HostingDirect. Opted for the Ubuntu 64-bit edition with the Small VPS package (128MB RAM, 10GB disk, 1 IP address). Also got domain registration (the cheapest in NZ) and hosting with them, which comes with free website hosting, a nice bonus.

The configurable options in the VPS setup allowed you to select LAMP setup for $150, Email server (SMTP, POP3, IMAP) for $60 and Security Tools for $45. I thought these prices were a bit steep, especially since the Small VPS package only cost $25/month after GST. But then I reminded myself what I charge for setting up such systems and it made sense. I didn’t opt for these services, preferring to set them up myself.

So the VPS was provisioned on the afternoon of the 28th, but I didn’t have time to start configuring it until that night when I came home. By the time I started having a look at it, there were already signs of brute-force attacks on the SSH server. So the first thing I did was create a new non-root user and add him to the ‘admin’ group, which was already set up in the sudoers file (mimicking the typical Ubuntu setup). From there I disabled root SSH login and changed the SSH port to 222. Later I changed the port back to the standard 22 and installed a great new piece of software I found called ‘fail2ban’, which bans login attempts for a period of time based on the number of unsuccessful login attempts.
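fail2ban’s SSH protection is driven by a jail definition; a minimal sketch might look like the following (the option names come from fail2ban’s stock jail.conf, the values are illustrative only, and older releases name the jail [ssh] rather than [sshd]):

```ini
# /etc/fail2ban/jail.local — example values, not a recommendation
[sshd]
enabled = true
port = ssh
# failed attempts before a ban
maxretry = 5
# ban length in seconds
bantime = 600
```

Settings placed in jail.local override the packaged jail.conf, so upgrades won’t clobber them.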

Before sorting out the SSH server and fail2ban, I did the obligatory ‘apt-get update’ followed by an ‘apt-get upgrade’, which all ran fine. I also checked the versions of Ubuntu and the kernel, with the following results:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 8.04.2
Release: 8.04
Codename: hardy

$ uname -a
Linux example.org 2.6.24-23-xen #1 SMP Mon Jan 26 03:09:12 UTC 2009 x86_64 GNU/Linux

So I ended up with the Ubuntu 8.04 LTS 64-bit version, which is exactly what I wanted. Shopping around the NZ VPS sellers, I found that a lot of them offered Ubuntu 7.10, which I found strange. I would have thought more people would prefer the long-term support release; maybe it’s something to do with the stability of each distribution running on Xen.

The next thing to set up was the ntp daemon, which was quite straightforward and only involved adding the line ‘server nz.pool.ntp.org’ to the ‘/etc/ntp.conf’ file and restarting the service.
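For reference, the whole change amounts to this one line (the pool hostname is the one mentioned above):

```
# /etc/ntp.conf
server nz.pool.ntp.org
```

followed by something like ‘sudo /etc/init.d/ntp restart’ (the init script name may differ between distros and ntp package versions).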

The VPS also came with access to XenShell, which is a way to administer your VPS through Xen (kind of like VMware’s server console). I’ve never worked with XenShell before, so I’ll have to look for a good tutorial to figure out how to make use of it.

That’s all for today; it’s late now, and tomorrow I’ll start setting up Postfix and all the necessary extras, a task much better attempted with a clear head.