Securing a server

How to confine an intrusion and limit the damage

By Rainer Wichmann rainer@nullla-samhna.de    (last update: Sep 11, 2008)

Virtual private servers (VPSs) and dedicated servers fill the gap between shared hosting and your own datacenter. This article aims to show how to secure such a system, with an emphasis on efficiency rather than paranoia. Suggestions for improvement are welcome.

Securing a VPS or a (self-managed) dedicated server is a challenge that is subject to specific constraints, such as:

  • You have no physical access to the machine (although you are hopefully provided with some sort of 'rescue console').
  • You don't have the resources and personnel to constantly monitor the machine (after all, if you had, you probably would have your own datacenter ...).
  • There's probably only one user, and all she does is administer the server.

Software always has bugs, and security is never absolute. Therefore, after discussing proactive security, i.e. methods to secure the server, the article will also present some ideas on intrusion resistance — i.e. limiting the potential damage in the event of a successful intrusion.

Securing your SSH access

Use public key authentication

There are plenty of ways to secure ssh against brute-force attacks. But if you have no users except one or a few administrators, the simplest solution is to allow login with RSA/DSA keys only (see here for details). After generating the keys and putting them in the proper place, you should disable all other forms of authentication in /etc/ssh/sshd_config:

HostbasedAuthentication no
RhostsRSAAuthentication no
PasswordAuthentication no
RSAAuthentication no
PubkeyAuthentication yes
Protocol 2

Then restart the SSH server with service ssh restart. Do not close your active SSH session until you have verified that you can still log in!
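If you do not have a key pair yet, a minimal example of creating one on your local machine and installing it on the server (do this before disabling password authentication; the user and host names below are placeholders):

sh$ ssh-keygen -t rsa -b 2048                                    # creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
sh$ ssh-copy-id -i ~/.ssh/id_rsa.pub admin@server.example.org    # appends the public key to ~/.ssh/authorized_keys on the server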

Root login

Many articles on the web advocate that you should not log in as root, and set 'PermitRootLogin no' in sshd_config. The advice is to log in as a non-privileged user and then switch to root, which requires typing a password. However:

  • When you type the root password, each keystroke is sent as a separate packet to the server. This paper describes a possible way to employ this fact in a timing attack on the password. However, there is some debate whether this attack is feasible in real life or just academic.
  • This is not a desktop machine. Typically, you log in only for administrative purposes, so you need to switch to root anyway. And if you use public key authentication (see above), there is no danger of anyone brute-forcing your root password.
  • If you have more than one admin, and want accountability, you can set 'LogLevel VERBOSE' in sshd_config, which will log the fingerprint of the key used for login (see the example below). Just let each admin use her own key to log in as root.
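For instance, a corresponding sshd_config fragment might look like this; 'without-password' simply ensures that root can only log in with a key, even if password authentication were ever re-enabled:

PermitRootLogin without-password
LogLevel VERBOSE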

Proactive security

Use secure software, and remove what you don't need

If you really need a web GUI to administer your server, this article is probably not for you. First, it's a security risk, and second, it limits what you can do. If your provider has installed one by default, you might want to disable it.

Postfix has a good reputation as a secure mail server. For IMAP/POP3, you may want to have a look at dovecot.

While most websites run on apache, there are more lightweight alternatives; Lighttpd has become rather popular. If you want (or need) features that are only provided by apache, you may want to review which modules are enabled by default, and disable all that you don't need. On Debian/Ubuntu, an apache module is disabled by removing the corresponding symlink in /etc/apache2/mods-enabled (the links point to /etc/apache2/mods-available).
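On Debian/Ubuntu, the a2dismod helper removes these links for you; for example, to see what is enabled and then disable the (hypothetically unneeded) status module:

sh# ls /etc/apache2/mods-enabled/
sh# a2dismod status
sh# /etc/init.d/apache2 force-reload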

Don't have compilers or debugging software installed! If there's a debugger available to the intruder, she can attach to any process running under the same UID, and modify its actions. On Linux, the (probably incomplete) list of software not to have includes gcc, gdb, strace, and lsof.
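To check which of these are present, and to remove them (package names as used on Debian/Ubuntu):

sh# dpkg -l gcc gdb strace lsof | grep '^ii'
sh# apt-get --purge remove gcc gdb strace lsof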

Keep up to date with security updates

In Debian/Ubuntu, in order to keep up with security you need to run apt-get update (which updates the package list), followed by apt-get upgrade (which then updates the installed packages).

Automating updates might help to keep your system secure, but there is a risk that a broken package will break your system; only do this if you are running a stable distribution. In order to automate Debian security updates, you would first comment out all repositories in /etc/apt/sources.list except for the security repository (to avoid any non-critical updates). Then, run dpkg-reconfigure debconf to set the debconf frontend to 'Noninteractive'. Afterwards, you can add a line to /etc/crontab to run the update each night at (say) 4:17, like:

17 4    * * *   root    /usr/bin/apt-get update && /usr/bin/apt-get --yes upgrade 
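For illustration, the stripped-down /etc/apt/sources.list would then contain only the security repository, e.g. for Debian 'etch' (adjust the codename to your release; on Ubuntu the security repository looks different):

# /etc/apt/sources.list -- security updates only
deb http://security.debian.org/ etch/updates main contrib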

Only expose services that are really needed

Not all services running on your server may need to be exposed to the internet. E.g. you may need a database server (say, mysql or postgresql) for your applications, but you probably don't need to have it listen on the external interface. You may want to add this option in my.cnf (actually, on Debian/Ubuntu this is the default):


[mysqld]
bind-address=127.0.0.1

Likewise, if you install a webmail interface for reading mail, you will need an IMAP server. However, if you normally use POP3 to read your email, there is no need to expose the IMAP server to the world; it can listen on localhost only (dovecot example):


protocol imap {
  listen = 127.0.0.1:143
  ssl_listen = 127.0.0.1:993
  ..
}
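In either case, you can verify which services actually listen on which addresses; anything bound to 0.0.0.0 (or your public IP) rather than 127.0.0.1 is reachable from the outside, unless blocked by a firewall:

sh# netstat -ltunp     # -l listening, -t TCP, -u UDP, -n numeric, -p owning process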

Do not run public services with superuser privileges

While it doesn't hurt to mention this point, modern Linux distributions already take care to run standard services (such as the web or mail server) as non-privileged users.

Avoid third-party software, if possible

If you stick to packages provided by your Linux distribution, you can rely on your distributor to provide security updates. If you install third-party software, you need to track it yourself for security issues. With limited resources at hand, this can easily turn into a major headache.

Debian vs. Ubuntu vs. RedHat

Debian has a solid reputation and countless software packages, but packages are compiled without the stack-protector option of the gcc compiler (which protects against buffer overflows on the stack).

Newer Ubuntu versions (at least 7.10) use the stack-protector option. On the other hand, some packages required to run a server are in the 'universe' part of the repository (e.g. spamassassin - who wants to run a mail server without spam protection?), meaning that the Ubuntu security team does not guarantee security updates for them (though for popular software like spamassassin, updates will most likely be provided by someone).

RedHat has SELinux, but it is rumoured that many users can't get along with it, and choose to disable it - thus killing the only good reason to prefer RedHat in the first place...

Intrusion resistance — limiting the potential damage

Even with the best precautions, it may still happen that your server gets cracked, and a malicious intruder gets access to it. In the following, ways to contain the intrusion and limit the damage will be discussed.

Intruders are usually not interested in destroying your server (which would reveal the break-in and render the server useless to them), but rather in staying hidden and abusing your server for as long as possible. Typically, they will first download some software and data from the net, and then proceed with activities like:

  1. try to elevate their privileges (i.e. obtain root privileges),
  2. modify your web site to infect visitors with trojans,
  3. use your server as a proxy to control other hijacked machines,
  4. use your server to distribute illegal content, or
  5. send spam from your server.

All these actions can be denied in a system that is properly set up, as long as the intruder cannot obtain root privileges.

Set up a firewall to block outgoing connections

You may be used to thinking of a firewall as something that fends off attacks. This is a useful role in a corporate environment, where there are many machines behind the firewall that need to run services which should not be exposed to the external network. However, here you have one single server; all open ports are open because you want to provide these services, and all other ports are closed anyway.

However, in the event of an intrusion, a firewall could be very helpful to limit what the intruder can do:

  • First, the firewall can prevent the intruder from setting up additional services. Any user may start services on non-privileged ports, so without a firewall an intruder could e.g. offer downloads on a high port, stealing bandwidth that will be billed to you.
  • Second, the firewall can prevent the intruder from making outgoing connections. This has twofold use: it denies the intruder the capability to download additional software or data to your server (e.g. rootkits/exploits for trying to elevate her privileges), and it stops one avenue of sending spam from your machine.

Linux comes with a stateful packet filter called iptables. As an example, this firewall script will use iptables to set up a firewall that blocks incoming connections on all but a few ports, and blocks most outgoing connections. It uses the owner module of iptables to make sure that outgoing connections can only be initiated by users that have a legitimate reason to do so. E.g. the 'postfix' user (the mailer daemon) certainly should be able to make outgoing connections to port 25 (SMTP), while any other user usually has no reason to do so (if some other user wants to send mail, it's always possible to submit it to the mailer via sendmail(1)).
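The core idea of the owner match can be sketched in a few rules; this fragment only illustrates the principle and is not a substitute for the full script:

# drop all outgoing packets that are not explicitly allowed
iptables -P OUTPUT DROP
# allow loopback traffic and replies belonging to established connections
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# only the 'postfix' user may initiate outgoing SMTP connections
iptables -A OUTPUT -p tcp --dport 25 -m owner --uid-owner postfix -j ACCEPT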

Deny other ways to upload software to your server

Making an outgoing connection in order to fetch data from (e.g.) an HTTP/FTP server is not the only way for an intruder to get her software into your system. Other options are:

a) Sending data via email

If your mail server is set up to accept mail for local users listed in /etc/passwd, it would be possible to send email to the webserver account. With postfix, you can avoid this by not having unix:passwd.byname in the list of local_recipient_maps (see /etc/postfix/main.cf), or by routing mail for the webserver user to /dev/null in /etc/aliases:

www-data:       /dev/null
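After editing /etc/aliases, rebuild the alias database so that the change takes effect:

sh# newaliases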
b) Sending data as file attachments

If some web application allows file attachments, the intruder could attach arbitrary data to a posting. This is not a problem for webmail (since it requires authentication before sending), but other applications (e.g. bug tracking software) may allow attachments from anyone. Unfortunately, it is almost impossible to block this route, because (i) even if the web app checks the file attachment, the check can only occur after the file is on disk, and thus "up for grabs" for the intruder, and (ii) in general, it's impossible to verify that the file is harmless (the intruder could e.g. use simple uuencode/base64, or even steganography, to hide a malicious payload).

Audit file system permissions to prevent malicious modifications

Chrooting a server is popular advice. However:

  • it is cumbersome, unless supported by the distribution,
  • it does not prevent an intruder from modifying content within the chroot jail,
  • it does not significantly limit the intruder if there are scripting languages (perl/php/python/ruby) in the chroot jail, as is frequently required for web applications, and
  • it overlooks the fact that Linux/Unix already provides a sandbox of sorts through the file permissions granted to different users and groups, which is actually more useful for the task.

For these reasons, chroot is interesting only for services that need a very limited environment with a small number of tools available. E.g. the postfix mail server is comprised of many small services, each of which only performs a limited task. Most of these can very easily be chrooted with a switch in the master.cf configuration file (see the example below).
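For illustration, the chroot switch is the fifth column in /etc/postfix/master.cf (see master(5)); a 'y' there runs the service chrooted to the postfix queue directory, e.g.:

# service  type  private unpriv  chroot  wakeup  maxproc command + args
smtp       inet  n       -       y       -       -       smtpd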

a) Audit file permissions to prevent malicious modification of web pages

As an example, let us consider the web server. Even if the server itself has no (known) vulnerabilities, you may have web applications running that are vulnerable. Exploiting such vulnerabilities will give an intruder the capability to run code under the UID of the webserver, and may allow her to modify your content in order to infect visitors with trojans.

Usually, the web server does not need to own any directories or files in the web root, nor does it need write permission for directories or files in the web root, be it static pages or scripts. Thus, given proper file permissions, an intruder who has exploited some vulnerable web application and consequently has the UID of the webserver should not be able to modify static pages or web applications/scripts. In order to find directories or files owned by, or writeable by, the web server, you can use commands like the following two:

sh# find / -user www-data | xargs ls -ld
sh# find / -group www-data -perm /g+w | xargs ls -ld

Content and scripts should be owned by root and be group readable (but not writeable) by the webserver (remember that directories need execute permission to access their content). If files need to be writeable for some reason, they should be outside the web root.
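As a sketch, assuming your web root is /var/www (adjust as needed, and exempt any directories that genuinely must be writeable by the webserver):

sh# chown -R root:www-data /var/www
sh# chmod -R u=rwX,g=rX,o= /var/www    # capital X sets the execute/search bit on directories (and on files already marked executable)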

Interesting tidbits about file permissions:

  • Check whether writeable files need to be readable. Writing/appending to files needs only write permission, not read permission, so if you store sensitive data in a flat file, you can do chown root:www-data file && chmod 720 file. If a file is append only, consider using chattr +a to enforce this.
  • Compiled executables don't need read permission to be executable; execute permission is sufficient. Note that this does not apply to scripts, which must be readable and executable.

b) Check that web applications do not trust their database

Web applications, especially ones that provide for user generated content (e.g. a forum, or a wiki), represent a special case. These applications need write access to some database that stores the content (regardless whether it's a real database or just flat files). This implies that an intruder having the UID of the webserver can write arbitrary content to that database.

However, with proper file system permissions, the intruder cannot modify the application that reads the database and generates dynamic web pages from the content. This implies that the potential damage can be limited as long as the application does not blindly trust its own database. I.e. a properly written web application should treat its own database as untrusted, just like content submitted by visitors, and perform the same level of input validation. In this case, an intruder may still be able to alter content, but will not be able to place malicious code (trojans that infect visitors) into the web pages dynamically generated by the application.

Deny mail submission to block outgoing spam

In the firewall script discussed above, measures are already taken to prevent an intruder from sending mail directly from your server (by rejecting outgoing connections to port 25 by users other than postfix). However, the intruder can still use the sendmail(1) utility to submit mail locally to your mail server, which will then dutifully deliver it.

To eliminate this problem, first you may want to configure the authorized_submit_users option of postfix, which lets you define which local users may use the sendmail(1) interface. Typically, this would be the mailing list manager (if you have one running), root (for emails from the cron daemon), your log checking application (e.g. the logcheck package), and possibly the webserver (for email notifications sent by web applications):

authorized_submit_users = root, logcheck, mailman, www-data

Filtering mail from the webserver UID

Unfortunately, many web applications need to be able to send mail (e.g. notifications sent by a bug tracker), while at the same time web applications are a major source of vulnerabilities. Hence it is quite likely that an intruder will obtain the UID of the webserver, and it would be useful to limit the ability of this user to send mail as much as possible.

Our goal is to limit the webserver such that it will only be able to send to specific destinations determined by you, or only to local users (the local recipient could be a mailing list, thus this does not limit your ability to let the webserver send mail to external users determined by you).

The postfix mail server can be viewed as a system with several different frontends for accepting mail from different sources, and backends for delivering mail to different destinations. In particular, mail submitted locally (using sendmail(1)) is accepted by the pickup(8) daemon, and mail sent to external destinations is delivered by the smtp(8) daemon.

a) How to recognize the UID of the local sender

This is the easy part: postfix conveniently inserts a 'Received' header that includes the UID of the sender, e.g.:

Received: by host.domain.tld (Postfix, from userid 33)
	id BEF5A1C2A60; Sat, 29 Dec 2007 21:38:18 +0000 (UTC)
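On Debian/Ubuntu, userid 33 is normally www-data; a UID from such a header can be resolved with:

sh# getent passwd 33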

b) Use postfix to filter outgoing mail from a particular UID

According to the online documentation, as of version 2.5 postfix has the options smtp_body_checks and smtp_header_checks for the smtp(8) daemon, which would allow filtering outgoing mail with postfix itself. However, at the time of this writing, postfix 2.5 is not included in Debian (stable) or Ubuntu.

c) Use an external content filter

To make sure that locally submitted mail (only) is filtered, you can configure a filter for the pickup(8) daemon in master.cf, like this:

pickup    fifo  n       -       -       60      1       pickup
  -o content_filter=smtp:127.0.0.1:10027
127.0.0.1:10026 inet    n       -       y       -       10      smtpd
  -o content_filter=
  -o local_recipient_maps=
  -o myhostname=localhost.mydomain.com

This expects a filter which can talk SMTP, running on port 10027, which receives outgoing mail and passes it on to postfix on port 10026. Of course, you have to have the filter running first! This perl script (PGP signature) is adapted from the spampd proxy, which was written as a spam filtering SMTP proxy. It lets you define allowed destinations for sender UIDs, as well as a configurable number of arbitrary destinations per sender. To use it, first unpack the gzipped tar archive, cd into the directory, and run make install.

Next, adapt the configuration file /etc/restrictpd.conf to your needs (see man restrictpd, or run 'pod2man restrictpd | man -l -' if you have not installed it yet). Also, check /etc/default/restrictpd for settings (e.g. on which port to listen, and to which port to forward).

Finally, create a system user restrict (no login, no shell) with home directory /var/spool/restrictpd and start the script:

sh# adduser --system --home /var/spool/restrictpd --gecos restrictpd --group restrict
sh# /etc/init.d/restrictpd start

Logging and auditing

Your body is equipped with a firewall - the skin - and just like your server, it is continually under attack. Does your body care to log every unsuccessful attack by any random virus? Definitely not. It does care, however, about successful intruders, which seems a pretty reasonable strategy. Of course, if you are the head of a big IT department, "we need to investigate terabytes of logfiles" is a nice argument for requesting some more positions, but then this article is not for you. So the suggested strategy would be:

  • Install software for filtering your logs, preferably with a whitelist approach — eliminate what you know to be irrelevant, rather than trying to pick out what is relevant. The logcheck package can be used to scan logfiles, filter out uninteresting messages, and mail the remaining messages to the administrator (it is best to use an external mail account for receiving logcheck messages — option SENDMAILTO in /etc/logcheck/logcheck.conf). Install 'logcheck-database' for a set of rules to eliminate irrelevant messages. Check the files in /etc/logcheck/ignore.d.server for examples of how to add custom rules (all rules in all files are checked, so just add your own file with your rules).
  • Ignore external attacks that are obviously unsuccessful. There's no point acting on them - they come from hijacked machines in botnets, their owners are clueless, and the next attack will come from some completely different machine:
    • the countless brute force ssh login attempts on your server, which are fruitless if you use public-key authentication,
    • incoming connection attempts rejected by your firewall (you may have noticed that the simple firewall script shown above does not log them - it's just a waste of disk space),
    • the spam rejected by your mail server.
  • Watch out especially for all activity on your server that should not happen:
    • outgoing connections rejected by the firewall,
    • attempts to send mail from your machine by users that should not send mail,
    • unusual commands invoked by the UIDs/users of your public services (postfix, dovecot, www-data, ...) — use process accounting to log commands. On Debian/Ubuntu, install the acct package. Afterwards, lastcomm username will give the list of commands last executed by user username. You could e.g. modify the log rotation script /etc/cron.daily/acct to investigate the log and mail anything suspicious just before log rotation. This simple perl script gives an example of how you can check the process accounting log for unusual activity; a cruder shell sketch follows below.
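For instance, a crude check along these lines might look like the following shell fragment (assuming www-data should only ever run apache2 and PHP; the pattern is just an example, and a local mail(1) command must be available):

# report commands run by www-data that are not in the expected set
suspicious=$(lastcomm www-data | grep -vE '^(apache2|php)')
[ -n "$suspicious" ] && echo "$suspicious" | mail -s "unusual www-data activity" root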
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Germany License.