Secure Infrastructures -- Make It Hard to Penetrate, Hard to Expand

Last week, we looked at infrastructure security from a specifically Unix perspective; this week, we go both wider and deeper, and offer some best practices for management.

Too often, enterprises focus on the gritty details of host-specific security issues rather than overall design, which has tremendous impact on security.


Let's back up a bit and examine the entire set of servers at once. The design of the network as a whole can make a tremendous difference in the end.

In the previous article, we defined three types of servers, which can be further broken down according to needs. They were:

  • Internet-accessible public servers
  • Log-in servers, which allow non-admin users to log in
  • Everything else, such as the MySQL, internal LDAP or NFS servers, which are only reachable from internal networks

Based on how we've defined these servers, the network layout and the firewall rules to apply follow naturally.
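As a minimal sketch of those firewall rules, the following iptables commands segment the three server classes. The subnets, interface roles and port list are hypothetical examples, not a prescription; run them on the firewall that routes between the networks.

```shell
# Hypothetical addressing: 203.0.113.0/24 is the public DMZ,
# 10.0.0.0/24 holds the log-in servers, 10.0.1.0/24 "everything else".
# Requires root on the firewall host.

iptables -P FORWARD DROP                      # deny by default

# Let replies to established connections flow back through.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# The Internet may reach the public servers, but only on published ports.
iptables -A FORWARD -d 203.0.113.0/24 -p tcp -m multiport \
         --dports 80,443 -m state --state NEW -j ACCEPT

# Log-in servers accept ssh (restrict the source to a VPN range if you can).
iptables -A FORWARD -d 10.0.0.0/24 -p tcp --dport 22 \
         -m state --state NEW -j ACCEPT

# "Everything else" (MySQL, LDAP, NFS) is reachable from internal nets only.
iptables -A FORWARD -s 10.0.0.0/24 -d 10.0.1.0/24 -j ACCEPT
iptables -A FORWARD -d 10.0.1.0/24 -j DROP
```

The default-drop policy does most of the work; the explicit final DROP merely documents that nothing outside the internal networks should ever reach the back-end servers.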

The question now is how to manage them so they remain secure. This is the difficult part of the design, because one poor decision can spell disaster.

At a higher-level view, some servers, even in the set of "everything else," are going to be more important than others. One or more servers will need to be "trusted" by all others so automated changes can happen. Account creation, host integrity monitoring a la Tripwire or Samhain, and even configuration file backups must be configured and maintained from a server able to access other servers as root.
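One such automated task, configuration file backups, can be sketched as a script run from the trusted server. The host names, key path and backup root below are hypothetical; the sketch prints the rsync commands as a dry run rather than executing them.

```shell
#!/bin/sh
# Sketch of a master-driven configuration backup, assuming the master holds
# the only ssh key trusted as root on every managed host.

backup_cmd() {
    host=$1
    dest="${BACKUP_ROOT:-/srv/config-backups}/$host"
    # Print the command rather than run it, so the sketch is a dry run;
    # on a real master, call rsync directly instead.
    printf 'rsync -az -e "ssh -i /root/.ssh/master_key" root@%s:/etc/ %s/\n' \
        "$host" "$dest"
}

for h in web1 web2 db1; do   # hypothetical managed hosts
    backup_cmd "$h"
done
```

Because the master initiates every connection, no managed server ever needs credentials for the master, which preserves the one-way trust described above.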

Such a server, hereafter referred to as a "master server," should allow logins only from administrator accounts. Administrators' passwords on the master must be different from those on all other servers, and the master should provide no services to the outside world.

Ideally, there should be zero chance of a compromised public server leading to a compromise of the master. When a seemingly unimportant machine is compromised, the rootkit will often include a trojaned login or ssh binary, which can expose user account passwords. This is why sudo is a bad idea on such hosts: it grants root access with a user's password. A compromised su binary can disclose the root password too, and that is why a dedicated master server is so important.

The master server should be able to ssh into all other servers as root, but only via an ssh key; password-based root logins must never be allowed over ssh. If the master is compromised, then yes, every server is too. That's why the master is a fortress: it runs only the ssh service, and it connects to other machines, never the other way around. Configuration file backups, host integrity databases and the like can all be stored on the master. The idea is to never, ever run su or sudo on a potentially insecure machine; instead, simply ssh in as root from a secure server.
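These policies map onto a few sshd_config directives. The fragments below are a sketch; the admin account names are hypothetical, and `without-password` is the option name meaning "root may log in with a key, but never a password."

```shell
# /etc/ssh/sshd_config on every managed server (sketch):
PermitRootLogin without-password   # root only with the master's key
PasswordAuthentication no          # optional: keys only, for all accounts

# /etc/ssh/sshd_config on the master itself (sketch):
PermitRootLogin no                 # administrators log in as themselves
AllowUsers alice bob               # hypothetical admin accounts
```

Remember to restart sshd after editing, and to verify key-based access from a second session before closing the one you are in.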

Publicly accessible servers are mostly vulnerable due to the applications they run, but login servers also pose problems, and they are vulnerable in many more ways. Your users, be they developers, students or customers, don't care about security. They will run whatever they get their hands on, including SQL servers, PHP-based Web applications with poor security records, and anything else that seems useful. When unknown users make their way in through these holes, you'd better be up-to-date on operating system patches.


Patching the operating system isn't optional, nor is it a leisurely activity. To reiterate: All servers, virtual servers included, must be updated as soon as a security update is available. It is trivial to gain root access on Unix servers that are patched only on weekends, because exploits appear very quickly after the updates that fix them. Then there are brand-new exploits; those are the ones to fear. SELinux, or an appropriately hardened Unix machine, can go a long way toward preventing these exploits from working. If you're unlucky enough to be hit with a zero-day attack, the best you can hope for is an overall secure infrastructure.
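A nightly cron job is the simplest way to keep patching from slipping. The sketch below assumes a Red Hat-style or Debian-style system; the file paths are examples, and on Red Hat the `--security` flag requires the yum-security plugin.

```shell
# /etc/cron.d/security-updates (hypothetical file):
#   0 4 * * * root /usr/local/sbin/apply-security-updates

# /usr/local/sbin/apply-security-updates (sketch):
#!/bin/sh
if command -v yum >/dev/null 2>&1; then
    yum -y update --security       # needs the yum-security plugin
elif command -v apt-get >/dev/null 2>&1; then
    apt-get update -q && apt-get -y upgrade
fi
```

Unattended updates occasionally break things, so log the output somewhere the master collects it and watch for failures.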

Servers that allow user logins are special in more ways than may be apparent at first glance. Assuming shared home directories are required, it may be difficult to support the segmented environment described here. NFS shares exported to insecure clients must be carefully scrutinized — especially when developers or researchers require root on their machines.

Given that NFS by itself provides essentially zero security, granting an uncontrolled client access to NFS shares is quite scary. Essentially, you must assume everything in the shared file system can be compromised, since root on the other side can easily su to whoever owns the files. The old standard workaround is to move these shares to their own partitions, and export only those to the untrusted client. There's Kerberized NFS, which is a pain to set up, and there are alternative file systems that help a bit. AFS comes to mind, but if you need enterprise features, you'd better stay away: it doesn't support snapshots or compatible ACLs, among other things.
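The partition-per-untrusted-client workaround looks like this in /etc/exports. The paths and host names are examples; `root_squash` (the default, stated here for clarity) maps the client's root to the unprivileged nobody user.

```shell
# /etc/exports sketch: untrusted clients get a dedicated partition only.
/export/scratch   untrusted-dev(rw,root_squash,sync)
# Shared home directories stay restricted to controlled login servers.
/export/home      login1(rw,sync) login2(rw,sync)
```

Note that root_squash only blunts the attack: a root user on the client can still su to any ordinary UID and read or modify that user's files, which is exactly why untrusted clients get their own partition.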

We could carry on for pages about best practices and pitfalls. The general idea behind an infrastructure-wide view is to minimize risk in two ways: make the infrastructure hard to penetrate, and hard to expand through once attackers are in. With proper monitoring in place, you should be able to detect an intrusion quickly and put a stop to it.

Whether you have 200 or 2,000 servers, the general principles are the same. They may seem simple, but they are often taken for granted in the heat of configuring a new or damaged server. Some key things to remember:

  • Minimal services on exposed servers
  • Minimal number of exposed servers
  • Master servers from which to SSH
  • Extreme caution with what is given to NFS clients
  • Updates, updates, updates

This article was originally published on Enterprise Networking Planet.

This article was originally published on Apr 26, 2007
