
Motivation and Introduction

UNIX security is a challenging topic that often does not get the attention it should be given. This document is a work in progress. Anything in italics may not be a serious statement.

I've tried to remain practical rather than theoretical. My background is such that I will often digress into theoretical issues. I will, however, attempt to spare readers from overly indulgent digressions into physics, as I know that most readers probably do not enjoy the mathematics involved.

Basics

Security is fundamentally about trust and knowledge. To become secure, one needs to know the methods that an attacker may use. Equally, one must know how to negate those methods. Whenever a choice is made to rely upon a tool or program, the author of the tool or program is implicitly trusted, as is the distributor from which the program was acquired.

I don't go into great detail about cryptography issues here. It is a complex topic that requires a great deal of theoretical foundation to properly understand. I have no interest in writing a book on cryptography; there are already many good texts (ref: Applied Cryptography, Cryptography: Theory and Practice). Most modern secure systems are not easily attacked through cryptographic vulnerabilities, barring inept software implementations. As long as reasonably secure algorithms are chosen and proper public key cryptography practices are used, a secure system is far more likely to be broken by implementation and architecture weaknesses than by cryptanalytic assault. Therefore, I spend the majority of my effort on the very practical issues of proper architecture, implementation, and administration.

Local Security

Local security doesn't matter! Only remotely exploitable problems matter!

Though often neglected, the security of local userspace is as important as the security of daemon programs.

The most elementary way in which local security may be compromised is by exploitation of suid and sgid programs. These are programs that run with the rights of a particular user or group, regardless of the actual user that executed them. The most notorious and potentially harmful case is the setuid root program: a program that always runs with the power of the root account. Setuid root programs are sadly overused. They are generally not necessary and are often used where setgid functionality would be sufficient.

A simple but common example would be X11 terminal emulators such as xterm and rxvt. These programs are often made suid root so that they may update the utmp and wtmp files, but such power is not necessary. It is better to create a new 'utmp' group and give that group write access to wtmp and utmp. Then remove the setuid root bit from the terminal emulators and make them setgid utmp. The same functionality will be present, but at much less risk; a user that manages to abuse the setgid utmp binaries will only gain the power to manipulate the utmp and wtmp log files.
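As a rough illustration, the change might look something like the following on a Linux system (the group management commands, the utmp/wtmp locations, and the xterm path vary between systems and are given here only as an example):

   groupadd utmp
   chgrp utmp /var/run/utmp /var/log/wtmp
   chmod 664 /var/run/utmp /var/log/wtmp
   # replace setuid root with setgid utmp on the terminal emulator
   chgrp utmp /usr/X11R6/bin/xterm
   chmod u-s,g+s /usr/X11R6/bin/xterm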

If it were possible to do so, it would enhance the security of a system if it were configured with no suid root binaries at all. This sort of setup is feasible on systems that are not multiuser and where console administration is possible and convenient. In other instances, it is generally better to settle for having as few setuid root programs as is possible or convenient. In most circumstances, it is probably necessary to leave "su" as suid root. As long as su requires a user to be in the 'wheel' (or similar) group in order to gain root powers, it tends to be of marginal risk and actually offers enhanced security when compared to allowing users to log in directly as root via ssh. In multiuser systems, passwd is probably also a good program to leave suid, as it is desirable for users to be able to change their own passwords. Most of the time, other suids are unnecessary. chsh, chfn, and the like are convenient for users, but not necessary. ping and traceroute are often suid root, but it usually isn't necessary for regular users to be able to use them. Follow the rule that if a suid is not absolutely essential to the functioning of the system, it's probably a good idea to get rid of it. Finding suid programs is simple; just use find:

find / -type f -perm +6000 -exec ls -laF {} \;

This command will list every suid and sgid program on your system (newer versions of GNU find expect -perm /6000 in place of -perm +6000). Make it your goal to make this list as short as possible, paying particular attention to suid root programs.

Proper use of POSIX capabilities can help diminish the powers that an attacker could gain by successfully exploiting a suid program; however, POSIX capabilities are not designed as a substitute for a properly designed mandatory access control scheme -- capabilities lack the fine-grained control that MAC systems provide. Still, in many cases, daemons require only subsets of root's power. Perhaps one needs a DHCP client to be able to bind to low ports, reconfigure network interfaces, and open raw sockets. Without capabilities, such a program would be required to run with full root powers. With capabilities, these individual powers alone can be assigned to the daemon. Therefore, should the daemon be compromised, an attacker will not gain full control over a system in one step; the attacker will merely gain the capabilities provided to the daemon. Such a scheme helps to mitigate risk and should be used extensively.

Unfortunately, capabilities have not seen the use in userspace programs that the functionality deserves. It is often necessary to modify existing daemons so that they take advantage of Linux capabilities. Sometimes, patches to add capability support already exist; these patches are quite useful, but should be audited for potential security impact or for trojan code. Otherwise, using capabilities may require a separate su-like program that is capable of forking, dropping all but the desired capabilities, and executing the desired task. Several programs exist that can fill this role, and it is not hard to implement such a program, assuming that one has basic programming skills. In many cases, however, it is simply easier to modify daemons so that they use capabilities directly.
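On newer systems that support file capabilities (a more recent mechanism than the patched daemons and wrapper programs described above), the setcap utility from libcap can attach a capability set directly to a binary. A hedged sketch, using an illustrative path for the DHCP client from the earlier example:

   setcap cap_net_admin,cap_net_raw,cap_net_bind_service=ep /usr/sbin/dhcp-client
   getcap /usr/sbin/dhcp-client    # verify the capability set that was assigned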

Filesystem and mount flags can be used to enhance overall security and aid in the fight against suids. It is a good idea to take advantage of the "nosuid" and "nodev" mount flags wherever possible. The "noexec" flag may also be of value. The uncommonly used immutable file attribute ("+i" via chattr on Linux) can be useful for protecting critical files that do not change often. Logs can be set append-only ("+a") in order to make them more difficult to delete.
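A minimal sketch of how these flags might be applied on a Linux system; the device names, mount points, and chosen files are illustrative only:

   # /etc/fstab entries for user-writable filesystems:
   /dev/hda7   /home   ext3   defaults,nosuid,nodev          1 2
   /dev/hda8   /tmp    ext3   defaults,nosuid,nodev,noexec   1 2

   # ext2/ext3 file attributes:
   chattr +i /etc/inetd.conf      # immutable: cannot be changed or removed
   chattr +a /var/log/messages    # append-only: existing data cannot be rewritten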

tmp exploits can be mitigated by having users each use their own tmp directories to which only they have read/write access. This approach will not eliminate the risk of tmp-related security problems, but it will make them far less of a threat. Ideally, programs should be written so that they do not rely upon tmp files, but this is perhaps not always feasible. If possible, global tmp storage should be placed on a memory-backed filesystem, such as tmpfs on Linux. Such a filesystem not only provides performance benefits, but also ensures that the tmp files will be destroyed on system restarts. Note, however, that if a system takes advantage of secondary, persistent storage for swap, VM-backed temporary filesystems such as tmpfs do not necessarily guarantee protection against an attacker reading deleted temporary files -- if the pages that back the tmpfs filesystem are swapped to disk by the VM, then the security issues are nearly identical to those encountered with a traditional disk-based temporary filesystem. If one wishes to mitigate this risk, it is advised to either use encrypted swap, disable disk swapping entirely, or use a ram-based filesystem that is not VM-backed and cannot be swapped to disk.
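A sketch of the per-user tmp and memory-backed /tmp ideas on Linux; the paths, the size limit, and the use of TMPDIR (which most, but not all, programs honor) are assumptions for illustration:

   # per-user tmp directory:
   mkdir -m 0700 /home/alice/tmp
   echo 'TMPDIR=$HOME/tmp; export TMPDIR' >> /home/alice/.profile

   # /etc/fstab entry for a tmpfs-backed global /tmp:
   tmpfs   /tmp   tmpfs   nosuid,nodev,size=256m   0 0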

Minimizing the number of suid and sgid programs will greatly enhance your system's local security, but as always, there are far more threats lurking in the wild that should be addressed. On Linux (and most other systems with a dynamic loader), the "LD_PRELOAD" environment variable instructs the loader to load a named shared object ahead of a binary's normal library dependencies, allowing a user to substitute customized library routines in place of the defaults. This feature is usually benign, but can be used for malicious purposes. The best method of protection against this type of attack is statically linking security-critical binaries.
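For illustration, the following shows how LD_PRELOAD substitutes a library and how one can check whether a binary is statically linked; wrap.so and the binary names are hypothetical:

   # the loader maps wrap.so before the normal libraries, so its symbols win:
   LD_PRELOAD=/home/user/wrap.so /usr/bin/some-dynamic-binary

   # statically linked binaries have no dynamic dependencies to override:
   file /bin/login     # reports "statically linked" or "dynamically linked"
   ldd /bin/login      # prints "not a dynamic executable" for static binaries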

Note that it is probably not the best idea to statically link everything as a matter of policy. Statically linked files must be entirely rebuilt if a library dependency is later found to have an exploitable security problem. If many files on a system depend on such a library, the work involved in completely replacing all dependent statically linked executables may be immense. Additionally, on many systems that implement address randomization, statically linked executables contain constant references to function and data addresses, impairing full randomization of an executable's address space. Such problems can be worked around by converting a system to use ET_DYN executable objects, but most distributions do not yet integrate such functionality into the base system. Such changes are not trivial -- compiler, linker, loader, and libc modifications must be made to fully support ET_DYN executables. Dynamically linked libraries are necessarily position-independent and can easily be randomized on existing systems. Note, however, that without modification, the addresses in binaries that link to dynamic libraries will not be position independent on most existing systems: only the dependent library addresses will be randomized unless the executable is ET_DYN.
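On toolchains that already support position-independent executables, building and checking a single ET_DYN program might look like the following sketch (flag support varies with gcc and binutils versions, and the program name is illustrative):

   gcc -fPIE -pie -o daemon daemon.c
   readelf -h daemon | grep Type    # an ET_DYN executable reports "DYN"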

Unix permissions are usually the backbone of a Unix machine's security. Effort should be taken to make certain that permissions are used correctly on a system. With clever use of groups, it is possible to greatly minimize damage an attacker could do by a single means of attack.

If the machine needs to be extremely secure, it may be a good idea to disable loadable kernel module support. Loadable modules are an excellent convenience, and can sometimes be used to fix kernel security problems in place without a reboot, so this decision should not be made lightly. However, kernel modules can also be used to create extremely transparent rootkits that may be difficult to detect. They can also be used to compromise system behavior in very subtle ways that may lead to further problems. Therefore, if it's feasible to do so, it's probably a good idea to disable module support in your kernel altogether. Note that loadable modules are not the only means by which kernel code may be altered from userspace -- the /dev/mem and /dev/kmem devices provide ready opportunities for a root-empowered attacker to alter kernel code, even without module support. Such attacks may be thwarted by use of the capability bounding set on Linux systems, or alternatively by patching the kernel to disable writes to /dev/mem and /dev/kmem. Also keep in mind that buggy device drivers may also introduce potential vulnerabilities that may allow skilled attackers to modify kernel code, even without access to modules or memory devices. Always audit code!

It is not a bad idea to avoid the use of PAM on secure machines. Many past exploits have been contingent upon the target system making use of PAM features. Note that PAM may be of real use in some circumstances and is not always worth removing.

Similarly, it's a good idea to use some sort of intrusion detection system to detect modified local system files. Such a system can potentially warn about rootkits or even accidental filesystem damage that affects critical files. Tripwire is probably the most commonly used program of this type. If possible, mount system filesystems read-only to provide an additional annoyance to intruders.
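As a small illustration on Linux, a system filesystem can be remounted read-only (or simply marked "ro" in /etc/fstab); the mount point here is an example only:

   mount -o remount,ro /usr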

It may be a good idea to entirely disable the ptrace() syscall on extremely security sensitive machines. Linux's ptrace() syscall has had security problems several times in the past and is generally not needed on machines where no users will be debugging programs. It may also be a good idea to disable the ioperm() syscall if it is not being used. The most notable user of ioperm() is the X11 server. In an extremely secure system, kernel security problems are probably the greatest worry.

Local and remote security can both be greatly enhanced by taking advantage of nonexecutable/randomized stack and heap implementations. Although these approaches do not guarantee system security in any way, they make a very common type of program flaw, the buffer overflow, much more difficult for an attacker to successfully exploit. On non-x86 and non-ppc machines, the cost of a nonexecutable stack and heap is essentially free. Because of shortcomings in the x86 and ppc architectures, a small performance penalty is paid for this feature, but on security-dependent machines, it is a worthwhile cost. PaX and Exec Shield are common implementations of this scheme for Linux; modern versions of OpenBSD provide a similar protection (W^X) by default.

Another means of diminishing the risk of successful exploitation of buffer overflows is to use a compiler that adds run-time protection against stack smashing. StackGuard and ProPolice are implementations of this approach; both modify gcc so that it places canary values on the stack and aborts the program when an overflow has overwritten them. The performance penalty is not significant and the security gains are worthwhile. OpenBSD and Adamantix both use this approach.
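With a ProPolice-patched gcc, the protection is typically requested with a compiler flag; the flag below is the one used by the ProPolice patches, and the program name is illustrative:

   gcc -fstack-protector -o prog prog.c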

Finally, if security is paramount, it may be a good idea to use a more sophisticated security model than Unix traditionally provides. Role-based mandatory access control schemes are increasing in popularity and availability on free Unix systems. However, these schemes require quite a bit of administrator expertise in order to provide actual security improvements, and they tend to have quite a learning curve. No security benefits will be afforded if the administrator does not know how to properly set up these schemes. RSBAC and SELinux are probably the most well-known MAC systems for Linux. grsecurity provides a more basic MAC scheme as well. OpenBSD can take advantage of systrace to provide somewhat similar, but less sophisticated, functionality. Of these solutions, RSBAC is the most powerful. Mandatory access control is a complex topic that deserves its own treatment, so I will not cover it in detail here.

more... chroots, concepts/breaking/value; jails wrt chroots.

It is probably also a good idea to give a quick introduction to modern von Neumann machines, including memory layout, stack usage, hardware memory protection mechanisms, and some idea of how modern UNIX kernels provide the protections that programmers expect. It is difficult to understand exploit mechanisms without knowing how the underlying hardware functions, as most security cracks require detailed knowledge of the underlying system to be properly appreciated.

Remote Security

Proper remote security depends on having good daemon implementations and intelligent use of cryptography. Both are necessary to ensure security. Although this was probably not true even a few years ago, daemon implementation quality is currently the hardest problem to overcome when architecting a secure system.

Many systems need remote administration, and since administrative functions inherently rely upon selectively granting elevated privileges to remote users, it is worth spending a good deal of effort making sure that any chosen remote login method is secure. Fortunately, a great deal of effort has been invested in this direction, and there exist several protocols and implementations that address this problem. The two most commonly encountered methods of remote login in UNIX environments are telnet and SSH. Modern networks should never require the use of telnet, but it is unfortunately still in fairly common use. SSH is the secure successor to telnet that provides a superset of telnet's functionality. SSH is very widely, but not yet universally, used as the primary remote login mechanism for UNIX machines.

Non-kerberized telnet should NEVER be used on any network. Kerberized telnet that uses Kerberos v4 or earlier should be considered insecure as well, since those protocol versions have inherent design flaws. Telnet transmits everything, passwords included, in plaintext and provides no cryptographic authentication of either end of the connection. There is no excuse to use telnet in a modern computer network. Virtually all devices have sufficient computational power to efficiently encrypt interactive CLI traffic. Even leaving telnet available on a network opens potential security problems, since users may inadvertently compromise encrypted sessions. If a user telnets into a machine and then ssh'es out from that machine, everything typed into the ssh session, passwords included, first crosses the network inside the plaintext telnet session and can be sniffed. Knowing that users usually do not have the necessary security knowledge to protect themselves, it follows that the only way to assure that ssh provides a security benefit is to completely eliminate telnet from a network. I cannot emphasize this point enough. Remember always that no system is any more secure than its weakest potential link.

SSH is telnet's secure successor. It is a mature solution with well-regarded free implementations such as OpenSSH and lsh. Note, however, that both implementations have suffered from security problems in the past. I know of no current SSH implementation that is both widely used and has suffered from no exploitable problems. SSH should always be used for remotely logging in to UNIX machines, but simply using ssh as if it were a secure telnet replacement with no further knowledge is not a good idea. The ignorant use of ssh can lead to security compromise by a man-in-the-middle (MitM) attack. MitM attacks should be well-known to anyone who has studied public-key cryptography; they attack the security of public-key cryptosystems by taking advantage of improperly authenticated keys.

In a MitM attack, the attacker (Mallory) inserts himself between the two parties that are communicating (Alice and Bob). Alice and Bob have not exchanged their public keys yet, so they unwisely decide to exchange keys over the communications channel that they are using. Mallory, being between Alice and Bob, can take advantage of the situation: he can intercept the key exchange in both directions. Now that Mallory has their public keys, he substitutes keys that he has generated himself for the keys that Alice and Bob sent each other. Alice and Bob have taken no measures to authenticate their keys, so they do not notice the substitution. When they use these substituted keys to communicate, Mallory again acts as a proxy between them, decrypting their communications and then reencrypting them with the original keys that he intercepted. Since Alice and Bob have made no attempt to authenticate the keys that they have exchanged, they cannot detect Mallory's presence, and so they communicate in ignorance over the compromised communications channel, falsely thinking that their transactions are secure.

Man in the Middle attack:

   Key exchange must precede the corresponding communication, but both
   sides of the key exchange and communication may (and usually do in
   practice) occur asynchronously.

   key exchange:

           real key K_A           fake key F_A  
   Alice   ------>       Mallory  ------>    Bob
                 M keeps the real key

           fake key F_B           real key K_B
   Alice   <------       Mallory  <------    Bob
                 M keeps the real key

   subverted communications:

     encrypted with F_B      encrypted with K_B
   Alice  ------>   Mallory    ------->    Bob
         Mallory has private key paired to F_B

     encrypted with K_A      encrypted with F_A
   Alice  <------   Mallory    <-------    Bob
         Mallory has private key paired to F_A

It should be no surprise that ssh has a mechanism to help protect the user against these attacks. The mechanism that ssh employs is the key fingerprint. Whenever an ssh client connects to a server, it computes a cryptographic hash (the fingerprint) of the public key that it has received from the server. ssh then checks to see if it has seen this fingerprint from the server before. If it has, then it considers the key to be valid and continues to connect. If it has no record of any fingerprint for this server, it displays the fingerprint to the user and asks the user if it should continue connecting. If ssh has seen a different fingerprint in the past, it warns the user that the connection may be insecure and asks the user if it should continue connecting to the server.

In practice, many users (and even admins) will simply ignore the warning that ssh issues when the key fingerprint does not match that of past connections, allowing MitM attacks to occur as easily as if there were no authentication at all. The initial connection between a client and server inherently has no security against MitM attacks unless key fingerprints are checked via a trusted channel (perhaps a phone call, a meeting in person, or a sealed message carried by a trusted courier).
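For example, a host key fingerprint can be generated on the server and read to the user over a trusted channel, and the client can be told to refuse rather than merely warn when keys do not match (the key path varies by system and key type):

   # on the server: display the host key fingerprint
   ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub

   # on the client: abort the connection instead of asking when keys mismatch
   ssh -o StrictHostKeyChecking=yes user@host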

For whatever reason, many people think that MitM attacks are somehow too involved or difficult to be a real-world threat. This is not the case -- there exist tools such as Dug Song's sshmitm and webmitm that will rather trivially allow a MitM attack to be performed even on a switched network (consult the data channel section for more information on potential attacks that may be performed against switches). MitM attacks are not exotic things that can be assumed to be a negligible threat: they are real.

Network scanning is one of the most fundamental tools in security, for both those attempting to defend against attack and for those attempting to attack systems. The value of a network scanner to an attacker is obvious -- it allows an attacker to know what a target system is running, giving the attacker information by which he can plan a method of attack. For those attempting to prevent compromise, a scanner is of equal use, since it allows one to immediately see potential vulnerabilities in one's own systems. Additionally, scanning is worthwhile for both research and diagnostics.

With so many uses for network scanning, it is perhaps of interest to discuss the particulars of scanning approaches. There are two approaches to scanning networks -- active and passive. The difference is quite simple, and completely analogous to active and passive radar or sonar. Active scanners send out packets and await replies, attempting to infer information from the pattern and content of the target's replies. Passive scanners do not send packets; they merely wait and gather information from packets that are received in the normal course of a machine's operation. Note that passive scanners can be used in an active mode -- one may run a passive scanner while generating ordinary traffic that would not normally be noticed as a scanning attempt (perhaps connecting to a web server, as one normally would when not scanning a machine).

Until recently, almost all quality network scanning utilities took the active approach to network scanning. nmap is by far the best known active scanner, for good reason. nmap integrates nearly every method of active scanning publicly known. It has many scanning techniques, and can gather a great deal of information about a target host, including open ports, system uptime, OS type, and even the versions of running server daemons. nmap is capable of stealth scanning batches of hosts, and even of proxying through poorly configured ftp daemons. Despite its power, nmap is not difficult for novice users, and it is highly recommended that one get into the practice of scanning one's own machines on a regular basis.
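A typical invocation might look like the following, where the target name is a placeholder and should be a machine one is authorized to scan:

   # SYN scan of the low ports with OS detection and service version probing
   nmap -sS -O -sV -p 1-1024 host.example.com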

The other school of scanning thought, passive scanning, is best represented by another, more recently developed tool, p0f. p0f is an extremely capable passive scanner that is able to run in three modes. p0f defaults to listening for iSYN TCP packets (iSYN is a notation I've invented for my own purposes: it merely means incoming SYN packets, as opposed to oSYN, outgoing SYN packets). When a host attempts to open a TCP connection to another machine, it first sends a SYN packet and awaits a reply. The remote machine may then respond with a SYN-ACK packet if it wishes to proceed with a TCP connection, or it may send a RST packet if it wishes to deny the connection attempt. If the original host receives a SYN-ACK to its SYN request, it completes the exchange by sending an ACK packet back to the remote machine, at which point the connection is considered to be "open" by both machines. This scenario is the oft-referenced "three-way handshake" of TCP, and it is fundamental to the operation of passive scanners.

In iSYN mode, p0f carefully examines incoming SYN packets for information that may be used to determine the remote host's operating system. iSYN mode is suitable for gathering information about the hosts that attempt to connect to a local machine. p0f's second mode of operation, oSYN/iACK mode, works from the other direction. In this mode, p0f observes outgoing connection attempts from the local machine and examines the SYN-ACK responses from the remote machine. p0f's third mode is similar to the second, but instead deals with oSYN/iRST situations, allowing p0f to effectively gather OS information about remote hosts that have no open ports.

At first examination, it may seem that passive scanning offers little advantage when compared to active scanning, but such a conclusion would be incorrect. Passive scanning is entirely undetectable by the remote host -- the interface on the local host need not even run in promiscuous mode, since it is merely behaving as usual and examining only packets addressed to the local host. Further, passive scanning has one very powerful advantage over active scanning: in iSYN mode, it is able to gather information about hosts behind a NAT or firewall. By analysis of TTL values from a host, it is sometimes possible to reconstruct information about network topology behind a NAT device. It is nearly always possible to detect the operating system type of hosts behind a NAT unless the NAT device "normalizes" outgoing packets by mangling TCP options so that all packets have a consistent signature regardless of origin. Most NATs only minimally modify NATted packets, so analysis is generally accurate and gathers a surprising amount of information. Finally, active scanners are useless against hosts that have all ports closed. Passive scanning attempts may still gather information in iRST mode so long as the remote host does not simply drop incoming traffic without a RST response (something that I would not recommend -- not sending a RST could potentially slow inept scanning attempts that work in serial, but dropping SYNs without a RST violates the RFC and also gives away the presence of a firewall, perhaps attracting more attention from skilled attackers). Best of all, passive scanners, unlike active scanners, do not attract undue attention from targets.

Give a quick overview of UDP, TCP, and IP. Note some of the inherent problems involved with these protocols. Point readers at the RFCs for more information.

Border Security/Firewalls

I installed a firewall. Now I'm secure against anything -- right?!!

Border security, while important, is often given too much emphasis by security neophytes and the media. Firewalls should best be regarded as having a purpose entirely analogous to the conventional tumbler-based mechanical locks that are nearly ubiquitous on doors in the physical world. They provide protection against a casual attacker, but not a determined or skilled one. Firewalls are valuable tools that provide a first line of defense, but they are not a panacea.

Any border that allows any ingress or egress opens channels of possible attack. Although not commonly regarded as being a serious threat, exploitable client vulnerabilities that are commonly found in web browsers and FTP clients provide an almost universal vector by which a client may be subverted to come under attacker control. The subverted client may then be used to attack other machines on the supposedly secure network, thus bypassing the protections that a firewall would offer altogether. Worst of all, client software is subject to much less scrutiny than server software, as its security is judged to be of far lesser importance. This means that it is almost certain that there are many attackable bugs in client apps, and that most of these bugs will not be widely known. It is therefore best to assume that nearly any client application may be exploited by a determined attacker. Also remember that clients in many settings will allow end-users to install software; in this situation the integrity of clients cannot be trusted at all.

The best solution to this problem is to divide any network into no fewer than three partitions. One would be the traditional untrusted outside network. Another would be the untrusted client network. The last would be the somewhat trusted secured server network. It is of course possible to apply this scheme to networks with multiple untrusted outside or client networks, or even multiple server networks; the critical idea is that a network should be thought of as being no more secure than the least secure machine within it.
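As a hedged sketch of this partitioning on a Linux packet filter, with purely illustrative interface assignments (eth0 = outside, eth1 = clients, eth2 = servers) and an example service:

   iptables -P FORWARD DROP
   # allow established sessions back through
   iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
   # clients may reach the outside, but not the server segment
   iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
   # only explicitly permitted services are forwarded to the servers
   iptables -A FORWARD -i eth0 -o eth2 -p tcp --dport 25 -j ACCEPT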

It is best that border devices serve only the role of border protection. Ideally, they should function in a bridging role, so that they may not be accessed through the network that they protect at all. If remote administration is necessary, it would be best to allow it only through a secured network entirely separate from the networks that the firewall partitions. Ideally, there would be no extant path from the firewalled networks to the firewall administration network. The existence of any such path would largely defeat the purpose of a bridge-configured firewall, since once the path was found, the firewall could be attacked in the same way as a routable firewall.
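A bridging firewall of this sort might be set up roughly as follows on Linux with bridge-utils; note that filtering bridged traffic additionally requires bridge-aware netfilter support (or ebtables), and the interface names are examples:

   brctl addbr br0
   brctl addif br0 eth0
   brctl addif br0 eth1
   ifconfig eth0 0.0.0.0 up
   ifconfig eth1 0.0.0.0 up
   ifconfig br0 0.0.0.0 up    # no IP address is assigned, so the bridge
                              # cannot be addressed from the networks it joins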

Remote IDS is a thorny topic when applied to borders. Assuming that the border device is ideally secure, a remote IDS could only reduce security, since remote IDS programs typically require privileged access and by nature must parse untrusted data. Unfortunately, I know of no existing free IDS programs where network data is parsed in a separate, unprivileged process distinct from the privileged network data gathering process. The benefit that a remote IDS provides is that it is possible to watch attempted attacks. It is perhaps possible that by watching attack attempts, one could see an attack that may have a chance of success and, by human intervention, prevent it. However, this situation is unlikely to happen in practice.

Remote IDS, by nature, can detect with perfect accuracy only attacks that are already known. Assuming a machine is competently administered, all known attacks against the machine will fail. Therefore, only unknown attacks would have any measure of success against such a machine. In that case, the remote IDS may at best only flag "suspicious" traffic that a skilled person would have to manually read in order to find potential attacks. This approach has at least three problems. First, since the remote IDS will have to report a very broad class of "suspicious" activity, the amount of material that must be human-parsed and analyzed will be prohibitively large. Second, it is entirely possible that "unsuspicious" traffic may contain an attack vector; in that case, the remote IDS will provide no protection. Third, even if a human were able to parse all the "suspicious" traffic, and if there were no attacks possible in "unsuspicious" traffic, it is unlikely that a human would be able to accurately predict what traffic may result in a successful attack without testing (or analysis, but testing is generally less time consuming than analysis). These problems, coupled with the fact that a remote IDS itself provides an additional potential attack vector, strongly suggest that a remote IDS is of questionable value at best.

If a remote IDS is to be used at all, it should be restricted to low traffic internal interfaces, and any machine performing IDS should be untrusted and should perform no other role. IDS results should not be locally logged on the machine running the IDS daemons, but should instead be logged to a remote server. Firewalls should never run IDS themselves; instead they should mirror traffic to the IDS machine and should reject any connection attempts from the IDS machine.

Remote passive fingerprinting tools such as p0f may provide a valuable alternative to a full-blown remote IDS on some systems. p0f is capable of running unprivileged within a chroot, and does not attempt to parse and decode high-layer protocol data, making it far less of a security risk than most traditional IDS. Even if one does not plan to use p0f as an IDS mechanism, it is certainly worth investigating simply because it illustrates the amount of data that may be collected from even a "secure" system by a skilled attacker.

Data Channel/Communications Security

Switches, routers, hubs, wireless, phones, ISDN, and all the other hardware that shouldn't be trusted.

Outline simple function of switches and hubs. Note that switches have a finite MAC hash table used for switching between ports. If this table size is exceeded, most (all?) switches will fall back into a hub mode of operation, invalidating the widely believed resistance of a switch to normal packet sniffing. Cite tools that use this approach.

Detail the risks involved when trusting commercial hardware solutions, particularly for routing.

Quickly outline the difference between cable and DSL connections, overview DOCSIS. Compare to leased line, frame relay, and ISDN connections.

Secure Tunnelling/Traversal of Insecure Networks (VPN)

IPsec, CIPE, PPP in SSH, PPP in SSL, and our other cryptographically enhanced low layer friends.

CIPE is known to have systematic flaws in its implementation that make it unsuitable for secure hosts. expand...

PPP in SSH is acceptable from a standpoint of security, at least for small-scale use. Unfortunately, there exists a fundamental problem in reliable operation for any application in which TCP is tunnelled over a TCP connection. The TCP windowing mechanism relies on feedback in order to properly operate; when the outer connection loses packets, the retransmission timers of the inner and outer TCP layers interact badly and throughput can collapse. expand...

It is probably possible to embed PPP over a SSL-encrypted TCP channel; however, it should be obvious that any such implementation, being TCP over TCP, would suffer from the same flaw as PPP over SSH.

IPsec should be considered the preferred approach to secure, encrypted tunnels. It offers security, interoperability, and the ability to function both in an endpoint-to-endpoint role and as a transparent VPN. Importantly, IPsec also provides a secure authentication mechanism as well as a cryptographic transport layer, allowing for truly secure configurations.

Wireless Security

New topic, newness makes it deserve its own section.

IPsec provides the best approach to providing wireless security.

Detail the uselessness of WEP, the fundamental flaw in bridged wireless access point / switch combinations, as well as the proper topology for a secure wireless network.

Physical Security/Radiative Security

These attacks aren't as difficult or as unlikely as seems to be commonly thought.

Provide a basic introduction to circuit theory and electromagnetic far fields. Don't introduce the math; it will confuse your readers. Outline basic approaches to signal interception.

It is futile to attempt to secure any deterministic system once it has fallen into the physical control of an attacker. A competent attacker in control of hardware will be able to eavesdrop upon and alter all signals in the host hardware, allowing the attacker to violate assumptions that secure software designs treat as axiomatic. Probably the only reasonable approach for preventing physical attacks on a system is to build the system so that it will destroy itself if it detects an attempt at tampering. However, even such an approach should not be entirely trusted, since a sufficiently skilled attacker may be able to bypass any sort of antitampering mechanism.

Note that quantum communication systems, if they ever come into wide use, allow one to inherently detect eavesdropping by outside observers: the unanticipated observation collapses the quantum state and produces outcomes inconsistent with those expected over the quantum communication channel. Unfortunately, such systems are difficult to manufacture and even more difficult to deploy over long distances on conventional communications infrastructure. (cite paper in which quantum communication path was used).

Even a provably correct system running on attacker-controlled hardware can be subverted, often by simply introducing random hardware error. Cite paper in which JVM is subverted to execute malicious instructions by subjecting host hardware memory core to excess heat, provoking data corruption.

Hardware keylogging devices are a threat that is almost never addressed. There are two basic forms of hardware keylogging; both rely on physical access to the system keyboard. The most common commercially available keyloggers store a finite number of keystrokes in the keylogging device itself. These keyloggers are generally unintelligent and lack any sort of matching heuristic to find 'interesting' data. Because of this limitation, on heavily used machines the keylogging device would need to be checked frequently, which would require frequent physical access to the keyboard. It is quite likely that more intelligent keyloggers of this type exist which are able to trigger keylogging only when "interesting" matches are made in the keystroke stream, so one should not assume that all local-storage keyloggers have this limitation.

The second general class of keylogger tries to avoid the shortcomings of local data storage by transmitting keystrokes to a remote location, perhaps by radio or microwave signals. Such devices, if they transmit in the radio or microwave frequencies, can be detected as one would detect a traditional "bug", making them easier to find with proper hardware. To a degree such signals may be masked by background noise on similar frequencies; clever design may allow for low signal strengths that may not be easily detected as being obviously out of place. For this reason, if one is suspicious of radiofrequency bugging devices, one should establish a baseline measurement from a situation in which it is known that no such eavesdropping device is present, against which comparisons may later be made.

Note that it is perhaps possible to avoid easy detection by using line-of-sight communication methods; such methods should be thwarted by conventional physical examination. One should also be careful that a signal is not carried over the keyboard's wired interface to a transmission device hidden in the system chassis itself; a clever attacker would use this approach to defeat naive physical examination. Also note that keyboards inherently produce a measurable signal when in use. Longer wires operating with higher currents will produce stronger signals that are more easily eavesdropped upon. It is perhaps possible that one could detect keyboard use by tapping a long keyboard wire run with a surrounding loop of wire (quickly explain Ampere's law). Such an approach could trivially detect whether the keyboard is in use, and would also easily provide information about the timing between keystrokes. Such information would perhaps allow one to use statistical analysis to recover data if a large enough dataset is collected on a particular user's typing habits. However, such sophisticated approaches are probably rarely seen in practice, since it is easy enough to simply insert a listening device in the path between the keyboard and host system, or indeed inside the keyboard itself.

Modem lights, CRT/LCD emissions, faraday shields, bluetooth, wireless keyboards and mice...

media destruction and forensics, media type longevity

Secure Coding Practices

The binary faces behind your favorite exploits!

This section will of necessity be largely C-centric, since C is the most popular programming language on UNIX machines. Note, however, that most of the issues here can be considered to be solved problems if one is willing to use a more sophisticated language than C. I would highly suggest the ML variants (particularly ocaml), although many other languages fix various flaws in the C language. It is, however, possible to write secure code in C; it is merely a more challenging task for imperfect human minds.

I will not detail how one constructs shell code for exploit payloads in any great detail, nor shall I detail cracking and reverse engineering methods, as they are not germane to the discussion and vary greatly by the hardware architecture involved.

buffer overruns, format string attacks, signal handling problems, integer overflows, input validation problems, exploitation methods (stack smash, heap corruption, return-to-libc), lots more

Outline the basic principles in writing a secure program, and also in auditing for security problems.

Nicholas J. Kain  | n i c h o l a s | a t | k a i n | d o t | u s |