
Introduction

Some older x86 machines have a chipset limitation that allows only a certain amount of system RAM to be cached; because of this flaw, adding more memory to such a machine may actually harm performance. However, it is possible to set up the Linux kernel to use the uncacheable memory as swap, removing the performance penalty on ordinary system RAM access and giving the uncacheable RAM a use.

slram relies on the memory technology device (MTD) layer of the kernel. "Memory Technology Device (MTD) support", "Caching block device access to MTD devices", and "Uncached system RAM" must be enabled in your kernel configuration. slram works fine either statically built or as a kernel module. If you prefer to edit your kernel configuration files directly, the corresponding options are CONFIG_MTD, CONFIG_MTD_BLOCK, and CONFIG_MTD_SLRAM. Once the necessary kernel support has been enabled, compiled, and installed, the system can be set up to actually use the new slram driver.
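Edited directly, the relevant fragment of a kernel .config would look something like the following sketch (here building everything as modules, matching the modular setup described below; use y instead of m for a static build):

```
CONFIG_MTD=m
CONFIG_MTD_BLOCK=m
CONFIG_MTD_SLRAM=m
```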

slram as a module

Most users will probably choose to use slram as a loadable kernel module. The module requires a bit of configuration to use properly. First, the kernel boot parameters must be altered. For a system with 96MB of RAM that can only cache the first 64MB, the appropriate kernel boot parameters would be:

kernel /boot/vmlinuz root=/dev/hda1 mem=exactmatch mem=640k@0 mem=63M@1M rw
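The arithmetic behind those mem= parameters can be sketched as follows (the 96MB total and 64MB cache limit are the assumed figures from this example):

```shell
total_mb=96          # physical RAM in the example machine
cacheable_mb=64      # the chipset can only cache this much

# mem=640k@0 plus mem=63M@1M hands the kernel roughly the first 64MB
# (640k of low memory, then 63M starting at 1M, skipping the legacy hole)
kernel_mb=$((1 + 63))

# everything above the cache limit is left unclaimed for slram:
# a window starting at 64M, 32M long
slram_mb=$((total_mb - cacheable_mb))

echo "kernel manages ${kernel_mb}MB; slram window: +${slram_mb}M at ${cacheable_mb}M"
# prints: kernel manages 64MB; slram window: +32M at 64M
```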

lilo users will have to alter their append directive.
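For lilo, the same parameters belong in the append directive; a sketch of the relevant lilo.conf stanza might look like this (the image path and label are assumptions, matching the example above):

```
image=/boot/vmlinuz
    label=linux
    root=/dev/hda1
    append="mem=exactmatch mem=640k@0 mem=63M@1M"
```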

Next, /etc/modules.conf must be altered so that the modules are inserted in the correct order and so that the slram module is passed the correct arguments. The following lines would need to be added for the example configuration:

options slram map=mtdblock0,64M,+32M
above slram mtdblock

If devfs is enabled, the /dev hierarchy does not require any changes. However, if a static /dev tree is in use, it may be necessary to add the correct mtdblock0 device. If the /dev/mtdblock0 device does not exist, add it by invoking the following command as root:

# mknod /dev/mtdblock0 b 31 0

Now all that remains is to load the slram module and to set up the swap space on the mtdblock0 device. Execute the following commands as root:

# modprobe slram
# mkswap /dev/mtdblock0
# swapon -p 10 /dev/mtdblock0

Check to see that the swap space is in use by examining /proc/swaps:

$ cat /proc/swaps

In my case, /proc/swaps looks like:

Filename                        Type            Size    Used    Priority
/dev/hda1                       partition       60444   0       -1
/dev/mtdblock0                  partition       32760   436     10

32MB of uncached system RAM is being used as high-priority swap, while a roughly 59MB swap partition is being used as low-priority swap.

At this point, you are finished. All that is necessary is to add the final three commands to your initialization scripts so that the uncached memory is automatically set up as swap at boot. Since init scripts vary greatly from system to system, you will need to consult another source specific to your distribution if you are unsure of how to make the necessary modifications.
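As a sketch, on a distribution that runs a local rc script at boot, the addition might look like the following (the path /etc/rc.local is an assumption; use whatever mechanism your distribution provides):

```shell
#!/bin/sh
# Hypothetical local init script addition: set up uncached RAM as swap at boot.
modprobe slram                # pulls in mtdblock too, via the `above` line
mkswap /dev/mtdblock0         # write a swap signature to the slram device
swapon -p 10 /dev/mtdblock0   # enable it at high priority
```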

slram as a static driver

Setting up a statically compiled slram driver is almost the same process as setting up a modular slram. Only the first two steps differ; the rest of the process is exactly the same as in the modular configuration.

The kernel boot parameter should be modified to be similar to:

kernel /boot/vmlinuz root=/dev/hda1 mem=exactmatch mem=640k@0 \
                     mem=63M@1M slram=mtdblock0,64M,+32M rw

The step that alters modules.conf can be skipped entirely; the kernel boot arguments already provide the information that slram needs. The rest of the configuration is identical to the modular setup.

Nicholas J. Kain  | n i c h o l a s | a t | k a i n | d o t | u s |