Putting a Group of Processes into a CPU and Memory Jail -- First Steps with Control Groups in Linux

Control groups are really, really great. They can effectively make your system act as if it were two or more systems in one. It's like virtualization without most of the overhead, and much more effective than e.g. renicing. (Just a little overhead, and no hard CPU limit yet, but still great!) You can lock a process into a jail that gets only e.g. 10% of your CPU, 10% of your memory and 10% of your disk speed. Pretty much whatever it does, it won't be able to really annoy you. This means that e.g. big compiles in the background *really* don't affect your browsing *at all*. There are also some presentation slides about cgroups if you want an overview. So let's see how to take the first steps to get there. I will show you how to do it manually, step by step, so you can learn how it works.

0. Prerequisites
Cgroups were first merged into Linux with kernel version 2.6.24. Use cat /proc/cgroups to find out which parts of cgroups your system supports. If there's no such file, you probably don't have cgroup support on your system.
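As a sketch, the check can be scripted: print the controller table if it exists, or say so if it doesn't. The columns of /proc/cgroups are subsys_name, hierarchy, num_cgroups and enabled:

```shell
# List the cgroup controllers this kernel knows about, or complain
# if the kernel has no cgroup support at all.
if [ -r /proc/cgroups ]; then
    # Columns: subsys_name  hierarchy  num_cgroups  enabled
    cat /proc/cgroups
else
    echo "no cgroup support in this kernel"
fi
```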

1. Mount cgroups
You need a directory (here simply named cgroup) and then mount the special cgroup filesystem there:
mkdir -p cgroup
mount -t cgroup none -o cpu,memory cgroup

2. Creating a new group
cd cgroup
mkdir jail
cd jail
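Steps 1 and 2 can be sketched as one snippet. The mount itself requires root, so it is shown commented out; after a successful mount, the jail directory is automatically populated with control files:

```shell
# Create a mount point and a child group. The mount line needs root
# and is therefore commented out here.
mkdir -p cgroup
# mount -t cgroup none -o cpu,memory cgroup   # run this as root
mkdir -p cgroup/jail

# Once mounted, cgroup/jail/ automatically contains control files
# such as cpu.shares, memory.limit_in_bytes and tasks.
ls cgroup
```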

3. Adding PIDs
Adding tasks is as easy as
echo $PID >> tasks
Be aware that the children of that PID (any process started by it, e.g. by that bash shell) will automatically belong to the same group. Writing 0 as the PID adds your current shell to the group.
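The inheritance is the whole point: jail one shell and everything you start from it is jailed too. Here is a sketch of the $$ semantics, demonstrated against a temporary plain file instead of a real tasks file so it runs without root:

```shell
# Demonstration only: a temporary file stands in for cgroup/jail/tasks,
# so no root or mounted cgroup hierarchy is needed.
tasks=$(mktemp)

# $$ expands to the PID of the current shell; appending it to the real
# tasks file would jail the shell and every process it starts.
echo $$ >> "$tasks"

cat "$tasks"    # your shell's PID
rm -f "$tasks"
```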

4. Making CPU Restrictions
This is very easy. If you want a group to be able to use only a maximum of about 10% of your CPU, echo 102 > cpu.shares
because the full CPU is divided into 1024 shares by default (shares are relative weights, so 102 against the default of 1024 works out to roughly 10%). Note that if there are leftover CPU shares from other processes, cgroups are benevolent and give them to the poor, so to speak: unless the CPU is fully used and other processes/groups would be affected, a process/group can use as much CPU as it wants. The accounting also includes the system time of the process, so the time shown in top hardly ever equals the CPU shares assigned. A hard CPU limit for cgroups is not yet available in mainline.
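If you don't want to do the shares arithmetic in your head, a tiny helper converts a percentage into a shares value. This assumes the simple model above (1024 shares = 100%), which only holds relative to other groups at the default weight:

```shell
# Convert a CPU percentage into a cpu.shares value, assuming the
# default weight of 1024 corresponds to 100%.
percent=10
shares=$(( 1024 * percent / 100 ))
echo "$shares"    # prints 102

# As root, inside cgroup/jail/, you would then run:
# echo "$shares" > cpu.shares
```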

5. Making Memory Restrictions
Making memory restrictions works just the same way.
echo 200M > memory.limit_in_bytes
to restrict the group to a 200 MB hard limit. For a soft limit, use memory.soft_limit_in_bytes instead.
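The memory files accept suffixes like 200M directly, but reading the file back always reports plain bytes, so it helps to know the conversion:

```shell
# 200 MB expressed in bytes, which is how memory.limit_in_bytes
# reports the value when you read it back.
mb=200
bytes=$(( mb * 1024 * 1024 ))
echo "$bytes"    # prints 209715200

# As root, inside cgroup/jail/, either form works for setting it:
# echo 200M > memory.limit_in_bytes
# echo "$bytes" > memory.limit_in_bytes
```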

6. Removing Tasks
Removing a task works by putting it into another group. If you want it in no group, put it into the root group; that's the group directly in the cgroup/ directory. Note that unmounting the cgroup filesystem does not seem to change anything at all: it all remains in place, you just won't be able to change anything.
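As a sketch (run as root, with the mount point from step 1), releasing a process from the jail is just another write to a tasks file, this time the root group's:

```shell
# Move process $PID out of jail/ into the root group; "cgroup" is the
# mount point from step 1 and $PID is the process you want to release.
echo $PID > cgroup/tasks
```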

7. Try it out
Now save all your data and try this out with some huge memory and/or CPU hog. Once you've got it working, try converting it into a cgconfig.conf and cgrules.conf to get it working automatically at system start. Or just write a bash script. If you're hungry for more, the cgroup kernel docs can help.

8. Grepping Processes
This command helps you get PIDs ready to be pasted into the tasks file:
ps aux | /bin/grep "$@" | /bin/grep -v grep | gawk '{print $2;}'
Put that into a script and then do e.g. grep_pid make.
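Wrapped into a complete script, it could look like this sketch (grep_pid is just an example name; pidof or pgrep are simpler alternatives when you know the exact process name):

```shell
#!/bin/sh
# grep_pid: print the PIDs of all processes whose ps line matches the
# given pattern. The "grep -v grep" drops the grep process itself
# from the listing.
ps aux | grep "$1" | grep -v grep | awk '{ print $2 }'
```

Run it as grep_pid make and paste the resulting PIDs into the tasks file.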

X. Problems?
If you run into problems, also check this post about common issues with control groups.

If you like this post, share it and subscribe to the RSS feed so you don't miss the next one. In any case, check the related posts section below. (Because maybe I'm just having a really bad day and normally I write much more interesting articles about these subjects! Or maybe you'll only understand what I meant here once you've read all my other posts on the topic. ;) )


  1. Informative, thanks you :)

  2. nice one.. Thank you!! :-)

  3. Don't forget to subscribe. ^^

  4. Great tip, thanks for posting it!

    I'll give it a try for my 'svn update' at work, which is killing my system performance for half an hour due to the size of our repo.

  5. Cool, let me know! You may also want to try around with -o ...,blkio then, as that's very heavy IO.

    But in any case do let me know how it works. In my experience this has been extremely nice. I've used this on a compile again recently, and somehow the desktop latency pretty much doesn't seem affected at all. Very, very nice stuff.

    Also in my experience so far, kernel 3.0 seems more responsive with heavy disk IO.

  6. Instead of that long command in 8, one could easily use 'pidof' to find the pid of a process.


I appreciate comments. Feel free to write anything you wish. Selected comments and questions will be published.