Saturday, May 13, 2006

[L4] L4Linux performance issues

L4Linux is a paravirtualized version of the Linux kernel running on top of the L4 microkernel family. For my diploma thesis I am evaluating L4Linux against native Linux to find out where the former's performance problems come from.

One of the problems is that Linux applications run as L4 user-space programs alongside the Linux kernel, which is itself an L4 user-space server. Every system call issued by an application leads to an IPC to the Linux server, which then answers back with another IPC. This means two context switches for each system call, resulting in many more TLB and cache misses than on native Linux.

I measured this by performing a nearly-null system call, sys_getpid(). From Linux user space its average execution time is around 260 cycles on my test computer (an AMD Duron 800 MHz with 256 MB RAM, Linux 2.6.16 booted from a ramdisk). Performing the same call on the same computer with the same ramdisk setup in L4Linux takes around 3,700 cycles.
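For reference, here is a minimal sketch of the kind of measurement loop involved, assuming an x86 time stamp counter. This is not the original benchmark code, and rdtsc is only approximate because it does not serialize execution:

    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Read the CPU's time stamp counter (x86). */
    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        const int runs = 100000;
        uint64_t start, end;
        int i;

        start = rdtsc();
        for (i = 0; i < runs; i++)
            syscall(SYS_getpid);  /* force a real system call */
        end = rdtsc();

        printf("average: %llu cycles per sys_getpid()\n",
               (unsigned long long)((end - start) / runs));
        return 0;
    }

Calling through syscall() avoids any chance of the C library answering getpid() from a cached value in user space.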

I then counted cache and TLB misses for both setups and found that 100,000 calls to sys_getpid() caused around 200 TLB misses on native Linux - probably from the points where my benchmark was preempted by some other task. On L4Linux there were about 6 TLB misses per system call, i.e. roughly 600,000 for the same run. These misses delay the execution of L4Linux system calls, because the TLB and the caches have to be refilled.
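At the time these numbers came from reading the CPU's performance counters directly; on a current Linux kernel the same experiment could be set up with perf_event_open(), roughly like this (a sketch, not the tooling actually used):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    int main(void)
    {
        struct perf_event_attr attr;
        long long misses;
        int fd, i;

        /* Count data-TLB read misses for this process on any CPU. */
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HW_CACHE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_CACHE_DTLB
                    | (PERF_COUNT_HW_CACHE_OP_READ << 8)
                    | (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
        attr.disabled = 1;
        attr.exclude_kernel = 0;  /* the interesting misses happen on the kernel path */

        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        for (i = 0; i < 100000; i++)
            syscall(SYS_getpid);
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        read(fd, &misses, sizeof(misses));
        printf("dTLB read misses for 100000 getpid calls: %lld\n", misses);
        close(fd);
        return 0;
    }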

However, this is not the only source of performance loss. Losing 3,500 cycles per system call does not weigh much once you see that a blocking sys_read() needs 2.7 million cycles on average. There are other sources, for instance the places where L4Linux has to use L4 system services to get its work done. I will discuss these in another post soon.

Conclusion: Context switches for system calls reduce L4Linux performance.

Solutions have already been proposed:
  • Processors with tagged TLBs do not need to flush the TLB on a context switch, which reduces TLB misses for system calls.
  • Cache coloring can reduce the overlap between L4Linux and its applications in the cache, so that the two do not thrash each other's cache lines while running in parallel.
  • Small address spaces are a concept for running multiple applications inside the same virtual address space, so that Linux system calls no longer require a full context switch.

3 comments:

Anonymous said...

What exactly are small address spaces? Does a process then no longer get the full 32-bit address space, so that the address space is split among several processes, or what?

Bjoern said...

Exactly. You can find Jochen Liedtke's paper on this topic here.

Anonymous said...

The research at TU Dresden seems to be quite interesting. I see that there is quite a push towards microkernels, and it seems to be backed by some solid research.

Nonetheless, I have a query about exporting system calls directly so that one can access them inside things like driver code. By doing an EXPORT_SYMBOL on, say, sys_getpid, I should be able to access it from within kernel space (from a driver or some other part of the kernel), right? Then it can be treated like a regular kernel->kernel function invocation, instead of the syscall mechanism, which involves 'interrupting' the kernel - even though it is a soft interrupt, it is still an interrupt.

Keen to hear your thoughts on this.
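For illustration, a minimal sketch of what the commenter describes. Note that sys_getpid is not exported in a stock 2.6 kernel, so this assumes an EXPORT_SYMBOL(sys_getpid) has been added to the kernel source:

    /* getpid_test.c - hypothetical module calling a syscall entry directly */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/syscalls.h>  /* declares sys_getpid() */

    static int __init getpid_test_init(void)
    {
        /* Plain function call: no trap, no kernel entry path. */
        printk(KERN_INFO "sys_getpid() returned %d\n", (int)sys_getpid());
        return 0;
    }

    static void __exit getpid_test_exit(void) { }

    module_init(getpid_test_init);
    module_exit(getpid_test_exit);
    MODULE_LICENSE("GPL");

Inside the kernel this is indeed just a function call; the context-switch cost discussed in the post only arises when the call crosses the user/kernel boundary - or, on L4Linux, the boundary between application and Linux server.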