2  Using and understanding the Valgrind core

This section describes the Valgrind core services, flags and behaviours. That means it is relevant regardless of what particular tool you are using. A point of terminology: most references to "valgrind" in the rest of this section (Section 2) refer to the valgrind core services.

2.1  What it does with your program

Valgrind is designed to be as non-intrusive as possible. It works directly with existing executables: you don't need to recompile, relink, or otherwise modify the program to be checked. Simply put valgrind --tool=tool_name at the start of the command line normally used to run the program. For example, if you want to run the command ls -l using the heavyweight memory-checking tool Memcheck, issue the command:
  valgrind --tool=memcheck ls -l

Regardless of which tool is in use, Valgrind takes control of your program before it starts. Debugging information is read from the executable and associated libraries, so that error messages and other output can be phrased in terms of source code locations (where appropriate).

Your program is then run on a synthetic x86 CPU provided by the Valgrind core. As new code is executed for the first time, the core hands the code to the selected tool. The tool adds its own instrumentation code to this and hands the result back to the core, which coordinates the continued execution of this instrumented code.

The amount of instrumentation code added varies widely between tools. At one end of the scale, Memcheck adds code to check every memory access and every value computed, increasing the size of the code at least 12 times, and making it run 25-50 times slower than natively. At the other end of the spectrum, the ultra-trivial "none" tool (a.k.a. Nulgrind) adds no instrumentation at all and causes in total "only" about a 4 times slowdown.

Valgrind simulates every single instruction your program executes. Because of this, the active tool checks, or profiles, not only the code in your application but also the code in all supporting dynamically-linked (.so-format) libraries, including the GNU C library, the X client libraries, Qt (if you work with KDE), and so on.

If you're using one of the error-detection tools, Valgrind will often detect errors in libraries, for example the GNU C or X11 libraries, which you have to use. You might not be interested in these errors, since you probably have no control over that code. Therefore, Valgrind allows you to selectively suppress errors by recording them in a suppressions file which is read when Valgrind starts up. The build mechanism attempts to select suppressions which give reasonable behaviour for the libc and XFree86 versions detected on your machine. To make it easier to write suppressions, you can use the --gen-suppressions=yes option, which tells Valgrind to print out a suppression for each error that appears; you can then copy the ones you want into a suppressions file.

Different error-checking tools report different kinds of errors. The suppression mechanism therefore allows you to say which tool(s) each suppression applies to.

2.2  Getting started

First off, consider whether it might be beneficial to recompile your application and supporting libraries with debugging info enabled (the -g flag). Without debugging info, the best Valgrind tools will be able to do is guess which function a particular piece of code belongs to, which makes both error messages and profiling output nearly useless. With -g, you'll hopefully get messages which point directly to the relevant source code lines.

Another flag you might like to consider, if you are working with C++, is -fno-inline. That makes it easier to see the function-call chain, which can help reduce confusion when navigating around large C++ apps. For whatever it's worth, debugging OpenOffice.org with Memcheck is a bit easier when using this flag.

You don't have to do this, but doing so helps Valgrind produce more accurate and less confusing error reports. Chances are you're set up like this already, if you intended to debug your program with GNU gdb, or some other debugger.

This paragraph applies only if you plan to use Memcheck: on rare occasions, optimisation at -O2 and above has been observed to generate code which fools Memcheck into wrongly reporting uninitialised value errors. We have looked in detail into fixing this, and unfortunately doing so would give a further significant slowdown in what is already a slow tool. So the best solution is to turn off optimisation altogether. Since this often makes things unmanageably slow, a plausible compromise is to use -O. This gets you the majority of the benefits of higher optimisation levels whilst keeping relatively small the chances of false complaints from Memcheck. All other tools (as far as we know) are unaffected by optimisation level.

Valgrind understands both the older "stabs" debugging format, used by gcc versions prior to 3.1, and the newer DWARF2 format used by gcc 3.1 and later. We continue to refine and debug our debug-info readers, although the majority of effort will naturally enough go into the newer DWARF2 reader.

When you're ready to roll, just run your application as you would normally, but place valgrind --tool=tool_name in front of your usual command-line invocation. Note that you should run the real (machine-code) executable here. If your application is started by, for example, a shell or perl script, you'll need to modify it to invoke Valgrind on the real executables. Running such scripts directly under Valgrind will result in you getting error reports pertaining to /bin/sh, /usr/bin/perl, or whatever interpreter you're using. This may not be what you want and can be confusing. You can force the issue by giving the flag --trace-children=yes, but confusion is still likely.

2.3  The commentary

Valgrind tools write a commentary, a stream of text detailing error reports and other significant events. All lines in the commentary have the following form:
  ==12345== some-message-from-Valgrind

The 12345 is the process ID. This scheme makes it easy to distinguish program output from Valgrind commentary, and also easy to differentiate commentaries from different processes which have become merged together, for whatever reason.

By default, Valgrind tools write only essential messages to the commentary, so as to avoid flooding you with information of secondary importance. If you want more information about what is happening, re-run, passing the -v flag to Valgrind.

You can direct the commentary to three different places:

Here is an important point about the relationship between the commentary and profiling output from tools. The commentary contains a mix of messages from the Valgrind core and the selected tool. If the tool reports errors, it will report them to the commentary. However, if the tool does profiling, the profile data will be written to a file of some kind, depending on the tool, and independent of what --log-* options are in force. The commentary is intended to be a low-bandwidth, human-readable channel. Profiling data, on the other hand, is usually voluminous and not meaningful without further processing, which is why we have chosen this arrangement.

2.4  Reporting of errors

When one of the error-checking tools (Memcheck, Addrcheck, Helgrind) detects something bad happening in the program, an error message is written to the commentary. For example:
  ==25832== Invalid read of size 4
  ==25832==    at 0x8048724: BandMatrix::ReSize(int, int, int) (bogon.cpp:45)
  ==25832==    by 0x80487AF: main (bogon.cpp:66)
  ==25832==    by 0x40371E5E: __libc_start_main (libc-start.c:129)
  ==25832==    by 0x80485D1: (within /home/sewardj/newmat10/bogon)
  ==25832==  Address 0xBFFFF74C is not stack'd, malloc'd or free'd

This message says that the program did an illegal 4-byte read of address 0xBFFFF74C, which, as far as Memcheck can tell, is not a valid stack address, nor corresponds to any currently malloc'd or free'd blocks. The read is happening at line 45 of bogon.cpp, called from line 66 of the same file, etc. For errors associated with an identified malloc'd/free'd block, for example reading free'd memory, Valgrind reports not only the location where the error happened, but also where the associated block was malloc'd/free'd.
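
For concreteness, here is a minimal hypothetical C fragment (not the bogon.cpp program from the message above) which would provoke a report of this kind. Because the bad address lies inside a free'd block, Memcheck would also show where the block was malloc'd and free'd:

  #include <stdlib.h>

  int main(void)
  {
     int *p = malloc(10 * sizeof(int));
     free(p);
     return p[1];   /* 4-byte read from memory that has been free'd */
  }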

Valgrind remembers all error reports. When an error is detected, it is compared against old reports, to see if it is a duplicate. If so, the error is noted, but no further commentary is emitted. This avoids you being swamped with bazillions of duplicate error reports.

If you want to know how many times each error occurred, run with the -v option. When execution finishes, all the reports are printed out, along with, and sorted by, their occurrence counts. This makes it easy to see which errors have occurred most frequently.

Errors are reported before the associated operation actually happens. If you're using a tool (Memcheck, Addrcheck) which does address checking, and your program attempts to read from address zero, the tool will emit a message to this effect, and the program will then duly die with a segmentation fault.

In general, you should try and fix errors in the order that they are reported. Not doing so can be confusing. For example, a program which copies uninitialised values to several memory locations, and later uses them, will generate several error messages, when run on Memcheck. The first such error message may well give the most direct clue to the root cause of the problem.
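
To illustrate, here is a minimal hypothetical example of the situation described above. The uninitialised value is created in one place and copied around silently; Memcheck only complains at the point of use, so the earliest report is usually the best clue to the origin:

  #include <stdio.h>

  int main(void)
  {
     int junk;              /* never initialised: the root cause       */
     int copy1 = junk;      /* the copies themselves are not reported  */
     int copy2 = copy1;
     if (copy2 > 0)         /* Memcheck reports the use, here          */
        printf("positive\n");
     return 0;
  }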

The process of detecting duplicate errors is quite an expensive one and can become a significant performance overhead if your program generates huge quantities of errors. To avoid serious problems here, Valgrind will simply stop collecting errors after 300 different errors have been seen, or 30000 errors in total have been seen. In this situation you might as well stop your program and fix it, because Valgrind won't tell you anything else useful after this. Note that the 300/30000 limits apply after suppressed errors are removed. These limits are defined in core.h and can be increased if necessary.

To avoid this cutoff you can use the --error-limit=no flag. Then Valgrind will always show errors, regardless of how many there are. Use this flag carefully, since it may have a dire effect on performance.

2.5  Suppressing errors

The error-checking tools detect numerous problems in the base libraries, such as the GNU C library and the XFree86 client libraries, which come pre-installed on your GNU/Linux system. You can't easily fix these, but you don't want to see these errors (and yes, there are many!). So Valgrind reads a list of errors to suppress at startup. A default suppression file is cooked up by the ./configure script when the system is built.

You can modify and add to the suppressions file at your leisure, or, better, write your own. Multiple suppression files are allowed. This is useful if part of your project contains errors you can't or don't want to fix, yet you don't want to continuously be reminded of them.

Note: By far the easiest way to add suppressions is to use the --gen-suppressions=yes flag described in this section.

Each error to be suppressed is described very specifically, to minimise the possibility that a suppression directive inadvertently suppresses a bunch of similar errors which you did want to see. The suppression mechanism is designed to allow precise yet flexible specification of errors to suppress.

If you use the -v flag, at the end of execution Valgrind prints out one line for each suppression used, giving its name and the number of times it was used. Here are the suppressions used by a run of valgrind --tool=memcheck ls -l:

  --27579-- supp: 1 socketcall.connect(serv_addr)/__libc_connect/__nscd_getgrgid_r
  --27579-- supp: 1 socketcall.connect(serv_addr)/__libc_connect/__nscd_getpwuid_r
  --27579-- supp: 6 strrchr/_dl_map_object_from_fd/_dl_map_object

Multiple suppression files are allowed. By default, Valgrind uses $PREFIX/lib/valgrind/default.supp. You can ask it to add suppressions from another file by specifying --suppressions=/path/to/file.supp.

If you want to understand more about suppressions, look at an existing suppressions file whilst reading the following documentation. The file glibc-2.2.supp, in the source distribution, provides some good examples.

Each suppression has the following components:

A suppression only suppresses an error when the error matches all the details in the suppression. Here's an example:

  {
    __gconv_transform_ascii_internal/__mbrtowc/mbtowc
    Memcheck:Value4
    fun:__gconv_transform_ascii_internal
    fun:__mbr*toc
    fun:mbtowc
  }

What it means is: for Memcheck only, suppress a use-of-uninitialised-value error, when the data size is 4, when it occurs in the function __gconv_transform_ascii_internal, when that is called from any function whose name matches __mbr*toc, when that is called from mbtowc. It doesn't apply under any other circumstances. The string by which this suppression is identified to the user is __gconv_transform_ascii_internal/__mbrtowc/mbtowc.

(See this section for more details on the specifics of Memcheck's suppression kinds.)

Another example, again for the Memcheck tool:

  {
    libX11.so.6.2/libX11.so.6.2/libXaw.so.7.0
    Memcheck:Value4
    obj:/usr/X11R6/lib/libX11.so.6.2
    obj:/usr/X11R6/lib/libX11.so.6.2
    obj:/usr/X11R6/lib/libXaw.so.7.0
  }

Suppress any size 4 uninitialised-value error which occurs anywhere in libX11.so.6.2, when called from anywhere in the same library, when called from anywhere in libXaw.so.7.0. The inexact specification of locations is regrettable, but is about all you can hope for, given that the X11 libraries shipped with Red Hat 7.2 have had their symbol tables removed.

Note -- since the above two examples did not make it clear -- that you can freely mix the obj: and fun: styles of description within a single suppression record.

2.6  Command-line flags for the Valgrind core

As mentioned above, Valgrind's core accepts a common set of flags. The tools also accept tool-specific flags, which are documented separately for each tool. You invoke Valgrind like this:
  valgrind --tool=tool_name [options-for-Valgrind] your-prog [options for your-prog]

Valgrind's default settings succeed in giving reasonable behaviour in most cases. We group the available options by rough categories.

Tool-selection option

The single most important option. If you omit this option, the default tool is Memcheck.

Basic Options

These options work with all tools.

Error-related options

These options are used by all tools that can report errors, e.g. Memcheck, but not Cachegrind.

malloc()-related options

For tools that use their own version of malloc() (e.g. Memcheck and Addrcheck), the following options apply.

Rare options

These options apply to all tools, as they affect certain obscure workings of the Valgrind core. Most people won't need to use these. There are also some options for debugging Valgrind itself. You shouldn't need to use them in the normal run of things. Nevertheless:

Setting default options

Note that Valgrind also reads options from three places:

These are processed in the given order, before the command-line options. Options processed later override those processed earlier; for example, options in ./.valgrindrc will take precedence over those in ~/.valgrindrc. The first two are particularly useful for setting the default tool to use.

Any tool-specific options put in $VALGRIND_OPTS or the .valgrindrc files should be prefixed with the tool name and a colon. For example, if you want Memcheck to always do leak checking, you can put the following entry in ~/.valgrindrc:

    --memcheck:leak-check=yes
This will be ignored if any tool other than Memcheck is run. Without the memcheck: part, this will cause problems if you select other tools that don't understand --leak-check=yes.

2.7  The Client Request mechanism

Valgrind has a trapdoor mechanism via which the client program can pass all manner of requests and queries to Valgrind and the current tool. Internally, this is used extensively to make malloc, free, signals, threads, etc, work, although you don't see that.

For your convenience, a subset of these so-called client requests is provided to allow you to tell Valgrind facts about the behaviour of your program, and conversely to make queries. In particular, your program can tell Valgrind about changes in memory range permissions that Valgrind would not otherwise know about, which lets clients get Valgrind to perform arbitrary custom checks.

Clients need to include a header file to make this work. Which header file depends on which client requests you use. Some client requests are handled by the core, and are defined in the header file valgrind/valgrind.h. Tool-specific header files are named after the tool, e.g. valgrind/memcheck.h. All header files can be found in the include/valgrind directory of wherever Valgrind was installed.

The macros in these header files have the magical property that they generate code in-line which Valgrind can spot. However, the code does nothing when not run on Valgrind, so you are not forced to run your program on Valgrind just because you use these macros. Also, you are not required to link your program with any extra supporting libraries. The code left in your binary has minimal performance impact.

If you really wish to compile out the client requests, you can compile with -DNVALGRIND (analogous to -DNDEBUG's effect on assert()).
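
As a small illustration, the following sketch uses the RUNNING_ON_VALGRIND macro from valgrind/valgrind.h. Built normally, it prints one message under Valgrind and another when run natively; built with -DNVALGRIND, the request is compiled out and the test behaves as if running natively:

  #include <stdio.h>
  #include "valgrind/valgrind.h"

  int main(void)
  {
     /* Expands to in-line code which Valgrind can spot; when the
        program is run natively it is essentially a no-op. */
     if (RUNNING_ON_VALGRIND)
        printf("running on Valgrind's synthetic CPU\n");
     else
        printf("running natively\n");
     return 0;
  }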

You are encouraged to copy the valgrind/*.h headers into your project's include directory, so your program doesn't have a compile-time dependency on Valgrind being installed. The Valgrind headers, unlike the rest of the code, are under a BSD-style license, so you may include them without worrying about license incompatibility. The macros in valgrind/*.h will be forwards and backwards compatible across all versions of Valgrind (from 2.0.0 onwards); at worst a macro will do nothing.

Here is a brief description of the macros available in valgrind.h, which work with more than one tool (see the tool-specific documentation for explanations of the tool-specific macros).

Note that valgrind.h is included by all the tool-specific header files (such as memcheck.h), so you don't need to include it in your client if you include a tool-specific header.

2.8  Support for Threads

Valgrind supports programs which use POSIX pthreads. However, it runs multi-threaded programs in such a way that only one thread runs at a time. This approach avoids the horrible problems of implementing a truly multiprocessor version of Valgrind, but it does mean that threaded apps only utilise one CPU, even if you have a multiprocessor machine.

Your program will use the native libpthread, but not all of its facilities will work. In particular, process-shared synchronization objects WILL NOT WORK. They rely on special atomic instruction sequences which Valgrind does not emulate in a way that works between processes. Unfortunately there's no way for Valgrind to warn when this is happening; such calls will mostly appear to work, and it's only when there's a race that they will fail.
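
For concreteness, here is a sketch of the kind of construct this applies to: a PTHREAD_PROCESS_SHARED mutex placed in memory shared between a parent and a fork()ed child. Under Valgrind it will appear to work, but the mutual exclusion it is supposed to provide cannot be relied upon:

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sys/mman.h>

  int main(void)
  {
     /* A mutex living in memory shared between processes. */
     pthread_mutex_t *m = mmap(NULL, sizeof *m, PROT_READ | PROT_WRITE,
                               MAP_SHARED | MAP_ANONYMOUS, -1, 0);
     pthread_mutexattr_t attr;
     pthread_mutexattr_init(&attr);
     pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
     pthread_mutex_init(m, &attr);
     /* If this process and a fork()ed child both lock/unlock *m, the
        locking is not trustworthy under Valgrind, because the atomic
        instruction sequences it relies on are not emulated in a
        cross-process-safe way. */
     return 0;
  }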

Valgrind also supports direct use of the clone() system call, futex() and so on. clone() is supported where either everything is shared (a thread) or nothing is shared (fork-like); partial sharing will fail. Again, any use of atomic instruction sequences in shared memory between processes will not work.

Valgrind schedules your threads in a round-robin fashion, with all threads having equal priority. It switches threads every 50000 basic blocks (typically around 300000 x86 instructions), which means you'll get a much finer interleaving of thread executions than when run natively. This in itself may cause your program to behave differently if you have some kind of concurrency, critical race, locking, or similar, bugs.
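
As an illustration of how different scheduling can expose latent bugs, consider this deliberately racy counter (a hypothetical example). How many increments are lost, if any, depends entirely on how the two threads are interleaved, so the result may differ between native runs and runs under Valgrind:

  #include <pthread.h>
  #include <stdio.h>

  static long counter = 0;

  static void *bump(void *arg)
  {
     int i;
     for (i = 0; i < 100000; i++)
        counter++;              /* unsynchronised read-modify-write */
     return NULL;
  }

  int main(void)
  {
     pthread_t a, b;
     pthread_create(&a, NULL, bump, NULL);
     pthread_create(&b, NULL, bump, NULL);
     pthread_join(a, NULL);
     pthread_join(b, NULL);
     printf("counter = %ld (200000 if no updates were lost)\n", counter);
     return 0;
  }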

2.9  Handling of signals

Valgrind has a fairly complete signal implementation. It should be able to cope with any valid use of signals.

If you're using signals in clever ways (for example, catching SIGSEGV, modifying page state and restarting the instruction), you're probably relying on precise exceptions. In this case, you will need to use --single-step=yes.
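
For example, code in the following style (a sketch only, assuming a 4096-byte page size) installs a SIGSEGV handler which repairs the page protection and lets the faulting instruction restart. That only works if exceptions are precise, hence the need for --single-step=yes:

  #define _GNU_SOURCE
  #include <signal.h>
  #include <stdint.h>
  #include <sys/mman.h>

  static void on_segv(int sig, siginfo_t *si, void *ctx)
  {
     /* Make the faulting page accessible again; when the handler
        returns, the faulting instruction is restarted. */
     void *page = (void *)((uintptr_t)si->si_addr & ~(uintptr_t)4095);
     mprotect(page, 4096, PROT_READ | PROT_WRITE);
  }

  static void install_handler(void)
  {
     struct sigaction sa;
     sa.sa_sigaction = on_segv;
     sa.sa_flags = SA_SIGINFO;
     sigemptyset(&sa.sa_mask);
     sigaction(SIGSEGV, &sa, NULL);
  }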

If your program dies as a result of a fatal core-dumping signal, Valgrind will generate its own core file (vgcore.pidNNNNN) containing your program's state. You may use this core file for post-mortem debugging with gdb or similar. (Note: it will not generate a core if your core dump size limit is 0.)

2.10  Building and installing

We now use the standard Unix ./configure, make, make install mechanism, and I have attempted to ensure that it works on machines with kernel 2.4 or 2.6 and glibc 2.2.X or 2.3.X. I don't think there is much else to say.

There are two options (in addition to the usual --prefix=) which affect how Valgrind is built:

--enable-pie
PIE stands for "position-independent executable". This is enabled by default if your toolchain supports it. PIE allows Valgrind to place itself as high as possible in memory, giving your program as much address space as possible. It also allows Valgrind to run under itself. If PIE is disabled, Valgrind loads at a default address which is suitable for most systems. This is also useful for debugging Valgrind itself.
--enable-tls
TLS (Thread Local Storage) is a relatively new mechanism which requires compiler, linker and kernel support. Valgrind automatically tests whether TLS is supported and enables this option accordingly. Sometimes it cannot test for TLS, so this option allows you to override the automatic test.

The configure script tests the version of the X server indicated by the current $DISPLAY. This is a known bug. The intention was to detect the version of the current XFree86 client libraries, so that correct suppressions could be selected for them, but instead the test checks the server version. This is just plain wrong.

If you are building a binary package of Valgrind for distribution, please read README_PACKAGERS. It contains some important information.

Apart from that there is no excitement here. Let me know if you have build problems.

2.11  If you have problems

Contact us at valgrind.kde.org.

See this section for the known limitations of Valgrind, and for a list of programs which are known not to work on it.

The translator/instrumentor has a lot of assertions in it. They are permanently enabled, and I have no plans to disable them. If one of these breaks, please mail us!

If you get an assertion failure on the expression chunkSane(ch) in vg_free() in vg_malloc.c, this may have happened because your program wrote off the end of a malloc'd block, or before its beginning. Valgrind should have emitted a proper message to that effect before dying in this way. This is a known problem which I should fix.

Read the file FAQ.txt in the source distribution, for more advice about common problems, crashes, etc.

2.12  Limitations

The following list of limitations seems depressingly long. However, most programs actually work fine.

Valgrind will run x86-GNU/Linux ELF dynamically linked binaries, on a kernel 2.4.X or 2.6.X system, subject to the following constraints:

Programs which are known not to work are:

Known platform-specific limitations, as of release 1.0.0:

2.13  How it works -- a rough overview

Some gory details, for those with a passion for gory details. You don't need to read this section if all you want to do is use Valgrind. What follows is an outline of the machinery. A more detailed (and somewhat out of date) description is to be found here.

2.13.1  Getting started

Valgrind is compiled into two executables: valgrind, and stage2. Valgrind is a statically-linked executable which loads at the normal address (0x8048000). Stage2 is a normal dynamically-linked executable; it is either linked to load at a high address (0xb8000000) or is a Position Independent Executable.

Valgrind (also known as stage1):

  1. Decides where to load stage2.
  2. Pads the address space with mmap, leaving holes only where stage2 should load.
  3. Loads stage2 in the same manner as execve() would, but "manually".
  4. Jumps to the start of stage2.

Once stage2 is loaded, it uses dlopen() to load the Tool, unmaps all traces of stage1, initializes the client's state, and starts the synthetic CPU.

Each thread runs in its own kernel thread, and loops in VG_(schedule) as it runs. When the thread terminates, VG_(schedule) returns. Once all the threads have terminated, Valgrind as a whole exits.

Each thread also has two stacks. One is the client's stack, which is manipulated with the client's instructions. The other is Valgrind's internal stack, which is used by all Valgrind's code on behalf of that thread. It is important to not get them confused.

2.13.2  The translation/instrumentation engine

Valgrind does not directly run any of the original program's code. Only instrumented translations are run. Valgrind maintains a translation table, which allows it to find the translation quickly for any branch target (code address). If no translation has yet been made, the translator - a just-in-time translator - is summoned. This makes an instrumented translation, which is added to the collection of translations. Subsequent jumps to that address will use this translation.

Valgrind no longer directly supports detection of self-modifying code. Such checking is expensive, and in practice (fortunately) almost no applications need it. However, to help people who are debugging dynamic code generation systems, there is a Client Request (basically a macro you can put in your program) which directs Valgrind to discard translations in a given address range. So Valgrind can still work in this situation provided the client tells it when code has become out-of-date and needs to be retranslated.
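
A minimal sketch of how a dynamic code generator might use that request (the function name here is hypothetical; the macro comes from valgrind/valgrind.h):

  #include <stddef.h>
  #include "valgrind/valgrind.h"

  /* Call this after overwriting previously-executed machine code at
     'code' with 'len' fresh bytes: Valgrind discards any translations
     it holds for that range and re-translates on the next jump there. */
  static void note_code_replaced(void *code, size_t len)
  {
     VALGRIND_DISCARD_TRANSLATIONS(code, len);
  }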

The JITter translates basic blocks -- blocks of straight-line code -- as single entities. To minimise the considerable difficulties of dealing with the x86 instruction set, x86 instructions are first translated to a RISC-like intermediate code, similar to sparc code, but with an infinite number of virtual integer registers. Initially each insn is translated separately, and there is no attempt at instrumentation.

The intermediate code is improved, mostly so as to try and cache the simulated machine's registers in the real machine's registers over several simulated instructions. This is often very effective. Also, we try to remove redundant updates of the simulated machine's condition-code register.

The intermediate code is then instrumented, giving more intermediate code. There are a few extra intermediate-code operations to support instrumentation; it is all refreshingly simple. After instrumentation there is a cleanup pass to remove redundant value checks.

This gives instrumented intermediate code which mentions arbitrary numbers of virtual registers. A linear-scan register allocator is used to assign real registers and possibly generate spill code. All of this is still phrased in terms of the intermediate code. This machinery is inspired by the work of Reuben Thomas (Mite).

Then, and only then, is the final x86 code emitted. The intermediate code is carefully designed so that x86 code can be generated from it without need for spare registers or other inconveniences.

The translations are managed using a traditional LRU-based caching scheme. The translation cache has a default size of about 14MB.

2.13.3  Tracking the status of memory

Each byte in the process' address space has nine bits associated with it: one A bit and eight V bits. The A and V bits for each byte are stored using a sparse array, which flexibly and efficiently covers arbitrary parts of the 32-bit address space without imposing significant space or performance overheads for the parts of the address space never visited. The scheme used, and speedup hacks, are described in detail at the top of the source file vg_memory.c, so you should read that for the gory details.
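
To give a flavour of the idea, here is a sketch of the general two-level scheme only (the real layout and the speedup hacks live in vg_memory.c and differ in detail): the 32-bit address space is split into 64KB chunks, and the shadow map for a chunk is allocated only when that chunk is first touched:

  #include <stdint.h>
  #include <stdlib.h>

  #define CHUNK_BITS 16
  #define CHUNK_SIZE (1u << CHUNK_BITS)            /* 64KB per chunk   */
  #define N_CHUNKS   (1u << (32 - CHUNK_BITS))     /* 65536 primaries  */

  typedef struct {
     uint8_t abits[CHUNK_SIZE / 8];   /* 1 addressability bit per byte */
     uint8_t vbytes[CHUNK_SIZE];      /* 8 validity bits per byte      */
  } SecMap;

  static SecMap *primary[N_CHUNKS];   /* NULL entries: never touched   */

  static SecMap *secmap_for(uint32_t addr)
  {
     uint32_t i = addr >> CHUNK_BITS;
     if (primary[i] == NULL)
        primary[i] = calloc(1, sizeof(SecMap));    /* lazy allocation  */
     return primary[i];
  }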

2.13.4  System calls

All system calls are intercepted. The memory status map is consulted before and updated after each call. It's all rather tiresome. See coregrind/vg_syscalls.c for details.

2.13.5  Signals

All signal-related system calls are intercepted. If the client program is trying to set a signal handler, Valgrind makes a note of the handler address and which signal it is for. Valgrind then arranges for the same signal to be delivered to its own handler.

When such a signal arrives, Valgrind's own handler catches it, and notes the fact. At a convenient safe point in execution, Valgrind builds a signal delivery frame on the client's stack and runs its handler. If the handler longjmp()s, there is nothing more to be said. If the handler returns, Valgrind notices this, zaps the delivery frame, and carries on where it left off before delivering the signal.

The purpose of this nonsense is that setting signal handlers essentially amounts to giving callback addresses to the Linux kernel. We can't allow this to happen, because if it did, signal handlers would run on the real CPU, not the simulated one. This means the checking machinery would not operate during the handler run, and, worse, memory permissions maps would not be updated, which could cause spurious error reports once the handler had returned.

An even worse thing would happen if the signal handler longjmp'd rather than returned: Valgrind would completely lose control of the client program.

Upshot: we can't allow the client to install signal handlers directly. Instead, Valgrind must catch, on behalf of the client, any signal the client asks to catch, and must deliver it to the client on the simulated CPU, not the real one. This involves considerable gruesome fakery; see vg_signals.c for details.

2.14  An example run

This is the log for a run of a small program using Memcheck. The program is in fact correct, and the reported error is the result of a potentially serious code generation bug in GNU g++ (snapshot 20010527).
sewardj@phoenix:~/newmat10$
~/Valgrind-6/valgrind -v ./bogon 
==25832== Valgrind 0.10, a memory error detector for x86 RedHat 7.1.
==25832== Copyright (C) 2000-2005, and GNU GPL'd, by Julian Seward.
==25832== Startup, with flags:
==25832== --suppressions=/home/sewardj/Valgrind/redhat71.supp
==25832== reading syms from /lib/ld-linux.so.2
==25832== reading syms from /lib/libc.so.6
==25832== reading syms from /mnt/pima/jrs/Inst/lib/libgcc_s.so.0
==25832== reading syms from /lib/libm.so.6
==25832== reading syms from /mnt/pima/jrs/Inst/lib/libstdc++.so.3
==25832== reading syms from /home/sewardj/Valgrind/valgrind.so
==25832== reading syms from /proc/self/exe
==25832== loaded 5950 symbols, 142333 line number locations
==25832== 
==25832== Invalid read of size 4
==25832==    at 0x8048724: _ZN10BandMatrix6ReSizeEiii (bogon.cpp:45)
==25832==    by 0x80487AF: main (bogon.cpp:66)
==25832==    by 0x40371E5E: __libc_start_main (libc-start.c:129)
==25832==    by 0x80485D1: (within /home/sewardj/newmat10/bogon)
==25832==    Address 0xBFFFF74C is not stack'd, malloc'd or free'd
==25832==
==25832== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
==25832== malloc/free: in use at exit: 0 bytes in 0 blocks.
==25832== malloc/free: 0 allocs, 0 frees, 0 bytes allocated.
==25832== For a detailed leak analysis, rerun with: --leak-check=yes
==25832==
==25832== exiting, did 1881 basic blocks, 0 misses.
==25832== 223 translations, 3626 bytes in, 56801 bytes out.

The GCC folks fixed this about a week before gcc-3.0 shipped.

2.15  Warning messages you might see

Most of these only appear if you run in verbose mode (enabled by -v):