This is only an aesthetic change: some unneeded indentation is removed by
dropping "else" blocks where a return statement ensures that path wouldn't
be followed anyway.
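For illustration, a hypothetical before/after in D (the names are made up):

    // before: the "else" only adds a level of indentation
    if (pool is null)
        return null;
    else
    {
        pool.initialize();
        return pool;
    }

    // after: the early return already excludes the "else" path
    if (pool is null)
        return null;
    pool.initialize();
    return pool;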
The GC can come under a lot of pressure if a collection recovers very little
memory, but just enough to fulfill the current memory request, causing a lot
of collections with too little gain (when the key to an efficient GC is to
recover as much memory as possible with as little work as possible).
This effect is greatly reduced when using eager allocation, because
eventually the GC will allocate more memory, but heap minimization can
trigger this effect again.
This patch adds an option, min_free, which specifies the minimum
percentage of heap that should be free after a collection. If the free
heap is less than min_free% of the total heap, the GC will allocate a new
pool, big enough to fulfill that requirement. If the free heap is bigger
than min_free% of the total heap, the GC will try to release some pools to
the OS to keep the free heap near min_free%.
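As with the other options, min_free would be passed via D_GC_OPTS; for
example (the program name is a placeholder):

$ D_GC_OPTS=min_free=15 ./program

A minimal sketch of the post-collection adjustment, assuming min_free < 100
and hypothetical names (gc.total_mem, gc.free_mem, opts.min_free, newPool()
and minimize() are illustrative, not the actual patch code):

    void adjust_free_heap()
    {
        size_t total = gc.total_mem;
        size_t free = gc.free_mem;
        if (free * 100 < total * opts.min_free)
        {
            // too little free heap: grow the heap so that afterwards
            // (free + extra) == min_free% of (total + extra)
            size_t extra = (total * opts.min_free - free * 100)
                    / (100 - opts.min_free);
            newPool(extra / PAGESIZE + 1); // size in pages
        }
        else
            minimize(); // too much free heap: return pools to the OS
    }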
Try to keep memory usage low more aggressively
Memory usage is minimize()d only when a big allocation is done. This could
be problematic for applications that only allocate small objects, as their
memory usage could grow a lot, especially if the eager allocation option is
used.
Trigger memory minimization even for small object allocations to avoid
that pathological case.
The new option is used to pre-allocate pools of memory at program start.
This could be useful if your program needs to load a starting working set
into memory before doing the actual work, as it might avoid some useless
initial collections while the program is just allocating but no memory is
freed yet.
The option takes a numeric parameter indicating the size of the initial
pool in MiB. Optionally, a number of pools to create can be specified by
appending "x" and that number. Each pool will have the specified pool size.
For example, pre_alloc=50 will allocate an initial pool of 50 MiB, while
pre_alloc=5x10 will allocate 10 pools of 5 MiB each. Anything that can't
be parsed correctly (like "", "5x", "5a10", "5x10x", etc.) makes the
program start with no pre-allocated memory, as usual.
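A sketch of how the "SIZE[xN]" parsing could look (illustrative only, not
the actual patch code):

    // returns false on any parse error ("", "5x", "5a10", "5x10x", ...)
    bool parse_pre_alloc(char[] v, out size_t size_mb, out size_t npools)
    {
        npools = 1;
        size_t i = 0;
        if (i == v.length || v[i] < '0' || v[i] > '9')
            return false; // the size in MiB is mandatory
        for (; i < v.length && v[i] >= '0' && v[i] <= '9'; i++)
            size_mb = size_mb * 10 + (v[i] - '0');
        if (i == v.length)
            return true; // plain "50": one pool of 50 MiB
        if (v[i++] != 'x' || i == v.length)
            return false; // "5a10" or a trailing "x"
        npools = 0;
        for (; i < v.length; i++)
        {
            if (v[i] < '0' || v[i] > '9')
                return false; // garbage after the pool count
            npools = npools * 10 + (v[i] - '0');
        }
        return true;
    }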
If any stats files (used by collect_stats_file and malloc_stats_file)
can't be created, the program segfaults trying to use a null FILE*.
It's extremely impolite to cause a strange segfault because of this, and
since there is no sensible error reporting mechanism either, we just
ignore those options if the selected files are not writable, as we do
with unknown options or any other wrong option parameters.
Eager allocation consists of allocating a new pool when a collection is
triggered (because an allocation failed to find enough free space in the
current pools). This enables the mutator to keep running while the mark
phase runs in a fork()ed process in parallel. This completes the concurrent
GC, decreasing the maximum pause time (not only the stop-the-world time)
dramatically (by almost 2 orders of magnitude).
As a side effect, the total run-time is greatly reduced too, because the GC
pressure is lowered by the extra allocated pools. The number of collections
needed by a program can drop by a factor of 3 to 9 depending on the load,
which explains the total run-time reduction, even in single-core
environments.
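A rough sketch of the idea, with assumed names (try_alloc(),
collection_in_progress(), alloc_from(), newPool() and start_collection()
are illustrative, not the actual patch code):

    void* alloc(size_t size)
    {
        if (auto p = try_alloc(size))
            return p;
        if (collection_in_progress())
            // the mark phase is running in the fork()ed child: don't
            // wait for it, eagerly allocate a new pool and keep going
            return alloc_from(newPool(size), size);
        start_collection(); // otherwise trigger a collection ...
        return try_alloc(size); // ... and retry (or grow the heap)
    }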
To allow the mutator to run in parallel with the mark phase, the freebits
of free pages must be set for *all* the possible block sizes, not only for
the start of the page, because the freebits are used as a starting point
for the mark bits, and the bin size of a freed page can be changed *after*
the mark phase has started, resulting in an inconsistency between mark bits
and free bits. Pools allocated during a parallel mark set all the mark bits
to avoid the sweep phase freeing those pages after the mark is done.
As a side effect, we can no longer copy the freebits as the starting mark
bits, because we don't keep the freebits updated precisely enough; doing
so would need a bit more work.
Since the freebits can be constructed in the mark phase, which we can do
in parallel, this might not be as useful as first thought.
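A minimal sketch of the freeing side, with hypothetical names
(freebits.set(), MIN_BIN_SIZE and the bit layout are assumptions):

    // when a page becomes free, set the free bit at *every* possible
    // block offset, so the bits stay consistent no matter which bin
    // size the page is reused with while the fork()ed mark is running
    void mark_page_free(Pool* pool, size_t pagenum)
    {
        size_t bits_per_page = PAGESIZE / MIN_BIN_SIZE;
        size_t first = pagenum * bits_per_page;
        for (size_t off = 0; off < bits_per_page; off++)
            pool.freebits.set(first + off);
    }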
In changeset b28fd72842fc9ce935bed74f7b2ba79f9cc59711 (Run the mark phase
in a fork()ed process) we inadvertently changed the lazy allocation of the
finals bit set to eager allocation.
That change left a lot of finals bit set initialization checks that are
not really needed with eager allocation.
We remove these redundant checks, as we decided to go with eager
allocation to trade some space for a little more speed: the extra checks
take time, and it's very rare for a whole pool to have no blocks that need
finalization, so the space saving is very rare too.
Lazy allocation can also hurt locality of reference, as all the bit sets
are likely to be allocated close together in memory, except for the lazily
allocated one.
Since the smallest bin size is big enough to store 2 pointers, the free
list is built storing in each free bin a pointer to the pool the bin
belongs to, so we don't have to find the pool when operating on a free bin.
To make this work, the pools have to be stored outside the DynArray (which
now holds only pointers to them), because sorted insertion moves pools
around, changing their addresses and breaking the pool pointers stored in
the free list.
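Something like the following (a sketch of the layout described above):

    // each free bin is at least 2 pointers big, so the link can carry
    // the owning pool along with the next pointer
    struct List
    {
        List* next; // next free bin of the same size
        Pool* pool; // pool this bin belongs to: no findPool() needed
    }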
Check the sentinel invariant in release builds too
When building a release, the sentinel invariant isn't checked, even if the
"sentinel" option is used. This patch always checks the invariant when the
option is activated, abort()ing the program if the invariant fails.
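A condensed sketch of the check (SENTINEL_PRE, SENTINEL_POST and the
helper functions are hypothetical names, not the actual patch code):

    void sentinel_invariant(void* p)
    {
        // verify the guard words surrounding the user block and give
        // up early and loudly if the mutator overwrote them
        if (*sentinel_pre(p) != SENTINEL_PRE
                || *sentinel_post(p) != SENTINEL_POST)
            abort();
    }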
This is the first big step towards a concurrent GC. The mark phase is run
in a fork()ed process, and the world is stopped only to do the fork(),
because we need each thread to dump its CPU registers onto the stack to
be scanned.
Forking is controlled via the option "no_fork" (which is false by default).
If not enough support from the underlying OS is found (i.e. no fork() or
no shared memory) or if fork() fails, the mark phase falls back to running
in the same process as the mutator (as it was done before this patch).
The mark and freebits bitmaps are shared between the two processes to
communicate the results of the mark phase. The freebits don't strictly
need to be shared, but in that case they would have to be set in the
mutator process, making pauses longer. Freebits should be revisited,
though.
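A condensed sketch of the flow, with assumed names (stop_the_world(),
start_the_world() and mark_roots() are illustrative); the key point is
that the mark and freebits bitmaps live in memory mmap()ed with MAP_SHARED,
so the child's results are visible to the parent:

    void collect()
    {
        stop_the_world(); // only to get the threads' registers dumped
        auto child = os.fork();
        if (child == 0)
        {
            mark_roots(); // results land in the shared bitmaps
            os._exit(0);
        }
        if (child == -1)
            mark_roots(); // fork() failed: mark with the world stopped
        start_the_world();
        // with a child, the mutator keeps running; the sweep happens
        // once the child is seen to have finished
    }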
The concurrent GC will fork() to run the collection, so it needs to know
if the underlying OS supports it. This patch renames the alloc module to
os to group all the needed OS abstractions in one module.
findPool() is one of the most used functions in the GC, usually taking 15%
of the GC time. This patch minimizes its use by converting the functions
findBase() and findSize() to Pool methods, avoiding calling findPool()
twice (in most cases, when calling findBase() or findSize() we already
know the pool).
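For example (a sketch of the call pattern):

    // before: both helpers searched the global pool table internally
    //     void* base = findBase(p);  // calls findPool(p)
    //     size_t size = findSize(p); // calls findPool(p) again
    // after: one lookup, then cheap Pool methods
    Pool* pool = findPool(p);
    void* base = pool.findBase(p);
    size_t size = pool.findSize(p);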
Making the GC an object makes no sense, since you can't instantiate it
more than once; it just makes the code unnecessarily indented.
The GC struct now only has attributes (several renamed), and they are
grouped only for clarity (and to make it easier to calculate the GC memory
overhead). All methods are converted to free functions that use a global
instance of the GC struct.
For some unknown reason, the GC implementation was divided in two: a GC
class and a Gcx struct. This patch unifies them into a single GC struct.
Only code is moved (and stats are adjusted).
Now D_GC_OPTS accepts a new boolean option: conservative. When true, the
heap is scanned conservatively, even when type information is available.
The option defaults to false.
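For example (the program name is a placeholder):

$ D_GC_OPTS=conservative=1 ./program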
This patch[1] is based on the patch provided by Vincent Lang (AKA wm4),
which was based on a patch[2] by David Simcha, both published in the bug
3463[3]. The patch needs a patched Tango runtime (which is part of the
patch[1] for the GC) and a patched[4] DMD to work.
The patched DMD passes type information about pointer locations to
allocations: a PointerMap. The PointerMap has a member bits, which is an
array of size_t elements. The first element is T.sizeof / size_t.sizeof,
where T is the type being allocated. The next elements are a bitmask
indicating which words should be scanned. After that, the following
elements store another bitmask with the information about pointers.
A moving collector could change the value of words marked as pointers, but
not of words that should only be scanned (those words are not guaranteed
to be pointers), so a block can only be moved if it's only reachable
through pointers. The pointer information is not used yet by this patch;
only the scan bits are used.
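A hypothetical 64-bit example of the layout just described:

    struct S
    {
        size_t tag; // word 0: no need to scan it
        void* next; // word 1: must be scanned, and is a real pointer
    }
    // PointerMap.bits for S would look like:
    // [2,    // S.sizeof / size_t.sizeof: the type is 2 words long
    //  0b10, // scan bitmask: only word 1 has to be scanned
    //  0b10] // pointer bitmask: word 1 is guaranteed to be a pointer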
The precise scanning algorithm is extremely simple and needs a lot of
optimization; this patch was kept simple on purpose. Optimizations will
follow in separate patches.
A pointer to the type information is stored at the end of the allocated
blocks (only if the blocks should be scanned; blocks marked with NO_SCAN
don't need the type information). This wastes some space, and space that
then has to be scanned, which tends to decrease performance quite a bit.
Statistics collected for each collection:
* Time spent in the malloc that triggered the collection.
* Time spent with the world stopped.
* Time spent doing the collection.
* Memory info before and after the collection: used, free, overhead and
wasted memory. Used is the memory used by the mutator, free is the
memory the mutator can request, overhead is the memory used by the
collector itself and wasted is memory that is not used by either the
mutator or collector and that can't be requested by the mutator either.
For each malloc() call:
* Time spent.
* Amount of memory requested.
* Attributes of the requested memory.
* A flag to tell if this call triggered a collection.
Statistics collection is controlled via the D_GC_OPTS environment
variable. To collect malloc statistics, use the option
malloc_stats_file; the value is the path to the file where to store the
malloc statistics (the contents will be replaced). To collect garbage
collection statistics, use the option collect_stats_file; the value is
the path to the file where to store the collection statistics (the
contents will be replaced). The generated files are in CSV format and
have headers that make them self-explanatory.
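For example, to collect both kinds of statistics (file and program names
are placeholders):

$ D_GC_OPTS=collect_stats_file=gc.csv:malloc_stats_file=malloc.csv ./program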
The GC offers a couple of options to debug memory problems, but they are
selectable only at compile time. Since the GC is part of the compiler
runtime, it's not very common for the user to recompile the GC when a
program has a memory problem, so making these options always available is
very desirable.
This patch allows configuring the GC via environment variables. Four
options are available: sentinel, mem_stomp, verbose and log file. Only the
first two are implemented right now.
For example, to check a program using memory stomping and a sentinel, you
can run it like this (using sh):
$ D_GC_OPTS=mem_stomp=1:sentinel ./program
As you can see, the value is optional for boolean options.
Even though free() can be called with a null pointer, the extra call might
be significant. On hard GC benchmarks, testing for null in the GC code
(i.e. avoiding the free() call) can reduce the GC time by almost 5%.
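That is, something as simple as this sketch:

    // instead of calling free() unconditionally:
    if (p !is null)
        free(p); // skip the call (and its overhead) for null
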
This code will be superseded by the statistics collection code, and it was
unmaintained and very probably broken (for example, the file and line
number were never filled in).
Use Tango bindings to C standard library functions
As we need to use more libraries, it becomes less practical to maintain
our own set of bindings, and since the GC only works with Tango, it makes
sense to just use the Tango bindings.
This will be an inherently concurrent GC, so having a non-threaded version
of it makes no sense. What's more, I think the non-threaded version
doesn't even compile.
This distinction is only made by Windows, and it adds extra complexity
that probably isn't worth it (especially for other OSs, where it adds
a little overhead too, in both space and time).
Other OSs (like Linux) even do all the committing automatically, under the
hood; see:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/vm/overcommit-accounting;hb=HEAD
Almost any system that supports valloc() supports mmap() too, and since
valloc() is a deprecated function, it doesn't make much sense to maintain
it as an allocation method.
To avoid the Tango dependency, we need to write our own C API interface.
This is done in the new gc.libc module. In the future, maybe this module
will use Tango or Phobos accordingly, but for now we stay free of
dependencies (at the expense of some extra work).
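A sketch of the kind of declarations such a module provides (only a few
representative C prototypes shown; the module's actual contents are in
the patch):

    // gc.libc: minimal bindings to the C standard library
    extern (C)
    {
        void* malloc(size_t size);
        void free(void* ptr);
        void* memcpy(void* dst, void* src, size_t n);
    }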
The code seemed to be broken: the current thread ID was stored at
initialization, and the GC then asserted that it always ran from that
thread, which seems far from reality (the GC can be invoked by any
thread).
The PRINTF version doesn't print the current thread ID anymore either.
The Concurrent D Garbage Collector (CDGC) is based on the "basic" garbage
collector from the Tango runtime. This first commit is a copy of this GC,
as it is in Tango 0.99.8.
The CDGC is designed only for Linux, at least for now.