Quantifying Nothing At All
There is a commonly cited paper used to back claims such as "garbage collection requires six times as much memory as manual memory management". Somehow it passed peer review, and it is even referenced by the Garbage Collection Handbook despite the bad news. But it is based on a fundamentally impossible test.
The paper uses an "oracular memory manager" to automatically generate points at which to free objects in Java code; Java does not expose manual freeing (and no one would free objects by hand when garbage collection is an option). An oracle is, roughly, a machine we can consult for answers to undecidable problems - the authors use such an oracle to decide when an object can be freed. The oracle is implemented by running the program on some given input, recording when each object is allocated and when it is last used, and thus when it can be freed.
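To make the two-pass construction concrete, here is a toy sketch (not the paper's actual implementation - all names and the event-trace representation are hypothetical): a profiling pass records each object's last use on one fixed input, and the "oracular" replay on the same input frees each object immediately after that point.

```python
def profile(events):
    """First run: events is a list of ("alloc", obj) or ("use", obj)
    steps for ONE fixed input. Returns the step index of each object's
    last use -- the point at which the oracle says it may be freed."""
    last_use = {}
    for step, (kind, obj) in enumerate(events):
        last_use[obj] = step          # allocation counts as a use too
    return last_use

def oracular_replay(events, last_use):
    """Second run on the SAME input: free each object immediately
    after its recorded last use. Returns the peak live-object count,
    the kind of figure the memory comparison rests on."""
    live, peak = set(), 0
    for step, (kind, obj) in enumerate(events):
        if kind == "alloc":
            live.add(obj)
        peak = max(peak, len(live))
        if last_use[obj] == step:     # oracle: safe to free now
            live.discard(obj)
    return peak

# The same allocation pattern must occur in both runs --
# the whole scheme depends on it.
trace = [("alloc", "a"), ("alloc", "b"), ("use", "a"),
         ("alloc", "c"), ("use", "b"), ("use", "c")]
print(oracular_replay(trace, profile(trace)))  # → 2
```

Note that the oracle is only valid for this exact trace: feed the replay a different input and the recorded free points become wrong, which is precisely the objection below.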
The baseline program thus only works for the provided input (or any other input which causes the same allocation pattern), and of course it can only work with deterministic programs, so threading is out of the question. Running a program once on a given input to collect information for a faster run on the same input is a fundamentally useless endeavour, so we may conclude that it is really only useful as a benchmarking technique.
But, with such information, it should be no surprise that an oracular memory management system is going to be faster! With such clairvoyance we could pull still more tricks. We may as well statically decide which addresses objects should be allocated at, and cut out the middle-malloc-man. But that would probably look too sketchy. If you have that kind of knowledge, you may as well replace the program with one that returns constant output, and get a close-to-infinite speedup. That would be very sketchy - but neither "optimisation" is far detached from using oracular memory management to represent anything in a benchmark.
Really, the conclusion should be "an ideal memory manager with perfect knowledge of program execution would match the performance of a memory manager with very little knowledge of the program, while using a sixth of the memory". And then the response would be "sure, but why the hell is that an interesting statistic?" Of course, it's not interesting at all.