A Glance At Garbage Collection In Object-Oriented Languages

Garbage collection (GC) is a technology that frees programmers from the hassle of explicitly managing memory for every object they create. Traditionally, the benefit of this automation has come at the cost of significant overhead. However, more efficient algorithms and techniques, coupled with the increased computational power of modern computers, have made the overhead negligible for all but the most extreme situations.


In fact, in many situations automated memory allocation can be significantly faster and more efficient than traditional approaches. Combined with increased programmer productivity, it would be negligent not to consider automated memory management a core component in nearly all future programming paradigms.


Though recent approaches have expanded the feasibility of an automated memory manager, the basic technology has been around for quite some time. GC was first invented by John McCarthy in 1958 as part of the implementation of Lisp. Though it has gained popularity through Java and .NET, it is included in languages such as Eiffel, Haskell, ML, Modula-3, Perl, Prolog, Scheme, and Smalltalk. Traditionally popular programming languages, such as C/C++, require the programmer to “micro-manage” memory consumption. Manual memory management is enormously tedious and error-prone; however, it can be more efficient in many ways if properly handled. This discrepancy in efficiency has slowed the widespread adoption of the automated approach.

Algorithms and Technologies

Though GC can be handled in a variety of ways, the core idea is the same across all techniques. The goal is to differentiate between “garbage” and “non-garbage” objects and automatically free the memory that “garbage” objects are occupying. “Garbage” essentially refers to an object that can no longer be reached from the program’s roots (such as local variables on the stack and global variables) through any chain of references. If an object cannot be reached, it can never be used again and is therefore considered “garbage.”

Reference Counting

Reference counting is a form of garbage collection where each object stores a count of all the other objects that refer to it. Every time a reference to an object is created, destroyed, or reassigned, the reference counts of all involved objects are updated accordingly. If an object’s reference count drops to zero, it is effectively unreachable and considered “garbage.” Though this technique provides a good conceptual example, it is rarely used in practice as a complete solution. Reference counting requires frequent updates, and it cannot account for cyclical references, where an object directly or indirectly refers to itself. Additionally, though not nearly as troublesome as the first two problems, reference counting requires every object to reserve a special storage space for its reference count.
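The mechanics above, including the fatal cycle problem, can be sketched in a few lines of Python. This is a toy model with invented names (`Obj`, `add_ref`, `drop_ref`), not any real collector’s implementation:

```python
class Obj:
    """A toy object that manually tracks its reference count."""
    def __init__(self, name):
        self.name = name
        self.refcount = 0
        self.refs = []                  # outgoing references

    def add_ref(self, other):
        self.refs.append(other)
        other.refcount += 1

    def drop_ref(self, other):
        self.refs.remove(other)
        other.refcount -= 1
        if other.refcount == 0:         # unreachable: free it (and cascade)
            for child in list(other.refs):
                other.drop_ref(child)
            print(f"freed {other.name}")

# a -> b; dropping the last reference to b frees it immediately
a, b = Obj("a"), Obj("b")
a.refcount = 1                          # pretend a root (a stack variable) holds a
a.add_ref(b)
a.drop_ref(b)                           # prints "freed b"

# the cycle problem: c and d refer to each other, so neither count
# ever reaches zero even though no root can reach them
c, d = Obj("c"), Obj("d")
c.add_ref(d)
d.add_ref(c)
print(c.refcount, d.refcount)           # 1 1 -- leaked despite being unreachable
```

Note how freeing is immediate and incremental (a point in reference counting’s favor), but the cycle at the end is never reclaimed.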


Mark-Sweep

The mark-sweep collection technique is one of the most basic tracing garbage collection technologies. Tracing garbage collectors start by considering every object to be garbage and then trace through every reachable object; anything left untraced remains “garbage.” A mark-sweep collector recognizes objects that have already been flagged as reachable and can therefore properly handle cyclical references. After marking every reachable object, it scans through all objects in the heap and frees any object not flagged as reachable. The downside of this approach is that memory becomes fragmented, since objects are removed sporadically throughout the heap. Also, a mark-sweep collection can be quite resource-intensive, since it must scan every object in memory, regardless of whether or not the object is flagged as “garbage.”
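The two phases can be sketched as follows. Again, this is a toy model with invented names; a real collector walks raw memory, not Python lists:

```python
class Heap:
    """Toy mark-sweep heap: objects carry a 'marked' flag and child references."""
    def __init__(self):
        self.objects = []
        self.roots = []

    def alloc(self, name, children=()):
        obj = {"name": name, "children": list(children), "marked": False}
        self.objects.append(obj)
        return obj

    def mark(self, obj):
        if obj["marked"]:
            return                      # already visited: this handles cycles
        obj["marked"] = True
        for child in obj["children"]:
            self.mark(child)

    def collect(self):
        for root in self.roots:         # mark phase: trace from the roots
            self.mark(root)
        # sweep phase: scan every object in the heap, free the unmarked ones
        self.objects = [o for o in self.objects if o["marked"]]
        for o in self.objects:
            o["marked"] = False         # reset flags for the next cycle

heap = Heap()
a = heap.alloc("a")
b = heap.alloc("b", [a])
c = heap.alloc("c")                     # no root will reach c
c["children"].append(c)                 # even a self-cycle gets collected
heap.roots = [b]
heap.collect()
print([o["name"] for o in heap.objects])    # ['a', 'b'] -- c was swept
```

Unlike reference counting, the self-referencing object `c` is reclaimed; the price is that the sweep phase touches every object on the heap.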


Copying Collection

The copying collection technique can be dramatically faster than mark-sweep because it is able to dismiss “garbage” objects without ever scanning or analyzing them. Instead of removing “garbage” objects from their positions in memory, it relocates all “non-garbage” objects to an entirely new region of memory and abandons everything left behind. It does this by splitting its allocated memory into two sections: the active section and the unused section. In the active section, objects are allocated contiguously, one on top of the other, making allocation extremely efficient: a new object is placed simply by bumping a pointer at the top of the heap. When the active section runs out of free memory, the garbage collector traces through all reachable objects and copies them contiguously into the unused section. The active section still contains a mix of reachable and unreachable objects, but that no longer matters, since every reachable object has been copied out. The two sections then switch roles, and the old active section is reclaimed wholesale. When the new active section fills up, the process repeats and the sections are swapped again. This approach provides excellent allocation speed and virtually eliminates fragmentation. The downsides, however, are that long-lived objects are copied back and forth between the two sections every time the collector runs, and that it requires twice the amount of memory to run effectively.
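A rough sketch of a semispace copying collector, modeled loosely on Cheney’s algorithm (the class and method names here are invented for illustration):

```python
class SemispaceHeap:
    """Toy copying collector: bump allocation into an active space,
    evacuation of live objects into a fresh to-space on collection."""
    def __init__(self):
        self.active = []                # live objects sit here, contiguously
        self.roots = []

    def alloc(self, name, children=()):
        obj = {"name": name, "children": list(children)}
        self.active.append(obj)         # bump allocation: just append
        return obj

    def collect(self):
        to_space = []
        forwarded = {}                  # old object id -> its new copy

        def copy(obj):
            if id(obj) in forwarded:    # already evacuated: reuse the copy
                return forwarded[id(obj)]
            new = {"name": obj["name"], "children": obj["children"]}
            forwarded[id(obj)] = new
            to_space.append(new)
            return new

        self.roots = [copy(r) for r in self.roots]
        scan = 0                        # Cheney scan: to-space is the worklist
        while scan < len(to_space):
            obj = to_space[scan]
            obj["children"] = [copy(c) for c in obj["children"]]
            scan += 1
        self.active = to_space          # swap: the old space is dropped wholesale

heap = SemispaceHeap()
a = heap.alloc("a")
heap.alloc("garbage")                   # never referenced by any root
b = heap.alloc("b", [a])
heap.roots = [b]
heap.collect()
print([o["name"] for o in heap.active]) # ['b', 'a'] -- compacted, garbage gone
```

Notice that the unreferenced object is never visited at all; the collector’s work is proportional to the number of live objects, not the size of the heap.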


Mark-Compact

To retain the benefits of the copying collector while accounting for memory constraints and long-lived objects, the mark-compact collector was developed. Though it is a bit more complex than a copying collector, the fundamental idea is the same: there is a definite separation between the active section of memory and the free/unused section. Unlike the copying collector, however, the mark-compact collector marks all reachable objects and then compacts them at the bottom of the heap, leaving the remaining memory free for consumption. A further advantage is that it does not have to fully compact the memory on every collection, and since long-lived objects tend to get pushed toward the bottom of the heap, they are not as frequently moved.
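The sliding behavior can be sketched like this (a toy model with invented names; each list slot stands in for a heap cell):

```python
class CompactingHeap:
    """Toy mark-compact heap. Each object occupies one slot; compaction
    slides survivors to the lowest slots, closing the holes that a
    plain sweep would leave behind."""
    def __init__(self):
        self.slots = []                 # heap cell i holds one object or None
        self.roots = []

    def alloc(self, name, children=()):
        obj = {"name": name, "children": list(children), "marked": False}
        self.slots.append(obj)
        return obj

    def collect(self):
        stack = list(self.roots)        # mark phase (iterative trace)
        while stack:
            obj = stack.pop()
            if obj is not None and not obj["marked"]:
                obj["marked"] = True
                stack.extend(obj["children"])
        live = [o for o in self.slots if o is not None and o["marked"]]
        for o in live:
            o["marked"] = False
        # compact phase: survivors slide to the bottom in their original
        # order; everything above them is one contiguous free region
        self.slots = live + [None] * (len(self.slots) - len(live))

heap = CompactingHeap()
old = heap.alloc("old")                 # allocated first: stays at the bottom
heap.alloc("junk")
young = heap.alloc("young", [old])
heap.roots = [young]
heap.collect()
print([o and o["name"] for o in heap.slots])    # ['old', 'young', None]
```

Because relative order is preserved, objects that survive many collections settle near the bottom of the heap and need not move again on later passes.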


Generational Collection

To further exploit the difference between short-lived and long-lived objects, the generational approach was developed. In a generational collection, the heap is divided into multiple generations. Objects are created in the base (youngest) generation and are promoted to higher (older) generations after meeting some criterion, usually related to the age of the object. Garbage collection can then run at different intervals, and even use different techniques, depending on the generation. The one complication with this approach arises when an older-generation object references a younger-generation object. Since a young-generation collection does not trace through the older generations, such an intergenerational reference would be missed if the young object were not also referenced by something in its own generation. Luckily, there are only two ways for an older object to come to reference a younger object: either a reference in the older object is modified to refer to a younger object, or a young object that refers to other young objects gets promoted to an older generation. Languages that employ a generational collector account for this problem by maintaining a list of intergenerational references (often called a “remembered set”), updated whenever such an occasion occurs.
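A minimal two-generation sketch, including the remembered set that rescues old-to-young references (all names here — `GenerationalHeap`, `write_ref`, `minor_collect` — are invented, and the promotion rule of “survive two collections” is an arbitrary choice for the example):

```python
class GenerationalHeap:
    """Toy two-generation heap. Minor collections trace only the young
    generation; a remembered set records old->young references so
    they can serve as extra roots."""
    def __init__(self):
        self.young, self.old = [], []
        self.roots = []
        self.remembered = []            # old objects that point into young

    def alloc(self, name):
        obj = {"name": name, "children": [], "age": 0}
        self.young.append(obj)
        return obj

    def write_ref(self, parent, child):
        """Write barrier: record old->young stores in the remembered set."""
        parent["children"].append(child)
        if parent in self.old and child in self.young:
            self.remembered.append(parent)

    def minor_collect(self):
        young_ids = {id(o) for o in self.young}
        # trace only inside the young generation; old objects are not
        # scanned, but remembered-set entries contribute extra roots
        stack = [r for r in self.roots if id(r) in young_ids]
        for old_obj in self.remembered:
            stack.extend(old_obj["children"])
        live = set()
        while stack:
            obj = stack.pop()
            if id(obj) in young_ids and id(obj) not in live:
                live.add(id(obj))
                stack.extend(obj["children"])
        survivors = [o for o in self.young if id(o) in live]
        for o in survivors:
            o["age"] += 1
        self.old += [o for o in survivors if o["age"] >= 2]   # promotion
        self.young = [o for o in survivors if o["age"] < 2]
        self.remembered = []

heap = GenerationalHeap()
elder = heap.alloc("elder")
heap.roots = [elder]
heap.minor_collect(); heap.minor_collect()   # elder survives twice: promoted
kid = heap.alloc("kid")                      # no root refers to kid...
heap.write_ref(elder, kid)                   # ...only the promoted elder does
heap.minor_collect()
print([o["name"] for o in heap.young])       # ['kid'] -- saved by the barrier
```

Without the `write_ref` barrier, the minor collection would never see `kid` (its only referrer lives in the untraced old generation) and would free a live object.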

Parallel Copying

With the advent of multi-processor systems, many garbage collection techniques have needed multi-thread-aware implementations. The parallel copying collection is a version of copying collection that runs as many threads as there are CPUs, preventing the workload from being restricted to only one CPU. The standard copying collection is unable to spread the workload across processors, which creates an unnecessary bottleneck in performance. Compared with traditional copying collection, parallel copying can in principle speed collection up by a factor approaching the number of CPUs available.


Concurrent Collection

The concurrent approach was developed to let the application execute as seamlessly as possible alongside (and at the same time as) the garbage collector. To accomplish this, the collector first briefly halts all other activity to mark all reachable objects. Once that operation is complete, it creates several garbage collection threads that execute concurrently with the program, working from the marked and unmarked objects. Each thread does its part to remove “garbage” objects from memory.

Parallel Scavenge

The parallel scavenge collector is similar to the parallel copying collector; however, it is optimized for large memory heaps (10GB plus) on multiple-CPU systems. The approach is designed to reduce pauses and maximize throughput. It is rarely used on anything but servers.


Finalization

Though not exactly a garbage collection technique in itself, it is important to recognize how object finalization is handled in managed-memory languages. Traditionally, languages without automated memory management (such as C++) provided destructors, which free any resources an object might hold. The terminology and syntax for finalization are similar, but the implementation is quite different. Since object “destruction” is not always explicit under the various garbage collection techniques (copying collection, for example), the runtime must take extra steps to call a finalization method when an object is freed from memory. Specifically, it must maintain a list of objects with finalizers, determine when those objects become unreachable, prevent them from being removed from memory, execute their finalize methods, and only then allow them to be marked as “garbage” and collected on a later pass. As you can no doubt see, finalization defeats many of the benefits of the various garbage collection techniques and should only be used when absolutely necessary.
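The two-pass cost described above can be made concrete with a toy mark-sweep heap extended with finalizers (invented names; a simplification in that the object graph here is flat, with roots only):

```python
class FinalizingHeap:
    """Toy heap with finalization: on its first collection, an unreachable
    object with a finalizer is kept alive while its finalizer runs; only
    a later collection actually frees it."""
    def __init__(self):
        self.objects = []
        self.roots = []

    def alloc(self, name, finalizer=None):
        obj = {"name": name, "finalizer": finalizer, "finalized": False}
        self.objects.append(obj)
        return obj

    def collect(self):
        reachable = {id(r) for r in self.roots}
        survivors = []
        for obj in self.objects:
            if id(obj) in reachable:
                survivors.append(obj)
            elif obj["finalizer"] and not obj["finalized"]:
                obj["finalizer"](obj)   # run finalize, resurrect for one cycle
                obj["finalized"] = True
                survivors.append(obj)
            # else: truly freed this cycle
        self.objects = survivors

log = []
heap = FinalizingHeap()
heap.alloc("file", finalizer=lambda o: log.append("closing " + o["name"]))
heap.roots = []                     # nothing references the object
heap.collect()                      # first pass: finalizer runs, object survives
print(len(heap.objects), log)       # 1 ['closing file']
heap.collect()                      # second pass: now it is actually freed
print(len(heap.objects))            # 0
```

The object needed two full collection cycles to disappear, which is exactly why finalizers are discouraged except where truly necessary.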

Java Implementation

Java, specifically more recent versions of the JVM, implements a two-generation (young and old) approach with different collection techniques for each generation. The default collector for the young generation is the copying collector and the default collector for the old generation is the mark-compact collector. For multiple-CPU systems, Java provides the option of a parallel copying collector or a parallel scavenge collector for the young generation and a concurrent collector for the old generation. Since the vast majority of newly created objects “die young,” Java’s use of a copying collector on the young generation and a mark-compact on the older generation appears to provide a good mix of technologies for optimal speeds.

.NET Implementation

Like Java, the .NET platform uses a multi-generational approach to garbage collection. However, it has three generations (0, 1, and 2) instead of two, with the ability to expand to even more in the future if necessary. Furthermore, where Java uses different collection techniques for different generations, .NET currently uses the mark-compact collection technique for all generations. Generation 0 is where newly created objects are placed. Objects are promoted to Generation 1 whenever they survive garbage collection on Generation 0. Similarly, when objects in Generation 1 survive garbage collection they are placed in Generation 2, and objects that survive Generation 2 garbage collection are simply compacted within Generation 2. The optimizations that determine when to compact which generations are numerous and frequently changing; however, it is safe to say that Generation 0 is the most frequently compacted generation due to the short lifespan of most objects. To further optimize performance, .NET sections off a part of the heap for large objects and never compacts that section, in order to prevent the CPU from wasting cycles on shifting large amounts of memory. Though .NET does not vary its collection technique between generations, it is clear that considerable time has been spent fine-tuning the employed technique for peak performance.


Although garbage collection can never be as efficient in every way as a well-programmed “micro-managed memory” application, it has the potential to be as efficient, or even more efficient, in areas that are becoming increasingly important. For example, though garbage collection might require more memory and more CPU time, it can effectively compensate with increased memory allocation speed and speedier application development. As memory becomes cheaper and CPUs become faster, the drawbacks become even less noticeable. Furthermore, garbage collectors are continuously being optimized to bring speeds even closer to those of manual memory management. The ratio of benefit to burden for this incredible technology has finally leveled out and will only improve in the future.



About the Author

Zachary Pinter is a student attending the University of West Florida in Pensacola, FL. He is an aspiring programmer, OS enthusiast, and subscribed member of OSNews.


