As a project about two years back, a group of us students made this Java applet to show how different tree algorithms work. Two different balanced trees are presented along with a traditional BST. Hope somebody finds it useful.
http://www.engin.umd.umich.edu/CIS/
As a tip for the applet discussed in my last post: you can add random nodes just by clicking the Add button. Also, please play around with the animation and code speed sliders, as they play an important role in being able to understand what's happening.
Jeff, give a full URL; it's damn hard to navigate through this website.
There is some evidence suggesting that humans have an even less efficient search algorithm than a linear search. How is that possible?
Given the (memorised, not read) datasets [3], [3 8], [3 8 9], [3 8 9 4], …, [3 8 9 4 6 0 7 5], if you’re asked to search for 1, the time it takes you to say it isn’t there increases linearly.* Now, if you were asked to search for 4 (assuming it’s in the lists), you would think you’d just remember where it was and stop there, but the time it takes to discover it’s in the list increases linearly in almost the same pattern as saying it’s not there. One possible explanation is that you always do an exhaustive search, regardless of whether you’ve found it…
* Note that I’m not talking about a set of sets, each element a subset of the next, but just the idea that there’s a linear increase in the size of the set and 1 still doesn’t exist in them.
Learn C programming with algorithms & data structures… lazy… guilty…
The article just discusses binary trees as taught in any decent CS syllabus.
And lastly, why is he saying “better algorithm” and then going on to describe a better data structure?
Also, about GList and GSList: even as a Gtk programmer, I’d say Qt’s data structures are waaaay cooler and faster.
This article is not cool – unless you skipped college.
Hmm… but I should say I flunked my Algorithms course once… (Indian colleges are really tough.)
This is exactly what I learned in some of my logic courses at college. So how is this better? It’s the same old, same old.
“This is exactly what I learned in some of my logic courses at college. So how is this better? It’s the same old, same old.”
You obviously didn’t read the paragraph that said it was for beginners and that most Gnome hackers would already know about binary search trees.
He wasn’t trying to teach you anything. He’s trying to teach, I would guess, scripting-language programmers who have just started making little applications.
I hope he follows up on this with some corollary.
I taught a few hobby classes a few years ago, and I can just say that the ‘aha’ factor for binary search trees was nothing compared to what came out of heaps, treaps, and different types of balanced trees. (A bit of a recommendation – don’t use red-black trees when tutoring young students; they’re hard and demand a lot from the students. Take AA-trees for a spin instead: http://user.it.uu.se/~arnea/ps/simp.pdf)
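For anyone curious why AA-trees are so much easier to teach: the whole rebalancing story is just two tiny primitives, skew and split, from the Andersson paper linked above. Here is a rough sketch (my own code; the node layout is hypothetical and the insert/delete glue is omitted):

```cpp
// Sketch of AA-tree rebalancing primitives (after Andersson's paper).
struct AANode {
    int key;
    int level;          // AA-tree "level"; leaves sit at level 1
    AANode* left;
    AANode* right;
};

// skew: remove a left horizontal link (left child at the same level)
// with a single right rotation.
AANode* skew(AANode* t) {
    if (t && t->left && t->left->level == t->level) {
        AANode* l = t->left;
        t->left = l->right;
        l->right = t;
        return l;
    }
    return t;
}

// split: remove two consecutive right horizontal links with a single
// left rotation, promoting the middle node one level.
AANode* split(AANode* t) {
    if (t && t->right && t->right->right &&
        t->right->right->level == t->level) {
        AANode* r = t->right;
        t->right = r->left;
        r->left = t;
        ++r->level;
        return r;
    }
    return t;
}
```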
There are three levels of optimization:
1. algorithmic
2. bare metal (assembler)
3. metal-aware (page faults, TLB misses, cache lines, stalls)
I’ve found the 3rd type to be highly productive.
Avoid touching memory, and especially avoid touching cold (not recently used) memory. Be careful about alignment. Sometimes power-of-two data structures will hurt your cache usage patterns, because caches have N-way associativity instead of full associativity.
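A small illustration of the power-of-two point (my own sketch, not from the comment above): walking down a column of a row-major matrix whose row stride is an exact power of two can map every access to the same cache set on an N-way associative cache, while padding the stride breaks the pattern.

```cpp
#include <cstddef>
#include <vector>

// Sums one column of a row-major matrix stored in a flat vector.
// With stride = 1024 doubles (an 8 KiB power-of-two stride) the column
// elements tend to collide in the same cache sets; a padded stride such
// as 1024 + 8 spreads them out and usually runs noticeably faster.
double sum_column(const std::vector<double>& m, std::size_t rows,
                  std::size_t stride, std::size_t col) {
    double s = 0.0;
    for (std::size_t r = 0; r < rows; ++r)
        s += m[r * stride + col];
    return s;
}
```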
A nice beginner article.
One alternative to B-trees is skip lists, but I’m having a hard time finding good data about the circumstances under which they outperform a B-tree. I get the feeling that there should be a class of problems where they are superior (the algorithm is really very simple), but I’d have to do some research to prove it. Anyone got some research references on them?
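For what it's worth, the search itself really is tiny; here is a minimal sketch (my own code, hypothetical node layout) of the part people usually point to when they call skip lists simple:

```cpp
#include <vector>

// A skip-list node carries a tower of forward pointers; level 0 is an
// ordinary sorted singly linked list, higher levels skip ahead.
struct SkipNode {
    int key;
    std::vector<SkipNode*> forward;   // forward[i] = next node at level i
};

// Search from a header node whose tower spans every level in use.
// Expected O(log n) when node heights are chosen by coin flips.
bool skiplist_contains(const SkipNode* head, int key) {
    if (head->forward.empty()) return false;           // empty list
    const SkipNode* x = head;
    for (int lvl = static_cast<int>(head->forward.size()) - 1; lvl >= 0; --lvl)
        while (x->forward[lvl] && x->forward[lvl]->key < key)
            x = x->forward[lvl];
    x = x->forward[0];                                  // first key >= target
    return x && x->key == key;
}
```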
Very well explained. I’ll be happy to read the next blog to learn how to keep the tree sorted/balanced!
On the previous post I should say that I don’t understand the 3rd type of optimization. Could you explain it in more detail, or post a link to someone who does?
http://www.axcs.org/pat/index.php?p=3
Kind of light on the technical details, but a binary search isn’t very hard to implement.
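Agreed – for reference, a minimal iterative version looks roughly like this (my own sketch, not taken from the linked post):

```cpp
#include <cstddef>

// Iterative binary search over a sorted array; returns the index of
// `key`, or -1 if it is absent. O(log n) comparisons.
long binary_search(const int* a, std::size_t n, int key) {
    std::size_t lo = 0, hi = n;                 // half-open range [lo, hi)
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;   // avoids overflow of lo + hi
        if (a[mid] < key)       lo = mid + 1;
        else if (a[mid] > key)  hi = mid;
        else                    return static_cast<long>(mid);
    }
    return -1;
}
```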
Search for “AVL tree”, which is basically a balanced binary search tree. I.e., it tries to keep the maximum height of the tree at a log n level by performing a series of rotations when data is inserted or removed.
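The rotation itself is only a couple of pointer swaps; here is a sketch of a single left rotation (my own code and node layout; a real AVL implementation would also update the stored heights or balance factors):

```cpp
struct Node {
    int key;
    Node* left;
    Node* right;
};

// Rotate the subtree rooted at x to the left and return the new root.
//
//      x                 y
//     / \               / \
//    A   y     ==>     x   C
//       / \           / \
//      B   C         A   B
Node* rotate_left(Node* x) {
    Node* y = x->right;
    x->right = y->left;   // subtree B moves under x
    y->left = x;          // x becomes y's left child
    return y;             // y is the new subtree root
}
```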
You can admit that someone else has already done a better job than you can, and download this: http://judy.sf.net
It seems a nice data structure that is highly underrated.
Anyway, nice article for newbies.
Sorry, the full URL is http://www.engin.umd.umich.edu/CIS/course.des/cis350/treetool/index…
> Also, about GList and GSList: even as a Gtk programmer, I’d say Qt’s data structures are waaaay cooler and faster.
Well, glib has hash tables and trees as well, all there for you to use.
If you have small lists/arrays, you don’t do a binary search on them or put the items in a hash table. A linear search can be very fast if the item counts are small.
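To make that concrete, the whole "fast path" for a handful of items is just a scan over contiguous memory – no hashing and no pointer chasing, and the array likely fits in a cache line or two (sketch of my own):

```cpp
#include <cstddef>

// Straight scan; for small n this often beats both binary search and a
// hash lookup because the constant factors dominate.
long linear_search(const int* a, std::size_t n, int key) {
    for (std::size_t i = 0; i < n; ++i)
        if (a[i] == key)
            return static_cast<long>(i);
    return -1;
}
```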
Templated data structures are cooler than storing void* pointers, especially because they can avoid an extra indirection. E.g. a GSList node is 2 pointers, one for data and one for next, while a templated list node is 1 next pointer plus the value itself. Templates do some things better – i.e. they give the compiler more information to optimise with – and the second data structure can be as much as 50% faster in a tight loop thanks to the caches.
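Roughly what the two layouts look like (my own sketch; the first struct mirrors glib's two-pointer node rather than using glib itself):

```cpp
// glib-style node: the payload lives behind a separate void* allocation,
// so every visit pays an extra pointer dereference.
struct GSListLike {
    void*       data;
    GSListLike* next;
};

// Templated node: the value is stored inline, so it arrives on the same
// cache line as the next pointer.
template <typename T>
struct ListNode {
    T         value;
    ListNode* next;
};

// Walking a list of ints with each layout.
int sum_void(const GSListLike* n) {
    int s = 0;
    for (; n; n = n->next)
        s += *static_cast<const int*>(n->data);   // extra indirection
    return s;
}

int sum_inline(const ListNode<int>* n) {
    int s = 0;
    for (; n; n = n->next)
        s += n->value;                            // value is right here
    return s;
}
```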
A couple of thoughts:
1) “Metal-aware” optimizations are indeed very useful at times; however, they are often extremely sensitive to the precise hardware they run on. Even within a single hardware family, moving from one system to another may result in a large drop in performance because of design differences – even if the new system is supposedly “higher performance” than the original.
2) In the context of this article, one such optimization that isn’t mentioned is branch prediction. Balanced binary trees tend to eliminate the effectiveness of branch prediction, since they are designed to keep the probability of taking either branch very close to 50%.
Although I have not done the experimental work yet, I have given this quite a bit of thought. If the tree were only approximately balanced, given some “skew factor” favoring either left or right branches, then branch prediction optimization could improve performance. Of course, too much skew would lengthen the average path enough to eliminate any benefit gained, and could easily increase the cost per search. In addition, the introduction of a skew factor requires some measure of runtime overhead which starts the entire process at a performance loss before the optimal skew value is found.
The question is, “Will this approach result in a net performance optimization?” The next question would be “under what conditions would it result in the optimization?”
The function for computing the cost includes as parameters:
o The number of nodes in the tree
o The cost of traversing from one node to another
o The cost of applying the “skew” itself
o The cost of a comparison
Skewing arguably favors a large number of nodes, since this will best exercise the branch prediction optimization.
The costs of traversal and comparison implicitly include cache-miss costs. Higher costs here penalize skewing, since skew increases the average traversal length.
The cost of applying the skew relates to the number of updates to the tree versus the number of lookups, since the skew only has to be applied on updates, while lookups reap the branch prediction benefit. Thus, high lookup/update ratios allow for higher costs in applying the skew.
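Just to put rough symbols on that (my own back-of-the-envelope sketch, not the poster's model): with $n$ nodes, traversal cost $t$, comparison cost $c$, per-update skew cost $s$, $r$ lookups per update, and a skew that sends a fraction $p$ of keys down the preferred branch, and assuming the tree stays weight-balanced in the ratio $p:(1-p)$, the expected search depth for a random key is about $\log_2 n / H(p)$, so

$$\text{cost per lookup} \;\approx\; \frac{\log_2 n}{H(p)}\bigl(t + c - b(p)\bigr) + \frac{s}{r},
\qquad H(p) = -p\log_2 p - (1-p)\log_2(1-p),$$

where $b(p)$ stands for whatever per-node saving better branch prediction buys at skew $p$ – exactly the unknown the experiment would have to measure. At $p = 1/2$ this reduces to the usual $\log_2 n$ path with no prediction benefit; skewing pays off only while the gain $b(p)$ outruns the growth of $\log_2 n / H(p)$.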
Now, if skewing does improve performance, and the results can tolerate a more expensive skew application, then it may be possible to implement a dynamic skewing calculation which determines the nature of this skewing at runtime. This would then give a measure of portability to the optimization, and provide platform-independent optimization.