
My Blog

One new concept a day (at the least). One new thing to explore in the exciting world of Computer Science each day. That's the Programmer in Training promise :P

Code and learn, boys. Code and learn.


Tuesday 9 December 2014

Concept 7: An Intro to AVL Trees

So I recall talking about BSTs once, and ending it on a note of concern. The problem with a BST was that, following its almighty rule, it was still possible to create something akin to a linked list. This completely destroyed the whole reason behind using a BST, which was that a BST can potentially reduce the time for searching for a particular node to O(lg n) (and yeah, the lg is not a mistake; it is nothing but the logarithm to the base 2).

So how do we ensure our dear friend the Binary Search Tree retains its O(lg n) time required for searching? Enter the concept of balancing and the world of balanced binary search trees. In this episode, we are gonna be taking a look at the basics of AVL Trees.

Prerequisites

Ah, the prerequisites. Well, for today's post, all you are gonna be needing is BSTs, which you can read about right here. Other than that, you may need to hold on to your wits, because this is not exactly a very simple topic.

The Beginning

In the beginning, there were Binary Search Trees. And they were good. But two dudes by the names of Adelson-Velsky and Landis (yes, that's where the A, V and L come from) realized that they were not good enough! So they set out to create a new data structure, inventively named the AVL Tree after themselves (and confused some people: why would you do this? Most people (including myself, until recently) don't even know what AVL stands for. At least you could have named it after its purpose or something). And then began a revolution of more streamlined and effective algorithms, blah blah blah.

So anyway, cutting to the chase, what necessitated the AVL data structure was outlined in, like, the first paragraph of this post. What is more important is HOW it works. The AVL data structure does what we want it to do: it reduces searching to a worst case of O(lg n). What this means is that the BST's potential worst-case O(n) search has been drastically reduced in complexity (hehe, time-wise, m'boy! Programmers always have the worst of it when reducing so-called "complexity"). Basically, we are gonna have faster searches, guaranteed.

What about insertion and deletion? The incredible part about AVL Trees is that even these take O(lg n) time. Sure, that may not sound as good as arrays or linked lists (those can offer constant-time insertion, but are meh on searching), but this combination makes AVL Trees pretty frickin' awesome in terms of data structures.

How it works

Okay, so the whole point of this post is to get you acquainted with AVL Trees. I'm not going to cover the mechanics of insertion and deletion here, because those concepts are quite long and hard to wrap your brain around, and have the potential to make this post a gazillion miles long (notwithstanding my penchant for writing, like, a lot). However, I will cover the basics of what an AVL Tree looks like and how it works.

Before we begin, I want to make one thing clear. I am using a Height Convention of 1. What this means is that a single leaf node possesses a height of 1 in my book. Now I know a lot of you have been taught otherwise, with heights of 0 and 2 and whatnot, but forget about all that and follow this convention when reading this post. It will make further discussion a lot more painless.

For those of you who don't know what the height of a BST is, first of all, shame on you. I know I did not cover that in my post on BSTs, but I never said you have to stop there. Google stuff and learn! It's good for you. Secondly, the definition is basically this:

Height(node x) = number of nodes on the longest path from node x down to a leaf, including x itself

I am using the "including" convention here. A lot of people use the "excluding" convention (which I believe is the prevailing convention) but I don't like that. According to that, the height of a single leaf node is 0, which is plainly absurd. So no, I am not going down that path.

Anyway, AVL Trees are BSTs (which means the BST Rule applies), with one more rule:

For any node in an AVL Tree, the difference in height between its left and right subtrees is either 1, 0 or -1.

Confused? Here's what it means. Consider the node A and its subtrees B and C below.

In the above, remember that B and C are the topmost nodes of those subtrees (that's the subtree notation). Now in an AVL tree, height(B) - height(C) = 0 or 1 or -1, and this is true for all nodes A in the AVL tree.

In other words, no matter which node you take, its two subtrees either have equal height, or one subtree is exactly one level taller than the other. Every node is almost completely balanced.
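
To make the rule concrete, here is a minimal JavaScript sketch (my own, not from any particular library) that computes heights under the convention above and checks the AVL property, assuming each node is a plain object with value, left and right fields:

// Height convention: a single leaf has height 1, an empty subtree 0.
function height(node) {
  if (node === null) return 0;
  return 1 + Math.max(height(node.left), height(node.right));
}

// True if every node's balance factor (left height minus right height)
// is -1, 0 or 1.
function isAVL(node) {
  if (node === null) return true;
  const balance = height(node.left) - height(node.right);
  if (balance < -1 || balance > 1) return false;
  return isAVL(node.left) && isAVL(node.right);
}

// A tiny balanced example: 6 with children 5 and 7.
const tree = {
  value: 6,
  left:  { value: 5, left: null, right: null },
  right: { value: 7, left: null, right: null },
};
console.log(isAVL(tree)); // true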

This rule above makes AVL Trees strictly height-balanced. For example, this is an AVL Tree:

While this is NOT an AVL Tree (spot why!):

This is also an AVL Tree:

While this isn't:

From the above examples, it should be plainly clear that an AVL Tree enforces a sort of "strict" balancing and does not really allow any linked-listey formations (look at the fourth image, immediately above; the thing looks like two linked lists joined at the top). This keeps the height of the tree at O(lg n) in the number of nodes, which in turn results in faster searching, as we have to traverse fewer levels to reach (or fail to reach) the node we want.

Now, while working with AVL Trees, you may wonder how we enforce this height balance. The method is quite simple: whenever a subtree goes out of balance, we restructure it (using rotations, which I've written about before) so that a different node becomes its root. For a simple example, consider the tree containing three elements: 5, 6 and 7. Now these can be put into a binary tree in three ways:

We see that all three ways are valid BSTs, with elements on the left less than elements on the right. However, out of the three, only the middle one is a valid, well-balanced AVL Tree. Therefore, as a rule of thumb, whenever balancing is concerned, we try to make the middle value the root of the tree.

How that is done is beyond the scope of this post. For now, however, know that this is what we must ensure when dealing with AVL Trees. Just as a normal BST must satisfy the BST Property at all stages, an AVL Tree must satisfy the AVL Property at all stages (on top of the BST Property itself... AVL Trees are a subset of BSTs).

Worst Case Scenarios

Even in an AVL Tree, there IS a worst case of O(lg n). This case appears in one particular form of AVL Tree: one where no node has a balance factor of 0. In other words, the tree is as staggered as it can get: every node has a balance factor of either 1 or -1. This sort of tree is called a Fibonacci Tree and looks sorta like this:


I did not make the above image. This is from Wikipedia. Either way, that shouldn't matter.

For those of you who haven't realized it yet, each node in the above image is labeled with its height, not its balance factor. The balance factor of each node above (except the leaf nodes) is 1.
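
For the record, a Fibonacci tree of height h contains the minimum number of nodes an AVL tree of that height can have, and that count N(h) satisfies N(h) = N(h-1) + N(h-2) + 1 (the root plus the two sparsest possible subtrees). Here is a quick JavaScript sketch, just to see the numbers grow:

// Minimum number of nodes in an AVL tree of height h
// (using this post's convention: a single leaf has height 1).
function minNodes(h) {
  if (h === 0) return 0; // empty tree
  if (h === 1) return 1; // single leaf
  return minNodes(h - 1) + minNodes(h - 2) + 1;
}

for (let h = 1; h <= 10; h++) {
  console.log("height " + h + ": at least " + minNodes(h) + " nodes");
}
// The counts grow exponentially (Fibonacci-style), which is exactly
// why the height of an AVL tree with n nodes stays at O(lg n).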

These trees exhibit the worst case of O(lg n). Why that is so will be explained when I cover insertion on this blog. Until then, know that AVL Trees, in general, are best suited for purposes where you have a (mostly) static tree on which you do loads of searches. For trees where you modify stuff more often, you use the big brother of all binary trees, the Almighty Red-Black Tree, which I am almost scared to write about because there is so much to write there.

But until then, I have two more posts to go regarding AVL Trees themselves, so I'm gonna get right to that. Meanwhile you guys can go Google it or something. Stay alive, stay alert, keep learning.


PS: That was totally lame.

PPS: I know I have a lot more posts to catch up on (4 more by my reckoning), but I've been having internet troubles for a day or so, so I'm really sorry about that. Plus, my original resolution requires me to learn a new concept every day, which I am doing. The concepts I am blogging currently are things I learnt back during my semester. The newer concepts are going into the queue and will be here later. 'Til then, ciao!

Thursday 4 December 2014

Concept 6: Asynchronous Programming (nodejs)

Note: From this post on, I am gonna number the concepts for easier organization and reference. The old post titles will also be updated to reflect this.

NodeJS is a platform running on the Chrome V8 Javascript runtime that allows for asynchronous, non-blocking I/O operations for implementing a web server. If that whole thing just sounded like a bunch of Greek to you, I feel you. Jargon-filled websites aren't what I like to see, either. So here I am, trying to explain what is (or rather what I understand of it) the asynchronous programming model and what it means in the context of the world wide web.

Now this is kinda different from the posts I have made so far on this blog, in that I don't really know a lot of what the hell I'm talking about right now. It's not a concept like algorithms, where it is actually possible to visualize and understand theoretical stuff despite having... ehh, not much practical knowledge in the scheme of things. This is something I can only learn from experience, and there most definitely have been (and will be) mistakes.

However, on the other hand, this kind of post is EXACTLY what I started this blog for. I'm no algorithm tutor. This blog is meant for me to share what I know, figure out how I think, articulate myself, and record my mistakes here for posterity. So if I have made any mistakes, or have forgotten something, please, please correct me in the comments. I will update the post to reflect the error, and put in the necessary explanations. Plus, you win a free virtual cookie.

Okay, enough rambling. Time to get on with the topic.

Prerequisites

You will need to know some basic JavaScript to follow this post. It would be helpful if you have done some basic webdev and at least know what a server and a client are.

Asynchronous Development: The Basics

A lot of people find it hard to work with asynchronous systems, simply because they are so different from normal, sane, synchronous systems. So what exactly is the difference? Consider the following code in Javascript.

console.log("Hey");
delay(1000); // hypothetical helper that delays execution for 1000 ms
console.log("What");
delay(1000);
console.log("Up");

With Synchronous code, we can expect the outputs as follows:

After 0 ms:


Hey

After 1000 ms:


Hey
What

After 2000 ms:


Hey
What
Up

However, with async code, we will see the following:

After 0 ms:


Hey
What
Up

Wait. What?

Yes, all three outputs arrive at (almost) the same time, immediately when the code is run. There is no waiting for those delays. In fact, what happens behind the scenes is this:


> register that "Hey" has to be printed, start printing
> register that a delay of 1000 ms has to be done, start the delay
> register that "What" has to be printed, start printing
> register that a delay of 1000 ms has to be done, start the delay
> register that "Up" has to be printed, start printing

Execution
---------

> 0 ms: print completed: "Hey"
> 0 ms: print completed: "What"
> 0 ms: print completed: "Up"
> 1000 ms: delay completed
> 1000 ms: delay completed

Does the above make sense? No? Well, the process is simple. The server acknowledges that an action has to be done when it encounters the line of code that starts that action, and it starts that action immediately. When the action completes, its results are printed (or its state changes applied, depending on what the function does). In the above example, printing is an (almost) instantaneous task, and hence the print jobs finish immediately. The delays take longer to complete, and thus run in the background, with both delays ending (nearly) simultaneously, 1000 ms after being registered.

Again, wait, what?

Okay, I admit. Maybe I still don't make any sense. In this case, the right thing to do would be to introduce a graphic.

The above diagram shows something called the Event Loop. What the event loop does is run multiple actions in the background simultaneously. When a function is called, the server registers it in the event loop. When the function is done executing, its results are displayed as required. Note that the timeline displayed is not to scale -- the actions under the 0 ms mark execute almost simultaneously, as do the actions under the 1000 ms mark.
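
If you want to see this for yourself, here is a runnable version of the earlier snippet using Node's real setTimeout (the delay() used before was hypothetical; setTimeout is the standard way to register a delayed action on the event loop):

console.log("Hey");                               // runs immediately
setTimeout(() => console.log("later 1"), 1000);   // fires after ~1000 ms
console.log("What");                              // runs immediately
setTimeout(() => console.log("later 2"), 1000);   // also fires after ~1000 ms
console.log("Up");                                // runs immediately

// Output: Hey, What, Up immediately; then, roughly a second later
// and nearly together: later 1, later 2.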

So what is the point of all this? Why code like this? Doesn't this seem like unnecessary confusion for not much of a purpose?

Well, the true use of asynchronous programming is its ability to run multiple actions simultaneously. A little thought can expose the advantages of such a method in the context of web servers. The web server is not required to wait for anything while an action executes in the background: for example, a database query or a file retrieval. Consider the following series of steps in a web server:

- Client requests HTML file a (100 ms)

- Client requests data that requires a database query to obtain the path of a file b (300 ms)
- > Server then uses that path to fetch file b and return it (200 ms)

- Client requests HTML file c (100 ms)

- Client requests JPEG file d (200 ms)

The above series of steps will be executed as follows on a synchronous server (note that a step depending on a preceding step is prefaced with a >):

However, on an asynchronous server, the results are quite different:

As is plainly visible, the async server handles the above requests a lot faster, with a total latency of just 500 ms compared to the 900 ms required by the synchronous server (the 300 ms query plus the 200 ms fetch for file b dominates, while everything else proceeds in parallel). This means faster loading times and no waiting for text and other content while the page queries the database for some banner ad or something. When two commands do not depend on each other, they can be executed in parallel, which cuts latency and loading time by a lot, as the example shows.

What about when a command does depend on the one before it? In this case, we rely on callbacks to execute our code in the correct order, as we want to run some things only after certain conditions are met. In the example above, the search for file b can only begin AFTER the database query has completed. In this case, and others that require sequential execution, we use the power of callbacks to get the code to work in the intended manner.

Callbacks work quite like the following example:

In the above example, the following function set is being executed:

X ( Z () {} )
Y ()

According to the above code, Z is registered as the callback function of X. In other words, when X completes execution, the event loop starts execution of Z immediately. This way, we can ensure that Z is executed only after X is done.
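
In real JavaScript, the same pattern looks something like the sketch below. Note that doQuery and fetchFile are made-up stand-ins for whatever async operations X and Z might be, with setTimeout faking their latency:

// Hypothetical async operation: "queries" for a file path, then hands
// the result to its callback once done (~300 ms later).
function doQuery(callback) {
  setTimeout(() => callback("/files/b.html"), 300);
}

// Hypothetical async operation that needs the query's result.
function fetchFile(path) {
  setTimeout(() => console.log("fetched " + path), 200);
}

doQuery(function (path) {
  // This callback runs only AFTER doQuery completes.
  fetchFile(path);
});
console.log("this prints first, while the query runs in the background");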

However, if we have a lot of sequential code, this can become worrisome if we have to write it using callbacks. Quite soon, your code will become a cluttered mess of curly braces, as you will be nesting function inside function inside function.

Not to fear. NodeJS has tons of libraries for that purpose. For example, there's async. These let you list functions within blocks that execute sequentially. However, it is recommended to use asynchronous operations wherever possible, as they increase the performance of the NodeJS server.

How does Node implement Async?

Not all operations in NodeJS are async. This is a common source of confusion for a lot of people using Node (including yours truly, when I first started out). For example, the following code will run just fine:

a = b+c;
d = a+c;

If all code ran in an async manner, the above code would use the old value of a while calculating d. This, however, does not happen. The code waits for the first statement to execute before running the second.

However, all time-consuming operations like database and file accesses are generally async in nature. It is often helpful to look at the API of any library you use. As a rule of thumb, functions with callbacks specified as parameters in the API will be asynchronous in nature, though you should always try and test it first to be sure.
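
Node's built-in fs module is a handy illustration, since it ships both flavors of the same operation (the file name here is just a placeholder):

const fs = require("fs");

// Synchronous: blocks the whole server until the file is read.
const data = fs.readFileSync("config.json", "utf8");
console.log("sync read done");

// Asynchronous: takes a callback and returns immediately.
fs.readFile("config.json", "utf8", (err, contents) => {
  if (err) throw err;
  console.log("async read done");
});
console.log("this line runs before the async read finishes");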

That's all on asynchronous operations and NodeJS for now. I know there have been no real NodeJS examples on this, but this post has been aimed primarily at tackling the concept of async development and code. Expect further posts in the future about NodeJS and related technology.


Writer's note: I know this has come two days late, and I sincerely apologize. I have been busy traveling and I've not really had enough time to articulate my thoughts for this purpose. I am currently lagging behind schedule by FOUR posts (Gasp!). I'll try to get back on track over the next few days on this, so stay tuned.

Written with StackEdit.

Concept 5: Kruskal's Algorithm

Yay! A related topic to the previous post! Today, Kruskal's Algorithm for finding Minimum Spanning Trees from weighted, undirected graphs.

Prerequisites

For this topic to make sense, you should be aware of a few things. If I have covered them on this blog, you can check that up, otherwise Google them before continuing! You have been warned.

  • Basics of Graphs: The absolute essentials of graph theory are necessary. Basically, we're gonna be working with undirected, weighted, connected graphs. A graph is a data structure consisting of elements called vertices or nodes and connecting elements called edges.
  • Minimum Spanning Trees: Kruskal's algorithm is used to construct minimum spanning trees from graphs. Read it here.
  • Basics of Set Theory: Okay, this is elementary math. Please tell me you know set theory. Just the definitions, unions, etc. I am not really gonna cover any bit of set theory in this blog. Go open your 8th grade math textbook.
  • Any ONE Sorting Technique: Anything. Quicksort. Merge Sort. Insertion Sort. Heck, even bubble sort. Anything at all goes.

If you have cleared the above prerequisites, you are good to go.

Overview

Kruskal's algorithm treats the edges as the primary elements of the graph. The basic idea is quite simple and easy to grasp. Note that the "tree", at the beginning, is empty.

  • Take a list of all the edges
  • Sort them according to weights
  • Taking the next minimum edge, add it to the tree if it does NOT form a cycle with the existing tree.

That's it. This process is repeated for all edges. We can quite plainly see that this will generate a minimum spanning tree: every vertex ends up connected, and at each step we pick the cheapest edge that does not form a cycle.

Sounds good? Great. Now tell me, how the heck are you going to check for the existence of a cycle? Remember that the third step is not really as simple as it sounds: you have to make sure you do not form any cycles.

For example, consider the following simple graph.

Now in the case of the above graph, let us assume the weights are in the order a<b<c<d. Hence, according to the logic followed by Kruskal's, a is added first, followed by b, and then c. After this, however, there is a problem. The edge d has two end nodes (remember that an edge only contains three pieces of information: its two end nodes and its own weight). These two end nodes are such that one is connected to a and the other is connected to b. Now adding d is NOT feasible, as it will form a cycle: in other words, the two nodes at its ends are already connected by a path, namely a-c-b. The question here is: how do we detect this pre-existing path in the tree?

Enter the Union-Find Algorithm.

The Union-Find Algorithm

What? What the hell? Another algorithm? Weren't we just doing Kruskal's?

Well, yes, yes and yes. The Union-Find Algorithm is essential to the operation of Kruskal's. You'll see why: first we need to forget all about the main Kruskal's operation and focus exclusively on this.

The basis of this starts from Set Theory. Yes, that seemingly easy and unimportant section of math (which incidentally has everything to do with the foundations upon which Mathematics itself is built) plays a critical role in the finding of MSTs. The reason for this is simple... in an undirected graph, connectedness imposes no order between nodes, just like sets impose no order between elements.

Confused? Okay, let's clear this out. Suppose that whenever two nodes were made connected in the tree, I put them in the same set (initially, assume each node is in its own set with only one element). Now if I take two nodes and try to connect them, there are two possibilities:

  1. The two nodes are in two different sets, which means on connecting them we can merge the two sets they are in, to form one set.
  2. The two nodes are in the SAME set, which means they are already connected by some path.

In Case 1, the two nodes are previously unconnected, so that's fine: we can connect them. But in Case 2, if you try to join them, we see that they are already in the same set, which means they are already (indirectly) connected. BOOM! Cycle! Avoid!

Notice that the fun comes from the fact that sets enforce no order. Therefore we can state that if two elements are in the same set, they are somehow connected. How they are connected is of no concern to us. What matters is that they are already connected and it will not help us by adding another edge, as that will form a cycle.

From the above, we can elaborate a bit more on Kruskal's basic algorithm (Last bit of Kruskal's in this subsection, I promise):

  1. Take all edges in an array
  2. Sort that array (in ascending order)
  3. Take the next (minimum) edge and examine its end nodes:
    1. If they belong to different sets, merge their sets and add that edge to the tree
    2. Otherwise skip the edge
  4. Print the tree obtained.

Our real problem has now been dumbed down considerably. All we have to do is to figure out a method to check which set an element belongs to, and then another method to merge two sets. Here is where the Union-Find Algorithm truly steps in (Haha, I know I gave quite a buildup earlier, but seriously, it does step in here).

The Union-Find Algorithm (for realsies)

The Union-Find algorithm consists of two methods: Union and Find (well, duh). The two methods are used for two different actions: one for merging two sets using the Union operation, and the other to find which set an element belongs to.

Let us discuss the Find operation first. How do we find which set an element lies in? To find this, first we need to address another question: how are we going to store a set in a computer algorithm? Sets are not exactly straightforward to store in a system: a set implies no order between its elements. Also, how are we going to represent a set? For this, let us see how we represent sets in Math.

A set is usually represented by a letter placeholder. In other words, we define sets as, say set P = {1, 2, 3} is a set of 3 elements. In this sense, sets are akin to variables.

However, for our purposes, we know that any one element can only be in one set at a time. This is because our set definition puts all connected elements in the same set. If an element (vertex) x is a part of two different sets A and B, then this means that x is connected to all elements in A as well as all elements in B. This further means that all elements in A are connected to all elements in B (via x). Hence, the two sets A and B are actually the same set, and not two different sets (This, in Mathematics, is called a proof by contradiction, by the way.).

Now as we have quite firmly established that any element can only belong to one set at a time (for the purposes of the Kruskal's algorithm), we can define the set a bit differently. For example, we can call the set P above as "The set containing the element 1", or, "The set containing the element 2", or, "The set containing the element 3"; regardless, we know that any one of these three definitions can only lead to a single, unique set (as every element is in only one set at a time). This solves the problem of defining an arbitrary number of variables to denote a number of sets (We do not know how many sets may be present at any particular step during the program: only that there are n sets in the beginning and 1 set at the end, where n is the number of nodes in the graph).

However, we are faced with a new problem. The set P above can be instead represented by one of its elements. The question is: which element? We now have three different definitions for the same set, because it contains three elements. To solve this, we simply do what the Mafia does: assign rank to elements within a set.

Element Ranking and FIND

What does the Mafia have to do with this, you ask? This is my analogy for a set representation. Consider a gang with x members. Out of the x members, the member who represents the entire gang is the one with the highest rank. The gang may have a hierarchy, with one member being the "rep" (representative: or boss, or superior, or senior, or whatever) of another. This hierarchy extends all the way to the "boss rep", who is identified by two characteristics: he has the highest rank in the gang, and he is his own "rep". In other words, the gang may be represented like this:

In the image, the arrows represent the link upwards in the hierarchy. Members on higher levels have higher ranks than members on lower levels. Remember that the member at the top is his own representative; it does not mean that he has no rep.

We can store elements in the set similarly, with a rank defined for each element and a "rep" value. As there are a maximum of n elements (where n is the number of vertices in the graph), all we need is an array of n elements, where each element stores the node data, the "rep" index, and the rank.

In this sense, the representative element of the entire set is the element whose "rep" is itself. Thus, we have the first Find algorithm.

Find (x) :
    t := x
    while (t.rep is not t):
        t = t.rep
    return t

Now, consider THIS set:

If we wish to find the gang boss of the above set, starting from the lowest member, you'd have to traverse up a lot of levels. Remember, however, that in our set, order does not really matter. All we need is the main "boss rep", while all levels under it do not really matter in hierarchy. Think of it as a very loosely-organized gang: different people report to different people, but the important thing is that the message finally arrives at the boss.

Now in such a situation, every time a call is made to retrieve the boss of a set via another element, we can reorganize the set to make the element (and all its superiors) report directly to the boss, so that the call requires only a single upward reference to be made the next time it is called on them.

To elucidate, consider the above image. In the call to find the boss, if we reorganize the gang such that it now looks like this:

Any subsequent call on a lower member requires only constant time (1 step) to find the set's boss. The process has been made much faster on subsequent calls on the same elements, as there is now no requirement to go through unnecessary steps through middle-ranking members.

Hence we arrive at the second, faster version of the Find algorithm, the Find Algorithm: Mark 2.

Find (x) :
    # x is the element whose set (boss) we have to find

    if (x is not x.rep):
        x.rep = Find(x.rep) # the recursive call finds the main boss and sets it as the rep of the current element: reorganization

    return x.rep # x's rep is now the main boss: return it

The recursive step in the middle is called the path-compression step. We compress the path recursively for x and all its superiors all the way up to the top and set their representatives as the main boss. When the main boss is encountered, it is equal to its rep, so it returns itself.

The above is the final version of the Find Algorithm.

Gang Fights and UNION

Let's extend the Mafia analogy a bit further: what happens when we force two gangs to get connected? A gang fight, that's what. Except...

In our gang-world, we assume that two gangs fighting is basically a rank-comparison challenge. The one with the higher rank wins. Now this makes sense... except, whose rank do we compare? Here is where our gang-world differs from real life: the whole gang is not involved in the dispute - only the leaders. We basically find the gang bosses of both gangs, compare their ranks, and make the new overall gang boss the one with the higher rank.

This is cool and everything, but what if their ranks are equal? Well, then we introduce some "bias": say, we always pick the first gang to win whenever the ranks are equal for both gang-bosses. However, there is a slight dilemma. The new overall gang-boss (the boss of the first gang) should possess a rank greater than anyone else. So, we simply increase his rank by one, so that his new rank is now greater than his old rival's.

You can see how this gang-merging is similar to the Union procedure on sets. The elements belonging to two disjoint sets become part of the same set, with the representative of one of the older sets becoming the representative element of the new combined set. In pictures, the Union / Gang-merging procedure is as follows:

Note that the gang bosses don't really have any special properties allotted to them. They are shown in a different style for the purpose of representation.

Therefore, the Union of two different sets with bosses aBoss and bBoss respectively is as follows:

Union (aBoss, bBoss) :
    # inputs: aBoss and bBoss are two elements of DIFFERENT sets
    if aBoss.rank > bBoss.rank :
        bBoss.rep = aBoss
        return
    else if aBoss.rank < bBoss.rank :
        aBoss.rep = bBoss
        return
    else :
        # ranks are equal
        bBoss.rep = aBoss
        aBoss.rank += 1 # increase rank of aBoss
        return

And now, we have completed the Union-Find Algorithm.
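
To tie the two methods together, here is the whole thing as a minimal JavaScript sketch, with each element stored as a plain object carrying a rank and a rep, exactly as described above:

// Create n elements, each one starting in its own single-member set.
function makeSets(n) {
  const nodes = [];
  for (let i = 0; i < n; i++) {
    nodes.push({ rank: 0, rep: null });
    nodes[i].rep = nodes[i]; // every element begins as its own boss
  }
  return nodes;
}

// Find with path compression: returns the set's boss.
function find(x) {
  if (x.rep !== x) {
    x.rep = find(x.rep); // reorganize: point directly at the boss
  }
  return x.rep;
}

// Union by rank: merges the sets owned by two DIFFERENT bosses.
function union(aBoss, bBoss) {
  if (aBoss.rank > bBoss.rank) {
    bBoss.rep = aBoss;
  } else if (aBoss.rank < bBoss.rank) {
    aBoss.rep = bBoss;
  } else {
    bBoss.rep = aBoss; // equal ranks: the first gang wins...
    aBoss.rank += 1;   // ...and its boss gains a rank
  }
}

// Quick check:
const s = makeSets(4);
union(find(s[0]), find(s[1]));
console.log(find(s[0]) === find(s[1])); // true  (same set now)
console.log(find(s[0]) === find(s[2])); // false (still different sets)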

I'm baaack! ~ Kruskal's Algorithm

If you think back to when we began, we had ended up with this series of steps for Kruskal's algorithm:

  1. Take all edges in an array
  2. Sort that array (in ascending order)
  3. Take the next (minimum) edge and examine its end nodes:
    1. If they belong to different sets, merge their sets and add that edge to the tree
    2. Otherwise skip the edge
  4. Print the tree obtained.

Following that exact series of steps, we can actually proceed to write Kruskal's algorithm directly. Well, almost.

Almost? Aren't we done? Well, not entirely. Look at step 3.1 above, for instance. First, we have to check whether the two elements belong to different sets. How do we do that? Well, we've already done that. Remember that a set is identified by the "boss rep" element in it. Therefore, if two elements have the same boss rep, they belong to the same set.

Yeah, now we're done. Proceeding with the algorithm!

Kruskal (A, n) :
    # A is the array containing all edges in the graph
    # n is the no of nodes (vertices)

    T := new tree # declare empty tree
    graphNodes := new Array (n)

    for (i in [0..n-1]):
        graphNodes[i].rank = 0
        graphNodes[i].rep = graphNodes[i]

    qsort(A) # in-place algorithm to sort array A by weight

    for edge in A:
        a := edge.end1 # one endpoint node index
        b := edge.end2 # other endpoint node index

        aBoss := Find (graphNodes[a])
        bBoss := Find (graphNodes[b])

        if aBoss == bBoss :
            # bosses are equal, so same set
            continue # skip this edge and move on to the next

        # aBoss and bBoss are different
        T.addEdge (edge)
        Union (aBoss, bBoss) # join the sets

    T.printAll()

Note that the tree T can be something as simple as an array of edges with weights and end nodes recorded. The structure of the tree is left up to the implementation and use.
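
Putting everything together, here is the same algorithm as a runnable JavaScript sketch. The find and union functions are the ones from the Union-Find section, and the edge list at the bottom is made-up sample data (a square with one diagonal):

// Find with path compression, and Union by rank (see previous section).
function find(x) {
  if (x.rep !== x) x.rep = find(x.rep);
  return x.rep;
}
function union(aBoss, bBoss) {
  if (aBoss.rank > bBoss.rank) bBoss.rep = aBoss;
  else if (aBoss.rank < bBoss.rank) aBoss.rep = bBoss;
  else { bBoss.rep = aBoss; aBoss.rank += 1; }
}

// edges: array of { end1, end2, weight }; n: number of vertices.
function kruskal(edges, n) {
  const nodes = [];
  for (let i = 0; i < n; i++) {
    nodes.push({ rank: 0, rep: null });
    nodes[i].rep = nodes[i];
  }

  edges.sort((p, q) => p.weight - q.weight); // ascending by weight

  const tree = []; // the MST, as a simple array of edges
  for (const edge of edges) {
    const aBoss = find(nodes[edge.end1]);
    const bBoss = find(nodes[edge.end2]);
    if (aBoss === bBoss) continue; // same set: would form a cycle

    tree.push(edge);
    union(aBoss, bBoss);
  }
  return tree;
}

const edges = [
  { end1: 0, end2: 1, weight: 1 },
  { end1: 1, end2: 2, weight: 2 },
  { end1: 2, end2: 3, weight: 3 },
  { end1: 3, end2: 0, weight: 4 },
  { end1: 0, end2: 2, weight: 5 },
];
console.log(kruskal(edges, 4)); // the three cheapest edges, total weight 6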

Time Complexity

The time taken by the above algorithm can be calculated as follows:

  • Creating the empty tree and the node array: O(1)
  • The node-initialization loop: O(V), where V is the number of vertices
  • Sorting the edge array: O(E log(E)), where E is the number of edges (this is true on using Quicksort. Technically this value depends on your sorting technique. Now would be a good time to stop using bubblesort).
  • The main edge loop, with its Find and Union calls: O(E log(V)) (Don't ask me how this works out. The discussion of complexities of the Find algorithm is pretty... complex. There's an entire chapter dedicated to this in CLRS. Someday, when I figure this out, I'll update this post.)

Summing it all up, we have a total complexity of O(E log(V)).

Kruskal's algorithm works better for sparser graphs, as its running time depends on the number of edges in the graph. For denser graphs, it would be better to use another algorithm, namely Prim's algorithm. But that discussion, another time.

Written with StackEdit.

Wednesday 3 December 2014


Concept 4: Minimum Spanning Trees

From this post on, I've decided to include a little "Prerequisites" section because I don't really cover concepts in any particular order: I just post what I find interesting. However, the prerequisites section WILL include a basic definition. If you are smart enough to figure out the rest from context, good for you. For today, we have Minimum Spanning Trees.

Prerequisites


For this topic to make sense, you should be aware of a few things. If I have covered them on this blog, you can check that up, otherwise Google them before continuing! You have been warned.


  • Basics of Graph Theory: You need to know the absolute essentials of Graphs, because that is where we are gonna construct Minimum Spanning Trees from. We are gonna work with undirected, connected, weighted graphs for this topic. A graph is a data structure consisting of elements called vertices or nodes and connecting elements called edges.
  • Trees: Seriously? Trees? You can't see why that's necessary? Stop gaping and go learn it. Psst: I'm not talking about binary trees. These are trees in general. A tree is a subset of graphs where there are no cycles present.


Begin!


So what IS a Minimum Spanning Tree (MST from here on out)? For this, first we must define what is meant by a spanning tree.

In an undirected, weighted, connected graph (technically, aren't unconnected graphs expressible as multiple separate graphs? But I digress...), a spanning tree is the tree that results when you remove all cycles from the graph, in such a manner that all nodes stay connected. In other words, it is the set of edges that connect all nodes and do not form cycles.

Now for a particular graph, there may be multiple spanning trees possible. Take the graph in question (assume all edges are weighted):


For this graph, this is one possible spanning tree (edges in red):



This is another:



Thus, it is plain that multiple spanning trees may exist for a graph. A spanning tree for 'n' nodes will always have 'n-1' edges, however. This is because, if you treat the edges as directed (pointing from parent to child), every node will have ONE parent (no cycles, remember?), except one node, which will have no parent (picture it in your mind: any tree can be "hung" from a single node. That node will act as the "root", while all the other nodes hang from it as "branches"). Thus, one edge (the "parent" edge) is associated with every node except the root node. Hence, n-1 edges.

It should be clear that spanning trees can be formed by eliminating cycles from the graph, and we have the choice of eliminating a cycle by removing any edge in it. So what would happen if, every time we eliminated a cycle, we removed its heaviest edge? We would end up with the set of edges that connects all nodes with the minimum possible total weight. THIS is the Minimum Spanning Tree.

The MST is the spanning tree that has the minimum total edge-weight sum of all possible spanning trees. That's pretty much the definition given on Wikipedia, so chew on that.

The MST of... that graph. Wow, that is a BIG graph. (Wikimedia)

The trouble is, there ARE multiple possible MSTs for a graph. For example, take a simple square graph (4 edges, 4 vertices, edges have equal weights). It is possible to generate 4 MSTs here, one each by removing any one edge (actually, in a graph with all edge-weights equal, the number of spanning trees and MSTs are the same: chew on that too). Anyhoo, in most cases it does not matter which MST we end up with, so long as the spanning tree we obtain DOES have the minimum weight sum.

In the case where all edge weights in the graph are unique, it is guaranteed that there is only one unique MST. I don't think I need to explain why that is so. Think about it.

So what's the point?


Why find MSTs? Why do we need these... things? Why aren't we content? What do we seek? What is the meaning of life?

I'm sure you have a lot of questions. I can answer the first one. MSTs are useful wherever you need to minimize some "cost" of connecting multiple units. There is a simple example given over on that Wikipedia page, of a telecommunications company trying to connect telephones.

If that is not enough, MSTs also answer questions like: what is the minimum total length of road you need so that every city can reach every other city? A tree guarantees a unique path between any two points along it (think of why that is so). One caution, though: the unique path between two nodes in the MST is not necessarily the shortest path between them in the original graph. Minimizing the total weight of the tree is not the same as minimizing individual routes; for those, you want a shortest-path algorithm like Dijkstra's.

Okay, I'm convinced. How do we find them?


Whoa... hold your horses, cowboy. Never said I'd tell you that just yet! The science of finding MSTs is not a small one, and it will be covered in subsequent posts. If you simply cannot wait, however, there exist many algorithms that can aid in their discovery. At the time of writing this post, I know two of them: Kruskal's algorithm and Prim's algorithm. Google 'em, it'll be fun.

Tuesday 2 December 2014

Concept 3: Binary Search Trees

Now I know this article was supposed to be here earlier, prior to rotations, but what can I say? I really wanted to write about rotations, and so I did. However, here's the promised article on Binary Search Trees.

Disclaimer: This is really long, so prepare yourself.

Let's start this off in as generic a manner as possible. What are trees? No, not those trees. I'm talking about the trees in computer science. Now trees are collections of data elements called nodes, where each node is connected to one or more nodes. When I cover graphs on this blog, you will see that trees are just a subset of the larger set of graphs. For now, however, know that the important distinguishing factor of a tree is that a tree does not have cycles; in other words, it is not possible to start from any node, follow its connections and end up back at the same node.

So what is a binary tree? Remember when I said that a tree consists of a collection of nodes that have one or more connections to other nodes? Well, a binary tree is a subset of that definition, in that the connections can be considered directed and also that every node has one incoming connection (except for a special node called the 'root' node) and has zero, one or two outgoing connections.

In pictures, a binary tree looks like this:


Notice how every node has one incoming connection? This property is true for all but the topmost (colored red) node. That node is called the 'root' as it has no 'parent'. Also, every node has zero, one, or two 'children', as shown by the number of outgoing connections. For ease of understanding, the number of outgoing connections for each node has been written in the node. Note that the bottom-most nodes have zero children each. Nodes with no children are termed leaf nodes, thus extending the tree analogy.

So what are Binary Search Trees?


Phew. Finally. Binary Search Trees are nothing but binary trees which follow ONE SIMPLE RULE:

For every node, the LEFT CHILD SUBTREE consists of nodes with values LESS than the node's own value, and the RIGHT CHILD SUBTREE consists of nodes with values GREATER than it.

(EDIT: An error pointed out by +Aaditya Lakshmanan has now been rectified. Many internets to you!)

The above rule can be stated specifically for just a node and its immediate children: a node has a value greater than its left child and less than its right child. This specific rule generalizes to the property above if the steps for insertion and deletion are followed as below.

In other words, if you take the set of the left child, the node itself, and the right child as {L, N, R}, then L < N < R for all nodes N. Henceforth, I shall call the above rule the Binary Search Tree Property.

Now in binary trees, we denote a subtree as a triangle labeled with a letter, where the letter stands for the root node of that subtree. In other words, the binary tree shown below...


... can be represented as...


... with respect to node A and its child subtrees.

From here on, we shall be using the subtree notation to simplify work and make the representation uncluttered.

Therefore, if the above diagram was a Binary Search Tree, then we would represent the rule as B<A<C. Note that this rule holds for all nodes in the tree which have one or more children.

So what's the point? Why go through the hassle of this rule to store data? Well, the answer is simple. Binary Search Trees have an inherent advantage in, guess what? Yes, search. It's, like, in the name.

How does this advantage come about? Well, as long as we know the value of what element we are searching for (which, of course, we do), we can quickly search for its existence and position by a series of comparisons. At each comparison, in a well-balanced binary search tree, we can skip past roughly half the remaining elements. How this comes about will be explained.

Insertion


Before we do anything else, we have to learn how to insert a value in a BST (Yes, that is what I shall be calling it from now on). Insertion of a value is not a difficult task, and involves a simple couple of steps:

  • Find the point to insert, by filtering down the tree using the BST Rule.
  • Insert the node as a new leaf.

Yeah, it is that simple. All you have to do is to find that point. Now in order to do that, recall that the BST Property applies for all nodes, without exception. In other words, you can treat any node as the root of its own subtree and the same property will apply for it and its children. In other words, recursion.


The image above shows the insertion of the element '6' in the BST given. Note how we search for the correct point to insert at by moving left or right at each step depending on whether the inserted element is lesser or greater than the current element.

So look at the pseudocode below, carefully, and try to understand how the insertion procedure works.

insert_BST (x, root) :
    # input: We are given the node x to be inserted.
    if (root == NULL) :
        root = x
        return
    
    if x.value > root.value :
        insert_BST(x, root.right)
    else :
        insert_BST(x, root.left)

Clear enough? Good. We can now proceed to searching.

Searching


The more astute among you would have figured out that we have actually already covered searching. Confused? Recall point 1 of the insertion steps: we require to find the point of insertion. Yes, that is precisely how search in a BST works: we just move downwards from the root following the BST rule till we end up at either a NULL element (the requested node does not exist) or at the required element. Searching is done as follows:

search_BST (x, root) :
    # input: x is the value we are looking for
    if (root == NULL) :
        print "The node requested does not exist"
        return
    if x == root.value :
        print "Value found!"
        return
    if x > root.value :
        print "Go right"
        search_BST(x, root.right)
    else :
        print "Go left"
        search_BST(x, root.left)

Note that this is basically the submodule referred to by the insertion step, the difference being that insertion always filters all the way down to a leaf, while a search may terminate early, as soon as the node is found.

Those of you who have learnt binary search in sorted arrays will be seeing the similarity in the procedure already. Instead of starting from the middle element, we start from the root, and then it's basically the same steps.
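
For the curious, here is what insertion and search look like as actual code; a minimal JavaScript sketch of my own (nodes are plain objects, no balancing, just the BST Property):

// Insert a value by filtering down the tree; equal values go left,
// matching the pseudocode above.
function insert(root, value) {
  if (root === null) return { value, left: null, right: null };
  if (value > root.value) root.right = insert(root.right, value);
  else root.left = insert(root.left, value);
  return root;
}

// Search: at every node, go left or right by a single comparison.
function search(root, value) {
  if (root === null) return false;       // fell off the tree: not present
  if (value === root.value) return true; // found it
  return value > root.value
    ? search(root.right, value)
    : search(root.left, value);
}

let root = null;
for (const v of [5, 3, 8, 1, 4]) root = insert(root, v);
console.log(search(root, 4)); // true
console.log(search(root, 7)); // false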

Deletion


Sometimes, it is necessary to delete an element in a BST. Deletion in BSTs is the most complicated subtopic to understand, so pay attention.

Deletion involves some simple overall steps:

  1. Find the element to delete using a search.
  2. Assuming the element exists, it can be in one of three states:
    1. It may be a leaf node, i.e. have no children.
    2. It may have one child, either a left or a right child.
    3. It may have both a left and a right child.
  3. Depending on which case of the above the node belongs to, its deletion will follow three different methods.

The three cases of deletion are given below:

Case 1: Leaf Node

Deletion of leaf nodes is quite direct. All you do is remove the reference to the node and deallocate the space held by it. As deleting a leaf node affects no other node (it has no children), this case is straightforward.

Case 2: One child

Nodes with one child can be deleted by crossing over the node in the link from its parent node. If the preceding sentence does not make sense to you, have a graphic.


In both the left and the right case shown in the image, the node colored red is to be deleted, with the green subtree as its only child. Notice that we just link past the node to be deleted: its parent now links to its child.

Case 3: Two Children

Okay, so here is the true issue. After looking at the first two cases, I think you might be beginning to see why a node with two children cannot be simply deleted. Firstly, it is not possible to directly delete the node, as its children have to now be relocated. Also, it is not possible to link past the node, as it has two children, so which child can you link its parent to?

Thus, the strategy for deleting a node with two children is quite different. This involves knowledge of something called the inorder set of the BST.

So what's the inorder set? It is basically exactly as it sounds: it is the sorted set of elements in the tree. If you remember the BST Property, for every node, the left child is less than the node and the right child is greater than it. From this, we can extend it to saying that all the elements in the left subtree are less than the node while all the elements in the right subtree are greater than the node. Again, extending this for a particular subtree, we can see that the least valued element in a subtree is the leftmost element, while the greatest valued element is the rightmost element of that subtree.

Following the above discussion, we can see that the inorder set will contain elements in the order as they are visible from left to right.


If you follow the little dude's line of sight as he looks directly upwards, and then move the dude towards the right, you end up with the inorder set of the given BST. For the BST above it is D, B, E, A, F, C, G, with the elements on the left less than the elements on the right (Check it!).
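
In code, producing the inorder set is a three-line recursion. A quick JavaScript sketch, using the same plain-object nodes as the insertion sketch earlier:

// Inorder traversal: left subtree, then the node, then the right subtree.
// For a BST this visits the values in ascending (sorted) order.
function inorder(node, out = []) {
  if (node === null) return out;
  inorder(node.left, out);
  out.push(node.value);
  inorder(node.right, out);
  return out;
}

// With the tree built earlier from [5, 3, 8, 1, 4]:
// inorder(root) returns [1, 3, 4, 5, 8]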

Now assume all the elements were placed on a number line. In a number line, lesser elements appear on the left. Hence, the number line would contain the same order: D-B-E-A-F-C-G. Assume we had to delete an element from this number line. Now when we do that, the element has to be replaced by another element already present in the number line. This element has to be chosen such that the replacement does not violate the number line property, namely the elements from left to right must stay in order. Thus, we can choose either the element to the immediate left, or the element to the immediate right of the element to be deleted, and then replace the position.

Similarly, in a BST, to delete an element which has two children, we must replace it by a node that appears either RIGHT BEFORE or RIGHT AFTER it in the inorder set. In other words, this element is either the greatest element before the node or the smallest element after the node.

Recall that all elements before (less than) the node lie in the left-child-subtree of the node, while all elements after (greater than) the node lie in the right-child-subtree of the node. Also recall that the smallest element of a subtree is the leftmost node of that subtree, while the largest is the rightmost.

Thus, we can see that the element with which we replace a node X is the rightmost node of X's left child, or the leftmost node of X's right child. The former is known as the inorder predecessor while the latter is called the inorder successor of the node X.

Thus, we can choose to replace the node by either its inorder predecessor or successor. Let us, for the purposes of this discussion, take the inorder predecessor.

Now, the inorder predecessor of a node is the rightmost node of its left subtree. Following that logic, the predecessor CANNOT have a right child (as then it will not be the rightmost node). Thus, the only cases are that the inorder predecessor can be a leaf node or a one-child node (with a left child). Therefore, deletion proceeds according to the following graphic:


In the above graphic, we are trying to delete the red node, which has two children, as shown. First we locate the inorder predecessor by finding the rightmost node of the left subtree. It is denoted by the blue dot in the corner of the left subtree. Then we copy its value to the node to be deleted (the 'Replace' step). Then we delete the original predecessor node (as there are two copies of it now, as shown), which will follow one of the first two cases, depending on whether it has no children or a left child.

The complete deletion algorithm is as given below.

del_BST (x, root) :
    # input: We assume that x is the node to be deleted, already found using search
    # Also, nodes have a parent attribute

    xpar := x.parent

    # Case 1: leaf node

    if (x.left is NULL and x.right is NULL) :
        if xpar is NULL :
            # We are trying to delete the tree root
            root = NULL
            del x
            return
        if x == xpar.left :
            xpar.left = NULL
            del x
            return
        else :
            xpar.right = NULL
            del x
            return

    # Case 3: two children

    else if (x.left is not NULL and x.right is not NULL) :

        inordPred := rightmost(x.left)
        x.value = inordPred.value

        # the predecessor's value now exists twice: delete its original
        # node (this recursive call will hit Case 1 or Case 2)
        del_BST (inordPred, root)
        return

    # Case 2: one child

    else :
        if xpar is NULL :
            # We are trying to delete the tree root
            if (x.left is not NULL) :
                root = x.left
                root.parent = NULL
                del x
                return
            else :
                root = x.right
                root.parent = NULL
                del x
                return

        else :
            # x.parent is not NULL
            if (x.left is not NULL) :
                if (x == xpar.left) :
                    xpar.left = x.left
                    x.left.parent = xpar
                    del x
                else :
                    xpar.right = x.left
                    x.left.parent = xpar
                    del x
            else :
                if (x == xpar.left) :
                    xpar.left = x.right
                    x.right.parent = xpar
                    del x
                else :
                    xpar.right = x.right
                    x.right.parent = xpar
                    del x
            return

Take a moment to understand the pseudocode before trying to implement it on your own in a program. The three cases are delineated clearly. In the above algorithm, we assume each node also has a reference to its parent, for ease of manipulation. In practice, this is not necessary, and the node's parent can be found in the search step itself.

Notes


So as I had said earlier, the main strength of a Binary Search Tree appears when we have to search for an element, as it potentially reduces the amount of elements we have to go through by half. However, consider the following order of insertion:

1, 2, 4, 6, 7, 8

In the above order, there is something peculiar. Yes, all the elements are already in order. Hence when '1' is inserted, it becomes the root; when '2' is inserted, it goes to its right; when '4' is inserted, it goes to 2's right; and this process continues till we get this binary search tree:


Notice something weird? It's not much of a tree, is it? Notice that it looks exactly like a linked list. If we search for an element in the above tree, say '6', we see that the search eliminates no subtrees at any node. It requires us to check 4 nodes to reach node '6', and 6 nodes to reach node '8'. In other words, a poorly balanced tree like the one above, while satisfying the BST Property, still requires O(n) time to search for an element.

To combat this lack of balance, we use concepts like AVL Trees and Red-Black Trees. But more on those, another time.

Sunday 30 November 2014

Concept 2: The Maximum Subarray Problem

Consider an array A of numbers. What you want to do is find two points (elements) in the array such that the sum of all the elements between those points (including the two endpoints) is the maximum possible sum of any such subarray. This is termed the maximum subarray problem.

In other words, what you want is to find the continuous section of the array with the maximum total sum. Of course, this means that the array contains both positive and negative elements (Otherwise, if the array contains only positive elements, the maximum subarray is nothing but the entire array: think about that for a second). So you want to minimize the 'losses', which are the negative elements, while maximizing the 'gains', the positive ones.

The solution follows the famous method of "divide and conquer", which works exactly as it sounds: divide the main problem into subproblems of manageable size, which we can deal with directly. These subproblems are nothing but smaller instances of the main problem, with the special cases among them handled separately. Naturally, this kind of problem lends itself to a recursive solution.

In recursion, you must remember that each instance of the recursive call behaves in the exact same manner as its parent, and this holds for all cases except the edge (terminating) cases. For example, take the Fibonacci sequence:

F(n) = F(n-1) + F(n-2)

Now we see that the above relation holds for all values of n except n=1 and n=0, which are the edge cases. For all other cases, replacing 'n' with any value will result in a correct relation.

Now that we are done with that little diversion into the basics of recursion, we can proceed with the maximum subarray problem.

Algorithm Basics


The maximum subarray problem can be logically divided into three steps:

  1. The maximum subarray of A (between A.begin and A.end) can either be:
    1. The maximum subarray between A.begin and A.mid.
    2. The maximum subarray between A.mid + 1 and A.end.
    3. The maximum subarray crossing A.mid, with part of it in the first half and part of it in the second half.
  2. The subproblems entirely in the left half (step 1.1) and entirely in the right half (step 1.2) are nothing but direct subproblems of the original problem.
  3. The subproblem crossing A.mid (step 1.3) is a special case and needs further inspection.


In other words, we acknowledge that there can be only one maximum subarray (if there are two with the same sum, we take any one, depending on the algorithm). This one maximum subarray can lie in three positions: entirely in the first half, entirely in the second half, and partially in the first half and second half. We are also aware that the third case must cross the midpoint. In other words, as the subarray is continuous, the midpoint must be part of it in the third case.

Using this data, we can craft the entire algorithm.

Complete Algorithm


As we can see, the case where the subarray lies entirely in the left or right half of the array warrants no special inspection, as it is nothing but a copy of the original problem on size (A.size)/2. Thus, it will be handled by the recursive call.

What we need to worry about is the third case: the one where the maximum subarray may belong to the subset of all subarrays crossing the midpoint. In this case, we can handle it thus:

  • Now, we are aware that any subarray crossing the midpoint must include the midpoint itself. What we don't know is where the two endpoints are.
  • As we need the maximum subarray, and since addition is commutative, we can start from the midpoint and add outwards in either direction to find both endpoints.
  • So start from the midpoint and add down to the first element (the 'lo' index), maintaining a running sum and recording the index at which that sum was maximum: that index is the left endpoint.
  • Then, for the other end, start from mid+1 (the midpoint has already been counted) and add up to the last element (the 'hi' index), again maintaining a running sum and recording the index of the maximum: that is the right endpoint.


What we have now obtained are two values: the maximum sum in the region before the midpoint (and including it) and the maximum sum in the region after the midpoint (and including midpoint+1). The total sum crossing the midpoint is nothing but the sum of the above two values. This can be made clearer using the following pseudocode:

max_crossing_subarray (A, lo, mid, hi):
 
 tSum := 0
 lSum := -inf # inf = infinity
 loIndex := mid
 
 for i := mid downto lo :
  tSum += A[i] # add the element, not the index
  if tSum > lSum:
   lSum = tSum
   loIndex = i
 
 tSum = 0
 rSum := -inf # inf = infinity
 hiIndex := mid+1
 
 for i := mid+1 to hi:
  tSum += A[i]
  if tSum > rSum:
   rSum = tSum
   hiIndex = i
 
 return (loIndex, hiIndex, lSum+rSum)


From the pseudocode, it should be clear how the procedure for finding the subarray that crosses the midpoint proceeds. From here, it is a simple matter of returning the maximum among the left half, right half and middle section as the maximum possible subarray.

max_subarray (A, lo, hi) :
 
 if lo == hi : # only one element present
  return (lo, hi, A[lo])
 
 # else, more than one element
 
 mid := (lo+hi) / 2 # remember integer division floors the value
 
 # initialize all limits and maximum sums
 (lLo, lHi, lSum) := max_subarray (A, lo, mid)
 (rLo, rHi, rSum) := max_subarray (A, mid+1, hi)
 (mLo, mHi, mSum) := max_crossing_subarray (A, lo, mid, hi)
 
 elMax := max (lSum, rSum, mSum) # maximum function to get max value of 3 parameters
 
 # return matching tuple based on maximum region sum
 if elMax == lSum : return (lLo, lHi, lSum)
 else if elMax == rSum : return (rLo, rHi, rSum)
 else return (mLo, mHi, mSum)
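
For those who want to run it, here's one possible direct translation of the two procedures above into Python (the names and the (loIndex, hiIndex, sum) tuple layout mirror the pseudocode; nothing here is canonical):

def max_crossing_subarray(A, lo, mid, hi):
    # best sum of any subarray A[i..mid], scanning leftward from mid
    l_sum, t_sum, lo_index = float("-inf"), 0, mid
    for i in range(mid, lo - 1, -1):
        t_sum += A[i]
        if t_sum > l_sum:
            l_sum, lo_index = t_sum, i
    # best sum of any subarray A[mid+1..j], scanning rightward from mid+1
    r_sum, t_sum, hi_index = float("-inf"), 0, mid + 1
    for j in range(mid + 1, hi + 1):
        t_sum += A[j]
        if t_sum > r_sum:
            r_sum, hi_index = t_sum, j
    return lo_index, hi_index, l_sum + r_sum

def max_subarray(A, lo, hi):
    if lo == hi:              # one element: it is its own best subarray
        return lo, hi, A[lo]
    mid = (lo + hi) // 2      # integer division floors the value
    left = max_subarray(A, lo, mid)
    right = max_subarray(A, mid + 1, hi)
    cross = max_crossing_subarray(A, lo, mid, hi)
    # pick the tuple with the largest sum (the third element)
    return max(left, right, cross, key=lambda t: t[2])

A quick sanity check: max_subarray([1, -3, 2, 2, -1], 0, 4) returns (2, 3, 4), i.e. the slice [2, 2] with sum 4.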

Time Complexity


The time complexity for the algorithm can be computed with the master theorem for recurrences. Now I know that I've not written anything about the master theorem yet, so for now go open up that Wikipedia link, or just Google it or something.

We use the master method as follows:
  • The main problem can be specified as some T(n), a function operating on 'n' inputs (for this problem, 'n' is the size of the array).
  • There are two recursive calls in each main function call: max_subarray() is called on the first half and on the second half of the array. Each of these calls counts as T(n/2), ignoring the constant difference of one element when n is odd.
  • The main function also calls max_crossing_subarray() once. This call takes O(n) time, as each element of the array is checked once.
  • Thus the total running time can be written as T(n) = 2T(n/2) + f(n), where f(n) takes O(n) time.
  • By the master theorem, this takes the form of the second case (as given in that Wikipedia link, go look it up), and we end up with a running time of O(n lg n). The arithmetic is worked out below.
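
For the curious, here's that arithmetic spelled out against the theorem's standard T(n) = a*T(n/b) + f(n) form:

 T(n) = 2*T(n/2) + f(n), with f(n) = Θ(n)
 
 a = 2, b = 2
 n^(log_b a) = n^(log_2 2) = n^1 = n
 
 f(n) = Θ(n) = Θ(n^(log_b a)) # exactly the threshold: case 2 applies
 T(n) = Θ(n^(log_b a) * lg n) = Θ(n lg n)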

Usage


This algorithm is useful for a lot of tasks. Perhaps the best-known is buying / selling stocks. You want to buy stock on the day it is cheapest, and sell it in such a manner that you obtain the maximum profit. If you have a mechanism in place that can predict stock prices reliably, you can use this algorithm to find the day to buy and the day to sell. The inputs for the array in that case will be the differences in price from one day to the next. This difference is positive for a day X when the price on that day is higher than on the previous day, and vice-versa. Of course, chances are that if you can predict stock prices to that extent, you'd be either rich, dead, or both.
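As a toy illustration (all prices made up), feeding day-to-day differences into the Python max_subarray from earlier:

# hypothetical closing prices over 6 days
prices = [100, 113, 98, 105, 110, 101]
# diffs[i] is the price move from day i to day i+1
diffs = [prices[i] - prices[i - 1] for i in range(1, len(prices))]
lo, hi, gain = max_subarray(diffs, 0, len(diffs) - 1)
# diffs index i covers the move from day i to day i+1,
# so buy on day lo and sell on day hi+1
print("buy on day", lo, "sell on day", hi + 1, "gain:", gain)
# here: buy on day 0 (price 100), sell on day 1 (price 113), gain 13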

EDIT: Added the Time Complexity section which I had quite stupidly forgotten. Thanks +Shreyas shankar srinivas! Also, there is a way to do this algorithm in linear (O(n)) time, but I haven't figured out how yet. When I do, I will put it up here.

Friday 28 November 2014

Concept 1: Rotations in Binary Search Trees

Now everybody knows good ole Binary Search Trees, right? No? Okay, for those of you who are new to that, I will probably write about it in another post soon (EDIT: That post is now UP!). In the meantime, you may get your arse out of my blog and go Google it or something. For the others who have already been introduced to that concept, I bring you another, simple, quite innocuous little topic: rotations.

Now rotations are used to shift two nodes in a BST relative to each other, while maintaining the Binary Search Tree Property: the left child of every node is smaller than it, while the right child is larger than it.

In other words, a node and its subtrees in a BST look something like this:


Here, 'L' is used to refer to the node at the root of the left subtree, and 'R' as the root of the right subtree. Note that L<P<R for all nodes P.

The whole point of rotations is to take two nodes (a parent and any one of its children) and change their relative vertical positions without changing their horizontal ones. In other words, you want to maintain the L<P<R property, but you want to change the parent from the current node to the other node (which is its immediate child: rotations only work on adjacent nodes).

In pictures, what you want to do is this:


Now look at the little dude at the bottom. If he looks upwards, what he sees is the nodes in the order c A d B e. This is basically the left-to-right order (the inorder traversal) of the subtree, with c<A<d<B<e (check it!). Notice how we managed to make B the parent and A the child without disturbing this left-to-right order.

Rotation works both ways, as shown in the above image. Converting to the left side is called a right rotate (as the child B is on the RIGHT) and converting to the right side is called a left rotate.

Frankly, I think the word "rotation" is a misnomer. It looks nothing like a rotation! IMO, it looks more like the node B climbs vertically up while node A climbs down (look at the image: they are in the same horizontal position). But to each man his own.

Anyway, let's get down to the nitty-gritty. Here are the steps you have to follow to rotate:

Left Rotation

First, I'd like all of you to go through the below algorithm carefully and draw it, or if you can, try to visualize it. Drawing is better, less room for error.

  1. Initially, A is the parent and B is its RIGHT CHILD.
  2. Create a temporary node 'save' and give it A's value.
  3. Set save's left to parent.left.
  4. Set save's right to parent.right.left.
  5. Copy B's value to parent.
  6. Set parent.right to parent.right.right.
  7. Set parent.left to save.
  8. And you're done!
Note that all we are doing is constructing the new parent's left child, which is the A-subtree as seen in the right side of the diagram. Then we modify the children of the parent to the new children relative to their original positions (for example, the new parent.right is now parent.right.right).
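
To make those steps concrete, here's a minimal Python sketch; the Node class with value/left/right fields is my own assumption, since the post doesn't fix a representation:

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def left_rotate(parent):
    # assumes parent holds A and parent.right (holding B) exists
    save = Node(parent.value)           # step 2: temp node with A's value
    save.left = parent.left             # step 3: A keeps its old left subtree
    save.right = parent.right.left      # step 4: A adopts B's left subtree
    parent.value = parent.right.value   # step 5: the parent now holds B's value
    parent.right = parent.right.right   # step 6: B's right subtree moves up
    parent.left = save                  # step 7: the rebuilt A becomes the left child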

Right Rotation

Right rotation can be done with some simple steps:

  1. Go to the left rotate section and copy the algorithm text
  2. Paste it somewhere, notepad or something
  3. Find all instances of 'left' and change them to 'right'
  4. Find all original instances of 'right' and change them to 'left'
  5. Read and understand
You didn't honestly expect me to do all the work for you, did you? You lazy ass! It's the same concept!
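That said, for the terminally lazy, here's roughly what that find-and-replace yields, using the same assumed Node class as before:

def right_rotate(parent):
    # mirror image of left_rotate: parent holds B and parent.left (holding A) exists
    save = Node(parent.value)           # temp node with B's value
    save.right = parent.right           # B keeps its old right subtree
    save.left = parent.left.right       # B adopts A's right subtree
    parent.value = parent.left.value    # the parent now holds A's value
    parent.left = parent.left.left      # A's left subtree moves up
    parent.right = save                 # the rebuilt B becomes the right child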

Usage

Hopefully, by now, you've got the essence of rotation. If not, go back up there and learn it.

Done? Good. Now rotation can be employed for several different uses. The most notable, however, is self-balancing binary search trees. With a simple rotation, you can shift the entire balance of the subtree from a left-bias to a right-bias, and vice versa. This technique will be employed in AVL Trees and Red-Black Trees, both of which I will be writing about ASAP.

Until then, keep learning
Ciao!

EDIT: Wikipedia has this brilliant image for rotation in BSTs.


By the way, for those who want to know, the first two images were created by me using this Chrome app called Gliffy. It's really, really handy. For all those using Chrome, I highly recommend it.

Thursday 27 November 2014

A Resolution

So I've been doing some thinking...

My 19th birthday arrived on the 16th of this month. I didn't really celebrate or anything, for good reasons. My 3rd semester examinations were coming up on the 19th of the same month (there were 3 days to go) and so I was kinda trying to catch up on all the stuff I had yet to study, which was pretty much all of it. So the day went by (I didn't really manage much, but meh) and I did nothing birthday-ish. But I got thinking, I've got a shit-ton of stuff left to study. Plus, the whole semester, I had done pretty much nothing of which I had thought I would learn, so all in all it was not much of a productive semester.

And I intended to change that. So, from this day on (my exams ended yesterday), I propose a resolution, so hear me out:

Every single day, I am going to learn ONE new concept relating to Computer Science.

That's it. Every day, one concept. Sounds simple? Now according to me, one concept is whatever I feel like. It's gonna be something modular, something complete in itself. For example, I could learn Insertion in Red-Black Trees as a concept. I could learn Fundamentals of Dynamic Programming as a concept. I could learn something large and complex, or I could learn something small and simple. The only thing is, whatever I learn, it will satisfy my need to learn for that day, and it will allow me to sleep with a sense of accomplishment :D

Also, a small add-on to that resolution:

Every thing I learn, I am gonna blog right here.

Yep, you heard that right. I am finally gonna stick to the title of this blog, and try to act like a Programmer In Training, so Imma go out there, and punch some keys on ma keyboard (which is KICK-ASS, by the way; I'd recommend it to all those living in India), and Imma get ma lazy ass to get shit done, 'cuz this blog ain't updated in a long-ass time.

First few posts are gonna come up today containing all the good stuff I've learnt this semester. So God help me if there are less than 365 posts here by the end of one year, because I'm gonna do what it takes to start sharing again. I realize that I am kinda dormant when it comes to sharing things, and I see why that should not be the case. After all, the more I share, the more I gain.

PS: Because my interests in computers, algorithms and programming are varied, and because I get bored as shit easily, posts are NOT necessarily gonna be sequential. However, I will try to make your life easier by tagging posts and appropriately naming titles so that you can use that little search bar to the fullest and find what you are looking for.

Aaand PEACE!

Saturday 8 March 2014

Cognitive Systems

Quite recently (yesterday, according to the time of writing this), I attended a rather interesting lecture on Cognitive Systems and Computing, by this dude named Ravi Seshadri from IBM. He spoke about the use of cognitive computing, or "thinking" computers, in analysing big data patterns and providing solutions in unstructured languages, like English. He went on a bit about Watson and its applications in computing, optimizing transportation systems, etc.

Now cognitive systems can allow us to interact with computers in natural languages, like English. Soon, you could probably tell your computer to make your coffee extra strong, and it would send the correct set of instructions to the coffee-maker to electronically brew your coffee just the way you like it. You could call out to your computer to book your air tickets, to wash your laundry, and so on and so forth.

Or can you?

Let's back this up a bit and see how we make the computer follow our instructions now. We see that we have to write our instructions to the computer in a painstaking, alien-like language that just barely resembles English. We give the label of 'high-level language' to those constructs, and we call the people who can 'speak' such a language programmers. But there's a fundamental difference between these and our 'normal', unstructured languages: it's just that - they are structured. This means that we can give exact, precise instructions to the computer, and the computer shall follow those orders to the letter - no more, no less, the ideal slave.

Now what happens when you introduce unstructured languages into the mix and develop on cognitive computing? The computer shall end up with the same fallibilities as a human. Develop the computer's database and information-acquiring algorithms enough, and what we've got on our hands is essentially another human, albeit one who can calculate really, really fast. Now this 'intelligent' computer will make the same decision errors as we do: when we can't decide what we want to eat for breakfast, how will the computer?

So the question is this: will using unstructured programming and natural language processing compromise on efficiency?

PS: I'm pretty sure that this entire post has been rather vague and hard-to-follow. But... that's the point. You see, I'm human - nowhere exact or precise ;)

First post (thought of trying a fancy title, but meh)

Right... so where do I begin...

Hi, I am Siddarth R, and this is the first post on this blog that I created several years ago. As I recall, there were some rather inane posts on this page which I promptly deleted recently when it was brought to my attention that I actually did have a blog (I had forgotten all about this page). I'm blogging for no real reason, mostly pure boredom and the fact that I want to speed up my typing. I recently bought this AMAZING keyboard from flipkart.com. It's called the TVS e-Gold Bharat. Some of you might recognize it as the keyboard Jesse Eisenberg (playing Mark Zuckerberg) uses in the movie 'The Social Network' during the scene where he creates FaceMash (I'm not entirely sure about this information, but it was told to me by a friend who is more or less reliable with these fact tidbits). Anyway, I'm just trying to get some practice on this keyboard at this moment, so that's all the reason I need for these posts.

Right, so this blog is going to contain anything and everything that goes through my mind at the time of writing, but not necessarily at the time of posting, though I will try to keep those times close. I am a student of the National Institute of Technology, Tiruchirapalli, India (quite a mouthful, innit? Call it NIT Trichy for short (that's TRICH-ee for those of you who need pronunciation guides (PS: I will do a lot of bracket nesting like this.))) pursuing a B-Tech Computer Science and Engineering degree. This blog is more or less a way to express my thoughts, my opinions and my experience throughout the course of the next four years (I imagine there will be a lot of moments of utter and absolute boredom, in which cases I will turn to writing this blog.). As I will be making hardly any edits to this writing (other than spelling and basic grammar as I type), don't expect some stellar reading material here. I'll try to keep my posts engaging, but as I am writing this as I go, with no prior preparation or further edits, what you are reading is pretty much the current state of my mind.

A word of caution: pertaining to the previous sentence, there will be moments when I am writing this blog while I am quite frustrated or angry, in other words, when I am pissed off. At those moments, slight NSFW language may just creep in. Therefore, please go elsewhere if you are looking for a blog that mimics the vocabulary of a kindergarten classroom, or if you are less than 11 years old. That said, if you just have a problem with how people express themselves on the Internet, and believe that free speech should be stifled in keeping with child-friendly protocols or some other bullshit, then fuck you.

Having said all that, I welcome criticism and debate on anything and everything I post. There will be opinions raised, there will be personal views and reviews, there will be abstract concepts that I don't know much about (but which I am willing to learn). The more the activity, the greater the opportunity for learning, or so I have always believed. So help me to learn and challenge whatever I post on this blog. I would particularly enjoy it if you could correct any spelling or grammatical miskates I has made. Above all, the ability to communicate effectively is important, and I believe all of us can try to take that a couple of steps forward.

Thanks for reading.

Posted:
8 March 2014, 3:45 pm IST