
My Blog

One new concept a day (at the least). One new thing to explore in the exciting world of Computer Science each day. That's the Programmer in Training promise :P

Code and learn, boys. Code and learn.


Sunday 30 November 2014

Concept 2: The Maximum Subarray Problem

Consider an array A of numbers. What you want to do is find two points (elements) in the array such that the sum of all the elements between those points (including the two endpoints) is the maximum possible sum of any such subarray. This is termed the maximum subarray problem.

In other words, what you want is to find the contiguous section of the array with the maximum total sum. Of course, this problem is only interesting if the array contains both positive and negative elements: if the array contains only positive elements, the maximum subarray is nothing but the entire array (think about that for a second). So you want to minimize the 'losses', which are the negative elements, while maximizing the 'gains', the positive ones.

This algorithm follows the famous method of "divide and conquer". The overall method is exactly as it sounds: the divide-and-conquer method of algorithms works by dividing the main problem into subproblems of manageable size, which we can deal with directly. These subproblems are nothing but smaller instances of the main problem - with the special cases among them being handled separately. Naturally, this kind of a problem lends itself to a recursive solution.

In recursion, you must remember that each instance of the recursive call behaves in exactly the same manner as its parent, and this holds for all cases except the edge (terminating) cases. For example, for the Fibonacci sequence:

F(n) = F(n-1) + F(n-2)

Now we see that the above relation holds for all values of n except n=1 and n=0, which are the edge cases. For every other case, substituting any value for 'n' results in a correct relation.
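As a quick sketch in Python, with the edge cases handled first and the recursive relation doing the rest:

```python
def fib(n):
    # Edge (terminating) cases: n = 0 and n = 1
    if n <= 1:
        return n
    # For every other n, the recursive relation holds as-is
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```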

Now that we are done with that little diversion into the basics of recursion, we can proceed with the maximum subarray problem.

Algorithm Basics


The maximum subarray problem can be logically divided into three steps:

  1. The maximum subarray of A (between A.begin and A.end) can either be:
    1. The maximum subarray between A.begin and A.mid.
    2. The maximum subarray between A.mid + 1 and A.end.
    3. The maximum subarray crossing A.mid, with part of it in the first half and part of it in the second half.
  2. The subproblems entirely in the left half (step 1.1) and entirely in the right half (step 1.2) are nothing but direct subproblems of the original problem.
  3. The subproblem crossing A.mid (step 1.3) is a special case and needs further inspection.


In other words, we acknowledge that there is only one maximum subarray (if two have the same sum, we take either one, depending on the algorithm). This one maximum subarray can lie in one of three positions: entirely in the first half, entirely in the second half, or partially in each half. In the third case, since the subarray is contiguous, it must contain the midpoint.

Using this data, we can craft the entire algorithm.

Complete Algorithm


As we can see, the case where the subarray lies entirely in the left or right half of the array warrants no inspection, as it is nothing but a copy of the original problem of size (A.size)/2. Thus, it will be handled by the recursive call.

What we need to worry about is the third case: the one where the maximum subarray may belong to the subset of all subarrays crossing the midpoint. In this case, we can handle it thus:

  • Now, we are aware that any subarray crossing the midpoint must include the midpoint itself. What we don't know is what the endpoints are.
  • As we need the maximum subarray, and since any contiguous sum crossing the midpoint splits cleanly into a part ending at the midpoint and a part starting just after it, we can start from the midpoint and add outwards in either direction to find both endpoints.
  • So start from the midpoint, and add down to the first element (the 'low' index) and maintain a running tally of the obtained sum and the index of that endpoint of the maximum obtained sum.
  • Again, for the other end, start from the midpoint (actually midpoint + 1, as we have counted the midpoint already for the previous point), add up to the last element (the 'high' index of the array) and maintain a running tally of the sum and the index of the endpoint of the maximum obtained sum.


What we have now obtained are two values: the maximum sum in the region before the midpoint (and including it) and the maximum sum in the region after the midpoint (and including midpoint+1). The total sum crossing the midpoint is nothing but the sum of the above two values. This can be made clearer using the following pseudocode:

max_crossing_subarray (A, lo, mid, hi):
 
 tSum := 0
 lSum := -inf # inf = infinity
 loIndex := mid
 
 for i := mid downto lo :
 tSum += A[i]
  if tSum > lSum:
   lSum = tSum
   loIndex = i
 
 tSum = 0
 rSum := -inf # inf = infinity
 hiIndex := mid+1
 
 for i:= mid+1 to hi:
 tSum += A[i]
  if tSum > rSum:
   rSum = tSum
   hiIndex = i
 
 return (loIndex, hiIndex, lSum+rSum)
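The pseudocode translates almost line-for-line into real code. Here's a sketch in Python (the variable names are mine):

```python
def max_crossing_subarray(A, lo, mid, hi):
    # Scan left from the midpoint, keeping the best running sum seen so far.
    t_sum, l_sum, lo_index = 0, float('-inf'), mid
    for i in range(mid, lo - 1, -1):
        t_sum += A[i]
        if t_sum > l_sum:
            l_sum, lo_index = t_sum, i

    # Scan right from mid+1, doing the same for the other half.
    t_sum, r_sum, hi_index = 0, float('-inf'), mid + 1
    for i in range(mid + 1, hi + 1):
        t_sum += A[i]
        if t_sum > r_sum:
            r_sum, hi_index = t_sum, i

    return lo_index, hi_index, l_sum + r_sum

print(max_crossing_subarray([2, -1, 3, -4, 5], 0, 2, 4))  # (0, 4, 5)
```

Note that both scans run away from the midpoint, which is why the left loop counts down and the right loop counts up.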


From the pseudocode, it must be clear how the procedure for finding the subarray that crosses the midpoint must proceed. From here, it is a simple matter of returning the maximum value among the left half, right half and middle section as the maximum possible subarray.

max_subarray (A, lo, hi) :
 
 if lo == hi : # only one element present
  return (lo, hi, A[lo])
 
 # else, more than one element
 
 mid = (lo+hi) / 2 # remember integer division floors the value
 
 # initialize all limits and maximum sums
 (lLo, lHi, lSum) := max_subarray (A, lo, mid)
 (rLo, rHi, rSum) := max_subarray (A, mid+1, hi)
 (mLo, mHi, mSum) := max_crossing_subarray (A, lo, mid, hi)
 
 elMax := max (lSum, rSum, mSum) # maximum function to get max value of 3 parameters
 
 # return matching tuple based on maximum region sum
 if elMax == lSum : return (lLo, lHi, lSum)
 else if elMax == rSum : return (rLo, rHi, rSum)
 else return  (mLo, mHi, mSum)
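Putting the pieces together, a runnable Python sketch of the whole algorithm might look like this (the crossing procedure is repeated so the snippet stands on its own; the names and the tie-breaking order among equal sums are my own choices):

```python
def max_crossing_subarray(A, lo, mid, hi):
    # Best sum of a run ending at mid, scanning leftwards.
    t_sum, l_sum, lo_index = 0, float('-inf'), mid
    for i in range(mid, lo - 1, -1):
        t_sum += A[i]
        if t_sum > l_sum:
            l_sum, lo_index = t_sum, i
    # Best sum of a run starting at mid+1, scanning rightwards.
    t_sum, r_sum, hi_index = 0, float('-inf'), mid + 1
    for i in range(mid + 1, hi + 1):
        t_sum += A[i]
        if t_sum > r_sum:
            r_sum, hi_index = t_sum, i
    return lo_index, hi_index, l_sum + r_sum

def max_subarray(A, lo, hi):
    if lo == hi:               # only one element present
        return lo, hi, A[lo]
    mid = (lo + hi) // 2       # integer division floors the value
    l_lo, l_hi, l_sum = max_subarray(A, lo, mid)
    r_lo, r_hi, r_sum = max_subarray(A, mid + 1, hi)
    m_lo, m_hi, m_sum = max_crossing_subarray(A, lo, mid, hi)
    # Return whichever of the three candidates has the largest sum.
    if l_sum >= r_sum and l_sum >= m_sum:
        return l_lo, l_hi, l_sum
    elif r_sum >= m_sum:
        return r_lo, r_hi, r_sum
    return m_lo, m_hi, m_sum

A = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
print(max_subarray(A, 0, len(A) - 1))  # (3, 6, 6): the subarray [4, -1, 2, 1]
```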

Time Complexity


The time complexity for the algorithm can be computed with the master theorem for recurrences. Now I know that I've not written anything about the master theorem yet, so for now go open up that Wikipedia link, or just Google it or something.

We use the master method as follows:
  • The main problem can be specified as some T(n), which is a function operating on 'n' inputs (for this problem, 'n' is the size of the array).
  • Now there are two recursive calls in each main function call: the function max_subarray() is called on the first half and on the second half of the array. Each of these calls costs T(n/2), ignoring the constant difference of one element when n is odd.
  • The main function also calls max_crossing_subarray() once. This call takes O(n) time as each element of the array is checked once.
  • Thus the total running time can be written as T(n) = 2T(n/2) + f(n), where function f(n) takes O(n) time.
  • By the master theorem, this takes the form of the second case (as given in that Wikipedia link, go look it up), and we end up with the running time of O(n*log(n)).

Usage


This algorithm is useful for a lot of tasks. Perhaps the most important is buying / selling stocks. You want to buy stock on the day it is cheapest, while selling it in such a manner that you obtain the maximum profit. If you have a mechanism in place that can predict stock price reliably, you can use this algorithm to find the days to buy and the days to sell. The inputs for the array in that case will be the difference in price from one day to the next. This difference is positive for a day X when the selling rate on that day is higher than the previous day, and vice-versa. Of course, chances are that if you can predict the stock prices to that extent, you'd be either rich, dead, or both.
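As a sketch of that stock idea: the prices below are made up for illustration, and I'm using a brute-force O(n^2) maximum-subarray scan over the day-to-day differences just to keep the example self-contained (the divide-and-conquer version above does the same job in O(n*log(n))):

```python
# Hypothetical daily prices; the algorithm's real input is the day-to-day change.
prices = [100, 113, 110, 85, 105, 102, 86, 63, 81, 101, 94, 106, 101, 79, 94, 90, 97]
deltas = [prices[i + 1] - prices[i] for i in range(len(prices) - 1)]

# Brute-force maximum subarray over the deltas: try every (i, j) pair.
best = (0, 0, deltas[0])
for i in range(len(deltas)):
    s = 0
    for j in range(i, len(deltas)):
        s += deltas[j]
        if s > best[2]:
            best = (i, j, s)

# deltas[i] is the change from day i to day i+1, so sell on day j+1.
buy_day, sell_day, profit = best[0], best[1] + 1, best[2]
print(buy_day, sell_day, profit)  # 7 11 43: buy at 63, sell at 106
```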

EDIT: Added the Time Complexity section which I had quite stupidly forgotten. Thanks +Shreyas shankar srinivas! Also, there is a way to do this algorithm in linear (O(n)) time, but I haven't figured out how yet. When I do, I will put it up here.

Friday 28 November 2014

Concept 1: Rotations in Binary Search Trees

Now everybody knows good ole Binary Search Trees, right? No? Okay, for those of you who are new to that, I will probably write about it in another post soon (EDIT: That post is now UP!). In the meantime, you may get your arse out of my blog and go Google it or something. For the others who have already been introduced to that concept, I bring you another, simple, quite innocuous little topic: rotations.

Now rotations are used to shift two nodes in a BST relative to each other, while maintaining the Binary Search Tree Property: the left child of every node is smaller than it, while the right child is larger than it.

In other words, a node and its subtrees in a BST look something like this:


Here, 'L' is used to refer to the node at the root of the left subtree, and 'R' as the root of the right subtree. Note that L<P<R for all nodes P.

The whole point of rotations is to take two nodes (a parent and any one of its children) and change their relative vertical positions without changing their horizontal ones. In other words, you want to maintain the L<P<R property, but you want to change the parent from the current node to the other node (which is its immediate child: rotations only work on adjacent nodes).

In pictures, what you want to do is this:


Now look at the little dude at the bottom. If he looks upwards, what he sees is the nodes in the order c A d B e. This is basically the left-to-right order (the inorder traversal) of the subtree, with c<A<d<B<e (check it!). Notice how we managed to make B the parent and A the child without disturbing this left-to-right order.

Rotation works both ways, as shown in the above image. Converting to the left side is called the right rotate (as the child B is on the RIGHT) and converting to the right side is called left rotate.

Frankly, I think the word "rotation" is a misnomer. It looks nothing like a rotation! IMO, it looks more like the node B climbs vertically up while node A climbs down (look at the image: they are in the same horizontal position). But to each man his own.

Anyway, let's get down to the nitty-gritty. Here are the steps you have to follow to rotate:

Left Rotation

First, I'd like all of you to go through the below algorithm carefully and draw it, or if you can, try to visualize it. Drawing is better, less room for error.

  1. Initially, A is the parent and B is its RIGHT CHILD.
  2. Create a temporary node 'save' and give it A's value
  3. Set save's left to parent.left
  4. Set save's right to parent.right.left
  5. Copy B's value to parent
  6. Set parent.right to parent.right.right
  7. Set parent.left to save
  8. And you're done
Note that all we are doing is constructing the new parent's left child, which is the A-subtree as seen in the right side of the diagram. Then we modify the children of the parent to the new children relative to their original positions (for example, the new parent.right is now parent.right.right).
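The steps above can be sketched in Python like so. The Node class and names are hypothetical (not from any particular library), and the function assumes the parent actually has a right child to rotate with:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def left_rotate(parent):
    # Steps 2-4: build the node that will become the new left child (old A).
    save = Node(parent.value)
    save.left = parent.left            # A keeps its old left subtree (c)
    save.right = parent.right.left     # A adopts B's left subtree (d)
    # Steps 5-7: the parent slot now holds B's value and its new children.
    parent.value = parent.right.value
    parent.right = parent.right.right  # B keeps its old right subtree (e)
    parent.left = save

# A(2) with left child c(1) and right child B(4), which has children d(3), e(5).
root = Node(2, Node(1), Node(4, Node(3), Node(5)))
left_rotate(root)
print(root.value, root.left.value, root.right.value)  # 4 2 5
```

Notice that this rotates by copying values into existing nodes rather than re-pointing the parent's own position in the tree, which is why the original parent node never moves.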

Right Rotation

Right rotation can be done with some simple steps:

  1. Go to the left rotate section and copy the algorithm text
  2. Paste it somewhere, notepad or something
  3. Find all instances of 'left' and change them to 'right'
  4. Find all original instances of 'right' and change them to 'left'
  5. Read and understand
You didn't honestly expect me to do all the work for you, did you? You lazy ass! It's the same concept!

Usage

Hopefully, by now, you've got the essence of rotation. If not, go back up there and learn it.

Done? Good. Now rotation can be employed in several different ways. Most notable, however, is in self-balancing binary search trees. With a simple rotation, you can shift the entire balance of a subtree from a left-bias to a right-bias, and vice versa. This technique is employed in AVL Trees and Red-Black Trees, both of which I will be writing about ASAP.

Until then, keep learning
Ciao!

EDIT: Wikipedia has this brilliant image for rotation in BSTs.


By the way, for those who want to know, the first two images were created by me using this Chrome app called Gliffy. It's really, really handy. For all those using Chrome, I highly recommend it.

Thursday 27 November 2014

A Resolution

So I've been doing some thinking...

My 19th birthday arrived on the 16th of this month. I didn't really celebrate or anything, for good reasons. My 3rd semester examinations were coming up on the 19th of the same month (there were 3 days to go) and so I was kinda trying to catch up on all the stuff I had yet to study, which was pretty much all of it. So the day went by (I didn't really manage much, but meh) and I did nothing birthday-ish. But I got thinking, I've got a shit-ton of stuff left to study. Plus, the whole semester, I had done pretty much nothing of which I had thought I would learn, so all in all it was not much of a productive semester.

And I intended to change that. So, from this day on (my exams ended yesterday), I propose a resolution, so hear me out:

Every single day, I am going to learn ONE new concept relating to Computer Science.

That's it. Every day, one concept. Sounds simple? Now according to me, one concept is whatever I feel like. It's gonna be something modular, something complete in itself. For example, I could learn Insertion in Red-Black Trees as a concept. I could learn Fundamentals of Dynamic Programming as a concept. I could learn something large and complex, or I could learn something small and simple. The only thing is, whatever I learn, it will satisfy my need to learn for that day, and it will allow me to sleep with a sense of accomplishment :D

Also, a small add-on to that resolution:

Every thing I learn, I am gonna blog right here.

Yep, you heard that right. I am finally gonna stick to the title of this blog, and try to act like a Programmer In Training, so Imma go out there, and punch some keys on ma keyboard (which is KICK-ASS, by the way; I'd recommend it to all those living in India), and Imma get ma lazy ass to get shit done, 'cuz this blog ain't updated in a long-ass time.

First few posts are gonna come up today containing all the good stuff I've learnt this semester. So God help me if there are less than 365 posts here by the end of one year, because I'm gonna do what it takes to start sharing again. I realize that I am kinda dormant when it comes to sharing things, and I see why that should not be the case. After all, the more I share, the more I gain.

PS: Because my interests in computers, algorithms and programming are varied, and because I get bored as shit easily, posts are NOT necessarily gonna be sequential. However, I will try to make your life easier by tagging posts and appropriately naming titles so that you can use that little search bar to the fullest and find what you are looking for.

Aaand PEACE!