The N-Queens Problem in Haskell

This is a real toughie! I have to tell you I’ve cheated a bit to get the solution of this one! (These were my first two hours of real Haskell programming, so be nice! :))

The problem

A queen in this problem can move any number of squares horizontally, vertically or diagonally. We need to place N queens on an NxN board so that no queen can attack another queen. You know how they can get…

This is a problem that is very well suited to be solved with streams. Streams are available in a number of programming languages like Scheme or Haskell. As this was an exercise for my Haskell course, I’m obviously going for Haskell here.

The Solution

First step: Data representation

We can represent a list of all possible positions as [[(Int, Int)]], where the first value would be the row and the second one the column. But there is a small trick possible here. We know that the board is indexed from 1 to N. Every column on our board will have exactly one queen (if a column held two or more queens, they could attack each other along that column). This means we can leave the column of each queen implicit, namely as the list position. So we will represent our queens this way: [[Int]].
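To make the encoding concrete, here is a quick Python sketch (my own illustration, not part of the Haskell solution): the list index plays the role of the column, so we can always recover the explicit (row, column) pairs.

```python
def to_coordinates(rows):
    # rows[i] is the row of the queen in column i+1; the column is implicit
    return [(row, col) for col, row in enumerate(rows, start=1)]

to_coordinates([2, 4, 1, 3])  # [(2, 1), (4, 2), (1, 3), (3, 4)]
```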

Second step: What to filter..?

We will need to generate a list of all possible row positions for each column. For a 2×2 board this would be:

[[1,1],[1,2],[2,1],[2,2]]

But we can immediately see that [2,2] and [1,1] are no valid solutions because the queens can attack each other. This means that every value in the list must be unique between 1 and N. How do we generate these permutations for the list [1..N]?

-- permutations comes from Data.List
generateSolutions' :: Int -> [[Int]]
generateSolutions' 0 = [[]]
generateSolutions' n = permutations [1..n]

Third step: How to filter?

We’ve already taken only the permutations, which means we never have two queens on the same row or column. So we only need to filter out the lists that contain queens on the same diagonal.

When are two queens on a diagonal?

This happens when the distance between the two columns equals the distance between the two rows. You can draw this on paper if you like. In the image below you can see that the distance between the rows of A and B is 1, and the distance between the columns of A and B is 3.

To determine whether these queens are on a diagonal or not we can do just that:

sameDiagonal (a,b) (x,y) = abs (a - x) == abs (b - y)

But this means we need to pass pairs into the function, which we don’t have. We only have row indices. So what we can do here is pass the first queen as a parameter and zip the rest of the list with [1..]. This gives us the column distance right away! So conceptually speaking we get a list [(columnDistance, rowIndex)] for all queens except the head.

duplicateDiagonals q qs = any
                          (\(columnDistance, row) -> abs (row - q) == columnDistance)
                          $ zip [1..] qs

This will first of all zip with the infinite list, and then apply the predicate function to each element of that list. It just checks whether the row distance between both queens is equal to the column distance. This is pretty neat, isn’t it?

Putting it all together

Now we have all the pieces we need to filter our list; composing them gives us the final queens function.

import Data.List (permutations)

generateSolutions' :: Int -> [[Int]]
generateSolutions' 0 = [[]]
generateSolutions' n = permutations [1..n]

duplicateDiagonals :: Int -> [Int] -> Bool
duplicateDiagonals q qs = any (\(columnDistance, row) -> abs (row - q) == columnDistance) $ zip [1..] qs

test :: [Int] -> Bool
test []     = True
test (q:qs) = not (duplicateDiagonals q qs) && test qs

queens :: Int -> [[Int]]
queens = filter test . generateSolutions'
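As a sanity check, the same generate-and-filter idea can be expressed in a few lines of Python (purely illustrative; the Haskell above is the real solution):

```python
from itertools import permutations

def queens(n):
    # keep only permutations where no two queens share a diagonal:
    # queens in columns i and j (0-based) clash when |row_i - row_j| == j - i
    def safe(rows):
        return all(abs(rows[i] - rows[j]) != j - i
                   for i in range(n) for j in range(i + 1, n))
    return [list(p) for p in permutations(range(1, n + 1)) if safe(p)]

len(queens(6))  # 4 solutions on a 6x6 board
```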

Thanks for reading, Christophe


“Late self binding” and “static super binding”

Today in my course of object oriented programming languages I ran into “late self binding”.

First of all we all know that when a class “Foo” is the super class for a class “Bar”, every method of Foo is callable from within the class “Bar”. This is the whole point of inheritance. So suppose we have the following:

public class A {
	public void printMsg1()
	{
		System.out.println("A>>msg1");
	}
}

public class B extends A {
	public void printMsg2()
	{
		System.out.println("B>>msg2");
		super.printMsg1();
	}
}

When we have an instance a of class A and we call a.printMsg1(), we will get “A>>msg1” as output in the console. And if we call b.printMsg2() on an instance b of class B, we will get “B>>msg2” followed by “A>>msg1”. The second message comes from our class A, which is obviously the parent of our class B, i.e. the super class. Now let’s use a third class, C.

public class A {
    public void printMsg1()
    {
        System.out.println("A>>msg1");
    }
}
public class B extends A {
    public void printMsg2()
    {
        System.out.println("B>>msg2");
    }
}
public class C extends B {
    public void printMsg1()
    {
        System.out.println("C>>msg1");
        super.printMsg1();
    }
}

We can clearly see that our super class B does not have a method named printMsg1(), so what happens? At compile time the compiler looks a level up in our inheritance tree, finds a method with that name in our class A and uses that one. So calling c.printMsg1() on an instance of class C gives us “C>>msg1” followed by “A>>msg1”. If we were to delete the printMsg1() method from class A and insert it in class B, everything would still work just fine, because our call to super will find the method we are looking for in class B, which it checks before class A, since it’s our direct parent.

This entire process is called static super binding. This means that at compile-time the compiler knows in which class it will have to look for a call to super.

Now comes the tricky part, late self binding.

public class A {
	public void printMsg1()
	{
		System.out.println("A>>msg1");
		this.printMsg2();
	}
	public void printMsg2()
	{
		System.out.println("A>>msg2");
	}
}

public class B extends A {
	public void printMsg1()
	{
		System.out.println("B>>msg1");
		super.printMsg1();
	}
	public void printMsg2()
	{
		System.out.println("B>>msg2");
	}
}

I don’t know about you guys, but I never fully grasped the usage of this.

On an instance of B, when we call b.printMsg1() we will get the output “B>>msg1” and then we call our super and ask it to execute printMsg1(). This will give us the output “A>>msg1”. Then we call this and ask it to execute printMsg2().

The actual this is the instance the original call was made on. That is obviously our instance of B. So this does not point to the current enclosing class (A), but to the object that received the first method call at the bottom of our call stack — in our case b.printMsg1(). If we look in our class B we see that we indeed have a method named printMsg2(), so we get the output “B>>msg2”.

So what happens if we don’t have that method (printMsg2()) in our child class? Simple. If you consider the line this.printMsg2() as a call on our initial receiver, the instance of B, you know that when the method is not present there, we look for it in the super class.

Why the name late self binding then? Well, this stems from the fact that at compile time you don’t know the run-time type of the object that this will refer to. Therefore the compiler cannot tell which method of which class needs to be invoked.

The super, on the other hand, can always be determined at compile time. This is because a class has a unique super class (with multiple inheritance this is no longer true) and it can therefore be identified. The reason for this is that you cannot change your super class at run-time: a class B extending class A can never be modified during run-time to extend class C instead of class A.
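The same behaviour shows up in other object-oriented languages. Here is a Python sketch of the example above (my own translation, not from the original Java):

```python
class A:
    def print_msg1(self):
        print("A>>msg1")
        # `self` is bound late: this dispatches on the run-time type of the receiver
        self.print_msg2()

    def print_msg2(self):
        print("A>>msg2")

class B(A):
    def print_msg1(self):
        print("B>>msg1")
        # `super()` resolves against B's statically known parent, A
        super().print_msg1()

    def print_msg2(self):
        print("B>>msg2")

B().print_msg1()  # prints B>>msg1, A>>msg1, B>>msg2 -- not A>>msg2
```

The third line of output is the whole point: A's code calls `self.print_msg2()`, but because the receiver is a B, B's version runs.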

Hope this helps somebody,

Christophe

PS: I know that I’ve used some terminology that might be incorrect, but I do think the concept is clear and outlined correctly. If you have any remarks I’d be glad to hear them in the comments.

Pattern Matching Algorithms

Hi there!

I’ve implemented some pattern matching algorithms in C#. They were part of a course I took at the university I study at. They are therefore hardly optimized for real life usage. They do represent the conceptual idea of the algorithms.

The algorithms I implemented are Knuth-Morris-Pratt, Quicksearch and the brute force method.

Brute force method

The brute force method is quite simple. We align our pattern with the text, and every time we have a mismatch we shift our pattern one step to the right. This is very bad performance-wise. Worst case we will match every letter of our text against every letter of our pattern, which is O(nt·np). Imagine this for a text of 1,000,000 characters and a pattern of 100,000 characters. This method is good enough for very small texts and patterns but, as you’ll see, the other algorithms are hardly more complicated. The code is quite straightforward and given below.

Code:


        public static int BruteForce(string text, string pattern)
        {
            var nt = text.Length;
            var np = pattern.Length;

            var it = 0;
            var ip = 0;

            while (true)
            {
                if (ip > (np - 1)) //We matched the whole pattern, found!
                {
                    return it;
                }
                if (it > (nt - np)) //We shifted past the end of our text, not found.
                {
                    return -1;
                }
                if (text[it + ip] == pattern[ip])
                {
                    ip++;
                }
                else
                {
                    it++;
                    ip = 0;
                }
            }
        }
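For reference, the same brute-force scan fits in a few lines of Python (a sketch of the idea, not a translation of the C# above):

```python
def brute_force(text, pattern):
    # try every alignment of the pattern; worst case O(len(text) * len(pattern))
    for it in range(len(text) - len(pattern) + 1):
        if text[it:it + len(pattern)] == pattern:
            return it
    return -1
```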

Knuth Morris Pratt

This algorithm is rather difficult to explain in a simple blog post. So bear with me as I try. The algorithm fully depends on the sigma-function. This function will return the largest prefix of the pattern, that is also a suffix of the part of the pattern that we have already matched. This is quite crucial to understand. Take a look at the image below.

[Image: KMP — shifting the pattern by the sigma value after a mismatch]

As you can see we have a mismatch at the characters ‘x’ and ‘a’. Using the brute force method we would just shift our pattern one step to the right, so the first ‘a’ would align with the second ‘b’ of our text. What KMP does instead is calculate the sigma function. You can see in the image that ‘ab’, for example, is a suffix of the part ending at the last ‘a’ in our pattern (‘aab’ is one too, but you’ll get it in a minute). This ‘ab’ is also a prefix of our pattern. Meaning, if we shift our pattern by just the length of that pre- and suffix, we don’t have to match those characters anymore, because we know for certain that all the characters before our mismatch matched. So we shift the pattern as you can see in the image. And this is the idea behind the sigma function. Once you get it, it’s a very simple algorithm.
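The sigma values can also be computed with the standard prefix-function recurrence; here it is in Python as a cross-check (note my 0-based indexing differs slightly from the C# version below):

```python
def prefix_function(pattern):
    # pi[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it (the "sigma" values, 0-indexed)
    pi = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = pi[k - 1]  # fall back to the next smaller border
        if pattern[i] == pattern[k]:
            k += 1
        pi[i] = k
    return pi

prefix_function("ababab")  # [0, 0, 1, 2, 3, 4]
```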

The code:

        private static int[] SigmaTable(string pattern)
        {
            //For each position ip we calculate the longest prefix of the pattern
            //that is also a suffix of the first ip characters

            var np = pattern.Length;
            var sigmatable = new int[np];

            var ip = 2;
            var k = 0;

            while (true)
            {
                if (ip >= np)  //We have shifted out of our pattern, we are done.
                    break;
                if (pattern[k] == pattern[ip - 1])//Our prefix extends
                {
                    sigmatable[ip] = (k + 1);
                    ip++;
                    k++;
                    continue;
                }
                if (k > 0)//We have a mismatch, so we see if there is a smaller prefix that we can use
                          //We already calculated this, so just get it.
                {
                    k = sigmatable[k];
                    continue;
                }
                //We have a mismatch, but K = 0, so we dont have a prefix.
                sigmatable[ip] = 0;
                ip++;
            }
            sigmatable[np - 1] = k;
            sigmatable[0] = -1;
            return sigmatable;
        }

The actual algorithm

 public static int Kmp(string text, string pattern)
        {
            var nt = text.Length;
            var np = pattern.Length;
            var sigmatable = SigmaTable(pattern);

            var it = 0;
            var ip = 0;

            //We will keep looping until
            //--> We have found a match (ip > (np - 1))
            //--> We have shifted out of our text, not found (it > (nt - np))
            //On a match, step forward in the pattern!
            //On a mismatch, jump back as much as the sigmatable tells us.

            while (true)
            {
                if (ip > (np - 1))
                {
                    return it;
                }
                if (it > (nt - np))
                {
                    return -1;
                }
                if (text[it + ip] == pattern[ip])
                {
                    ip++;
                    continue;
                }
                it = (it + (ip - sigmatable[ip]));
                ip = ip > 0 ? sigmatable[ip] : 0;
            }
        }

Quick Search

This is my personal favorite for two reasons: 1) it’s fast and 2) it’s everything but difficult!

This algorithm too does some preprocessing, although it is not as difficult to explain as KMP. For every character in the pattern we store how far its rightmost occurrence is from the end of the pattern. E.g. “abcabcd” gives 1 for d, 2 for c, 3 for b and 4 for a; characters that don’t occur in the pattern get the pattern length plus one. Now, when we have a mismatch, we take the first character that comes right after the entire pattern in the text. So in the picture above, we would take ‘y’. We then check whether this character exists in our pattern. If it does not, we don’t even have to try matching: we skip it entirely, i.e. we shift the beginning of our pattern past that character (see the performance?). If it does exist, we shift our pattern by the value in our table and start matching from scratch.
We don’t know whether the first characters will then match, but we do know the shifted character lines up with a matching character in the pattern. The performance boost this algorithm gets comes from the parts of the text where we can skip characters that don’t exist in the pattern.
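To make the shifting concrete, here is a compact Python sketch of the same idea (my own illustration, using a dictionary instead of an ASCII-indexed array):

```python
def quick_search(text, pattern):
    m, n = len(pattern), len(text)
    # shift table: distance from the rightmost occurrence of each
    # character to just past the end of the pattern
    shift = {c: m - i for i, c in enumerate(pattern)}
    it = 0
    while it <= n - m:
        if text[it:it + m] == pattern:
            return it
        if it + m >= n:  # no character after the pattern window
            return -1
        # look at the character right after the window and shift accordingly
        it += shift.get(text[it + m], m + 1)
    return -1
```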

The code:

        private static ShiftTable ComputeShiftTable(string pattern)
        {
            //We will just calculate the position of every character in the pattern, storing the rightmost occurrence.

            int[] shift_table;

            int minAscii = pattern[0];
            int maxAscii = minAscii;

            var idx = 0;
            //First we calculate the lowest and highest ascii value to create our array
            while (idx < pattern.Length)
            {
                minAscii = Math.Min(minAscii, pattern[idx]);
                maxAscii = Math.Max(maxAscii, pattern[idx]);
                idx++;
            }

            shift_table = new int[maxAscii - minAscii + 1];

            //Fill the table with pattern.Length + 1 (if we don't find the character, we shift the entire pattern past it!).
            for (int i = 0; i < shift_table.Length; i++)
                shift_table[i] = pattern.Length + 1;

            //Loop over the pattern again. For every char we update its position in the array.
            //This way, we always keep the rightmost occurrence of each character.
            idx = 0;
            while (idx < pattern.Length)
            {
                int currentascii = pattern[idx];
                shift_table[currentascii - minAscii] = pattern.Length - idx;
                idx++;
            }
            return new ShiftTable(shift_table, maxAscii, minAscii, pattern.Length);
        }

The algorithm

         
        public static int QuickSearch(string text, string pattern)
        {
            var nt = text.Length;
            var np = pattern.Length;
            var shift = ComputeShiftTable(pattern);

            var it = 0;
            var ip = 0;

            while (true)
            {
                if (ip > (np - 1))//We shifted out of our pattern, found!
                {
                    return it;
                }
                if (it > (nt - np))//We reached the end of our text with our pattern, not found!
                {
                    return -1;
                }
                if (text[it + ip] == pattern[ip])//We have a match, continue.
                {
                    ip++;
                    continue;
                }
                //We didn't have a match: take the first character just past our pattern
                //and calculate the shift. If the pattern already touches the end of the text, we're done.
                if (it + np >= nt)
                {
                    return -1;
                }
                var ct = text[it + np];

                it = (it + shift.Shift(ct));
                ip = 0;
            }
        }

That’s it!

Christophe,

Mounting a Windows share in the Ubuntu server terminal

Hi,

I’ve been trying to properly mount a windows share using Linux today. I’ve got it figured out using bits and pieces here and there, so I’m sharing!

The server at 192.168.1.123 has a share named “shareFolder”. We have login credentials for the server, being “admin” and “password”.

First of all, we need a directory to mount the drive to. For example, I created a directory “Windows” in my home folder.

mkdir ~/Windows

When you have the mountpoint, you might also need to install cifs-utils:

sudo apt-get install cifs-utils

Now we’re set to mount.

To simply do a one-time mount of a network drive we type in the following command:

sudo mount -t cifs -o username=admin,password=password //192.168.1.123/shareFolder ~/Windows

“-t” specifies the mount type, being CIFS. “-o” means “options”, which we pass as comma separated values. If you e.g. leave out the password parameter, it will prompt you for your password.

If you now navigate to that directory and perform an “ls” you will see the directories and files.

To mount the drive automatically at boot we need to edit the “/etc/fstab” file.

sudo vi /etc/fstab

This file has a lot of stuff in it, but you can disregard that. We need to append one line at the end, being the following (note that fstab does not expand ~, so use the absolute path of your mount point):

//192.168.1.123/shareFolder /home/Windows cifs username=admin,password=password,uid=1000 0 0

If you have a shared folder with a space in the name, you replace the white space with the octal escape \040. I have a folder named “Seagate 2TB” so I have the following line:

//192.168.1.123/Seagate\0402TB /home/Windows cifs username=admin,password=password,uid=1000 0 0

The “uid=1000” is my user ID on my client Ubuntu machine. You can find yours by simply typing “id” in a terminal. This makes sure I am the owner of the share. Also, you can use single quotes for a one-time mount, but you cannot use them in /etc/fstab!

To make things work you’ll need to reboot (or run “sudo mount -a” to mount everything in fstab right away). To see if it works you can simply list the contents of the directory.

cd ~/Windows
ls

And that’s how I got it working. I hope it can be of service for anyone 🙂

Christophe,

Cyborg RAT 7 and Ubuntu 13.04

I’ve recently bought a mouse like this and it doesn’t seem to work properly with Ubuntu: the X server freezes after one click. So I thought I’d share this quick fix 🙂

You need to locate the following folder in your system:

/usr/share/X11/xorg.conf.d

Open a terminal (ctrl + alt + T) and enter “cd /usr/share/X11/xorg.conf.d”

In this folder there should be a few configuration files, but disregard those. You will need to create your own configuration file for the RAT7 mouse as follows:

1) Create the file and open it with an editor of your choice; in my case VI. The number 910 is the priority of the configuration file. You might want to make that the highest number in the folder, to be sure 🙂

sudo vi 910-rat.conf

You can also use nano “sudo nano 910-rat.conf”

2) Paste the following lines in there

Section "InputClass"
Identifier "R.A.T."
MatchProduct "R.A.T.7|R.A.T.9"
MatchDevicePath "/dev/input/event*"
Option "Buttons" "17"
Option "ButtonMapping" "1 2 3 4 5 0 0 8 9 7 6 12 0 0 0 16 17"
Option "AutoReleaseButtons" "13 14 15"
Option "ZAxisMapping" "4 5 6 7"
EndSection

3) Reboot

sudo reboot

This should fix the problem for you 🙂 Mind that the configuration file belongs in /usr/share/X11/xorg.conf.d and not in /etc/! Putting it in the wrong place can stop the X server from starting.

Christophe,

Basic Sorting Algorithms

Hi,

I’m studying for my master’s degree at the moment and in my spare time I rebuilt some of the basic sorting algorithms I learned. There are many more, but I wrote a few of them to gain some insight and because it’s fun :). The algorithms are far from optimized for real-life usage, but the mechanics are correct.

I built the following algorithms:

  • Bubble sort
  • Insertion sort
  • Merge sort
  • Quicksort
  • Quicksort with median 3
  • Randomized Quicksort

All the algorithms are built in such a way that they can sort any object type. To make this possible I pass delegates to the algorithms, so they can extract the property of the object that you want to sort on. To show how it works I’ve added a testing solution: a simple console application that sorts arrays of a given size and times how long the algorithm takes to finish.
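The delegate idea maps directly onto key functions. As a sketch (in Python rather than the C# of the repo), an insertion sort parameterized the same way might look like this:

```python
def insertion_sort(items, key=lambda x: x):
    # sorts any object type; `key` plays the role of the delegate,
    # extracting the property to compare on
    result = list(items)
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        while j >= 0 and key(result[j]) > key(current):
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = current
    return result
```

Calling it with e.g. `key=lambda person: person.age` sorts arbitrary objects on one of their properties, exactly as the delegates do in the C# version.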

The source code can be found on Github, or in the downloads (top navigation bar).

If you have any questions, comments are open! 🙂

Christophe