Wednesday, March 25, 2015

Solving N-queens problem using Google or-tools

Hello reader,

In this post, I am going to explain the code I wrote for solving the "N-queens Problem" using the "Google or-tools".

Google Optimization Tools is a software suite that makes it easy to solve many types of combinatorial optimization problems.

The N-queens problem asks this question: "How can N queens be placed on an NxN chessboard so that no two of them attack each other?". Here is a solution for 4 queens:

No two queens are on the same row, column, or diagonal.
The N-queens problem is ideally suited to constraint programming. We'll walk through a simple Python program that uses or-tools to solve N-queens for any value of N.

We are going to:

  1. Create the solver,
  2. Define our variables,
  3. Add constraints,
  4. Create the search tree,
  5. Iterate through the solutions.

To create our solver, we need the "pywrapcp" module from or-tools:

from ortools.constraint_solver import pywrapcp

First we create our solver:

solver = pywrapcp.Solver("N-queens")

We should now declare our queens' coordinates as variables. We have an NxN board, so we need to keep both the x and y coordinate of each queen. While the first thing that comes to mind is a 2-D array, a smarter way is to store them in a 1-D array, with the index representing the y coordinate (row) and the value representing the x coordinate (column). Since the indices of an array are distinct by construction, this representation also guarantees that no two queens share a row.
We want our variables to be in the range 0..N-1. For this purpose, we are going to use "solver.IntVar". "IntVar" creates a variable with the given lower and upper bounds. Let's create our variables in one line:

queens = [solver.IntVar(0, N - 1, "q%i" % i) for i in range(N)]

That was easy, right? Now we have named our variables "q0, q1, ..., q(N-1)".

It is time to add our constraints to our solver. Basically, we have 2 constraints. We have to ensure that no two queens are:

  1. On the same column: no two values of the array can be the same.
  2. On the same diagonal: the values plus (and minus) the indices must all be different, because two queens in rows i and j share a diagonal exactly when queens[i] + i == queens[j] + j or queens[i] - i == queens[j] - j.

We can add the first constraint like this:

solver.Add(solver.AllDifferent(queens))

"solver.AllDifferent" enforces a set of variables to take distinct values. We can use it for the diagonal constraints too:

solver.Add(solver.AllDifferent([queens[i] + i for i in range(N)]))
solver.Add(solver.AllDifferent([queens[i] - i for i in range(N)]))

We are now ready to solve the problem. First, we build our search tree. With "solver.Phase" we tell the solver what to search over: the first parameter is our set of variables, the second specifies how the next "IntVar" to branch on is chosen during the search, and the third specifies which value to try assigning to it first. Since speed is not important for this post, the defaults are used.

tree = solver.Phase(queens,
                    solver.INT_VAR_DEFAULT,
                    solver.INT_VALUE_DEFAULT)

After creating the search tree, we can now begin our search.

solver.NewSearch(tree)

We can print the solutions while iterating through them:

solution_count = 0
while solver.NextSolution():
    solution_count += 1
    solution = [queens[i].Value() for i in range(N)]
    print "Solution %d:" % solution_count, solution
    for i in range(N):
        for j in range(N):
            if solution[i] == j:
                print "Q",
            else:
                print "_",
        print
    print

We have reached the end of our search. The documentation says it is good practice to finish the search with the "EndSearch" method.

solver.EndSearch()
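
Putting the pieces together, here is the whole walk-through assembled into one short script. It is a minimal sketch: Python 2 (like the snippets above) and N fixed to 8 just for illustration.

from ortools.constraint_solver import pywrapcp

N = 8  # board size; any value works

solver = pywrapcp.Solver("N-queens")

# One variable per row: queens[i] is the column of the queen in row i.
queens = [solver.IntVar(0, N - 1, "q%i" % i) for i in range(N)]

# No two queens on the same column or diagonal
# (rows are already distinct thanks to the representation).
solver.Add(solver.AllDifferent(queens))
solver.Add(solver.AllDifferent([queens[i] + i for i in range(N)]))
solver.Add(solver.AllDifferent([queens[i] - i for i in range(N)]))

tree = solver.Phase(queens,
                    solver.INT_VAR_DEFAULT,
                    solver.INT_VALUE_DEFAULT)

solver.NewSearch(tree)
solution_count = 0
while solver.NextSolution():
    solution_count += 1
    print "Solution %d:" % solution_count, [queens[i].Value() for i in range(N)]
solver.EndSearch()

For N = 8 this should print all 92 solutions.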

That's it. I hope you enjoyed this walk-through.

You can find the full code here:

See you next post.


Tuesday, August 12, 2014

Finishing Backends

Hello reader,

Before we begin, here is how GSoC is going for me:


I have finished the static backend. In this backend, we aim to display a PNG image in the IPython notebook. This is just the basic version of our VNC backend. We used IPython's `display_png()` function for displaying the PNG. I can say it is ready to merge.

On the other side, we solved the timer issue that I mentioned in my previous post. We used a JavaScript timer, created with the window's `setInterval()` function. We send 'poll' events from the frontend (JS) to the backend (Python), and when the backend receives one of these events, it generates a 'timer' event. Therefore, on the vispy canvas object we created in the IPython notebook, we can connect a callback function (like on_timer) and use it without any problem. This means we can do animations now. That is great!
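
To make that concrete, here is roughly what the user-facing side looks like; a minimal sketch with a made-up canvas class and interval, not the actual notebook code:

from vispy import app

class AnimatedCanvas(app.Canvas):
    def __init__(self, **kwargs):
        app.Canvas.__init__(self, **kwargs)
        # With the VNC backend, the ticks driving this timer ultimately come
        # from the JavaScript setInterval() 'poll' events described above.
        self._timer = app.Timer(0.1, connect=self.on_timer, start=True)

    def on_timer(self, event):
        # Update the scene and ask for a redraw on every tick.
        self.update()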

More good news: we managed to hide the vispy canvas! That was a big problem from the beginning. Eric and Almar, two of the vispy-devs, proposed a solution: showing the canvas once and hiding it immediately. I had tried this myself before, but couldn't get it to work. After that, Almar took over the issue and solved it like a boss! The problem was (as far as I understand) that Qt refuses to draw when there is no visible canvas, so we force the draw manually and voilà! Problem solved:


See you next post!

Saturday, July 26, 2014

VNC Backend

Hello reader,

Last week we finished the Javascript part of our IPython-VNC backend. One of the tricky parts was importing the Javascript code into the IPython notebook as soon as the user creates a vispy canvas. After solving this issue, we are now able to handle user events like mouse moves, key presses and mouse wheel.

About the implementation: we listen to the HTML canvas in the IPython notebook with Javascript. As soon as an event is detected, we generate an event with the proper name and type according to our JSON spec. The generated event is sent to vispy with the widget's `this.send()` method, which lets us send a message immediately from the frontend to the backend. When a message is received from the frontend, the backend generates the appropriate vispy event. For instance, if we receive a mousepress event from the frontend, we generate a vispy_mousepress event in the backend. That way, the user's connected `on_mouse_press` callback on the vispy canvas just works.
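
To make the flow concrete, here is a rough sketch of the backend side. The message shape and function name below are invented for illustration; the real JSON spec and backend code differ.

def handle_frontend_message(message):
    # Each entry in 'contents' is one user event captured by Javascript.
    for ev in message.get("contents", []):
        if ev["name"] == "mouse_press":
            # The real backend would generate a vispy_mousepress event here,
            # which ends up firing the user's connected on_mouse_press callback.
            print "mouse press at", ev["pos"], "with button", ev["button"]

# A hypothetical message of the kind this.send() might deliver:
handle_frontend_message({
    "msg_type": "events",
    "contents": [{"name": "mouse_press", "pos": [120, 45], "button": 1}],
})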

We embedded an IPython DOMWidget into our VNC backend to ease the communication between JS and Python. We want this backend to be very easy to use, so the user never needs to deal with listening for events, sending them to Python, creating an IPython widget, or writing any Javascript.

There are still some problems though. On mouse move, Javascript captures the mouse position at every single pixel it passes over, so generating and sending a mouse move event for each of them takes a lot of time. In a drag operation, for example, this causes a noticeable lag. Also, it is nearly impossible to capture the screen, convert it to PNG, encode it with base64 and send it through the websocket at the speed of mouse moves, which is another reason for the lag.

Another problem is that we cannot use a Python timer. In vispy we use the backend's timer (QTimer for Qt, etc.), but here it is not possible to run a QTimer and IPython's event loop at the same time. We have to think of a different way to provide a timer, or else give up on timer-based animation.

See you next post!

Sunday, July 6, 2014

Testing Screenshots

Hello reader,

This week I updated and tested the screenshot ability of Vispy.

Normally, Vispy uses the `read_pixels()` function, which reads pixels from a GL context. This function returns a numpy array, but somehow an upside-down one. So what I did was to flip that array, which is extremely easy in Python, and add a test for it. The test is very simple too: we draw a white square in the top-left of a black screen, and after reading the pixels back we check whether the image is flipped by calculating the sum of the pixels in each corner. If the top-left corner's pixel values are greater than 0 and the rest are equal to 0, the screenshot was taken correctly. You should know that I had never drawn anything with GL functions before; that was my first time.
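
In code, the idea boils down to something like this; a sketch with made-up names, assuming the white square covers the top-left quarter of the image, not the actual vispy test:

import numpy as np

def fix_orientation(pixels):
    # The pixels come back upside-down, so flip the rows.
    return pixels[::-1]

def white_square_is_top_left(img):
    # img: (N, N, 3) array, black except for a white square in the top-left quarter.
    n = img.shape[0] // 4
    top_left = img[:n, :n].sum()
    others = img[:n, -n:].sum() + img[-n:, :n].sum() + img[-n:, -n:].sum()
    # Only the top-left corner should contain non-zero (white) pixels.
    return top_left > 0 and others == 0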

Now I am making another PR to add Luke Campagnola's (a vispy-dev) pure Python PNG writer. This will help us drop another image library dependency (such as PIL). More updates will come after it has been merged.

See you next post!

Friday, July 4, 2014

IPython Backend

Hello reader,

In the past weeks, I have tried to add the interactivity part on IPython. Since both IPython and our backend (glut, qt, etc.) need their own event loops, we haven't managed to handle user events yet.

Our only option was to enter an event loop whenever we capture an event from the user, for example a mouse press. This approach seems nearly impossible with GLUT (though not with freeglut), because `glutMainLoop()` never returns. Using Qt's `processEvents()`, we managed this:



Using this method we are nearly finished with the static part. We process events only when the user sends an event. For animation, we are still working on it.
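
As a rough sketch of how the processEvents() approach works (the handler name is made up, and the real backend does more work per event):

from PyQt4 import QtGui

qt_app = QtGui.QApplication.instance() or QtGui.QApplication([])

def on_event_from_notebook(event):
    # Forward the user event (mouse press, key press, ...) to the vispy
    # canvas here, then let Qt flush its pending work and return right away,
    # instead of calling qt_app.exec_(), which would never return.
    qt_app.processEvents()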

Also, this week one of the vispy-devs, Almar Klein, finished the base code of the IPython backend in vispy/app. That means we are going to use the IPython notebook officially! :)

See you next post!

Thursday, June 19, 2014

Week 4&5: JSON Protocol and Implementation

Hello reader,

Last week I had my final exams and my thesis defence. We also finished the JSON protocol that week. But since I don't have much more to say about it, I am putting two weeks' work into one post.

This week we tried to implement the interactivity part in the IPython notebook. We had done a proof of concept using rain.py before, so I decided to re-use it. Houston, we have a problem.

In rain.py, the GLUT backend is used, as in most of Vispy's demos. GLUT has a function named "glutMainLoop()" which enters the GLUT event processing loop. This is fine, but once called, this routine never returns. At this point our problem occurs: since this function takes over control, our widgets stop syncing with Python, and we are unable to handle user events!

We are currently searching for a solution to this. The vispy-devs currently suggest avoiding GLUT in such cases.

Next post: The Solution (I hope!)

See you next post!


Tuesday, June 3, 2014

Week 3: Design of a JSON Protocol

Hello reader,

We are going to try handling user actions in the HTML canvas, where our visualization is rendered. We have done something like this before, but without IPython. Now it is time to try it in the IPython notebook. We are using JavaScript to listen for events on the canvas, so we need a protocol for sending these events from JavaScript to Vispy. Thus, we are designing a JSON protocol.

About the implementation, we are going to focus on the IPython notebook for now. It seems we have two options:

1- A Javascript function generates a JSON message describing the events, following user actions. This JSON is sent to Python via a Unicode trait in the widget. A Python callback function is automatically called when this trait is updated: it parses the JSON, generates the Event instances and passes them to Vispy (see the sketch after this list).

2- We get closer to the widget machinery, following something like this. The idea is to write Python callbacks for user actions on the browser's side. The widget machinery makes that possible in a clean way: Javascript is responsible for sending the appropriate properties (those defined in the JSON protocol) to the Python callbacks, and the Python callback functions are responsible for raising the Vispy events.
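
For a feel of what the first option could look like in code, here is a rough sketch using today's ipywidgets/traitlets names (the IPython 2.x widget API of 2014 differed slightly); the class and trait names are invented for illustration:

import json
from traitlets import Unicode, observe
from ipywidgets import DOMWidget

class VispyEventWidget(DOMWidget):
    # Javascript writes the JSON-encoded events into this synced trait.
    events_json = Unicode("").tag(sync=True)

    @observe("events_json")
    def _handle_events(self, change):
        for ev in json.loads(change["new"] or "[]"):
            # Here the backend would build the corresponding vispy Event
            # instance and pass it on to vispy.app.
            pass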

My mentor (Cyrille Rossant) and I think the second approach is more elegant and straightforward, so we are concentrating on the latter.

Back to the JSON protocol: we are trying to follow the same nomenclature and hierarchy as what is currently implemented in vispy.app. So far I have written nearly half of it. It seems we need one or two more weeks to implement these ideas.

Next week: Finishing Touches on JSON Protocol
See you next week!