Saturday, July 26, 2014

VNC Backend

Hello reader,

Last week we finished the JavaScript part of our IPython-VNC backend. One of the tricky parts was importing the JavaScript code into the IPython notebook as soon as the user creates a vispy canvas. After solving this issue, we are now able to handle user events like mouse moves, key presses, and mouse wheel scrolls.

About the implementation: we listen to the HTML canvas in the IPython notebook with JavaScript. As soon as an event is detected, we build an event object with the proper name and type according to our JSON spec. The generated event is sent to vispy with the widget's `this.send()` method, which lets the frontend push a message to the backend immediately. When a message is received from the frontend, the backend generates the appropriate vispy event. For instance, if we receive a mousepress event from the frontend, we generate a vispy_mousepress event in the backend. That way, the user's `on_mouse_press` callback connected to the vispy canvas works as usual.
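To illustrate, a mouse press message might look like the following; the field names here are hypothetical, the real ones are defined by our JSON spec:

```python
# Hypothetical frontend -> backend message (illustrative field names only;
# the authoritative names live in our JSON spec).
event = {
    'name': 'mousepress',  # what the JavaScript listener detected
    'pos': [120, 64],      # pixel coordinates on the HTML canvas
    'button': 1,           # which mouse button was pressed
}
# The JS side passes such an object to this.send(event); the Python side
# receives it in the widget's message callback and dispatches it.
```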

We embedded an IPython DOMWidget into our VNC backend to ease the communication between JS and Python. We want this backend to be very easy to use, so the user never needs to deal with listening for events, sending them to Python, creating an IPython widget, or even writing any JavaScript; a rough sketch of the embedding follows.
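On the Python side the idea looks roughly like this; a minimal sketch assuming IPython 2.x, where `VNCWidget` and `_handle_event` are hypothetical names rather than our exact code:

```python
from IPython.html import widgets  # DOMWidget's home in IPython 2.x

class VNCWidget(widgets.DOMWidget):
    """Hypothetical sketch: bridges notebook messages to a vispy canvas."""

    def __init__(self, canvas, **kwargs):
        super(VNCWidget, self).__init__(**kwargs)
        self.canvas = canvas
        # messages sent with this.send() on the JS side land here
        self.on_msg(self._handle_event)

    def _handle_event(self, widget, content):
        # turn a frontend message into the matching vispy event
        if content['name'] == 'mousepress':
            self.canvas.events.mouse_press(pos=content['pos'],
                                           button=content['button'])
```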

There are still some problems, though. For mouse moves, JavaScript captures the position at every single pixel the pointer passes over, so generating and sending a mouse move event for each of them takes a lot of time; during a drag operation, for example, this causes a noticeable lag. Also, it is nearly impossible to capture the screen, convert it to PNG, encode it with base64, and send it through the websocket at the speed of mouse movement, which is another reason for the lag.
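One possible mitigation, sketched below, would be to coalesce the flood of move messages on the Python side and process them at a fixed rate; this is just an idea, and none of these names exist in the backend yet:

```python
# Hypothetical throttling sketch: keep only the latest position and
# process mouse moves at most ~30 times per second.
import time

THROTTLE = 1 / 30.0   # minimum interval between processed moves
_last_flush = 0.0
_pending_move = None  # most recent (x, y) received from the frontend

def on_mouse_move_message(x, y):
    global _pending_move, _last_flush
    _pending_move = (x, y)  # newer positions overwrite older ones
    now = time.time()
    if now - _last_flush >= THROTTLE:
        flush_move()
        _last_flush = now

def flush_move():
    global _pending_move
    if _pending_move is not None:
        x, y = _pending_move
        _pending_move = None
        print('generate vispy_mousemove at', (x, y))  # stand-in for the real call
```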

Another problem is that we cannot use a Python timer. In vispy we normally use the backend's timer (QTimer for Qt, etc.), but here it is not possible to run a QTimer and IPython's event loop at the same time. We have to think of a different way to get a timer, or else give up on timer-based animation.
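One direction worth exploring (an idea, not something we have implemented): the IPython kernel already runs a tornado IOLoop, so tornado's `PeriodicCallback` could play QTimer's role without needing a second event loop:

```python
# Sketch: schedule a periodic callback on the IOLoop the kernel already runs.
from tornado.ioloop import IOLoop, PeriodicCallback

def on_timer():
    # stand-in for firing vispy's timer event
    print('tick')

# fire roughly every 100 ms without starting another event loop
timer = PeriodicCallback(on_timer, 100, io_loop=IOLoop.instance())
timer.start()
```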

See you next post!

Sunday, July 6, 2014

Testing Screenshots

Hello reader,

This week I updated and tested the screenshot ability of Vispy.

Normally, Vispy uses the `read_pixels()` function, which reads pixels from a GL context. It returns a numpy array, but an upside-down one. So what I did was flip that array, which is extremely easy in Python, and add a test for it. The test is very simple too: we draw a white square in the top-left corner of a black screen, and after reading the pixels back we check whether the image is flipped by summing the pixel values in the corners. If the top-left corner's values are greater than 0 and the other corners' are equal to 0, the screenshot came out correctly. You should know that I had never drawn anything with GL functions before; this was my first time.
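In essence, the fix and the check look like this; a simplified sketch, not the exact vispy code:

```python
import numpy as np

def flip_vertically(img):
    # GL returns scanlines bottom-to-top; flipping restores screen order
    return np.flipud(img)

def corners_look_right(img):
    # sum a small patch in each corner of an (H, W, 3) uint8 image;
    # only the top-left patch should contain the white square
    tl = img[:10, :10].sum()
    tr = img[:10, -10:].sum()
    bl = img[-10:, :10].sum()
    br = img[-10:, -10:].sum()
    return tl > 0 and tr == 0 and bl == 0 and br == 0
```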

Now I am making another PR with the pure Python PNG writer of Luke Campagnola, one of the vispy devs. This will help us drop another image library dependency, such as PIL. More updates will come after it has been merged.
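To give an idea of why a dependency-free writer is feasible, here is a minimal PNG writer using only the standard library; this is my own illustration, not Luke's actual implementation:

```python
import struct
import zlib

def _chunk(tag, data):
    # a PNG chunk: length, tag, payload, CRC over tag + payload
    return (struct.pack('>I', len(data)) + tag + data +
            struct.pack('>I', zlib.crc32(tag + data) & 0xffffffff))

def write_png(filename, image):
    # image: an (H, W, 3) uint8 numpy array in RGB order
    h, w = image.shape[:2]
    ihdr = struct.pack('>IIBBBBB', w, h, 8, 2, 0, 0, 0)  # 8-bit truecolor
    # each scanline is prefixed with filter type 0 ("None")
    raw = b''.join(b'\x00' + image[row].tobytes() for row in range(h))
    with open(filename, 'wb') as f:
        f.write(b'\x89PNG\r\n\x1a\n')            # PNG signature
        f.write(_chunk(b'IHDR', ihdr))
        f.write(_chunk(b'IDAT', zlib.compress(raw)))
        f.write(_chunk(b'IEND', b''))
```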

See you next post!

Friday, July 4, 2014

IPython Backend

Hello reader,

In the past weeks, I have been trying to add interactivity to the IPython backend. Since IPython and our regular backends (glut, qt, etc.) each need their own event loop, we had not managed to handle user events yet.

Our only option was to enter the event loop briefly whenever we capture an event from the user, for example a mouse press. This approach seems nearly impossible with GLUT (though not with freeglut), because `glutMainLoop()` never returns. Using Qt's `processEvents()`, we managed it.

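The idea looks roughly like this; a minimal sketch assuming PyQt4 as the Qt binding, while the code in our actual backend is more involved:

```python
from PyQt4 import QtGui

# reuse the running QApplication if there is one, otherwise create it
app = QtGui.QApplication.instance() or QtGui.QApplication([])

def process_pending_events():
    # Flush whatever Qt has queued (paint requests, input, ...) and return
    # immediately, so IPython's own event loop keeps control of the kernel.
    app.processEvents()
```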
Using this method, we have nearly finished the static part: events are processed only when the user sends one. We are still working on the animation part.

Also, this week one of the vispy devs, Almar Klein, finished the base code of the IPython backend in vispy/app. That means vispy can officially be used in the IPython notebook! :)

See you next post!