Tuesday, August 12, 2014

Hello reader,
Before we begin, here is how GSoC is going for me:
I have finished the static backend. This backend displays a PNG image in the IPython notebook; it is essentially the basic version of our VNC backend. We used IPython's `display_png()` function to display the PNG. I can say it is ready to merge.
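For context, pushing a rendered frame into the notebook comes down to a single call. A minimal sketch, assuming `png_bytes` already holds an encoded screenshot:

```python
from IPython.display import display_png

# png_bytes is assumed to already hold the bytes of an encoded PNG frame
display_png(png_bytes, raw=True)  # raw=True tells IPython the data is already PNG
```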
On the other side, we solved the timer issue that I mentioned in my previous post. We used a JavaScript timer, via the `setInterval()` function of the JS window. We send 'poll' events from the front-end (JS) to the backend (Python); whenever the backend receives one of these events, it generates a 'timer' event. Therefore, on the vispy canvas object that we created in the IPython notebook, we can connect a callback function (like `on_timer`) and use it without any problem. This means we are able to do animations now. That is great!
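A minimal sketch of the backend half of this idea; the message field and the `active_timers`/`_tick()` names are mine, standing in for the real plumbing:

```python
def _on_widget_msg(widget, content, buffers=None):
    # the JS side runs: window.setInterval(function () { widget.send({msg_type: 'poll'}); }, 50);
    # every 'poll' message that arrives here advances the backend timers by one step
    if content.get('msg_type') == 'poll':
        for timer in active_timers:  # hypothetical registry of running vispy timers
            timer._tick()            # hypothetical: emits one vispy 'timer' event to on_timer callbacks
```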
Another piece of good news: we managed to hide the vispy canvas! That had been a big problem from the beginning. Eric and Almar, two of the vispy devs, proposed a solution: show the canvas once and hide it immediately. I had tested this solution before, but couldn't get it to run. After that, Almar took over the issue and solved it like a boss! The problem (as I understood it) was that Qt refuses to draw when there is no visible canvas. So we force the draw manually, and voilà! Problem solved.
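A sketch of the trick as I understand it (the exact calls live in vispy's Qt backend; `native` is the underlying Qt widget):

```python
canvas.show()         # realize the native window once so the GL context gets created
canvas.native.hide()  # hide it right away; the widget keeps its context
canvas.update()       # request the draw ourselves, since a hidden Qt widget won't repaint on its own
```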
See you next post!
Saturday, July 26, 2014
VNC Backend
Hello reader,
Last week we finished the JavaScript part of our IPython-VNC backend. One of the tricky parts was importing the JavaScript code into the IPython notebook as soon as the user creates a vispy canvas. After solving this issue, we are now able to handle user events like mouse moves, key presses and mouse wheel scrolls.
About the implementation: we listen to the HTML canvas in the IPython notebook with JavaScript. As soon as an event is detected, we generate an event with the proper name and type according to our JSON spec. The generated event is sent to vispy with the widget's `this.send()` method, which lets us send a message immediately from the frontend to the backend. When a message is received from the frontend, the backend generates the appropriate vispy event. For instance, if we receive a mousepress event from the frontend, we generate a vispy_mousepress event in the backend. That way, the user can use the connected `on_mouse_press` callback on the vispy canvas.
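A hedged sketch of the backend side of this dispatch; the message layout here is an illustrative assumption, not our actual JSON spec:

```python
# hypothetical shape of a frontend message:
# {'type': 'mousepress', 'pos': [120, 80], 'button': 1}

def _handle_frontend_msg(widget, content, buffers=None):
    if content.get('type') == 'mousepress':
        # forward to vispy's event machinery; connected on_mouse_press callbacks fire from here
        canvas.events.mouse_press(pos=content['pos'], button=content['button'])
```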
We embedded an IPython DOMWidget into our VNC backend to ease the communication between JS and Python. We want this backend to be very easy to use, so the user never needs to deal with listening for events, sending them to Python, creating an IPython widget, or even writing JavaScript.
There are still some problems though. For mouse move events, JavaScript captures every position the mouse passes through, meaning every single pixel it was on. So generating and sending a mouse move event for each of them takes a lot of time; during a drag operation, for example, this causes a noticeable lag. Also, it is nearly impossible to capture the screen, convert it to PNG, encode it with base64 and send it through the websocket at the speed of mouse movement, which is another reason for the lag.
Another problem is that we cannot use a Python timer. In vispy we use the backend's timer (QTimer for Qt, etc.), but here it is not possible to run a QTimer and IPython's event loop at the same time. We have to think of a different way to have a timer, or else let go of the timer-based animation option.
See you next post!
Friday, July 4, 2014
IPython Backend
Hello reader,
In the past weeks, I have been trying to add interactivity in IPython. Since both IPython and our backend (GLUT, Qt, etc.) need their own event loops, we haven't managed to handle user events yet.
Our only option was to enter an event loop whenever we capture an event from the user, for example a mouse press. This approach seems nearly impossible in GLUT (though not in freeglut), because `glutMainLoop()` never returns. Using Qt's `processEvents()`, we managed it:
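Roughly the idea, sketched with PyQt4 (the binding of the era); this is my reconstruction, not the exact snippet, and `deliver_to_canvas` is a hypothetical helper:

```python
from PyQt4 import QtGui

app = QtGui.QApplication.instance() or QtGui.QApplication([])

def handle_user_event(event):
    deliver_to_canvas(event)  # hypothetical: translate and emit the vispy event
    # run Qt just long enough to process whatever is queued, then hand
    # control straight back to IPython instead of blocking in app.exec_()
    app.processEvents()
```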
Using this method we are nearly finished with the static part. We will process events only when the user sends an event. As for animation, we are still working on it.
Also, this week one of the vispy devs, Almar Klein, finished the base code of the IPython backend in vispy/app. That means we are going to use the IPython notebook officially! :)
See you next post!
Thursday, June 19, 2014
Week 4&5: JSON Protocol and Implementation
Hello reader,
Last week I had my final exams and my thesis defence. We also finished the JSON protocol last week, but since I don't have much to say about that by itself, I am covering two weeks' work in one post.
This week we tried to implement the interactivity part in the IPython notebook. We had done a proof of concept using rain.py before, so I decided to reuse it. Houston, we have a problem.
rain.py uses the GLUT backend, like most of vispy's demos. GLUT has a function named `glutMainLoop()` which enters the GLUT event processing loop. That is fine, except that once called, this routine never returns. This is where our problem occurs: since the function takes over control, our widgets stop syncing with Python, and we are unable to handle user events!
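The crux, sketched with PyOpenGL's GLUT bindings (a minimal reproduction, not rain.py itself):

```python
from OpenGL.GLUT import glutCreateWindow, glutInit, glutMainLoop

glutInit()
glutCreateWindow(b'vispy demo')
glutMainLoop()  # enters GLUT's event loop and never returns...
print('...so this line is never reached, and the IPython kernel is stuck')
```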
We are currently searching for a solution. The vispy devs suggest avoiding GLUT in cases like this.
Next post: The Solution (I hope!)
See you next post!
Sunday, May 25, 2014
Week 2: IPython - Proof of Concept
Hello reader,
I have done another proof of concept, this time using IPython widgets.
In this proof of concept, we aimed to re-implement the same thing we did before, but of course inside the IPython notebook.
The main idea was to have a widget with a string (Unicode) trait attribute containing the base64-encoded representation of a PNG image. The widget captures the screen on every update, saves the image, and encodes it with base64. As soon as it updates its value with the encoded PNG, the JavaScript side updates itself simultaneously. That is the power of IPython's widget machinery.
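A minimal sketch of such a widget, using the IPython 2.x widget API of the time (the class and view names are mine):

```python
from IPython.html import widgets                   # IPython 2.x location; moved in later releases
from IPython.utils.traitlets import Unicode

class ScreenWidget(widgets.DOMWidget):
    _view_name = Unicode('ScreenView', sync=True)  # a JS view registered separately on the frontend
    value = Unicode(sync=True)                     # base64-encoded PNG frame

# each frame: widget.value = new_b64_png
# sync=True means the traitlet machinery pushes the change to the JS side automatically
```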
We have one problem though. We capture the screen with `glReadPixels()`. This works fine, but it leaves us with a numpy.ndarray. So, to get a PNG image, we were using the PIL image library and its `Image.fromarray()` function to convert the ndarray into a PNG, then saving the image into a cStringIO buffer and encoding it with base64. Unfortunately, this approach is very slow. We had the same issue in the earlier proofs of concept and solved it with threading, but after a discussion we agreed not to use threads.
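The slow pipeline, sketched end to end (Python 2 era, hence cStringIO; the details are assumed from the description above):

```python
import base64
from cStringIO import StringIO

import numpy as np
from OpenGL.GL import GL_RGB, GL_UNSIGNED_BYTE, glReadPixels
from PIL import Image

def capture_frame_b64(width, height):
    raw = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
    arr = np.frombuffer(raw, np.uint8).reshape(height, width, 3)
    arr = np.flipud(arr).copy()                   # GL reads rows bottom-up; flip to image order
    buf = StringIO()
    Image.fromarray(arr).save(buf, format='PNG')  # PNG encoding: the expensive part
    return base64.b64encode(buf.getvalue())
```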
To solve this problem, one of the vispy devs, Luke Campagnola, did a great job and wrote a pure-Python PNG writer. This let us remove the PIL dependency and the need to save into a buffer. After some benchmarks we saw that it is the `zlib.compress()` call that slows the process down, but since this is still a proof of concept, we decided to let that go.
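For the curious, the essence of a pure-Python PNG writer is small. This is a minimal sketch of the format (not Luke's actual code), and it shows exactly where `zlib.compress()` sits in the pipeline:

```python
import struct
import zlib

def write_png(arr):
    """Encode an (H, W, 3) uint8 RGB ndarray as a PNG byte string."""
    def chunk(tag, data):
        return (struct.pack('>I', len(data)) + tag + data
                + struct.pack('>I', zlib.crc32(tag + data) & 0xffffffff))

    h, w = arr.shape[:2]
    ihdr = struct.pack('>IIBBBBB', w, h, 8, 2, 0, 0, 0)           # 8-bit depth, color type 2 (RGB)
    raw = b''.join(b'\x00' + arr[y].tobytes() for y in range(h))  # filter byte 0 per scanline
    return (b'\x89PNG\r\n\x1a\n'                                  # PNG signature
            + chunk(b'IHDR', ihdr)
            + chunk(b'IDAT', zlib.compress(raw))                  # the call our benchmarks blamed
            + chunk(b'IEND', b''))
```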
I think this proof of concept also went well. What we need now is some interactivity: we will listen for user events with JavaScript and send them to vispy to handle. For this purpose, we are going to design a protocol (wow, I am going to design a protocol!) that defines JSON representations of user events.
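To make that concrete, a user event might serialize to something like the following (an illustrative guess, written before the protocol was actually designed):

```python
# hypothetical JSON representation of a mouse-press event, as a Python dict
event = {
    'type': 'mousepress',    # event name
    'pos': [120, 80],        # canvas coordinates in pixels
    'button': 1,             # 1 = left, 2 = right, ...
    'modifiers': ['shift'],  # held modifier keys
}
```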
Next stop: Design of a JSON protocol
See you next week!