Amazing, I can spend 2 weeks in Berlin – my favourite city

Paul and I were invited to Tag der Talente (“Talents’ Day”). It’s an event where young students, talented in different fields (like art, literature, dance, language and of course STEM), can meet each other and do workshops.

We did the introduction yesterday with our project live on stage!

There is a great atmosphere and I will attend the workshop “Junior Lab” today – can’t wait to see what we can do there 🙂




From Wednesday on I will attend WEAR-IT, an amazing art and science festival in Berlin, dedicated to wearable electronics and fashion technologies. It looks like my dreams are coming true, since I have always loved to build something (IT-like) you can wear, like assistive technology or art. I can’t wait to meet the designers, tinkerers and like-minded people there!

And the following week I’ll meet a lot of friends and students from Jugend forscht (German science fair) again, since we are invited to meet Dr. Angela Merkel, our Federal Chancellor…

I’ll post more info soon, must leave for the “Junior Lab” workshop now 🙂

Finally the Assistive Context-Aware Toolkit (ACAT) developed by Intel is available as open source!! It is actually used by Stephen Hawking: “ACAT is useful for Microsoft Windows developers who are interested in developing assistive technologies for people with ALS or similar disabilities. Also for researchers who are working on new user interfaces, new sensing modalities or word prediction and want to explore these innovations in this community.” It’s made available under the Apache License Version 2.0.
More information is on the website:

This is great because the software was not publicly available for over ten years – now everybody can use it!!

4x faster with quadcore-support!!

Posted: 10. July 2015 in Allgemein

Software for the wheelchair controlled by eyetracker: Now with multicore support for Raspberry Pi 2 B!

Installation of openCV, the Firmata Tools and the corresponding Python bindings

This brief tutorial assumes you have just created an SD card for the Raspberry Pi 2 B and started it for the first time (the config tool then prompts you to make some adjustments, like expanding the filesystem, changing localisation etc.). Adafruit has a good tutorial on this, e.g.

If you want to use the Pi camera, please enable it in the configuration with raspi-config (it will start after the first boot anyway).

The Debian image used is Raspbian Wheezy (from 2015-05-05); download here: .

The login is user: pi and password: raspberry (mind a possibly different keyboard layout; e.g. on a German keyboard the z and y are swapped!)


Installing OpenCV with multicore support:

Here I will follow Adrian Rosebrock’s tutorial, except for the virtual environment, as it makes things more complicated for this scenario. You can find the tutorial here:


Type the following commands to get everything updated:

  • sudo apt-get update
  • sudo apt-get upgrade
  • sudo rpi-update

1) Then some developer tools will be installed:

  • sudo apt-get install build-essential cmake pkg-config

2) and 3) … then some software for loading and handling images, and some GUI tools:

  • sudo apt-get install libjpeg8-dev libtiff4-dev libjasper-dev libpng12-dev
  • sudo apt-get install libgtk2.0-dev

4) After that software for video-processing is required:

  • sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev

5) The next step is to install libraries used within openCV

  • sudo apt-get install libatlas-base-dev gfortran

Then, in step 6, you install pip, a tool to easily install other Python packages.
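For reference, installing pip with the system Python looks roughly like this in Adrian’s tutorial (treat the exact URL as an assumption – check the tutorial or pip’s own documentation for the current one):

```shell
# Download the official pip bootstrap script and run it with the system Python
wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
```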

I drop step 7 of Adrian’s tutorial (virtual environment) and continue with step 8…


Step 8 is to install/compile Python. Please mind the “-j4” option in the make command: it will compile on all 4 cores, making this step much faster (a suggestion from the discussion in Adrian’s tutorial):

  • sudo apt-get install python2.7-dev

Then we need support for arrays in python:

  • sudo pip install numpy --upgrade


Step 9 is getting the source of openCV and compiling/installing it.


This takes some time! Only for the Raspberry Pi 2 B: “-j4” makes it much faster, using all 4 cores for compiling!
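In case a sketch of this step helps: the download and build commands from Adrian’s tutorial look roughly like this (the download URL and the cmake flags are assumptions based on the openCV 2.4.10 release mentioned below – please check the tutorial for the exact ones):

```shell
# Get the openCV 2.4.10 sources
wget -O opencv-2.4.10.zip https://github.com/Itseez/opencv/archive/2.4.10.zip
unzip opencv-2.4.10.zip
cd opencv-2.4.10

# Configure the build
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D BUILD_NEW_PYTHON_SUPPORT=ON ..

# Compile on all 4 cores of the Raspberry Pi 2 B
make -j4
```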

then do a

  • sudo make install


  • sudo ldconfig

NOW you have opencv 2.4.10 installed 🙂


Next step: Pyfirmata for controlling the Arduino over USB from your Raspberry:

  • sudo pip install pyfirmata

(Pyfirmata was short, wasn’t it?)


This should give you openCV 2.4.10 with quadcore support 🙂 Now the tracker runs much faster! I haven’t found any improvements with the Pi camera yet, but I will try this again soon…


Our aim is to provide additional assistance to people with severe illnesses like ALS or MS that can leave a person unable to move – close to a locked-in state, where an active and alive mind is trapped inside a body with extremely limited communication possibilities left: eye movements. Our inspirations are people like the famous physicist Stephen Hawking, who now uses a computer speech synthesis controlled by a small muscle movement in his cheek, or TEMPT, a graffiti artist who was locked inside his body for 7 (seven!) years before friends built him an eyetracker that allows him to paint again – virtually, using his eyes.

Of course, developing medical devices comes with serious responsibility. We started with a small model using an educational robotics platform and moved on to a full-size wheelchair later on.

However, please keep in mind that this project can only partially help people with severe disabilities get some freedom and independence back – they still have to rely on other people to care for them.

It is a bit sad discussing the limitations of a development when you start with such “noble ideals”, but as said, any invention or development should be assessed for what it is good at and what its limitations are – in other words, you should do some technology impact assessment [without losing your enthusiasm, I hope] 🙂

Before I start with the walk-through, please keep in mind that the people this project is dedicated to can be extremely dependent on other people’s help and care – (re)act, (re)build, (re)design and use it carefully and thoughtfully. I cannot be held responsible for anything you do with this idea.

Having said that – let’s begin with the fun part! All files, ideas and concepts are published under CC, so basically do anything you want as long as you point to the origins and don’t make a profit from it 🙂


1. Step / Raspi install and configuration

I really advise using the new quadcore Raspberry Pi 2 (B) – it is way faster than the older one and results (with quadcore support in openCV; I’ll come to this later) in a much better user experience if you control anything with your eyes, believe me 🙂

Adafruit has a very good tutorial on getting a Raspi started. I used the Wheezy Debian distribution, which you can download from here:

What you get is a compressed image file, which you need to unzip. Then you have a .img file which you can write to an empty SD card. Here is the tutorial from Adafruit for preparing an SD card on a Mac:

… and the same for Windows:

After you completed these steps you can eject the card and start with further software installations:

After the project was presented at the German science competition “Jugend forscht”, I found a great tutorial by Adrian Rosebrock proving that it is indeed possible to compile openCV for a platform like the Raspberry Pi (ARM CPU) with multicore support.

Since the Raspi has a quadcore CPU, this makes the whole detection process nearly 4 times as fast as before – a very convenient way to adjust and use the tracker is your reward for following the steps in his tutorial. But I will cover them in a separate post, because he uses a virtual environment, which is great but somewhat confusing at the beginning. So I tried to install everything system-wide instead, and it works with a slight change. I’ll post a detailed explanation about this.

Starting/Configuring the Arduino&Raspberry

You can now start the graphical desktop by typing “startx”. In the developer section of the start menu you will find the Arduino IDE.

Connect the Arduino to the Raspberry Pi via USB and then start Arduino IDE. Here you select Open / Examples / StandardFirmata.

Then select the board (e.g. Arduino Uno/Mega) and the Port it is connected to.

After that just compile and upload the firmata sketch.

You then attach a normal webcam and use this test script – it uses the analog inputs 0–3 to adjust the areas where a pupil position would be considered a command for a direction (which still has to be confirmed; otherwise the wheelchair would follow all eye movements!).
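The script itself isn’t shown here, but the core idea – mapping the detected pupil position to one of four direction commands using adjustable margins (in our setup read from the potentiometers on the Arduino’s analog inputs 0–3) – can be sketched like this. All names and the exact zone layout are mine, not taken from the project code:

```python
def classify_direction(pupil_x, pupil_y, frame_w, frame_h,
                       margin_left, margin_right, margin_up, margin_down):
    """Map a pupil position (pixels) to a direction command, or None for
    the neutral centre zone. The four margins would come from the
    potentiometers on analog inputs 0-3, so they stay adjustable."""
    if pupil_x < margin_left:
        return "left"
    if pupil_x > frame_w - margin_right:
        return "right"
    if pupil_y < margin_up:
        return "forward"
    if pupil_y > frame_h - margin_down:
        return "backward"
    return None  # pupil is in the centre -> no command

# Example: a 640x480 frame with 150-pixel margins on every side
print(classify_direction(50, 240, 640, 480, 150, 150, 150, 150))   # left zone
print(classify_direction(320, 240, 640, 480, 150, 150, 150, 150))  # centre zone
```

In the real setup the returned direction would of course not drive the motors directly – it first has to be confirmed by the user.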


Hardware: to-do list for writing a tutorial 🙂

  • camera: USB or PIcam?
  • Performance did not seem to make a difference; I’ll publish some tests soon…
  • Arduino and potentiometers
  • Arduino and collision detection
  • Confirming the command

Project files online now!

Posted: 10. July 2015 in Allgemein


Summer holidays are great. The sun is shining, I’ll spend the next 3 weeks at the seaside … so I had to keep my promise.

Here is the github repository for the project:

I changed the code from the version we used in the competition and made it more responsive. If your eye is in one of the “target direction” fields, the video capture still runs, making adjustments easier (before: waiting about 2 seconds for confirmation of the direction). And now there is a TTS engine integrated, a speech synthesis, that announces the direction commands extracted from eye movement by voice – they then have to be confirmed. This is still a simple switch at the moment, but I am experimenting with electric activity from muscles to be used as a trigger [I can already use electric muscle activity to detect left/right eye movement, but you have to “glue” electrodes to the sides of your face and neck] 🙂
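The confirm-before-move flow described above can be sketched as a small function. Here the speech output and the confirmation switch are passed in as plain functions so the logic can be shown without hardware – all names are mine, and the real code uses an actual TTS engine and a physical switch instead:

```python
def drive_step(detected_direction, speak, confirmed):
    """Announce the detected direction via TTS and only return a motor
    command if the user confirms it - otherwise the wheelchair would
    follow every eye movement."""
    if detected_direction is None:
        return "stop"  # eye in the neutral zone -> no announcement
    speak("Direction: " + detected_direction + ". Please confirm.")
    if confirmed():
        return detected_direction
    return "stop"  # announced but not confirmed -> stay put

# Example with stand-ins for the TTS engine and the switch:
spoken = []
print(drive_step("left", spoken.append, lambda: True))   # confirmed command
print(drive_step("left", spoken.append, lambda: False))  # not confirmed
```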

The comments are in German, but I’ll translate them soon….

Hello everyone! The German final was two weeks ago but I haven’t had any time to inform you about it. These five days were so impressive and exhausting!! But we won one of the first prizes!!! That’s really cool and I still can’t believe it! After the competition I thought I could just relax – but quite the contrary! We’ve got a lot more attention than we thought!! This project is on display on really cool blogs like Hackaday, Adafruit, Raspberry Pi!!!! I’m very proud of it!!

Now that the competition has ended, we want to keep working on it! Because there are so many possibilities! We have already got some requests from disabled people or people caring for them!! This is our biggest motivation! We will translate our documentation into English and we will clean up the Python code. When we are ready, I will publish all the things you will need to build it yourself 😉

Next week we will attend the German final of Jugend forscht – after we won the local and regional competitions. We also got a special prize for our project because it is an innovation for disabled people.

Jugend Forscht Landeswettbewerb 2015
Jugend forscht Fachgebiet Arbeitswelt
Stand-Nr.: A-03
Auge steuert Rollstuhl – Eyetracking mit openCV
Left: Myrijam Stoetzer (14) Right: Paul Foltin (15)
School: Franz-Haniel-Gymnasium

Our project developed along with the different levels of Jugend forscht – the higher we got, the more sophisticated our design became, because we spent even more time in between working on it 😉

On the local level we had a little model platform performing as a wheelchair – it was controlled by an Odroid U3 (still the best choice, since it is extremely fast!). We built an Arduino-based circuit around it to take care of user input (calibration of the tracker) via potentiometers and to steer the H-bridge we developed for the little motors. After we won the local science fair we decided it was time for something bigger – so we went and bought a real wheelchair!

We spent 2 weekends soldering, drilling and 3D-printing and developed everything new from scratch. This time we used the brand-new Raspberry Pi 2 B, because it was supposed to be much faster than the old one and much cheaper than the Odroid U3. And we took motors from window washers from a car junkyard. This time we used relays to reverse and control the motors, since they drew a lot (!) more current than our H-bridge could take. Of course you could develop an electronic solution for this, switching currents of up to 20 A each, but we were short on time and wanted to keep the costs down (for easier rebuilding – and we had used up the money we won in the first round as a prize as well).

Then we won the regional competition – and decided it was time for more safety: so we built another system that checks for objects in the way. If we had more time, we would have loved to move to a laser scanner, but at the moment that is off limits 😉 Ultrasound sensors may disturb each other if used close together, so we decided to take some IR range sensors and experimented with them. We just finished now – and the competition starts next week…
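A simple obstacle check with such IR range sensors could look like this. The post doesn’t name the exact sensor model, so the linearisation formula below is an assumption (a common approximation for Sharp GP2Y0A21-style sensors); all names are mine:

```python
def obstacle_detected(adc_readings, stop_distance_cm=40.0):
    """Convert raw 10-bit ADC readings from IR range sensors to an
    approximate distance and report whether anything is too close."""
    for raw in adc_readings:
        if raw <= 0:
            continue  # no reading / sensor not connected
        voltage = raw * 5.0 / 1023.0
        if voltage <= 0.1:
            continue  # below the sensor's usable output range
        # hypothetical linearisation for a Sharp GP2Y0A21-style sensor
        distance_cm = 27.86 / (voltage - 0.1)
        if distance_cm < stop_distance_cm:
            return True  # something is in the way -> stop the motors
    return False

# Example: one sensor reports a close object (~8 cm), the other is clear
print(obstacle_detected([700, 120]))
```

In the real system this check would run in the main loop and override any confirmed direction command with a stop.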

You can find the paper we wrote for the competition here – we updated the previous version:


Our project is hereby officially licensed under the Creative Commons license BY-NC, meaning it is free for everybody to rebuild, modify and improve – as long as you mention the project’s origin. Oh, and of course you are not allowed to make money with it – it is free for everybody, for the benefit of all 😉

Wish us luck for next week’s competition – I will post the results here … 

Thanks, Myrijam.