This is really incredible – WASP is a cool Italian company developing 3D printers with a vision to use them (3D printing in architecture) to make the world a better place. That’s where their name comes from: World’s Advanced Saving Project.

I contacted them to ask if I could get a discount on their printers, or a kit instead of a ready-made printer (well, you know, I love building things) – because for the eye-controlled wheelchair and my next project, the 3D scanner for magnetic fields, I needed to print a lot of designs during the development process.

Marino and Stefano from WASP were so amazed by the project that they decided to send me one! When I got their answer by mail, it was extremely hard to remain calm and not freak out with happiness – I was in school at the time (yeah, smartphones and schools – a combination most teachers do not like).

THANKS a lot to the WASP-Team

Wear IT

Posted: 15. June 2016 in Allgemein

A few months ago I was in Berlin at the WearIT festival! It took place in an old industrial building – a really cool location! There were many interesting talks and I learned a lot about wearable electronics! I also gave a talk in English. I was very nervous, but during my talk I was nearly relaxed! Besides, the people there were so nice and kind! They motivated me and took care of me! I really enjoyed these three days! Getting to know people who are interested in electronics helped me to get along with my project. I met Niklas Roy there, a really talented technical guy who gave me the electric wheelchair he used in his project “Gallerydrive” – here’s his blog. He made the huge and very(!) powerful wheelchair follow lines on the gallery floor while reading out information about the displayed art.
And I got all the schematics as well, so I don’t have to worry about reverse engineering how the controller joystick works in combination with the huge and powerful motors. I’ll try to integrate that into my eye-controlled wheelchair…



Amazing, I can spend 2 weeks in Berlin – my favourite city

Paul and I were invited to Tag der Talente (“talents’ day”). It’s an event where young students with talents in different fields (like art, literature, dance, languages and of course STEM) can meet each other and do workshops.

We did the introduction yesterday with our project live on stage!

There is a great atmosphere here and I will attend the workshop “Junior Lab” today – can’t wait to see what we can do there 🙂




From Wednesday on I will attend WEAR-IT, an amazing art and science festival in Berlin dedicated to wearable electronics and fashion technologies. It looks like my dreams are coming true, since I have always loved building (IT-like) things you can wear, like assistive technology or art. I can’t wait to meet the designers, tinkerers and like-minded people there!

And the following week I’ll meet a lot of friends and students from Jugend forscht (German science fair) again, since we are invited to meet Dr. Angela Merkel, our Federal Chancellor…

I’ll post some more info soon – I must leave for the “Junior Lab” workshop now 🙂

Finally the Assistive Context-Aware Toolkit (ACAT) developed by Intel is available as open source!! It is actually used by Stephen Hawking: “ACAT is useful for Microsoft Windows developers who are interested in developing assistive technologies for people with ALS or similar disabilities, and also for researchers who are working on new user interfaces, new sensing modalities or word prediction and want to explore these innovations in this community.” It’s made available under the Apache License Version 2.0.
More information is on the website:

This is great because the software was not publicly available for over ten years – now everybody can use it!!

4x faster with quadcore-support!!

Posted: 10. July 2015 in Allgemein

Software for the wheelchair controlled by an eye tracker: now with multicore support for the Raspberry Pi 2 B!

Installation of OpenCV, the Firmata tools and the corresponding Python bindings

This brief tutorial assumes you have just created an SD card for the Raspberry Pi 2 B and started it for the first time (the config tool then prompts you to make some adjustments, like expanding the filesystem, changing localisation etc.). Adafruit has a good tutorial on this.

If you want to use the Pi Camera, please enable it in the configuration with raspi-config (it will start after the first boot anyway).

The Debian image used is Debian Wheezy (from 2015-05-05); download here:

The login is user: pi and password: raspberry (mind a different keyboard layout; e.g. on a German keyboard the z and y are swapped!).


Installing OpenCV with multicore support:

Here I will follow Adrian Rosebrock’s tutorial, except for the virtual environment, as it makes things more complicated for this scenario. You can find the tutorial here:


Type the following commands to get everything updated:

  • sudo apt-get update
  • sudo apt-get upgrade
  • sudo rpi-update

1) Then some developer tools will be installed:

  • sudo apt-get install build-essential cmake pkg-config

2) … and then some software for loading and handling images, plus some GUI tools:

  • sudo apt-get install libjpeg8-dev libtiff4-dev libjasper-dev libpng12-dev
  • sudo apt-get install libgtk2.0-dev

4) After that, software for video processing is required (the step numbers follow Adrian’s tutorial):

  • sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev

5) The next step is to install libraries used within openCV

  • sudo apt-get install libatlas-base-dev gfortran

Then, in step 6, you will install pip, a tool to easily install other Python packages.
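For reference, Adrian’s step 6 boils down to two commands – this is the usual way to bootstrap pip with the system Python (treat it as a sketch; the bootstrap URL is the standard one, not something specific to this project):

```shell
# Download the official pip bootstrap script and run it with the system Python
wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
```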

I skip step 7 of Adrian’s tutorial (the virtual environment) and continue with step 8…


Step 8 is to install the Python development headers. Please mind the “-j4” option in the make command of step 9: it will compile on all 4 cores, making that step much faster (a suggestion from the discussion in Adrian’s tutorial):

  • sudo apt-get install python2.7-dev

Then we need support for arrays in python:

  • sudo pip install numpy --upgrade


Step 9 is getting the source of OpenCV and compiling/installing it.


This takes some time! Only on the Raspi 2 B: “-j4” makes it much faster, using all 4 cores for compiling!
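As a sketch of what “getting the source and compiling” looks like for OpenCV 2.4.10 (the download location and cmake flags here follow the usual OpenCV build procedure from Adrian’s tutorial era – treat them as a starting point, not the exact commands I used):

```shell
# Fetch and unpack the OpenCV 2.4.10 sources
wget -O opencv-2.4.10.zip https://github.com/opencv/opencv/archive/2.4.10.zip
unzip opencv-2.4.10.zip
cd opencv-2.4.10

# Configure an out-of-source build
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local ..

# Compile on all four cores of the Pi 2 B
make -j4
```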

Then run:

  • sudo make install


  • sudo ldconfig

NOW you have OpenCV 2.4.10 installed 🙂


Next step: Pyfirmata for controlling the Arduino over USB from your Raspberry:

  • sudo pip install pyfirmata

(Pyfirmata was short, wasn’t it?)
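To give an idea of how pyFirmata fits in here, a small sketch: a pure helper maps a confirmed direction command to digital output pins (the pin numbers and the `drive` helper are made up for illustration – they are not the project’s actual wiring), and the hardware part needs an Arduino flashed with StandardFirmata:

```python
# Hypothetical pin assignment for the four directions (illustration only)
DIRECTION_PINS = {"forward": 2, "backward": 3, "left": 4, "right": 5}

def pin_states(command):
    """Return {pin: value} with exactly one direction pin set high."""
    if command not in DIRECTION_PINS:
        raise ValueError("unknown command: %r" % (command,))
    return {pin: int(name == command)
            for name, pin in DIRECTION_PINS.items()}

def drive(command, port="/dev/ttyACM0"):
    """Send the pin pattern for a confirmed command to the Arduino.

    Only works with an Arduino running StandardFirmata attached;
    the port name is an assumption – check with `ls /dev/tty*`.
    """
    from pyfirmata import Arduino  # imported here: needs the hardware
    board = Arduino(port)
    for pin, value in pin_states(command).items():
        board.digital[pin].write(value)
```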


This should give you OpenCV 2.4.10 with quadcore support 🙂 Now the tracker runs much faster! I haven’t found any improvement with the Pi Camera yet, but I will try again soon…


Our aim is to provide additional assistance to people with severe illnesses like ALS or MS that leave a person unable to move, close to a locked-in state, where an active and alive mind is locked inside a body with extremely limited communication possibilities left – eye movements. Our inspiration are people like the famous physicist Stephen Hawking, who now uses a computer speech synthesizer controlled by a small muscle movement in his cheek, or TEMPT, a graffiti artist who was locked inside his body for 7 (seven!) years before friends built him an eye tracker that allows him to paint again – virtually, using his eyes.

Of course, developing medical devices comes with great responsibility. We started with a small model using an educational robotics platform and later moved on to a full-size wheelchair.

However, please keep in mind that this project can only partially assist people with severe disabilities to get some freedom and independence back – they still have to rely on other people to care for them.

It is a bit sad to discuss the limitations of a development when you start with such “noble ideals”, but as said, any invention or development should be assessed for what it is good at and where its limitations are – or in other words, you should do some technology impact assessment [without losing your enthusiasm, I hope] 🙂

Before I start with the walk-through, please keep in mind that the people this project is dedicated to can be extremely dependent on other people’s help and care – (re)act, (re)build, (re)design and use it carefully and thoughtfully. I cannot be held responsible for anything you do with this idea.

Having said that – let’s begin with the fun part! All files, ideas and concepts are published under CC, so basically do anything you want as long as you point to the origins and don’t make a profit with it 🙂


1. Step / Raspi install and configuration

I really advise using the new quadcore Raspberry Pi 2 (B) – it is way faster than the older one and results (with quadcore support in OpenCV; I’ll come to this later) in a much better user experience when you control anything with your eyes, believe me 🙂

Adafruit has a very good tutorial on getting a Raspi started. I used the Wheezy Debian distribution, which you can download from here:

What you get is a compressed image file, which you need to unzip. Then you have a .img file which you can write to an empty SD card. Here is the tutorial from Adafruit for preparing an SD card on a Mac:

… and the same for Windows:
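On a Mac or Linux machine the unzip-and-write step is roughly the following (the image filename is my guess based on the release date, and /dev/sdX is a placeholder – writing to the wrong device destroys its data, so double-check with `lsblk` or `diskutil list` first):

```shell
# Unpack the downloaded image and write it to the SD card
unzip 2015-05-05-raspbian-wheezy.zip
sudo dd if=2015-05-05-raspbian-wheezy.img of=/dev/sdX bs=4M
sync   # flush write buffers before removing the card
```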

After you completed these steps you can eject the card and start with further software installations:

After the project was presented at the German science competition “Jugend forscht”, I found a great tutorial from Adrian Rosebrock proving that it is indeed possible to compile OpenCV for a platform like the Raspberry Pi (ARM CPU) with multicore support.

Since the Raspi has a quadcore CPU, this makes the whole detection process nearly 4 times as fast as before – a very convenient way to adjust and use the tracker is your reward for following the steps in his tutorial. I will cover them in a separate post, because he uses a virtual environment, which is great but somewhat confusing at the beginning. So I tried to install everything globally instead, and it works with a slight change. I’ll post a detailed explanation about this.

Starting/Configuring the Arduino&Raspberry

You can now start the graphical desktop by typing “startx”. In the developer section of the start menu you will find the Arduino IDE.

Connect the Arduino to the Raspberry Pi via USB and then start the Arduino IDE. There you select File / Examples / Firmata / StandardFirmata.

Then select the board (e.g. Arduino Uno/Mega) and the Port it is connected to.

After that just compile and upload the firmata sketch.

You then attach a normal webcam and use this test script – it uses the analog inputs 0-3 to adjust the areas where a pupil position is considered a command for a direction (which still has to be confirmed; otherwise the wheelchair would follow all eye movements!).
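A minimal sketch of the zone logic such a test script implements (the function and parameter names are mine; in the real script the four thresholds come from the potentiometers on analog inputs 0-3):

```python
def detect_direction(x, y, left, right, top, bottom):
    """Map a pupil position (x, y) to a direction command, or None if
    the pupil is inside the neutral center zone.

    left/right/top/bottom are the adjustable zone thresholds that the
    real script reads from the four potentiometers on analog inputs 0-3.
    """
    if x < left:
        return "left"
    if x > right:
        return "right"
    if y < top:
        return "forward"
    if y > bottom:
        return "backward"
    return None  # pupil in the center: no command

# Example: pupil far left of the frame triggers the "left" command
print(detect_direction(10, 120, left=50, right=270, top=60, bottom=180))
```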


Hardware: to-do list for writing a tutorial 🙂

  • camera: USB or PIcam?
  • Performance did not seem to make a difference; I’ll publish some tests soon…
  • Arduino and potentiometers
  • Arduino and collision detection
  • Confirming the command

Project files online now!

Posted: 10. July 2015 in Allgemein


Summer holidays are great. The sun is shining and I’ll spend the next 3 weeks at the seaside … so I had to keep my promises.

Here is the github repository for the project:

I changed the code from the version we used in the competition and made it more responsive. If your eye is in one of the “target direction” fields, the video capture still runs, making adjustments easier (before: waiting for about 2 seconds for confirmation of a direction). And now there is a TTS engine integrated, a speech synthesizer, to announce the direction commands extracted from eye movement by voice – then they have to be confirmed. This is still a simple switch at the moment, but I am experimenting with electric activity from muscles to be used as a trigger [I can already use electric muscle activity to detect left/right eye movement, but you have to “glue” electrodes to the sides of your face and neck] 🙂
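The announce-then-confirm flow can be sketched as a tiny state machine (a simplified sketch, not the actual repository code): a newly detected direction is spoken first and only handed on once the confirmation switch fires.

```python
class ConfirmedCommand:
    """Hold the last announced direction until the user confirms it."""

    def __init__(self, speak):
        self.speak = speak   # TTS callback, e.g. a wrapper around espeak
        self.pending = None  # announced but not yet confirmed direction

    def on_direction(self, direction):
        # Announce a new candidate direction instead of executing it
        if direction != self.pending:
            self.pending = direction
            self.speak(direction)

    def on_confirm(self):
        # The switch (or later: a muscle signal) confirms the command
        command, self.pending = self.pending, None
        return command

# Usage: repeated detections of the same direction are announced once
spoken = []
ctrl = ConfirmedCommand(spoken.append)
ctrl.on_direction("left")
ctrl.on_direction("left")
assert spoken == ["left"]
assert ctrl.on_confirm() == "left"
```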

The comments are in German, but I’ll translate them soon….