Eye-controlled wheelchair – Guide and step-by-step tutorial

Posted: 10 July 2015 in General

Preface

Our aim is to provide additional assistance to people with severe illnesses like ALS or MS, which can leave a person unable to move at all – close to a locked-in state, where an active and alert mind is trapped inside a body with only one extremely limited channel of communication left: eye movements. We are inspired by people like the famous physicist Stephen Hawking, who now uses a computer speech synthesizer controlled by a small muscle movement in his cheek, or TEMPT, a graffiti artist who was locked inside his body for 7 (seven!) years before friends built him an eye tracker that lets him paint again – virtually, using his eyes.

Of course, developing medical devices comes with serious responsibility. We started with a small model based on an educational robotics platform and later moved on to a full-size wheelchair.

However, please keep in mind that this project can only partially help people with severe disabilities regain some freedom and independence – they still have to rely on other people to care for them.

It is a bit sad to discuss the limitations of a development when you start with such “noble ideals”, but as said, any invention or development should be assessed for what it is good at and what its limitations are – or, in other words, you should do some technology impact assessment [without losing your enthusiasm, I hope] 🙂

Before I start with the walk-through, please keep in mind that the people this project is dedicated to can be extremely dependent on other people’s help and care – (re)act, (re)build, (re)design and use it with care and thoughtfulness. I cannot be held responsible for anything you do with this idea.

Having said that – let’s begin with the fun part! All files, ideas and concepts are published under a CC license, so basically do anything you want with them as long as you point to the origins and don’t make a profit from it 🙂

 

Step 1 / Raspi install and configuration

I really advise using the new quad-core Raspberry Pi 2 (Model B) – it is way faster than the older one and (with quad-core support in OpenCV; I’ll come to this later) results in a much better user experience when you are controlling anything with your eyes, believe me 🙂

Adafruit has a very good tutorial on getting a Raspi started. I used the Raspbian (Debian Wheezy) distribution, which you can download here:

http://downloads.raspberrypi.org/raspbian_latest

What you get is a compressed image file, which you need to unzip. You then have an .img file that you can write to an empty SD card. Here is the Adafruit tutorial on preparing an SD card on a Mac: https://learn.adafruit.com/adafruit-raspberry-pi-lesson-1-preparing-and-sd-card-for-your-raspberry-pi/making-an-sd-card-using-a-mac

… and the same for Windows: https://learn.adafruit.com/adafruit-raspberry-pi-lesson-1-preparing-and-sd-card-for-your-raspberry-pi/making-an-sd-card-using-a-windows-vista-slash-7

After you have completed these steps, you can eject the card, boot the Raspi from it and continue with the further software installations:

After the project was presented at the German science competition “Jugend forscht”, I found a great tutorial by Adrian Rosebrock proving that it is indeed possible to compile OpenCV with multicore support for a platform like the Raspberry Pi (ARM CPU).

Since the Raspi 2 has a quad-core CPU, this makes the whole detection process nearly four times as fast as before – a much more convenient way to adjust and use the tracker is your reward for following the steps in his tutorial. I will cover those steps in a separate post, though, because he uses a virtual environment, which is great but somewhat confusing at the beginning. So I tried to install everything system-wide instead, and it works with a slight change – I’ll post a detailed explanation about this.
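Once OpenCV is compiled and installed, a quick sanity check helps before going further – a minimal sketch, assuming the Python bindings were built and installed system-wide:

```python
# check_opencv.py – verify the OpenCV build (assumes Python bindings installed)
import cv2

print("OpenCV version:", cv2.__version__)

# getNumThreads() reports how many threads OpenCV uses for its
# parallelized functions; on a Raspberry Pi 2 this should be 4
# if the build picked up all four cores.
print("Threads available to OpenCV:", cv2.getNumThreads())
```

If the thread count is 1, the build did not pick up the multicore support and the detection loop will run noticeably slower.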

Starting/Configuring the Arduino & Raspberry Pi

You can now start the graphical desktop by typing “startx”. In the developer section of the start menu you will now find the Arduino IDE.

Connect the Arduino to the Raspberry Pi via USB and start the Arduino IDE. Here, select File / Examples / Firmata / StandardFirmata.

Then select your board (e.g. Arduino Uno/Mega) and the port it is connected to.

After that, just compile and upload the Firmata sketch.
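Once StandardFirmata is running on the Arduino, you can check the link from the Raspberry Pi side. Here is a minimal sketch using the pyFirmata Python library (my choice for illustration; no specific library is prescribed here) that blinks the Uno’s on-board LED:

```python
# blink_check.py – verify the Firmata link between Raspi and Arduino.
# Assumptions: Arduino Uno running StandardFirmata, connected at /dev/ttyACM0.
import time
from pyfirmata import Arduino

board = Arduino('/dev/ttyACM0')
led = board.get_pin('d:13:o')  # digital pin 13 as output (on-board LED)

for _ in range(5):
    led.write(1)
    time.sleep(0.5)
    led.write(0)
    time.sleep(0.5)

board.exit()
```

If the LED blinks five times, the serial connection and the Firmata protocol are working.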

You then attach a normal webcam and use this test script – it uses the analog inputs 0–3 to adjust the areas in which a pupil position is considered a command for a direction (the commands still have to be confirmed; otherwise the wheelchair would follow every eye movement!).
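The test script itself is not reproduced here, but the idea behind it can be sketched as follows – assuming four potentiometers on A0–A3 for the zone boundaries, a webcam at index 0, and pyFirmata again (all names and values are mine, for illustration):

```python
# zone_test.py – sketch of the adjustment idea: four pots set the
# boundaries of the gaze zones, drawn live over the webcam image.
# Assumptions: StandardFirmata on the Arduino (/dev/ttyACM0),
# potentiometers on A0–A3, webcam at index 0.
import cv2
from pyfirmata import Arduino, util

board = Arduino('/dev/ttyACM0')
util.Iterator(board).start()           # background thread for analog reports
for i in range(4):
    board.analog[i].enable_reporting()

cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]

    # Each pot delivers 0.0–1.0 (or None before the first report)
    # and scales one boundary: left, right, top, bottom.
    vals = []
    for i in range(4):
        v = board.analog[i].read()
        vals.append(0.5 if v is None else v)
    left = int(vals[0] * w / 2)
    right = int(w - vals[1] * w / 2)
    top = int(vals[2] * h / 2)
    bottom = int(h - vals[3] * h / 2)

    # Draw the central "neutral" zone; a pupil outside of it would
    # count as a direction command (after confirmation).
    cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
    cv2.imshow('zones', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
board.exit()
```

Turning the pots while watching the overlay makes it easy to tune where “looking left” begins without touching any code.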

 

Hardware: to-do list for writing a tutorial 🙂

  • Camera: USB or PiCam? Performance did not seem to make a difference; I’ll publish some tests soon…
  • Arduino and potentiometers
  • Arduino and collision detection
  • Confirming the command (see the sketch below)
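The confirmation step has not been written up yet, so here is only the general idea as a minimal sketch – the dwell-time approach is my assumption for illustration; the actual project may confirm commands differently:

```python
# confirm.py – sketch of dwell-time confirmation: a direction only
# becomes a command after the pupil has stayed in the same zone for
# a number of consecutive frames (all values are illustrative).
DWELL_FRAMES = 15  # roughly half a second at 30 fps

class CommandConfirmer:
    def __init__(self, dwell_frames=DWELL_FRAMES):
        self.dwell_frames = dwell_frames
        self.zone = None
        self.count = 0

    def update(self, zone):
        """Feed the detected gaze zone ('left', 'right', 'forward', ...
        or None) once per frame; returns a confirmed command or None."""
        if zone == self.zone:
            self.count += 1
        else:
            self.zone = zone
            self.count = 1
        if zone is not None and self.count == self.dwell_frames:
            return zone  # fires exactly once per dwell
        return None
```

Fed once per frame, this issues each command a single time after the gaze has rested in a zone long enough, instead of chasing every glance – which is exactly why the confirmation exists in the first place.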
