
Magnetic fields are a fascinating subject – they surround us, protect life on Earth and have enabled orientation over long distances, e.g. for migratory birds and for us humans, long before we relied on smartphones with GPS 😉

We have no sensory organ for magnetic fields, so in physics classes we discuss magnetic fields with the help of their force effects. Iron filings are used to visualize the force field of magnets, because they align into line patterns indicating the magnetic field lines.

This is a very vivid visualisation – but there are several limitations: it is only qualitative (you cannot measure the magnetic flux density this way) and it works only in the horizontal plane. Furthermore, what is, for example, “between” the lines that result from the alignment of the iron filings? Should the magnetic field not decrease gradually with distance, instead of in discrete steps?

My friend Finja and I tried several visualization approaches to circumvent these limitations. First, we experimented with ferrofluids (great fun, very messy, still only qualitative) and then moved on to using sensors, microcontrollers and some other electronic bits and pieces.

Our first scanner was built from ABS tubes and customized 3D-printed parts we developed on our own. It was supposed to operate with parallel kinematics, keeping both motors in one place – so they would not interfere with the measurements taken. But we could not get hold of the right drive parts for it, and the strings slipped too much to be precise.

Since the deadline for the science fair was getting close, we chose another path and started all over again. This time we used an old fischertechnik plotter from the 1980s that had been lying around in our school ever since (unopened!).

We exchanged the steel tubes for carbon ones and changed the design a little bit as well. Instead of a pencil, it now carries a breakout board for an SMD magnetic sensor commonly used in robotics to orient a robot relative to Earth’s magnetic field: the HMC5883L. It measures the magnetic flux density in three dimensions and with eight different ranges.
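Just to give an impression of how simple talking to this sensor is, here is a minimal, hypothetical Python sketch reading an HMC5883L over I2C. (In our scanner the microcontroller does this job; the sketch assumes a Raspberry Pi with the smbus module and the sensor on I2C bus 1 – it is an illustration, not our actual firmware.)

    import smbus
    import struct
    import time

    ADDR = 0x1E                       # fixed 7-bit I2C address of the HMC5883L
    bus = smbus.SMBus(1)              # I2C bus 1 on newer Raspberry Pis

    bus.write_byte_data(ADDR, 0x00, 0x70)  # config A: average 8 samples, 15 Hz
    bus.write_byte_data(ADDR, 0x01, 0x20)  # config B: gain (one of the 8 ranges)
    bus.write_byte_data(ADDR, 0x02, 0x00)  # mode: continuous measurement

    while True:
        # six data registers starting at 0x03, ordered X, Z, Y (!)
        raw = bus.read_i2c_block_data(ADDR, 0x03, 6)
        bx, bz, by = struct.unpack('>hhh', bytes(bytearray(raw)))
        print('%6d %6d %6d' % (bx, by, bz))  # raw counts; scale by gain for µT
        time.sleep(0.1)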

We used a motor shield, added an SD card shield we built ourselves from a micro SD card adapter, an LCD display and a Bluetooth module to make the device controllable via a smartphone app.


We can now analyse the magnetic field by measuring its flux density and visualize it in 3D. For the moment we only scan regions (a plane) and automatically measure the three-dimensional vector (Bx | By | Bz) at every single measuring point, with its coordinates (X | Y) assigned.

The result is a vector field – a bit uncommon to work with in 9th-grade maths 😉

But the principle is quite easy to understand: imagine that the invisible magnetic flux is measured at a point and visualized as an arrow – the direction of the arrow then gives you the orientation in space, and its length gives you the value or “intensity”. In physics you use vectors the same way on an inclined plane to determine the forces acting on an object – and there it is very obvious that both the direction and the value are important!

To imagine a vector field, you can simply think of a grain field, with every stalk representing a vector. If no external force acts upon them, the stalks grow more or less identically (same orientation) and reach the same height. External forces can now change the orientation of the vectors in a particular part of the field, like wind blowing over it.

Coming back to magnetic fields, we can determine the spatial orientation and intensity of the magnetic flux at any given point on a 2D plane (the scanner area).

As a result, we have a 5D vector field of the examined magnets (two spatial coordinates plus three field components), which can be visualized with the aid of math software such as Matlab and investigated further.
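We do the actual visualization in Matlab, but the principle translates to any environment. As an illustration, here is a small Python/matplotlib sketch that plots the in-plane components of such a scan – it assumes a hypothetical CSV file with one “x, y, bx, by, bz” line per measuring point (not our actual file format):

    import numpy as np
    import matplotlib.pyplot as plt

    # one measuring point per line: x, y, bx, by, bz
    x, y, bx, by, bz = np.loadtxt('scan.csv', delimiter=',', unpack=True)

    # arrow = in-plane direction; colour = total flux density, so the
    # out-of-plane component bz is not lost in the 2D picture
    b = np.sqrt(bx**2 + by**2 + bz**2)
    q = plt.quiver(x, y, bx, by, b)
    plt.colorbar(q, label='|B| (sensor units)')
    plt.gca().set_aspect('equal')
    plt.title('Scanned magnetic flux density (in-plane components)')
    plt.show()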

We have some nice examples of current loops and magnets – for example, a magnified scan of a stepper motor core (around 130 µT). You can also visualize how the Earth’s magnetic field is distorted by even small steel items like a keyhole saw blade.

For further development we want to extend the scanning device so that the height of the scan plane can be changed as well – then we would measure a 6D vector field (three coordinates plus three field components)…

 

We won first prize in Physics at the federal state level of the science fair Jugend forscht (Schüler experimentieren) and a special prize for nondestructive examination.
A special thanks to Sebastian Groß from Mathworks – he supported us with the Matlab licences we needed to visualize and analyse the data!

 

You can download the PDF here, but it’s in German. We might translate the most interesting parts later on…

MAG3D – Analyse von Magnetfeldlinien als 3D-Vektorfeld (“MAG3D – analysis of magnetic field lines as a 3D vector field”)

 

 

This is really incredible – WASP is a cool Italian company developing 3D printers with a vision of supporting people with them (3D printing in architecture) to make the world a better place. That’s where their name comes from: World’s Advanced Saving Project.

I contacted them to ask if I might get a discount on their printers, or a kit instead of a ready-made printer (well, you know, I love building things) – because for the eye-controlled wheelchair and my next project, the 3D scanner for magnetic fields, I needed to print a lot of designs during the development process.

Marino and Stefano from WASP were so amazed by the project that they decided to send me one! When I got their answer by mail, it was extremely hard to remain calm and not freak out with happiness – I was at school at the time (yeah, smartphones and schools – a combination most teachers do not like).

THANKS a lot to the WASP team!

Wear IT

Posted: 15. June 2016 in Allgemein

A few months ago I was in Berlin at the WearIT festival! It took place in an old industrial building – a really cool location! There were many interesting talks and I learned a lot about wearable electronics! I also gave a talk in English. I was very nervous, but during my talk I was nearly relaxed! Besides, the people there were so nice and kind! They motivated me and took care of me! I really enjoyed these three days! Getting to know people who are interested in electronics helped me to get along with my project. I met Niklas Roy there; he is a really cool, technically talented guy who gave me the electric wheelchair he used in his project “Gallerydrive” – here’s his blog. He made the huge and very(!) powerful wheelchair follow lines on the gallery floor while reading out information about the displayed art.
And I got all the schematics as well, so I don’t have to worry about reverse engineering the way the controller joystick works in combination with the huge and powerful motors. I’ll try to integrate that into my eye-controlled wheelchair…

 

 

Amazing, I can spend 2 weeks in Berlin – my favourite city!

Paul and I were invited to Tag der Talente (“Day of Talents”). It’s an event where young students, talented in different fields (like art, literature, dance, languages and of course STEM), can meet each other and do workshops.

We did the introduction yesterday with our project live on stage!

It is a great atmosphere and I will attend the workshop “Junior Lab” today – can’t wait to see what we can do there 🙂

 

 

 

From Wednesday on I will attend WEAR-IT, an amazing art and science festival in Berlin dedicated to wearable electronics and fashion technologies. It looks like my dreams are coming true, since I have always loved building (IT-like) things you can wear, like assistive technology or art. I can’t wait to meet the designers, tinkerers and like-minded people there!

And the following week I’ll meet a lot of friends and students from Jugend forscht (the German science fair) again, since we are invited to meet Dr. Angela Merkel, our Federal Chancellor…

I’ll post some more infos soon, must leave for the “Junior Lab” workshop now 🙂

Finally the Assistive Context-Aware Toolkit (ACAT) developed by Intel is available as open source!! It is actually used by Stephen Hawking: “ACAT is useful for Microsoft Windows developers who are interested in developing assistive technologies to people with ALS or similar disabilities. Also for researchers who are working on new user interfaces, new sensing modalities or word prediction and wanting to explore these innovations in this community.” It is made available under the Apache License, Version 2.0.
More information is on the website: https://01.org/acat.

This is great because the software was not publicly available for over ten years – now everybody can use it!!

4x faster with quadcore-support!!

Posted: 10. July 2015 in Allgemein

Software for the eyetracker-controlled wheelchair: now with multicore support for the Raspberry Pi 2 B!

Installation of OpenCV, the Firmata tools and the corresponding Python bindings

This brief tutorial assumes you have just created an SD card for the Raspberry Pi 2 B and started it for the first time (the config tool then prompts you to make some adjustments, like expanding the filesystem, changing the localisation etc.). Adafruit has a good tutorial on this, for example.

If you want to use the Pi Camera, please enable it in the configuration with raspi-config (it will start after the first boot anyway).

The Debian image used is Raspbian Wheezy (from 2015-05-05; download here: https://www.raspberrypi.org/downloads/).

The login will be user: pi and password: raspberry (mind a possibly different keyboard layout; e.g. on a German keyboard the z and y are swapped!).

 

Installing OpenCV with multicore support:

Here I will follow Adrian Rosebrock’s tutorial, except for the virtual environment, as it makes things more complicated for this scenario. You can find the tutorial here: http://www.pyimagesearch.com/2015/02/23/install-opencv-and-python-on-your-raspberry-pi-2-and-b/

 

Type the following commands to get everything updated:

  • sudo apt-get update
  • sudo apt-get upgrade
  • sudo rpi-update

1) Then some developer tools will be installed:

  • sudo apt-get install build-essential cmake pkg-config

2) … and then some software for loading and handling images, plus some GUI tools:

  • sudo apt-get install libjpeg8-dev libtiff4-dev libjasper-dev libpng12-dev
  • sudo apt-get install libgtk2.0-dev

4) After that, software for video processing is required:

  • sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev

5) The next step is to install libraries used within OpenCV:

  • sudo apt-get install libatlas-base-dev gfortran

Then, in step 6, you will install pip, a tool to easily install other packages:
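In essence (this mirrors step 6 of Adrian’s tutorial; the bootstrap script is the standard pip installer):

  • wget https://bootstrap.pypa.io/get-pip.py
  • sudo python get-pip.py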

I drop step 7 of Adrian’s tutorial (virtual environment) and continue with step 8…

 

Step 8 is to install the Python 2.7 development files – and please mind the „-j4“ option in the make command of step 9 later on: it compiles on all 4 cores, making that step much faster (a suggestion from the discussion of Adrian’s tutorial):

  • sudo apt-get install python2.7-dev

Then we need support for arrays in Python:

  • sudo pip install numpy --upgrade

 

Step 9 is getting the source of OpenCV and compiling/installing it.
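In essence it boils down to the following (roughly following Adrian’s tutorial – please check his post for the exact cmake flags):

  • wget -O opencv-2.4.10.zip http://sourceforge.net/projects/opencvlibrary/files/opencv-unix/2.4.10/opencv-2.4.10.zip/download
  • unzip opencv-2.4.10.zip
  • cd opencv-2.4.10 && mkdir build && cd build
  • cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
  • make -j4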

 

This takes some time! Only on the Raspi 2B: “-j4” makes it much faster, using all 4 cores for compiling!

then do a

  • sudo make install

and

  • sudo ldconfig

NOW you have OpenCV 2.4.10 installed 🙂
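A quick sanity check that the Python bindings are in place (it should print 2.4.10):

  • python -c "import cv2; print cv2.__version__"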

 

Next step: Pyfirmata for controlling the Arduino over USB from your Raspberry:

  • sudo pip install pyfirmata

(Pyfirmata was short, wasn’t it?)

 

This should give you OpenCV 2.4.10 with quadcore support 🙂 Now the tracker runs much faster! I haven’t found any improvements with the Pi Camera yet, but I will try this again soon…

Preface

Our aim is to provide additional assistance to people with severe illnesses like ALS or MS, which can leave a person unable to move anymore – close to a locked-in state, where an active and alive mind is locked inside a body with extremely limited communication possibilities left: eye movements. Our inspirations are people like the famous physicist Stephen Hawking, who now uses a computer speech synthesizer controlled by a small muscle movement in his cheek, or TEMPT, a graffiti artist who was locked inside his body for 7 (seven!) years before friends built him an eyetracker that allows him to paint again – virtually, using his eyes.

Of course, developing medical devices comes with serious responsibility. We started with a small model using an educational robotics platform and moved on to a full-size wheelchair later on.

However, please keep in mind that this project can only partially assist people with severe disabilities to get some freedom and independence back – they still have to rely on other people to care for them.

It is a bit sad to discuss the limitations of a development if you start with such “noble ideals”, but as said, any invention or development should be assessed for what it is good at and what its limitations are – or in other words, you should do some technology impact assessment [without losing your enthusiasm, I hope] 🙂

Before I start with the walk-through, please keep in mind that the people this project is dedicated to can be extremely dependent on other people’s help and care – (re)act, (re)build, (re)design and use it carefully and thoughtfully. I cannot be held responsible for anything you do with this idea.

Having said that – let’s begin with the fun part! All files, ideas and concepts are published under CC, so basically do anything you want with them as long as you point to the origins and don’t make a profit from it 🙂

 

1. Step / Raspi install and configuration

I really advise using the new quadcore Raspberry Pi 2 (B) – it is way faster than the older one and results (with quadcore support in OpenCV; I’ll come to this later) in a much better user experience if you control anything with your eyes, believe me 🙂

Adafruit has a very good tutorial on getting a Raspi started. I used the Wheezy Debian distribution, which you can download here:

http://downloads.raspberrypi.org/raspbian_latest

What you get is a compressed image file, which you need to unzip. Then you have a .img file which you can write to an empty SD card. Here is the tutorial from Adafruit for preparing an SD card on a Mac: https://learn.adafruit.com/adafruit-raspberry-pi-lesson-1-preparing-and-sd-card-for-your-raspberry-pi/making-an-sd-card-using-a-mac

… and the same for Windows: https://learn.adafruit.com/adafruit-raspberry-pi-lesson-1-preparing-and-sd-card-for-your-raspberry-pi/making-an-sd-card-using-a-windows-vista-slash-7
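If you prefer the command line to the GUI way described in those guides, writing the image on a Mac boils down to two commands. (A sketch only – /dev/disk2 is just an example here; find your real SD card device with diskutil list first, because dd to the wrong device will destroy that disk’s data!)

  • diskutil unmountDisk /dev/disk2
  • sudo dd bs=1m if=2015-05-05-raspbian-wheezy.img of=/dev/rdisk2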

After you have completed these steps, you can eject the card, boot the Raspberry Pi from it and continue with further software installations:

After the project was presented at the German science competition “Jugend forscht”, I found a great tutorial from Adrian Rosebrock proving that it is indeed possible to compile OpenCV for a platform like the Raspberry Pi (ARM CPU) with multicore support.

Since the Raspi has a quad core, this makes the whole detection process nearly 4 times as fast as before – a very convenient way to adjust and use the tracker is your reward for following the steps in his tutorial. I will cover them in a separate post, because he uses a virtual environment, which is great but somewhat confusing at the beginning. So I tried to install everything system-wide instead, and it works with a slight change; I’ll post a detailed explanation about this.

Starting/Configuring the Arduino & Raspberry

You can now start the graphical desktop by typing “startx”. In the developer section of the start menu you will now find the Arduino IDE.

Connect the Arduino to the Raspberry Pi via USB and then start the Arduino IDE. There you select File / Examples / Firmata / StandardFirmata.

Then select the board (e.g. Arduino Uno/Mega) and the port it is connected to.

After that, just compile and upload the Firmata sketch.

You then attach a normal webcam using this test script – it uses analog inputs 0-3 to adjust the areas where a pupil position would be considered a command for a direction (which still has to be confirmed; otherwise the wheelchair would follow all eye movements!). A simplified sketch of the idea follows below.
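The original test script is not reproduced here; the following is only a strongly simplified, hypothetical sketch of the idea (assuming a Firmata-flashed Arduino on /dev/ttyACM0, four potentiometers on A0-A3 and the pupil appearing as the darkest blob in the camera image – written against OpenCV 2.4 / Python 2 as installed above):

    import cv2
    from pyfirmata import Arduino, util

    board = Arduino('/dev/ttyACM0')        # adjust the serial port to your setup
    it = util.Iterator(board)              # background thread for analog reports
    it.start()
    pots = [board.analog[i] for i in range(4)]  # A0-A3: left/right/up/down limits
    for pot in pots:
        pot.enable_reporting()

    cap = cv2.VideoCapture(0)              # the eye camera
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # the pupil shows up as the darkest blob in the eye image
        _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 2.4 API
        if not contours:
            continue
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m['m00'] == 0:
            continue
        h, w = gray.shape
        px = (m['m10'] / m['m00']) / w     # normalised pupil position, 0..1
        py = (m['m01'] / m['m00']) / h
        # potentiometer readings arrive as 0.0..1.0 (None until the first report)
        t = [pot.read() or 0.5 for pot in pots]
        if px < t[0]:
            print('left?')                 # only a candidate command - it still
        elif px > t[1]:
            print('right?')                # has to be confirmed, otherwise the
        elif py < t[2]:
            print('forward?')              # wheelchair would follow every
        elif py > t[3]:
            print('backward?')             # eye movement!

The candidate command is only printed here; in the real device it has to be confirmed explicitly before any motor moves.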

 

Hardware: to-do list for writing a tutorial 🙂

  • camera: USB or PiCam?
  • Performance did not seem to make a difference; I’ll publish some tests soon…
  • Arduino and potentiometers
  • Arduino and collision detection
  • Confirming the command