Internet of Things Project - iGoPiGo




iGoPiGo - indoor autonomous driving


The goal of the project is to make a Raspberry Pi toy car navigate indoors on its own. To avoid spending too much time building the car itself, we purchased a GoPiGo kit from Dexter Industries, which includes all the necessary parts and instructions to build one.

To tackle the problem of navigation, we use iBeacons (Bluetooth Low Energy devices) as checkpoints. A Bluetooth dongle attached to the Raspberry Pi lets it read the RSSI of the beacons. We also built a temporary track for the GoPiGo to follow, and a camera attached to the Raspberry Pi feeds images to a neural network that decides where to go on the track.

A Wi-Fi dongle on the Raspberry Pi sends the ID of the nearest iBeacon to a cloud datastore (Firebase) as the car travels along the track.

FAQs:
1) Why use a Neural Network for this?
Answer) Because it's cool. Also, this was inspired by ALVINN.

2) Why doesn't it travel continuously?
Answer) We haven't done multi-threading YET.

3) I want to ask/say/suggest something. Where can I do that?
Answer) You will find a comments section under the Contact tab.

TODO:
1) Run the beacon scanning and the neural network predictions in their own threads.

Here is a video of a sample run:

Indoor navigation systems


An indoor navigation system locates people or places inside an enclosed space using various kinds of sensory data: accelerometers, ultrasonic sensors, and radio technologies such as Bluetooth, NFC, and Wi-Fi. The objects to be located carry tags, smartphones, or other sensing and communication devices. There is currently no standard technology for indoor navigation. Our project combines radio signal data with an artificial neural network, aided by visual cues, to make the bot navigate indoors. We use Bluetooth iBeacons as the nodes to be tracked and navigated to, basic computer vision for lane border detection, and an artificial neural network to control the direction.



State of the Art

Artificial neural networks belong to machine learning and the cognitive sciences and are used for statistical learning. Loosely modeled on biological neural networks, a neural network learns a mapping from the kinds of inputs the system may receive to the responses it should produce. Numerical weights are assigned to the connections between nodes and are adjusted by training on a large number of example inputs from the environment in which the system is expected to function. For our project, we took hundreds of images of the track the GoPiGo needs to follow, along with its surroundings, so the network could learn the correct response to different inputs. Neural networks are used extensively in industry today, from speech recognition to self-driving cars.

Computer vision is a field of digital image processing used to acquire, process, and analyze images, generally from the real world, and make logical decisions based on them. It is used for motion detection, facial recognition, license plate OCR, and more. Nowadays automobiles use computer vision to detect the lane the car is driving in and alert the driver if it veers off its path; this can be extended to control the motion of the vehicle and keep it in the correct lane. We used basic computer vision in the form of Canny edge detection so the bot can distinguish the borders of the track. This also removes unnecessary detail that might otherwise corrupt the training of the bot. The Bluetooth beacons then serve as our checkpoints: the destination is an iBeacon that the bot is supposed to locate and reach.

Architecture (How Everything Fits Together)

Use case scenario

We identified a basic use case to build our bot around, and to demonstrate functionality that can later be extended to a more complex system. In this scenario, the bot takes the intended destination as input from the user and travels to it along a defined path. In our demonstration, we assume the motion is in a straight line; this can later be extended to indoor workspaces with multiple crossing corridors. The bot continuously sends the nearest beacon ID to Firebase, which lets the user view its location on a website in real time.

Implementation

Github (the purpose of each file and directions on how to run it can be found in the repository's readme and in comments within the files.)


ibeacons

We bought a set of iBeacons (Model O) from Roximity. On the Raspberry Pi, we interface with the beacons through hcitool. A nice tutorial is available here that explains how to install all the required libraries and read the RSSI values and other info from the beacons.
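As a minimal sketch of that interface (assuming BlueZ's hcitool is installed; this only lists device addresses, and reading RSSI as our project does additionally requires parsing hcidump output, which the tutorial covers):

import subprocess

# Start a BLE scan with hcitool (part of BlueZ) and print the
# addresses of discovered devices as they come in.
scan = subprocess.Popen(['sudo', 'hcitool', 'lescan', '--duplicates'],
                        stdout=subprocess.PIPE)
for line in iter(scan.stdout.readline, b''):
    print(line.strip())  # e.g. "D0:39:72:C4:1A:7F (unknown)"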


Raspberry Pi Car

We bought the GoPiGo kit from Dexter Industries. They have nice tutorials on setting it up, and the software is open source. We used the latest Raspberry Pi 2; more information about the Pi can be found on their website.
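For reference, driving the car from Python looks roughly like this (a minimal sketch using Dexter's open-source gopigo library; function names may differ between releases, so check their tutorials for your version):

import time
from gopigo import fwd, left, stop  # Dexter's GoPiGo motor helpers

fwd()           # start rolling forward
time.sleep(1)   # let it drive for a second
left()          # turn left
time.sleep(0.5)
stop()          # halt the motors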



Navigation

We built an artificial track for the car and then trained it to stay on the track. We took 200 images along with their labels (forward, soft_left, soft_right, ...). Before feeding those images to the neural network, we pre-processed them using OpenCV as follows (a minimal sketch follows the list; you can find more information about OpenCV here):

  • Import them as grayscale.
  • Crop the image.
  • Resize to (196, 98).
  • Apply Canny edge detection.
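A minimal sketch of these steps (the crop region and Canny thresholds here are illustrative; ours depended on how the camera was mounted):

import cv2

def preprocess(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # load as grayscale
    img = img[60:, :]                 # crop away the area above the track (illustrative)
    img = cv2.resize(img, (196, 98))  # network input size: 196 wide, 98 tall
    edges = cv2.Canny(img, 50, 150)   # keep track borders, drop texture
    return edges.reshape(-1) / 255.0  # flatten to a 19208-long feature vector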

After that we split the data into train and test sets (75/25). Using the training set, we trained an artificial neural network with 19208 (196x98) input nodes, 2000 hidden nodes, and 5 output nodes. The training error went down to 1% in 6 runs, and the error rate on the test data is somewhere around 25%; more data and tuning the parameters should bring it down. Training was done on our laptop (even the latest Pi doesn't have enough memory for this), and the model was then saved and reused on the Pi with Python's pickle library.
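A minimal sketch of the training step (assuming the nolearn DBN interface visible in the log below; image_paths, labels, and the hyperparameters are illustrative):

import pickle
import numpy as np
from nolearn.dbn import DBN
from sklearn.cross_validation import train_test_split  # sklearn.model_selection in newer versions

# Build the dataset with the preprocess() sketch above.
# labels: integers 0..4 for the five directions (encoding is illustrative).
X = np.array([preprocess(p) for p in image_paths])
y = np.array(labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

# 19208 inputs (196x98 pixels), 2000 hidden units, 5 outputs.
net = DBN([X_train.shape[1], 2000, 5], learn_rates=0.3, epochs=10)
net.fit(X_train, y_train)
print('test error:', np.mean(net.predict(X_test) != y_test))

# Save the model on the laptop; load the same pickle later on the Pi.
with open('model.pkl', 'wb') as f:
    pickle.dump(net, f)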

Following are some example images (raw on the left, processed on the right):

On top of the predictions from the neural network, the program applies some additional logic: don't go left immediately after a right, or vice versa. Without it, we sometimes noticed the car getting stuck in limbo, oscillating left and right continuously. More training examples should help get rid of this issue.
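Roughly, the idea looks like this (a sketch; the label encoding here is hypothetical):

LEFT, RIGHT, FORWARD = 2, 3, 1  # hypothetical label encoding

def smooth(prediction, last_move):
    # Veto a left right after a right (or vice versa) so the car
    # doesn't oscillate in place; keep going forward instead.
    if (prediction, last_move) in ((LEFT, RIGHT), (RIGHT, LEFT)):
        return FORWARD
    return prediction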

Right now we are doing sequential programming, so everything happens in a linear fashion. The following diagram shows the flow of the program:




As the car moves along the track, we display its location (the closest beacon ID) here in real time.
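In code, the sequential flow looks roughly like this (a sketch; all helper names are illustrative, and the real program is project.py in the repository):

def run(target_id):
    net = load_model('model.pkl')              # model trained offline on a laptop
    camera = get_camera()                      # "Getting Camera Reference"
    position = closest_beacon(scan_beacons())  # "Getting starting position"
    last_move = None
    while position != target_id:
        frame = preprocess_frame(capture(camera))
        move = smooth(net.predict([frame])[0], last_move)  # "Getting prediction"
        drive(move)                            # forward / soft_left / ...
        last_move = move
        position = closest_beacon(scan_beacons())
        publish_to_firebase(position)          # updates the live web page
    stop()
    print('YOU HAVE ARRIVED AT YOUR DESTINATION')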

Results


The training error for the neural network went close to 0 in about 10 epochs. The test error came out around 25%, which is not bad considering we trained on only 145 data points. For the iBeacons, we found that placing them about 2 meters apart gives a reasonably clear signal difference, combined with a threshold of 2: a beacon is considered the closest only if it gives the strongest signal 2 times in a row.
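A sketch of that thresholding (assuming scans shaped like the {beacon_id: RSSI} dicts printed in the sample run below):

THRESHOLD = 2  # consecutive strongest readings needed to trust a beacon

def closest_beacon(scans):
    # scans: iterable of {beacon_id: rssi_string} dicts. RSSI is in
    # negative dBm, so the maximum value is the strongest signal.
    streak, last = 0, None
    for scan in scans:
        strongest = max(scan, key=lambda b: int(scan[b]))
        streak = streak + 1 if strongest == last else 1
        last = strongest
        if streak >= THRESHOLD:
            return last
    return None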


Output of a sample run:

pi@raspberrypi ~/Desktop/images $ sudo python project.py 2565
ble thread started
gnumpy: failed to import cudamat. Using npmat instead. No GPU will be used.
/usr/local/lib/python2.7/dist-packages/nolearn/dbn.py:17: UserWarning: 
The nolearn.dbn module will be removed in nolearn 0.6.  If you want to
continue to use this module, please consider copying the code into
your own project.  And take a look at Lasagne and nolearn.lasagne for
a better neural net toolkit.

  """)
Target selected: 2565
Getting Camera Reference
Getting starting position
{'4071': '-84', '908': '-68', '2565': '-71', '1303': '-75'}
{'4071': '-80', '908': '-68', '2565': '-71', '1303': '-74'}
{'2565': '-71', '908': '-64', '4071': '-83', '1303': '-75'}
starting position 908
Getting prediction
Prediction: [1]
{'4071': '-81', '908': '-60', '2565': '-74', '1303': '-73'}
{'2565': '-74', '908': '-61', '4071': '-79', '1303': '-70'}
{'2565': '-75', '908': '-59', '4071': '-80', '1303': '-70'}
Current Beacon: 908
Getting prediction
Prediction: [1]
{'4071': '-83', '908': '-68', '2565': '-75', '1303': '-69'}
{'4071': '-83', '908': '-68', '2565': '-77', '1303': '-65'}
{'4071': '-80', '2565': '-74', '1303': '-73'}
{'2565': '-83', '908': '-69', '4071': '-86', '1303': '-66'}
Current Beacon: 908
Getting prediction
Prediction: [5]
{'4071': '-86', '908': '-69', '2565': '-84', '1303': '-67'}
{'4071': '-80', '908': '-80', '2565': '-74', '1303': '-65'}
{'4071': '-85', '908': '-81', '2565': '-75', '1303': '-69'}
Current Beacon: 1303
Getting prediction
Prediction: [1]
{'4071': '-76', '908': '-76', '2565': '-64', '1303': '-67'}
{'2565': '-79', '908': '-71', '4071': '-80', '1303': '-68'}
{'2565': '-69', '908': '-72', '4071': '-78', '1303': '-67'}
{'2565': '-84', '908': '-85', '4071': '-83', '1303': '-73'}
Current Beacon: 1303
Getting prediction
Prediction: [1]
{'4071': '-74', '908': '-70', '2565': '-69', '1303': '-70'}
{'2565': '-65', '908': '-83', '4071': '-78', '1303': '-68'}
{'4071': '-86', '908': '-70', '2565': '-71', '1303': '-69'}
{'4071': '-75', '908': '-74', '2565': '-69', '1303': '-64'}
Current Beacon: 1303
Getting prediction
Prediction: [1]
{'2565': '-70', '908': '-85', '4071': '-80', '1303': '-67'}
{'4071': '-75', '908': '-74', '2565': '-72', '1303': '-67'}
{'2565': '-69', '908': '-83', '4071': '-75'}
{'4071': '-74', '908': '-83', '2565': '-66', '1303': '-64'}
Current Beacon: 1303
Getting prediction
Prediction: [3]
{'2565': '-74', '908': '-79', '4071': '-74', '1303': '-70'}
{'2565': '-75', '908': '-76', '4071': '-79', '1303': '-68'}
{'2565': '-69', '908': '-74', '4071': '-73', '1303': '-62'}
Current Beacon: 1303
Getting prediction
Prediction: [1]
{'4071': '-72', '908': '-69', '2565': '-71', '1303': '-74'}
{'4071': '-82', '908': '-70', '2565': '-70', '1303': '-75'}
{'4071': '-71', '908': '-70', '2565': '-69', '1303': '-74'}
{'4071': '-72', '908': '-71', '2565': '-64', '1303': '-75'}
Current Beacon: 1303
Getting prediction
Prediction: [3]
{'908': '-75', '2565': '-71', '1303': '-75'}
{'4071': '-86', '908': '-71', '2565': '-66', '1303': '-75'}
{'2565': '-71', '908': '-75', '4071': '-75', '1303': '-81'}
Current Beacon: 2565
Current: 2565 target: 2565
YOU HAVE ARRIVED AT YOUR DESTINATION

ibeacons

They are good for proximity data, but don't rely on them to judge the distance between devices. The beacon signal fluctuates a lot, so the beacons should be placed relatively far apart so that only one beacon gives a strong signal at any given time. In short, use them only to find out whether you are close to something, not how close you are.


Using Neural Networks

Neural networks worked pretty well for us even though we did not have a significant amount of training data. However, if you want to use them for navigation in a reliable way, I recommend collecting as much training data as possible (the more the better). You can also do much more than navigation with the pictures from the camera (object detection, etc.). An important point to note: you will have to train your model on your own machine, since the Pi is not powerful enough.

GoPiGo

While the GoPiGo gave us a head start on our project, it is not intended to be a perfect device on its own but a hackable one. In fact, part of the point of using a neural network for navigation was to keep the car on a straight path, since the GoPiGo does not drive in a completely straight line by itself.


We believe a lot can be done to improve on our design. First, the current design can be extended to cover multi-corridor indoor environments. The integration with a webpage makes it easy for the user to locate the device. The iBeacons can continue to serve as checkpoints, or another proximity sensor, such as an ultrasonic sensor, could be used to sense how close the bot is to the location nodes and overcome the fluctuations of the iBeacons.

For guided navigation, the neural network we developed could be made stronger with more training data points. Alternatively, computer vision could be used on its own to detect lanes and track borders, with the slope of the track line taken as an input to control the bot. One could also build a 3D map of the indoor environment and use it to guide the robot's navigation.

In our demonstration, we take the destination from the command line rather than from the webpage. This could be changed by making the webpage more complete so it can take input from the user.

Additionally, computer vision and machine learning could be used for obstacle detection, making the bot stop whenever an obstacle appears. The video feed on which the bot makes its start/stop decision could be streamed live to the website so the user can view the path and any obstacles. We initially considered including this in our project as well, but time constraints prevented us from doing so.



Appendix A: Tools and resources used

  • Hardware: GoPiGo, iBeacons, Raspberry Pi, Wi-Fi dongle, Bluetooth dongle, Raspberry Pi camera, servo package.
  • Software: Python, OpenCV, scikit-learn, NumPy, nolearn, BlueZ, hcitool.
  • For both hardware and software, the installation instructions can easily be found online.


Appendix B: Software Description

  • Github (the purpose of each file and directions on how to run it can be found in the repository's readme and in comments within the files.)

Ahmad Chatha
Columbia University
ac3877@columbia.edu
Dhruv Kuchhal
Columbia University
ph: 201-208-9972
dk2814@columbia.edu