The Final Run Part I

Our final leg of the competition involved a lot of hard work and critical decisions by the team, far more than can be summarised in a single blog post. So I shall split it up into parts to document the process, so that others will not fall into the same traps we did!

The High-Level Software and Low-Level Embedded Architecture

Our system has a slew of sensors: a 3-axis gyroscope, a 3-axis magnetometer and a 3-axis accelerometer to give orientation and heading, a pressure sensor for depth and two webcams for vision. It previously had two sonars (for depth ranging and obstacle avoidance) and a flow rate meter for velocity control, but we took those out once we realised how simple the mission parameters actually were. The Arduino is in charge of sensing, actuation (i.e. driving the thrusters) and the control systems which realise our autonomous behaviour. The Raspberry Pi is in charge of high-level algorithmic control and performs the more computation- and memory-intensive work. You can think of the Pi as the big boss of the Arduino, which in turn supervises everybody else except the webcams, which are connected directly to the Pi. Autonomous platforms usually consist of the three layers mentioned above: sensing; controls and A.I.; and actuation. The diagrams below show our software and hardware architecture prior to the final few days leading up to the competition.
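The Pi issues its high-level commands to the Arduino over a serial link. The post doesn't show that code, so here is a minimal sketch of what the Pi side might look like, assuming pyserial on the Pi's UART; the "YAW &lt;degrees&gt;" text format and the baud rate are illustrative assumptions, not our actual protocol:

import serial

def send_command(port, cmd):
    # One newline-terminated ASCII command per message (hypothetical format)
    port.write((cmd + "\n").encode("ascii"))

# /dev/ttyAMA0 is the Pi's on-board UART; 115200 baud is an assumption
with serial.Serial("/dev/ttyAMA0", 115200, timeout=1) as apm:
    send_command(apm, "YAW 90")     # e.g. ask the APM to hold a 90 degree heading
    send_command(apm, "DEPTH 1.0")  # e.g. ask for a depth of 1 m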

Software Architecture

Embedded Hardware Architecture


The Turning Point

The final lead up to the competition was a crazy week for all of us. Just a week before the competition, it seemed impossible for us even to compete, given that we had not managed to properly waterproof our hull and were also faced with a major crack in its flange. Our mechanical design proved difficult to waterproof, and with waterproofing incomplete we could not test our AUV, right down to the basic control systems, because we simply could not risk getting our electronics (mostly bare boards) anywhere near water! And so that fateful Friday, as I recall it, with time and money both against us, I put forth the idea of withdrawing from the competition rather than going in with a leaky hull and incomplete software, all of which needed far more than the few days we had left to test and tune. So we sat down together and discussed the cards we had left to play: to withdraw from the competition and announce the complications we faced as the reason; to continue attempting to waterproof the original hull we had been working on for months on end; or, the boldest and riskiest move, to build a completely new hull in three days on a minimum budget (we had used up almost all of our cash reserves) and tune and test our control systems and A.I. in the three to four days remaining. After much discussion and deliberation, we decided on the third option: to start from scratch and rebuild a new hull from extremely cheap materials. So we went to IMM to shop for waterproof containers for our hull and found these at Giant: Lock N' Lock Tupperware containers.

Conceptual model of CoconutPi Rev2

The bottles represent vertical thrusters. The scrunched-up plastic bags are the propulsion thrusters. The cylindrical Lock N' Lock box is for the camera and the big one is for the main electronics. It was all to be held together with three lengths of square aluminium profile. MacGyver-style hackish contraptions FTW!

The Countdown

What ensued next was a full day of drilling and hacking to assemble our mechanical body, sensors integrated, within a single day. Loads of marine sealant, epoxy putty, Sugru and plastic-steel epoxy were applied to patch the hull up. The next day we did a lot of waterproofing tests, reapplying sealant and patching up the hull. By the third day it was deemed waterproof and we headed down to Eng Wei's condominium to test the mechanical aspects, such as buoyancy and the righting moment of the ship.


Testing the waterproofness of the ship at my condominium

This was a bad day for us: it started raining heavily in the afternoon just when we had completely waterproofed the AUV, and on top of that the power supply to the area was turned off. So we abandoned the swimming pool tests and focused on water tank tests at the Acoustics Research Lab. We had lost the second-to-last card in our hand. At that point we had only 72 hours left to get our control systems and depth sensors (which were not yet completed) fully working. On top of that we had to integrate the higher-level mission planning and computer vision into the system, which needed time for testing and debugging. Over the next few days we rushed, surviving on very little sleep, loads of coffee and Al Ameen food as we worked late into the night. Tonnes of problems kept surfacing, and we solved issue after issue and mistake after mistake: from accidentally short-circuiting the ArduPilotMega, to nearly killing it (on the day of the qualifiers), to actually killing it (two hours before the finals!) and managing to replace it with the older ArduPilot just in time.


Our setup working into the night: continuously programming, testing and debugging the autonomous underwater robot at the Acoustics Research Laboratory at NUS.

The Qualifiers

By the morning of the qualifying rounds we had only just managed to finish the control systems and had not started on the computer vision aspect. The fortunate thing for us was that the qualifying round only required the AUV to dive underwater and move forward for 10 metres. This was no tall order, as the completed control systems were built around a nine-axis IMU, which meant our AUV was completely autonomous in terms of orientation and we could easily specify the direction we wanted the ship to move in. In addition, the depth pressure sensors were working really well, so we had proper depth control too. Another benefit was that our thrusters, self-assembled from various RC components and powered by lithium polymer batteries, could generate an immense amount of thrust; something we actually tried to reduce, so as to get a wider control range and less current draw on the LiPo batteries. Finding the right motor specification was difficult since it also had to fit the ducted fan design: we could not mill our own thrusters, so it was better to buy custom-made EDFs. This turned out to be a slight advantage for us.


Close-up of our heave/dive motors

The speed we had, despite its current draw penalty, turned into an advantage as we zoomed through the qualifiers, which only required us to complete 10 m in a straight line. Whilst our AUV could hold a heading with the yaw controller making adjustments, it had problems going straight perpendicular to the swimming pool wall and would always veer off at an angle. There was an inaccuracy, or offset, in our yaw readings which did not correspond well with the magnetometers on our smartphones (though the smartphones themselves could be imprecise). Right before our qualifiers we attempted to calibrate against the initial yaw angle, but due to problems with the logic and flow of the sensor initialisation we were unable to fix it in time; otherwise going straight would not have been an issue. Anyhow, qualifying bought us the precious time we needed because, as of that moment, we had not done a SINGLE test on the higher-level A.I. and computer vision. The night before had been spent entirely on controls and a bit of computer vision.
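The calibration we attempted amounts to steering on the offset from the start-up heading rather than on absolute yaw, so that any constant magnetometer bias cancels out. A minimal sketch of that idea (assuming yaw arrives in degrees; our APM-side code differed):

# Relative-yaw calibration sketch: capture the heading at start-up and
# steer on the offset from it, so a constant magnetometer bias cancels out.

def wrap_180(angle_deg):
    """Wrap an angle into [-180, 180)."""
    return (angle_deg + 180.0) % 360.0 - 180.0

class RelativeYaw:
    def __init__(self, initial_yaw_deg):
        self.offset = initial_yaw_deg  # heading when pointed down the lane

    def error(self, current_yaw_deg):
        """Yaw error relative to the start-up heading, in degrees."""
        return wrap_180(current_yaw_deg - self.offset)

# e.g. rel = RelativeYaw(read_yaw()); err = rel.error(read_yaw())
# (read_yaw() is a hypothetical IMU accessor; err is 0 right after calibration)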


Testing out the vessel for the first time during the qualifiers. We had missed the tryouts the day before due to connectivity issues with the Pi and a lack of time.

The Final Push

In some sense, at that point we were all pretty amazed at how much we had completed in such a short period of time. But we still had much work to do on the computer vision portion. The qualifiers had bought us time to push ahead with the software. We performed pressure sensor calibration tests down to about 1.7 m to fit the competition specifications, and the rest of the night was spent getting our OpenCV pipeline right. By then it was the fourth or fifth day we were running on only a few hours of sleep.


Testing down to a depth of 1.7m in the ARL’s deepest tank
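For reference, the depth we control on falls out of the hydrostatic relation P = P_atm + ρgh. A minimal sketch of the conversion, assuming the sensor reports absolute pressure in pascals (the units and the surface-pressure calibration step are assumptions, not our exact code):

# Convert an absolute pressure reading into depth via P = P_atm + rho*g*h.
RHO_WATER = 1000.0   # kg/m^3, fresh water
G = 9.81             # m/s^2

def depth_m(pressure_pa, surface_pressure_pa):
    """Depth below the surface in metres from absolute pressure in Pa."""
    return (pressure_pa - surface_pressure_pa) / (RHO_WATER * G)

# e.g. with ~101325 Pa at the surface, a reading of ~118000 Pa is ~1.7 m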

Our line tracking algorithm proceeded in the following stages:

1. Convert the image to grayscale.

2. Perform black colour thresholding on the single grayscale channel.

3. Clean up the image with a series of erosion morphological operations.

4. Compute the Hough Transform to derive lines corresponding to the edges of the black line.

5. With the array of line parameters from the Hough Transform, perform linear regression (minimising the sum of squared errors under a linear hypothesis) on the data to derive the parameters of the line of best fit.

6. From the fitted line's x and y parameters, compute the angle of the line and the turn required to orientate the AUV along it.

After tweaking the algorithm to instead apply relative yaw commands from the Pi to the Arduino, we were able to achieve pretty accurate results following straight and even curved lines. A rough sketch of the pipeline is below.
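Here is a minimal sketch of the six stages using OpenCV's cv2 API and NumPy (our actual code used the older cv API, and the thresholds here are illustrative); it fits the regression over the Hough segment endpoints and returns the relative yaw needed to line up with the track:

import cv2
import numpy as np

def line_heading(frame, black_thresh=60):
    # 1. Convert to grayscale.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # 2. Threshold for black: dark pixels become foreground (255).
    _, mask = cv2.threshold(gray, black_thresh, 255, cv2.THRESH_BINARY_INV)
    # 3. Clean up noise with a couple of erosions.
    mask = cv2.erode(mask, np.ones((5, 5), np.uint8), iterations=2)
    # 4. Hough transform to get segments along the black line's edges.
    segments = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=50,
                               minLineLength=30, maxLineGap=10)
    if segments is None:
        return None
    # 5. Least-squares fit over all segment endpoints (x as a function of y,
    #    so a near-vertical line in the image is well-conditioned).
    pts = segments.reshape(-1, 4)
    ys = np.concatenate([pts[:, 1], pts[:, 3]]).astype(float)
    xs = np.concatenate([pts[:, 0], pts[:, 2]]).astype(float)
    slope, _ = np.polyfit(ys, xs, 1)
    # 6. Angle of the fitted line from the image's vertical axis: the
    #    relative yaw command needed to orientate along the line.
    return np.degrees(np.arctan(slope))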

Sadly, we did not take videos of our successful runs before the dunking incident. That's it for part I of our final run; I will be blogging part II soon, which will cover the finals themselves! Stay tuned!


Going for one of the test runs


Setting up one of the tougher obstacles of the course in the tank for testing, after we had line tracking working. Sadly, various delays in testing, plus water accidentally seeping into the ship, meant we were unable to get this portion of the course working.


Opening up our Tupperware Coconut Pi for inspection and fixes on the internal circuitry.


Dead beat at three in the morning. We had only two to three hours of sleep a night for the last week leading up to the competition and camped overnight at the water tank in school.

External Interrupts on the ArdupilotMega

Coconut Pi uses a microcontroller platform as a bridge between the main controller board, our Raspberry Pi, and the sensors and actuators. Naturally, part of the work, and of the problems that come with it, lies in interfacing with the various hardware over the slew of protocols available: UART, I2C, SPI, PWM and analog. Our ArduPilotMega (APM), based on the very popular open-source Arduino platform, can interface using all of these. In fact, the sensors on our system make use of everything in the following list:

1. Two UARTs connected to the Pi, one as a CLI debugging interface and the other as the command channel.

2. Two UARTs connected to two Ultrasonic sensors

3. Magnetometer connected to I2C

4. Pressure sensor connected to I2C

5. Inertial Measurement Unit connected to I2C

6. Two Current and Voltage sensors connected via analog

7.  Flow sensor interfaced via PWM.

Interfacing with the Flow Sensor

Now, the flow sensor is actually the topic of this blog post. Why? Because it was a huge source of frustration. We used these half-inch diameter flow rate sensors from Seeed Studio. Interfacing with one is conceptually trivial: the sensor outputs pulses at a frequency that has a linear relationship with flow rate (L/min). A flow rate of, say, 1 L/min generates a roughly 4 Hz signal and 10 L/min generates about 60 Hz. You can get more information on exactly how it works, and the linear graph it produces, from the wiki.
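As a quick illustration of that linear mapping, here is a sketch that simply interpolates between the two approximate points quoted above; for real calibration, use the curve from the wiki:

# Illustrative conversion from measured pulse frequency to flow rate,
# fitting a straight line through the two (approximate) points above:
# 1 L/min ~ 4 Hz and 10 L/min ~ 60 Hz.
F1, Q1 = 4.0, 1.0     # Hz, L/min
F2, Q2 = 60.0, 10.0

def flow_lpm(freq_hz):
    """Linearly interpolate flow rate (L/min) from pulse frequency (Hz)."""
    return Q1 + (freq_hz - F1) * (Q2 - Q1) / (F2 - F1)

# e.g. with pulses counted over the last second: flow_lpm(pulse_count / 1.0)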

There are a couple of ways to interface with the sensor. The simplest to understand is constant polling to check for the occurrence of a pulse, but such a method wastes precious cycles on the microcontroller and is not recommended. A better way is to use interrupts, whether external interrupts or pin change interrupts. These are capable of detecting a low level, a rising edge, a falling edge or any edge at the pin. Thus, to detect each pulse, we can trigger on the rising (or falling) edge and count the pulses as they occur.

The Problem

In a nutshell, I was having trouble getting the external interrupt to work on the APM. The APM is a highly customised board to suit autopilot needs, and I was lucky enough to find two pins on the board corresponding to two of the interrupts defined by Arduino on their page. Here is the table extracted from it:

Board          int.0  int.1  int.2  int.3  int.4  int.5
Uno, Ethernet  2      3
Mega2560       2      3      21     20     19     18
Leonardo       3      2      0      1
Due            (see below)

Alright, so based on that I realised I had GPIO 2 and GPIO 3, which are actually Output Channel 7 and Channel 6 on the APM, and I could hook these up to INT0 and INT1. After reading through the source code and running through quite a bit of it, I managed to figure out the registers I had to set and the procedure for initialising INT0 with rising-edge trigger detection, and integrated it into the APM source code, which does all of this with avr-libc rather than the Arduino libraries (which would definitely have added overhead to our code). This turned out to be the source of the problem and the headache: after tons of debugging, reading and re-reading the ATmega2560 datasheet and intense googling, I could not figure out why my interrupt did not work as expected. It was triggering like crazy every second. I initially had a tip-off from a friend that a software peripheral such as a timer could have been triggering the external interrupt, but even after disabling everything I could (including the COM3B1 and COM3B0 bits in TCCR3A, to stop OCR3B from affecting the GPIO output), I still could not solve the problem. Here is a sample of the Arduino and AVR code I used to enable the external interrupt, INT0.

#include <avr/interrupt.h>

volatile int val = 0;

// Interrupt service routine for external interrupt 0
ISR(INT0_vect) {
    rpm();
}

// Count one pulse from the flow sensor
void rpm(void)
{
    val++;
}

void setup()
{
    // Initialise the serial port for debugging
    Serial.begin(115200);

    // Save the status register and disable interrupts while we touch
    // the interrupt configuration registers
    uint8_t oldSREG = SREG;
    cli();

    // Set rising-edge detection and unmask the interrupt. Note the mix-up
    // the rest of this post unravels: ISC00/ISC01 are EICRA (INT0) bit
    // names, yet this writes EICRB, which controls INT4..INT7.
    EICRB = (EICRB & ~((1 << ISC00) | (1 << ISC01))) | (3 << ISC00);
    EIMSK |= (1 << INT0);
    SREG = oldSREG;

    pinMode(2, INPUT);     // initialise digital pin 2 as an input
    digitalWrite(2, HIGH); // enable the internal pull-up resistor
}

void loop()
{
    // Print the running pulse count every half second
    Serial.println(val);
    delay(500);
}

The Solution

After further googling I chanced upon this blog post here, which prompted me to double-check my pins. I knew that GPIO 2 was connected to PE4, and after checking the datasheet again… I realised PE4 was actually INT4, not INT0! In implementing their attachInterrupt function, Arduino jumbled up the interrupt numbering, which is extremely annoying; it was my source of frustration for two days. So I switched everything to interrupt 4, but then the program kept crashing every time I spun the flow sensor. I double-checked and found that I had not switched the interrupt vector handler, so INT4 was essentially jumping into null instructions, which caused the system to repeatedly restart. Such is the fate of us embedded systems programmers, but I enjoy it and get quite a kick out of doing all this :) and it worked perfectly fine right after. Later on I also realised that the crazy triggering of the interrupt had been caused by the SCL line for I2C: INT0 of the ATmega2560 is tied to the SCL clock, which could be why Arduino changed the numbering.

Till the next update! By the way, here's a little teaser of what we have been working on!


OpenCV Woes

For a blog about autonomous vehicles, a bug fix like this would not normally merit a post. But this bug is worth blogging about, given the amount of headache it actually gave us and the number of man-hours lost to it. I only managed to resolve the problem after finding a fix on an online forum, and so decided to document it here.

Problem

The big problem we faced was getting OpenCV on the Raspberry Pi to actually interface with the USB webcam that we had. Calls to query and capture images from the webcam using OpenCV returned null results, yet the camera somehow worked with Motion and Cheese, both Linux programs for image and video capture. Kenny found an externally compiled binary which could capture images for us, but it wrote the data to a location in flash, a less-than-optimal solution given that the extra reads and writes between RAM and storage incur computational costs in software.

The following Python source code, using the OpenCV 1 API, was used in the test runs to capture an image:

import cv2.cv as cv

# Open the first available camera and request a 640x480 frame size
cap = cv.CaptureFromCAM(-1)
cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FRAME_WIDTH, 640)

# Grab a single frame and write it to disk
frame = cv.QueryFrame(cap)
cv.SaveImage('test.jpg', frame)

Attempts to save test.jpg would only produce a null file, even though “cap” was initialised as a proper CvCapture object; the call to cv.QueryFrame() left frame holding a null value. Slowly, I realised that the problem was not localised to the Raspberry Pi: it also existed in my build of OpenCV on Ubuntu 12.10. The exact same code failed in exactly the same way.

But after much racking of brains and googling around on the net, I discovered the root of the problem, and the solution. At compile time the right software dependencies had not been completely installed: one package, libv4l-dev, needed for the Linux webcam video drivers, was missing prior to compilation. Linux relies on V4L (Video4Linux) drivers to interface with these webcams. In addition, the V4L flag was not enabled in the cmake configuration parameters. As a result, the V4L drivers were not being used by OpenCV, which proved disastrous to our cause.

Solution

So, after lots of toying around and hopping from blog to blog and site to site, I'd recommend the following as the optimal solution; it is what I used for my later builds, with a few edits:

1. Follow the instructions on this blog.

2. In the step where he configures cmake, replace the cmake configuration parameters with the following:

cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_PYTHON_SUPPORT=ON -D WITH_V4L=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_GSTREAMER=ON ..

Feel free to include more options, such as Qt, OpenGL or Intel TBB for parallel code, to improve performance. I used the configuration above for builds on the Raspberry Pi since it does not have a very powerful processor and is ARM-based rather than Intel. The critical configuration parameter to note is WITH_V4L=ON. That sets OpenCV to utilise the V4L drivers.

The output of the Python test code above should look something like the screenshot below (look at the final python test.py output, without the “Frame 0”). It was tested with a Logitech C525 webcam, which worked perfectly. Our other webcam, a Prolink PCC5020, produced persistent “select timeout” errors, as can also be seen below:


Individual Component Testing and Development

Our project is currently in the development phase for its individual aspects, and this blog post will be dedicated to updating you on the progress we have made so far!

If you haven’t seen our introduction blog and want to find out more about us you can read about us here.

Mechanical Design
Our mechanical design is currently in the hull fabrication phase, after rounds and rounds of intense discussion in which control systems considerations, electronics interfacing problems and computer vision dilemmas were taken into account and sorted out before we finalised the current design, shown below in the CAD render.

Front view of the 3D Render of our AUV Mechanical Design

The AUV will have a total of four thrusters: two forward-facing thrusters at the sides, in the middle of the AUV, and another two, one at the bow and one at the stern, facing upwards (seen in the cylinders through the AUV). The bow of the AUV carries a transparent acrylic hemispherical dome which will house our camera system. The two sonars will sit in a mount at the bottom of the dome. The AUV will have four degrees of freedom: forward/reverse and yaw will be controlled by the side thrusters, while heave/dive and pitch will be controlled by the bow and stern thrusters.
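That thruster layout maps the four degrees of freedom onto the four motors with a simple mixing step. A minimal sketch, with commands normalised to [-1, 1]; the signs, scaling and saturation handling here are illustrative assumptions, not our final mixer:

# Minimal thruster mixer for the four degrees of freedom described above.
def mix(forward, yaw, heave, pitch):
    left = forward - yaw    # port side thruster
    right = forward + yaw   # starboard side thruster
    bow = heave + pitch     # vertical thruster at the bow
    stern = heave - pitch   # vertical thruster at the stern
    clamp = lambda v: max(-1.0, min(1.0, v))
    return tuple(clamp(v) for v in (left, right, bow, stern))

# e.g. mix(0.5, 0.1, 0.0, 0.0) -> gentle starboard turn while moving forward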

Control Systems
The control system of the AUV will utilise Proportional-Integral-Derivative (PID) feedback control, which is commonly used in all sorts of autonomous vehicle applications (our electronic speed controllers use it internally, and even your home air-conditioner uses it!). Whilst the PID algorithm was made for linear systems, it has proven quite responsive in various non-linear situations, and hopefully it will prove the same underwater, where the dynamics are more non-linear because the drag force is proportional to the square of the velocity (drag is also the only non-linear term in the dynamics equation for underwater vessels). Anyway, this is all theory; once the AUV is up and ready with the four thrusters in place, we will run tests to tune the Kp, Ki and Kd parameters governing control system stability for the various degrees of freedom we operate in.

Experimenting with the ArduPilot Mega hardware, which will perform the autonomous control with its onboard IMU and servo/ESC control outputs. Here the APM 2.0 is hooked up to a portable DSO Nano v2 oscilloscope from Seeed Studio for debugging the PWM at the output pins.
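For the curious, the textbook PID update we will be tuning looks something like this; a generic sketch, not our APM code, with placeholder gains and with anti-windup and derivative filtering left out for brevity:

# Generic PID controller of the kind we will tune per degree of freedom.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. yaw_pid = PID(kp=1.2, ki=0.05, kd=0.3)  # placeholder gains
#      cmd = yaw_pid.update(target_yaw, current_yaw, 0.02)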

For our AUV, yaw and pitch are corrected by applying feedback control on the tilt angles, calculating the error terms from the accelerometer and gyroscope readings of our ArduPilot Mega (above), which has an onboard inertial measurement unit. The IMU of the ArduPilot Mega contains a three-axis gyroscope, a three-axis accelerometer and a magnetometer. The more problematic control systems to implement are the other two degrees of freedom. Forward/reverse autonomous control will be implemented with feedback control on the displacement of the vessel against its own position calculated since the start, based on a dead reckoning algorithm. Lastly, heave/dive autonomous control will be implemented by calculating error terms from the depth sonar sensor, in combination with dead reckoning to account for depths the sensor is unable to detect.
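A toy sketch of the dead reckoning idea for the forward axis, double-integrating the body-axis acceleration; in practice, drift from sensor bias grows quickly, which is part of what makes this degree of freedom the problematic one:

# Dead reckoning along the forward axis by double integration of the
# accelerometer. A toy sketch: real use needs bias removal and filtering.
class DeadReckoner:
    def __init__(self):
        self.velocity = 0.0  # m/s
        self.position = 0.0  # m

    def step(self, accel_mps2, dt):
        self.velocity += accel_mps2 * dt
        self.position += self.velocity * dt
        return self.position

# e.g. called at the IMU sample rate: position = dr.step(ax, 0.01)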

Electronics and Actuation
Selection of thrusters was another difficult issue that we faced. In our first round of tests we evaluated the suitability of electric ducted fan units (used in RC park jets) for underwater applications, but they were mostly made for high-speed running at above 3000 kV (that is, 3000 RPM per volt) and either drew too much current or did not have enough torque to pull the water at that speed. We also played with different configurations of electronic speed controllers, which are made to control brushless DC motors. As some may have noticed, our actuation is largely based on RC electronics; this was inspired by the success of DIY Drones (here) and also by our supervising professor, who does research in artificial intelligence on embedded systems and uses RC planes for his work. Additionally, a lot of autonomous vehicle research has been based on RC electronics. Apart from that, brushless DC motors run more efficiently and require less maintenance, though their speed control is generally more complicated than for brushed DC motors. Given the benefits of a BLDC motor, coupled with a linear ESC with a variety of configurable functions (such as adjusting the timing to "soften" the power of the motor), RC electronics were the premier choice. As a side bonus, the brushless motors were surprisingly (at least to us) waterproof and ran perfectly fine in water, something we confirmed after discovering people who had run their motors underwater for days and testing it ourselves (not without fear of short circuits).


Testing the current draw on the high torque and low speed outrunner motors we purchased

In a recent test run we managed to fit our 600 kV motor onto the EDF unit we bought, after a bit of hacking and modification, and mounted it on our Coconut Pi Mk I, a makeshift floating container built solely to test the thrusters. The motor and ESC combination this time performed much better, with better thrust output and plenty of speed. Our design requirements for the motor of an underwater AUV were low speed (and thus low kV) with high torque output. Similarly, for the propeller we aimed for fewer blades and a lower torque load (though the five blades still generated quite a bit). We selected these based on the low speed required for the AUV to perform its various tasks, and on power conservation, so the AUV lasts longer with less current draw.


Testing the thrusters on the Coconut Pi Mk I in a pool

Depth and Obstacle Detection
The AUV will require depth and obstacle detection too, and to achieve that we decided to deploy sonars. However, sourcing the right sonar that could function well underwater was another difficult task in itself, given the high price of deep-sea units (we do not have a hefty budget for this project) and their ineffectiveness in swimming pools of 1-3 m depth, since their common operating depth starts at a minimum of around 2 metres. We considered other range detection options like laser rangefinders, but the high-end ones were costly to implement and the others had poor range underwater; the attenuation of lasers underwater was another issue we would have had to handle if we deployed those.

Our MB7078 in a waterproof housing, interfaced with a TTL-RS232 converter and connected to an Arduino Uno

So we bought a set of sonars from MaxBotix and calibrated the MB7078, selected primarily for its IP67 rating (waterproof, rated to 1 m), though the manufacturer tests it only for outdoor scenarios and not for underwater use. We brought the sonar out to a local swimming pool and derived some really interesting results!


For starters, this was our proof of concept that you can use these "cheap" sonars underwater (at least for swimming-pool-depth ranging). The plot of our data points, actual ground distance against the sonar's readings (which the sensor computes using the speed of sound in air, even though it was underwater), was nearly linear. By linear regression, the line of best fit had a gradient of 4.169, close to the theoretical gradient of 4.31 (the speed of sound in water divided by the speed of sound in air, both at 30 degrees Celsius). Lastly, we had an interesting y-intercept of 50 cm, which means the transformation is not a direct proportionality. The graph was plotted from just 4 data points, which may not be accurate enough for our application, but as our Arduino got water-damaged after we accidentally dipped it, we postponed further experiments. Additionally, the sonars were placed close to the surface, so surface reflections may have caused inaccuracies in our readings. Another issue we discovered was the minimum range of these sonars. The one we tested in the swimming pool had a minimum range of 150 cm, which is physically sound: the minimum range in air is 30 cm, and since these devices work by measuring the time taken for sound pulses to reflect off obstacles, the hardware limitation is most likely in that timing measurement. The faster speed of sound in water therefore lengthens the minimum range: 30 cm multiplied by 4.3 gives a theoretical minimum range of about 130 cm, which corresponds closely to the 150 cm we observed.
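Putting that fit to work, converting an underwater reading from the sonar into a true distance becomes a one-liner; a sketch using the empirical gradient and intercept reported above:

# Applying the fitted relationship: actual distance = 4.169 * reading + 50 cm,
# where the reading is what the sonar reports underwater while assuming the
# speed of sound in air. Constants come from our four-point fit above.
GRADIENT = 4.169      # empirical; theory gives ~4.31 (c_water / c_air at 30 C)
INTERCEPT_CM = 50.0   # empirical y-intercept

def true_distance_cm(sonar_reading_cm):
    """Estimate actual underwater distance from an in-air-calibrated reading."""
    return GRADIENT * sonar_reading_cm + INTERCEPT_CM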

Computer Vision
The computer vision aspect will be the hardest portion, and the biggest factor in determining the successful completion of the various tasks we have to complete. Currently, our vision system consists of a USB webcam (yes, a USB webcam) with 640 x 480 pixels of resolution. Our computer vision algorithms will be based on the OpenCV library from Willow Garage. We are still experimenting with a variety of techniques, such as edge detection, Hough transforms and image segmentation.


Our USB Webcam interfaced with a Raspberry Pi

Embedded Software Design
Now comes the really exciting part (okay, maybe that's because I am a computer engineer): the embedded software architecture of our AUV. Our main controller board will be the Raspberry Pi, as we mentioned in our previous blog post, running the stock Raspbian operating system for now for full compatibility with the hardware. It will be interfaced with a custom Arduino Mega board, the ArduPilot Mega, via the serial interface. One question which may pop up in everyone's mind is the reasoning behind selecting not one but two controller boards for our application. There is an article here which explains the difference between the two, but we decided to incorporate both for more critical reasons than just processing power and power savings. For our AUV, the Raspberry Pi will be responsible for computer vision and the more algorithmic, computational tasks, whilst our Arduino handles the actuation and control systems. Whilst the Raspberry Pi definitely has the computational power to handle everything, it does not have the precise timing of a bare-metal microcontroller. Timing-critical processes (like ESC/servo control) may get preempted because Linux is a multi-tasking operating system, not to mention the interrupt overheads that further complicate real-time behaviour. This is exactly why we need the Arduino: to handle the low-level precise control for our system using a timer-driven multi-tasking kernel. At the same time, we need the Raspberry Pi for its greater computational power and the benefits of a full-fledged operating system, with all the goodies of multithreading and the many libraries, like OpenCV, available to us. Memory is another limitation of the Arduino, which has only 8 KB of SRAM, pretty pathetic for computer vision applications.

The Embedded System Architecture of the AUV

The software and hardware architecture is still under development as we test more of our components, so we will leave the details until our next blog update!

Introducing Coconut Pi

We are a team of undergraduate engineers embarking on an exciting journey into the embedded realm of autonomous underwater vehicle development. The Singapore Autonomous Underwater Vehicle Competition 2013 (link) is an autonomous systems competition organised under the IEEE OES Singapore Chapter, and we are participating alongside teams from all over the world, much like the AUVSI scene in the US, except this is the first time Singapore is hosting one: the Singapore AUV Challenge 2013. Taken straight from the website: “The competition is a forum to showcase system-engineering skills in underwater environment for budding engineering students. The competition also aims to create more interests within students in autonomous underwater robots technologies. Each team must have a fully functional autonomous underwater vehicle (AUV). The AUV may follow a marked line on the bottom of pool. There are three major tasks for AUV. The first task is to cross a gate by swimming under it. The second task is to bump a flare mounted ball. And third task is AUV coming out of water at a designated area. The successful completion all the tasks in the shortest possible time decides the winner. Achieving each task will gain points associated to it. Also extra points will be given based on time taken to complete the tasks.”

The Who

Goh Eng Wei
A student in Computer Engineering specialising in Control Systems and Artificial Intelligence. He will be handling the control systems and dead reckoning algorithms.

Doan Viet Tiep

Viet Tiep is a student in Computer Engineering specialising in Embedded Systems. He will be handling the sonar and obstacle detection.

Kenny Tan

Kenny is a student in Computer Engineering with a background in embedded systems design and high-level software design. He will be handling the computer vision aspect, path recognition and flare detection algorithms.

Shanmugam Muruga Palaniappan

A student in Electrical Engineering specialising in embedded systems. He will be handling the actuator system and electronics interfacing.

Devansh Sharma

A student in Mechanical Engineering specialising in Automobile Engineering. He will be working on the mechanical design and kinematics of the AUV.

The What

Our team has been kindly sponsored by RS Components in the design, assembly and programming of the fully autonomous system. Given the requirements of the competition, we need to create a highly intelligent system that is completely autonomous, meaning no human intervention is allowed: it must navigate the water by itself, which here is defined as line tracking, obstacle detection and object identification.

Our AUV has been “christened” Coconut Pi, after the Raspberry Pi, as our platform will use the Raspberry Pi as the central controller. And here is the logo that we designed for it, with a bit of geekiness added in :)


The Why

This competition is, for all five of us, completely voluntary; we have taken it up on top of our final-year academics and our Final Year Projects (this is not our FYP, if anyone was wondering), which are usually the last mile for any engineering student at the National University of Singapore (NUS). To us this is a hobbyist project: we are but a group of undergraduates who are passionate about our respective fields, and this multi-disciplinary (and so very cool!) project satiates our thirst to do something with the skills we have newly acquired in our engineering studies!

So follow us on our Facebook fan page at https://www.facebook.com/coconutpi, or stay up to date as we post our fortnightly blog updates to DesignSpark! Our team is also open to suggestions and opinions from the DesignSpark engineering community. You can comment on our blog posts, or scroll through our daily pictures and updates on our Facebook page and leave a helpful comment or suggestion for us to consider in our development process!