Tuesday, September 10, 2013

Maker Tool - Laser Cutter Community Project

Sharktooth Laser Cutter - Community Project

Sharktooth Laser - completed milestone


First engraving (yep, unfocused)

Intro

This post describes a build of a laser cutter.  There were many people involved in this build.  My favorite parts of the project were meeting new friends, working with really smart individuals, and receiving support from my school.  This was an extremely fun project, and I am fortunate to have been a part of it during the summer of 2013.  

This post is dedicated to learning, to students (of any type) in peaceful pursuit of knowledge, to teachers (of any type) that share their knowledge, and to makers (of every type) that tirelessly seek to make a positive impact on others.  Thank you.

Organization Contributions:
Area515 Makerspace:  This is where the tool was built and resides; it's an awesome place - https://area515.org/
Simpson College: For my part, I also had support from my school's computer science instructors to participate in the project - http://simpson.edu/2013/09/computer-controlled-laser-cutters/

Personal Contributors:
Like any group, this group project consisted of individuals.  Each of them gave something to make the project happen: time, expertise, money, and/or parts.  With their permission, links to their work are available below:
http://capcart.tumblr.com/ - (in addition to a lot of time, this individual contributed the laser cutter's 3d printed exhaust)


Project Definition

What is the Sharktooth Laser?

The Sharktooth Laser is the Area 515 makerspace build of the Blacktooth Laser.

The Blacktooth Laser is a laser cutter project originating from:

http://buildyourcnc.com 
http://blacktoothlaser.blogspot.com

The Sharktooth laser was acquired by an Area 515 member as mostly a kit (not all parts included).  The build is documented in some places, but it’s not as simple as completing everything step by step.  This is especially true if you are doing something new, either by choice or because of a lack of available parts.  
I started with the project on day one with a few people.  I had no intention of working on it, but I was around when the laser started to get constructed.  When it was explained to me how this thing works, I could see how a community tool like this could enable others to build what they wanted.  A tool like this can be one of many tools that help others bring things from their imagination into the physical world. 

Who is Building it?

People who were interested in the project or just happened to be around while the project was getting built. 

Got it, but what is it?

  • It’s a laser cutter designed to be low cost and easy to use 
  • 40W CO2 laser tube
  • 24” x 20” (609mm x 508mm - for the rest of the world)
  • Our implementation uses free and open source software

What’s it do?

The laser cutter cuts or engraves material.

Why pick this project?

  • It is a unique community tool – free to use (after safety training)
  • It is a community project (and a laser)
  • Not everyone knows everything but together we can learn to figure it out
  • Use what I learned at college to benefit others

What I learned:

  • Operating systems – setup and understanding the hardware abstraction layer
  • Computer hardware
  • Stepper motors/drivers
  • Programming, troubleshooting, RTOS, soldering (lots of it), mechanical work, etc.
  • Teamwork – coordinating and working together with no set schedule

The Build

Overview

Make the table
Case, stepper motors, and power supplies
Breakout board and wire-up
Computer / Operating system
Intermission – testing everything but the laser
Laser tube, air, water, nozzle, mirrors
Use an out of step process

This is a high-level overview.  We did this build in two phases.  First, get all of the X and Y movement worked out and get LinuxCNC running.  Second, do everything related to the laser.  This is a notable departure from other project builds: we tried to solve all of our problems before mounting the laser tube.

Wiring overview:

All of these pictures came from http://blacktoothlaser.blogspot.com

 1. This is how the wiring went.  We grouped everything related to its power supply.


 2. First, we did all wiring related to the stepper motors and their power.

 3. Next, we did all of the wiring related to the computer and breakout board.

 4. Lastly, we wired up everything needed for the laser.


First week pictures:

This is probably after the first week.  For the most part the basic components are visible. 

Second week pictures:

It looks good, but a major drawback is that we had to rewire everything many times.  This is because we were following one set of instructions, then another, and so on.

Setting Up the Computer

We needed a computer to run LinuxCNC.  We used what we had around the space to put a computer together.   
  • Pentium 4, 3GB RAM, 500GB HDD, ~350W PSU, DB-25 port needed
  • Pieced together motherboard/case, psu, hdd, and wifi (wrt54g/dd-wrt)
  • LinuxCNC install
  • Parallel Cable (DB25) connects LinuxCNC to laser (breakout board)

After we had the computer installed, we went through a lot of debate and trials to get the stepper motors configured.  It was the funniest part of the project, next to aligning the mirrors.  Below is a video showing the first movement achieved. 

Video: First control Demo

Up and down in this video is our y-axis and left to right is the x-axis (as expected). 

Sharktooth - First Movement

Mechanically Sound - onto the Laser

Mount laser tube
With all of the moving parts and software in place, we decided to tackle everything that was going to be put on the laser power supply.  

Secure water, air, mirrors
3d printed part
This is the 3d printed part we used.  It is very solid and dense.  This contribution was made by one of the people at the space who builds 3d printers.  As noted above, some of this individual's work can be found at: http://capcart.tumblr.com/.  The part is 5% too awesome. (inside joke).
3d printed exhaust


Laser assembled
This is everything put together for the laser.  There are three mirrors.  There is air being pumped into the nozzle.  There is water being pumped into the laser tube.  There is a high voltage line (and gnd) attached to the tube.  Not visible but there is a lens inside of the nozzle.
Laser tube


Breakout board (BOB) side of the laser cutter assembled.  Taken around the same time as the previous picture, this is what the other side looks like.

Assembled (other side)


Assembled (other side)

Putting it together:

The big goal is to take an image and burn it into wood.  We do this by taking an image, detecting the edges, then converting the positional data of the edge pixels to G-code for the laser (a small illustrative sketch of the point-to-G-code idea follows the list below).  To do this, we need to do the following two things:
        
Convert an image file to usable G-code
  • Detect Edges
  • Trace Paths
  • Generate G-code, clean it up, save it
Run the laser cutter
  • Setup material
  • Load ngc file
  • Home x and y axis
  • Go! 
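
The G-code generation itself is handled by Inkscape and the THLaser plugin described in the next section, but purely to illustrate the idea of turning traced edge points into toolpath moves, here is a rough Java sketch.  The point list, feed rate, and class names are hypothetical, and the laser on/off codes are intentionally left out because they depend on the plugin and machine configuration.

import java.util.Arrays;
import java.util.List;

// Illustrative sketch only - not the THLaser plugin's actual output.
// Turns an ordered list of edge points (in millimeters) into simple G-code moves.
public class PathToGcode {

    static String convert(List<double[]> path, double feedRate) {
        StringBuilder gcode = new StringBuilder();
        gcode.append("G21\n");   // units are millimeters
        gcode.append("G90\n");   // absolute positioning
        double[] first = path.get(0);
        // Rapid move to the start of the path (laser assumed off here)
        gcode.append(String.format("G0 X%.3f Y%.3f%n", first[0], first[1]));
        // Laser on/off words are machine/plugin specific, so they are omitted here
        for (int i = 1; i < path.size(); i++) {
            double[] p = path.get(i);
            // Linear (cutting) move along the traced edge at the given feed rate
            gcode.append(String.format("G1 X%.3f Y%.3f F%.1f%n", p[0], p[1], feedRate));
        }
        return gcode.toString();
    }

    public static void main(String[] args) {
        // A 20mm square as a stand-in for a traced edge path
        List<double[]> square = Arrays.asList(
                new double[]{0, 0}, new double[]{20, 0},
                new double[]{20, 20}, new double[]{0, 20}, new double[]{0, 0});
        System.out.print(convert(square, 400.0));
    }
}

The real path data comes out of Inkscape's trace and the THLaser plugin writes the actual ngc file, so treat the above strictly as a picture of the concept.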

ImgToG - Image to G-code

We need to convert an image to G-code.  We use Inkscape to get the edges, then we get the path of those edges, and finally we convert the path to an ngc file (G-code).  

Inkscape is awesome, but it doesn't translate an image to G-code by itself.  Someone already wrote a plugin for Inkscape to do this.  Thankfully for us, the tool is the THLaser Plugin and it is available here: http://wiki.thinkhaus.org/index.php?title=THLaser_Plugin .  The plugin just works... what more could you ask for?
Initially, before we figured out how to use Inkscape, I made an edge detection program for pictures using the OpenCV library.  This method is good for some pictures when Inkscape won't or can't get the contours/edges, but this is rare.  If Inkscape can't get the edges, it usually has more to do with the image.  If you are interested in how this is done with OpenCV, the approach came from previous work with OpenCV: http://techvalleyprojects.blogspot.com/2013/06/opencv-canny-edge-finding-contours-and.html
  

Two ways to get the edges:
  • Inkscape – Filter -> Edge Detection 
  • OpenCV Edge Detection
    • Used for when Inkscape won’t work
    • Source

Next use inkscape
  • Get the paths (trace bitmap)
  • Convert to ngc file (thlaser plugin)

Demo Time

We've reached the best part… demo time.  In the next segment, there are two videos.  The first video shows how we convert the image to G-code, and the second video shows the G-code being used by the machine.


Demo of ImgToG



Demo of operation

Post-Build

Safety always - We require the use of safety glasses, rated for the laser, while the laser is in use.  The lid to the laser cutter is always shut when the laser is plugged in.  

Documentation - We made an Operating Manual and a running human readable log file on the desktop of the computer.  

Thank you’s - This should be obvious...  After it got built to a usable point, we reached a milestone.  At this milestone, I had to say thanks to everyone I worked with and met.    

Controlling the laser output through software - Other extremely smart individuals made contributions to control the laser output through software.  We can control the laser output by setting the spindle speed on the LinuxCNC AXIS GUI.

Future improvements: Be able to change the laser output while cutting. This would allow the laser to burn raster images (like a dot-matrix printer).

Lessons Learned

Get more done with teamwork.

I learned a lot from everyone; that was one of the best parts.  Also, it is really, really cool to come back to the project only to find that the things you were going to work on have already been done.  

Teamwork projects require consistent forward momentum

This may sound strange but a teamwork project needs people to maintain momentum.  At times I helped maintain momentum and at other times, it was someone else.  

Communication required

Loose-knit groups need easily accessible communication channels 

Get informed to find the best solution in a group

Ask everyone for input

No replacement for action

Something is better than nothing, even if it's not perfect.  Some implementation is better than no implementation.

Ending

All in all, this has to be one of my favorite projects.  While making a laser is obviously cool, the people I met were the best part.   Thanks.

Shortly after having it up and running, this is some of what people did:

Other than the first run, this is my favorite engraving

Styrofoam

Styrofoam

The magazine shows the relative size of the rectangle cut



Wednesday, July 10, 2013

The Unpublished


Summary

     Here's some stuff that never got its own project page.  Anything I have on the projects is posted/shared below (source, images, notes, etc).  Use it, make it your own, etc.  If you want additional information on something or if I've left something out, let me know.

Content Sections

  1. The Movie App - A jQuery Mobile Demo
  2. Staring Competition With a Robot - A Face Following Robot Arm
  3. Styrofoam Thunder - A Drone Plane
  4. The World According to a Robot - A IOIO Rover
  5. Marge the Marbot - a DIY Air Boat

1. The Movie App - A jQuery Mobile Demo

January 2013

     The application was written to do some client-side-only practice targeting mobile devices.  The application uses movie research as the subject.  
jQuery Mobile.  That should say it all.  Who wouldn't want to write an application that just works? 
In all seriousness though, try the site on your mobile device(s) and in your computer's browser.  The project/site was a quick proof of concept.  

For the source, check out: https://github.com/ergobot/MobileMovieDemo

2. Staring Competition With a Robot - A Face Following Robot Arm

December 2012

     This project was a collision of two projects.  These two projects were being done at the same time.

     The first project was the robot arm, controlled by an Arduino Duemilanove with a Ladyada motor shield.  I should have made hard limits for the min/max range of the arm but went with soft limits for the min/max range of motion.  This resulted in an arm that could be controlled by sending characters to the Arduino.  I was happy with that.  

     The second project was messing around with computer vision libraries.  I was working with C# .NET projects.  I dug into an article on Emgu CV and Motion JPEG streaming.  The primary goal was just to recognize the area around where a face was in an image (a rough approximation was fine).  I wanted to do more with this later, and this was just a start.  The second goal was to find a way to stream video to me on a local network so I could see what was going on at the front desk.  During this time at work, I would be the last person in the office and didn't want to be interrupted by physically checking who was at the door.   

     Both projects were completed close to each other.  It wasn't my intention for the projects to get combined.  I was at the maker space and a cheap webcam ended up taped to the arm (see pictures).  From here, it logically progressed.  The idea is that we draw a rectangle around the first recognized face.  If the center of the picture is not in the rectangle, move the arm until the center is in the rectangle.  This resulted in a really funny robot arm that is always looking at people.  As a bonus, you can see what it is seeing in a browser (that was the Motion JPEG part).    
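
The original code was C# (Emgu CV) on the PC side plus an Arduino sketch on the arm side and isn't reproduced here, so the following Java snippet is only a hypothetical illustration of the centering logic described above: compare the face rectangle against the frame center and pick a single command character to send to the arm.

// Hypothetical illustration of the face-centering decision, not the original C#/Arduino code.
// Given the detected face rectangle and the frame size, pick one command character
// to send to the arm over serial ('L', 'R', 'U', 'D', or 'S' for stop).
public class FaceFollower {

    static char chooseCommand(int faceX, int faceY, int faceW, int faceH,
                              int frameW, int frameH) {
        int centerX = frameW / 2;
        int centerY = frameH / 2;
        // If the frame center already lies inside the face rectangle, hold still.
        boolean insideX = centerX >= faceX && centerX <= faceX + faceW;
        boolean insideY = centerY >= faceY && centerY <= faceY + faceH;
        if (insideX && insideY) return 'S';
        // Otherwise move along the axis with the larger error first.
        int errX = (faceX + faceW / 2) - centerX;
        int errY = (faceY + faceH / 2) - centerY;
        if (Math.abs(errX) >= Math.abs(errY)) {
            return errX > 0 ? 'R' : 'L';   // face is to the right/left of center
        }
        return errY > 0 ? 'D' : 'U';       // face is below/above center (image y grows downward)
    }

    public static void main(String[] args) {
        // Example: 640x480 frame, face detected at (400, 150) sized 120x120 -> move right
        System.out.println(chooseCommand(400, 150, 120, 120, 640, 480));
    }
}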

     Back at work, we tried it out on ourselves.  We never had the guy watch for visitors.  Admittedly, it is a little strange to know and see that a camera is following me.  It should probably look somewhere else unless it is interacting with me.    

     Why didn't it get published?  There wasn't a good demo to give for the project.  Aside from the privacy issues, I also gave the Arduino used in this project to a friend as a gift.  

Links for stuff used:
Project source: ( 7/13/2013 EDIT: coming soon)

Robot Arm links:
ladyada motor shield v1: http://learn.adafruit.com/adafruit-motor-shield (fyi - v2 is out and is awesome)
Robot Arm - OWI Robotic Arm Edge: http://www.adafruit.com/products/548

Software side links

Project Source:
Github: 

Images:
2.1 - First Side

2.2 - Second Side (1)

2.2 - Second Side (2)


3. Styrofoam Thunder - A Drone Plane

May 2013

     This one is straightforward.  A friend wanted his RC plane to take off by itself; he would then take control of the plane once it had reached a certain altitude.  He had the plane and I got a hold of an Ardupilot Mega v2.5.  

     We took what we knew and went to the local hobby shop to put it together.  It was probably a little different from what anyone was used to.  They have great quadcopter expertise, which helped a lot.  In good time, we put it all together, then headed out to an open area to try it out.  Plain and simple, it never got airborne.  After all of the setup, during the last double-check (I do a lot of those), we found that a wire had corroded off one of the bullet connectors.  Most people would be bummed out to get that close and not have it go.  This was not the case for me.  We had gotten a lot farther than I thought we would within a short amount of time.  My only interest was getting the plane to fly itself.  I thought we got really, really close to that.  It was a great learning experience for everyone and I'm sure we'll return to it soon.   

Ardupilot Mega: http://store.3drobotics.com/products/apm-2-5-kit
Arduplane project: http://plane.ardupilot.com/


Images:
Figure 3.1 - Belly of the bird

Figure 3.2 - Ardupilot Mega 2.5 (APM)

Figure 3.3 - APM and radio

Figure 3.4 - Bad connection circled in black


4. The World According to a Robot- A IOIO Rover

April 2013

     This project was to demonstrate a use for the shortest-distance algorithm.  The hardware involved was: an ioio mint, an Arduino Micro, a Rover 5 platform, an Android phone, the HMC6352 compass module, the TB6612FNG 1A motor driver, and a Sharp IR sensor (with servos to let the IR sensor sweep in an async manner).  The idea was to have the rover move forward, while recording its location, until it detected something in front of it.  When there is an obstacle, turn until there isn't an obstacle, and continue moving forward.  

     According to the rover, it always starts itself at "0,0".  While performing basic obstacle avoidance, the rover would take measurements from the compass module (bearing) and record the ticks (quadrature encoder).  Thus, the rover would avoid stuff and be able to record where it was going (indoor mapping).  Every turning point becomes a node, and the number of ticks between the nodes becomes the edge weight.  In this way, the rover could determine the quickest path back to a node or home.  

    As a bonus, the rover would send the information from the ioio mint back to the Android device and the Android would draw its updated path on the screen (surface).
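
The rover's own source isn't posted, so here is only a small Java sketch of the mapping idea as described: turning points become nodes, encoder ticks between them become edge weights, and a standard shortest-path search (Dijkstra in this sketch) finds the cheapest route back to a visited node or home.  The class and method names are illustrative.

import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Illustrative sketch of the rover's map-and-return idea, not the actual project code.
// Nodes are turning points (0 = home); edge weights are encoder ticks between them.
public class RoverMap {

    private final Map<Integer, Map<Integer, Integer>> graph = new HashMap<>();

    // Record that the rover drove 'ticks' ticks between node a and node b.
    void addLeg(int a, int b, int ticks) {
        graph.computeIfAbsent(a, k -> new HashMap<>()).put(b, ticks);
        graph.computeIfAbsent(b, k -> new HashMap<>()).put(a, ticks);
    }

    // Dijkstra: fewest ticks needed to get from 'start' back to 'target' (e.g. home = 0).
    int ticksTo(int start, int target) {
        Map<Integer, Integer> dist = new HashMap<>();
        PriorityQueue<int[]> queue = new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[1]));
        dist.put(start, 0);
        queue.add(new int[]{start, 0});
        while (!queue.isEmpty()) {
            int[] current = queue.poll();
            int node = current[0], d = current[1];
            if (d > dist.getOrDefault(node, Integer.MAX_VALUE)) continue; // stale entry
            if (node == target) return d;
            for (Map.Entry<Integer, Integer> edge
                    : graph.getOrDefault(node, Collections.<Integer, Integer>emptyMap()).entrySet()) {
                int candidate = d + edge.getValue();
                if (candidate < dist.getOrDefault(edge.getKey(), Integer.MAX_VALUE)) {
                    dist.put(edge.getKey(), candidate);
                    queue.add(new int[]{edge.getKey(), candidate});
                }
            }
        }
        return -1; // target was never visited
    }

    public static void main(String[] args) {
        RoverMap map = new RoverMap();
        map.addLeg(0, 1, 500);   // home -> first turning point: 500 ticks
        map.addLeg(1, 2, 300);
        map.addLeg(2, 0, 600);   // a direct way home discovered later
        System.out.println(map.ticksTo(2, 0)); // prints 600 (cheaper than 300 + 500)
    }
}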

Why didn't it get published?  It was rushed, plain and simple.  I had to find a way to get the ioio mint to listen for the quadrature encoder's ticks (interrupts without interrupts).  I ended up modifying part of the ioio library to get it working.  The wiring of all the components looks unkempt.  All in all, it was working after a rough weekend. 

Source:
IOIO tutorial for easy TB6612FNG 1A Motor Driver control:
http://techvalleyprojects.blogspot.com/2013/04/android-ioio-motor-control-tb6612fng.html

IOIO motor driver source:
https://github.com/ergobot/ioioMotorDriver

IOIO tutorial to listen for digital input:
http://techvalleyprojects.blogspot.com/2013/05/android-ioio-listen-for-digital-input.html

IOIOLib modified to listen to quadrature encoder ticks:
https://github.com/ergobot/IOIOLib

Images:
Figure 4.1 - Front view (Hello World)

Figure 4.2 - Top view

5. Marge the Marbot - a DIY Air Boat

August 2012

     This was an unusually fun project.  It started with a couple of toy boats.  A friend and I wanted a better boat.  We ended up in the clearance aisle of a large store and bought a couple of flotation devices.  These flotation devices are also called body boards.  We took two of the legs, two of the motors, two of the electronic speed controls, the power distribution board, and the lipo from my quadcopter (a DIY Drones 3DR kit).  We added an Arduino Duemilanove and a radio for the Arduino.  For control, we used a laptop, a radio, and an Xbox 360 controller.  We mounted the legs to a plastic box and put all of the electronics inside.  Finally, we secured the box to the flotation device.

     The whole thing worked great and we went through several improvements quickly.  Most of these improvements centered around using a different radio.  The first radio was a regular RC radio of the kind used for RC planes.  This worked fine, but we couldn't change much on it and we wanted to be able to send/receive data from the vehicle.  The next radio we used was a Bluetooth module (BlueSMiRF Gold).  Yes, our range was obviously limited, but everything worked great.  Finally, we got a hold of two XBee Pro Series 1 radios.  We got the vehicle going farther than we could visibly see it.  It was awesome.

     The next logical step would be to change the vehicle.  We needed to change where the motors were and how the vehicle was balanced.  The vehicle was making wake on a calm body of water at only about 25% of its available full throttle.  We only had one mishap that flipped the boat.  The mishap got everything wet (the box filled up with water).  After drying out for a few days, everything worked fine.  After getting it to go that far, everyone had had enough fun, so we stopped.  

     The result was an air boat that could turn in place, travel approximately one mile (maybe less but it would be close), send/receive data approximately one mile, and make wake on a calm body of water. 
The whole thing was controlled by an Xbox 360 controller and a little laptop.

     The pictures below show the project before and during the first run.  We called her Marge or the Marbot.  You'll probably notice in the pictures that the front is rather blunt.  She was pushing the water on the first run.  Later, we shaped the board by cutting it and covering the entire board in tape.  After the change, she would neatly cut into the water instead of the front digging in and pushing the water.   

Source:
The exact Arduino Sketch used to control the motor/esc's can be found on a previous post:
http://techvalleyprojects.blogspot.com/2012/10/arduino-control-escmotor-arduino-code.html
Images:
      
Figure 5.1

Figure 5.2

Figure 5.3

Figure 5.4

 









Friday, June 14, 2013

OpenCV - Canny Edge, Finding Contours, and Blending example

Intro

This is my first OpenCV example.  I want to learn more about how to manipulate images or data in images.  As with almost all projects, it is easiest to learn through application.  Through small amounts of searching, I was directed to OpenCV.  The whole project has been fun.

// For those of you who want to skip to the code section, scroll down to the last section named, "Code Section"

Goal

The goal is to gain some very basic familiarity with OpenCV, its types, and its setup and demonstrate this by combining several tutorials/examples.  As a source of input, I used a video file.  For each frame, we will make two copies of the frame.  In one frame, we will use the Canny Edge tutorial (listed below) in order to find edges in the frame.  In the other frame, we will use the Finding Contours in Your Image tutorial (listed below) in order to find contours in the frame.  After we have manipulated the two frames, we will blend them together (add both frames to a new frame) and present this blended frame to the user as the output.  

Hiccups

There were some minor problems encountered when setting up OpenCV and also getting my first programs running.  While I said that the whole project was fun, sometimes it was fun yet frustrating (funstrating?).  Fortunately, all problems that I encountered have been solved before by others and posted online.  I would post the issues, but they were all minor environment related issues.    

The Other Section 

I am using Ubuntu 13.04 on an i3.  Everything was written using gedit.  As the source for my video, a Mass Effect 3 trailer was used.  It was downloaded from YouTube using the Firefox add-on "Video DownloadHelper".

Mass Effect 3 Trailer
http://www.youtube.com/watch?v=eBktyyaV9LY&feature=share&list=PLBF461FAE966EAF93

Video DownloadHelper
https://addons.mozilla.org/en-US/firefox/addon/video-downloadhelper/


Reapers...

It's everyone's favorite time, demo time!  Below are the four versions for the demo: the original (from Bioware), Canny edges, contour detection, and the blended version (my favorite).

Original (from Bioware)


Canny Edges



Contour Detection



Blended (finished product)



Corrections?

This isn't a tutorial, it's an example of examples for the purpose of learning.  If there is something that is incorrect or could use improvement, let me know.  Thanks!

All Sources Used

There were many sources and examples followed (listed below) in order to get to this point.  Thank you to anyone involved in making these.

OpenCV tutorials
http://docs.opencv.org/doc/tutorials/tutorials.html

Installation in Linux
http://docs.opencv.org/doc/tutorials/introduction/linux_install/linux_install.html#linux-installation

Using OpenCV with gcc and CMake
http://docs.opencv.org/doc/tutorials/introduction/linux_gcc_cmake/linux_gcc_cmake.html#linux-gcc-usage

Learning OpenCV: Computer Vision with the OpenCV Library
     by Gary Bradski and Adrian Kaehler
     Published by O'Reilly Media, October 3, 2008
   AVAILABLE AT:
     http://www.amazon.com/Learning-OpenCV-Computer-Vision-Library/dp/0596516134
     Or: http://oreilly.com/catalog/9780596516130/
     ISBN-10: 0596516134 or: ISBN-13: 978-0596516130

Canny Edge Detector
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html#canny-detector

Finding Contours in your Image  
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html#find-contours

Adding (blending) two images using OpenCV
http://docs.opencv.org/doc/tutorials/core/adding_images/adding_images.html

Code Section

To use, pass in the name of the video file as an argument. (Example:  ./combinedDemo me3.mp4 )

 /**  
  * @file combinedDemo.cpp  
  * @date 14 June, 2013  
  * @author S. O'Bryan ( based on the sources below )  
  * @brief Created for learning purposes. Combining the tutorials and examples from the below sources to find edges and contours in an image    
  * and blend the two images together into one output image. Image source is a video file.   
  * @sources   
  * Learning OpenCV: Computer Vision with the OpenCV Library  
    by Gary Bradski and Adrian Kaehler  
    Published by O'Reilly Media, October 3, 2008   
   
   AVAILABLE AT:   
    http://www.amazon.com/Learning-OpenCV-Computer-Vision-Library/dp/0596516134  
    Or: http://oreilly.com/catalog/9780596516130/  
    ISBN-10: 0596516134 or: ISBN-13: 978-0596516130    
        
   OpenCV tutorials (OpenCV 2.4.5)  
   http://docs.opencv.org/doc/tutorials/tutorials.html  
   
   Canny Edge Detector  
   http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html#canny-detector  
   
   Finding Contours in your Image    
   http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html#find-contours  
   
   Adding (blending) two images using OpenCV  
   http://docs.opencv.org/doc/tutorials/core/adding_images/adding_images.html  
   
  */  
 #include "opencv2/objdetect/objdetect.hpp"  
 #include "opencv2/highgui/highgui.hpp"  
 #include "opencv2/imgproc/imgproc.hpp"  
 #include "opencv2/core/core.hpp"  
   
 #include "opencv2/highgui/highgui_c.h"  
   
 #include <iostream>  
 #include <stdio.h>  
   
 using namespace std;  
 using namespace cv;  
   
 /** All Function Headers */  
 void CannyThreshold(int, void*);  
 void thresh_callback(int, void* );  
   
 /// Global variables  
 char* target_window = "Output";  
   
 /// Edge detection globals  
 Mat srcEdge, srcEdge_gray;  
 Mat dstEdge, detected_edges;  
   
 int edgeThresh = 1;  
 int lowThreshold;  
 int const max_lowThreshold = 100;  
 int ratio = 3;  
 int kernel_size = 3;  
   
   
 /// contour detection globals  
 Mat srcContour; Mat srcContour_gray, drawing;  
 int thresh = 100;  
 int max_thresh = 255;  
 RNG rng(12345);  
   
 /// blend globals  
 Mat output;  
 double alpha = 0.5;   
 double beta = ( 1.0 - alpha );  
   
   
 /**  
  * @function main  
  */  
 int main( int argc, char** argv ) {   
     
   namedWindow( target_window, CV_WINDOW_AUTOSIZE );  
   CvCapture* capture = cvCreateFileCapture( argv[1] );  
   IplImage* frame;  
   
  while(1) {  
    frame = cvQueryFrame( capture );  
       
     // frame check - stop before converting if there are no frames left  
    if( !frame ) break;  
       
     // convert frame to Mat object used for edge detection  
     srcEdge = cv::cvarrToMat(frame);  
       
     // convert frame to Mat object used for contour detection  
     srcContour = cv::cvarrToMat(frame);  
   
      /// contour detection  
      /// Convert image to gray and blur it  
      cvtColor( srcContour, srcContour_gray, CV_BGR2GRAY );  
       blur( srcContour_gray, srcContour_gray, Size(3,3) );  
       createTrackbar( " Canny thresh:", "Source", &thresh, max_thresh, thresh_callback );  
       thresh_callback( 0, 0 );  
   
   
      /// edge detection  
      /// Create a matrix of the same type and size as src (for dst)  
       dstEdge.create( srcEdge.size(), srcEdge.type() );  
   
       /// Convert the image to grayscale  
       cvtColor( srcEdge, srcEdge_gray, CV_BGR2GRAY );  
   
       /// Show the image  
       CannyThreshold(0, 0);      
    
      // we have two Mat objects to blend - dstEdge and drawing  
       addWeighted( dstEdge, alpha, drawing, beta, 0.0, output);  
   
      // Show the final output       
      imshow( target_window, output );  
   
      // esc for escape  
      char c = cvWaitKey(33);  
     if( c == 27 ) break;  
   }  
   cvReleaseCapture( &capture );  
   cvDestroyWindow( target_window );  
 }  
   
 /** @function thresh_callback */  
 void thresh_callback(int, void* )  
 {  
  Mat canny_output;  
  vector<vector<Point> > contours;  
  vector<Vec4i> hierarchy;  
   
  /// Detect edges using canny  
  Canny( srcContour_gray, canny_output, thresh, thresh*2, 3 );  
  /// Find contours  
  findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );  
   
  /// Draw contours  
  drawing = Mat::zeros( canny_output.size(), CV_8UC3 );  
  for( int i = 0; i< contours.size(); i++ )  
    {  
     Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );  
     drawContours( drawing, contours, i, color, 2, 8, hierarchy, 0, Point() );  
    }  
 }  
   
 /**  
  * @function CannyThreshold  
  * @brief Trackbar callback - Canny thresholds input with a ratio 1:3  
  */  
   
 void CannyThreshold(int, void*)  
 {  
  /// Reduce noise with a kernel 3x3  
  blur( srcEdge_gray, detected_edges, Size(3,3) );  
   
  /// Canny detector - number has replaced lowThreshold  
  Canny( detected_edges, detected_edges, 50, lowThreshold*ratio, kernel_size );  
   
  /// Using Canny's output as a mask, we display our result  
  dstEdge = Scalar::all(0);  
   
  srcEdge.copyTo( dstEdge, detected_edges);  
   
  }  
   


Wednesday, May 15, 2013

Android - An example application

Summary

This article describes the Android application "CyNotify".  The application was made for learning; enjoy.  The source and application are available to use in any way that you want.  Use it, take it, learn from it, make it your own... etc.

Before we begin

The source for the application is available at:
https://github.com/ergobot/CyNotify

The application is available in the google play store at:
https://play.google.com/store/apps/details?id=edu.simpson.obryan.projects

What can I learn from the source?

  • Read SMS
  • Read phone contacts
  • Make notifications
  • Create Alarms (instead of using timers)
  • Cancel Alarms
  • Use Intent Service

 

What does it do?

The application reminds you about missed calls or missed text messages (SMS) at specific intervals. 

Example scenario

For example, you miss a call while in a meeting.  You see that you missed a call and think, "I really need to return that call but I'm in the middle of this meeting.  I'll get back to the missed call in 15 minutes."  Before you know it, an hour has passed and you forgot to return that call. 

Android only notifies you once that you have a missed call or new text message.

With CyNotify enabled, you'll always be reminded regularly when you forget about those missed calls or new text messages.

How does it work?

In short, we are setting system alarms and running an IntentService when the system alarms go off.  In the IntentService, we check for missed calls/new text messages and finally make notifications (if needed) to the user.  (A small sketch of this pattern appears after the list below.)

More specifically...
  • The application interacts with the user through one Activity.
  • There are two broadcast receivers to listen for device boot completed and system alarms (AlarmManager).
  • There are two IntentServices (fire-and-forget services) to do the work of setting an alarm after the device boots as well as performing the missed call/message checks and creating notifications
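
The real source is in the GitHub repo linked above.  As a minimal sketch of the pattern just described (an AlarmManager alarm that periodically fires an IntentService, which then checks and notifies), assuming a hypothetical ReminderService class and a 15-minute interval:

import android.app.AlarmManager;
import android.app.IntentService;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.os.SystemClock;

// Minimal sketch of the alarm + IntentService pattern - see the CyNotify repo for the real code.
public class ReminderService extends IntentService {

    private static final long INTERVAL_MS = 15 * 60 * 1000; // hypothetical 15 minute reminder

    public ReminderService() {
        super("ReminderService");
    }

    // Schedule a repeating alarm that starts this service; called after boot or from the Activity.
    public static void scheduleReminders(Context context) {
        AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        PendingIntent pi = PendingIntent.getService(
                context, 0, new Intent(context, ReminderService.class), 0);
        am.setRepeating(AlarmManager.ELAPSED_REALTIME_WAKEUP,
                SystemClock.elapsedRealtime() + INTERVAL_MS, INTERVAL_MS, pi);
    }

    // Cancel the repeating alarm (e.g. when the user turns reminders off).
    public static void cancelReminders(Context context) {
        AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        am.cancel(PendingIntent.getService(
                context, 0, new Intent(context, ReminderService.class), 0));
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // In the real app this is where the missed call / unread SMS checks happen,
        // followed by posting a Notification if anything still needs the user's attention.
    }
}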

How did you make the icons? 

I used Inkscape; it's free and works great.  http://inkscape.org/

The tutorial where I learned to use Inkscape to make android icons is:
http://tekeye.biz/2012/android-launcher-icons-using-inkscape

(Gimp is another good choice, http://www.gimp.org/ .  It is free and there is plenty of documentation. )

Take the icon you made and go to http://android-ui-utils.googlecode.com/hg/asset-studio/dist/icons-launcher.html .  Upload the icon you made in Inkscape (or gimp) and you will get all of the different sizes of icons you need. 


That is it!  This was a fun application to make and I learned a lot.  I hope the experience can help you. 



Monday, May 13, 2013

Android IOIO - Listen for Digital Input

Summary
This tutorial shows how to slightly modify the ioio lib in order to add a listener for digital input.
Description
At the very least, you're reading this because you want to do something when the value of your digital input changes.  This might even mean that you've tried to listen for digital input changes by creating a new thread and using the blocking call DigitalInput.waitForValue().  That is one way, but we can also go into the IOIOLib and add a listener.  Adding this listener to the pin is what this article is all about. 
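
For comparison, here is a small sketch of that blocking approach using only the stock IOIOLib API (DigitalInput.waitForValue() on a background thread).  The pin number and the reaction to a change are hypothetical; the rest of this tutorial replaces this pattern with a listener.

import ioio.lib.api.DigitalInput;
import ioio.lib.api.IOIO;
import ioio.lib.api.exception.ConnectionLostException;

// Sketch of the blocking alternative (no library changes needed): a background thread
// that waits on DigitalInput.waitForValue().  Pin number and reaction are hypothetical.
class BlockingWatcher {
    static void watchPin(final IOIO ioio, final int pin) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    DigitalInput input = ioio.openDigitalInput(pin);
                    while (!Thread.currentThread().isInterrupted()) {
                        input.waitForValue(true);   // blocks until the pin reads high
                        // ...react to the rising edge here (log it, notify the UI, etc.)...
                        input.waitForValue(false);  // block again until the pin goes low
                    }
                } catch (ConnectionLostException e) {
                    // IOIO connection dropped - let the looper/activity handle reconnecting
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();
    }
}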

Before we begin
This tutorial assumes you are somewhat familiar with your ioio, java, and android.   How familiar do you have to be to use the tutorial without problems?  Work with the ioio examples first or be able to read your targeted digital input before attempting this tutorial.

Steps

As an overview, we are going to make the following changes/additions to the IOIOLib:
  1. Add the abstract class, "Listener.java", to the package "ioio.lib.impl"
  2. Modify the abstract class, "DigitalInput.java", in the package "ioio.lib.api".  Add the abstract methods for the Listener.
  3. Modify the class, "DigitalInputImpl.java", in the package "ioio.lib.impl".  Add the implementation for the Listener, and call  "firePropertyChanged" at the start of "setValue"
  4. Create your DigitalInput in the Looper class
  5. Add the listener in the setup method of your looper class 

Step One



Add the abstract class, "Listener.java" to the package "ioio.lib.impl".
Create a new class named "Listener.java" inside of the package "ioio.lib.impl". 
In this class, delete everything and add the following:

package ioio.lib.impl;

import java.util.Collections;
import java.util.Set;
import java.util.TreeSet;


public abstract class Listener {

    private final Set<String> properties;

    public Listener(String... properties) {
        Collections.addAll(this.properties = new TreeSet<String>(), properties);
    }

    protected final Set<String> getProperties() {
        return this.properties;
    }

    public abstract <T> void propertyChanged(final String property,
            final T oldValue, final T newValue);
}

See it on github at:
https://github.com/ergobot/IOIOLib/blob/master/src/ioio/lib/impl/Listener.java



Step Two
Modify the abstract class, "DigitalInput.java", in the package "ioio.lib.api".  Add the abstract methods for the Listener.

To do this, find the file "DigitalInput.java" in the package "ioio.lib.api" and add the following (inside the class):
     // Custom
    public boolean addListener(final Listener x);
    public boolean removeListener(final Listener x);

See it on github at (lines 129 and 130):
https://github.com/ergobot/IOIOLib/blob/master/src/ioio/lib/api/DigitalInput.java



Step Three
Modify the class, "DigitalInputImpl.java", in the package "ioio.lib.impl".  Add the implementation for the Listener, and call  "firePropertyChanged" at the start of "setValue".

Find the class "DigitalInputImpl.java" in the package "ioio.lib.impl" and add the following (inside the class):

    private final List<Listener> listeners = new LinkedList<Listener>();

    protected final <T> void firePropertyChanged(final String property,
            final T oldValue, final T newValue) {
        assert(property != null);
        if((oldValue != null && oldValue.equals(newValue))
                || (oldValue == null && newValue == null))
            return;
        for(final Listener listener : this.listeners) {
            try {
                if(listener.getProperties().contains(property))
                    //System.out.println("property changed");
                    listener.propertyChanged(property, oldValue, newValue);
            } catch(Exception ex) {
             System.out.println(ex.getMessage());
                // log these, to help debugging
                ex.printStackTrace();
            }
        }
    }

    @Override
    synchronized public final boolean addListener(final Listener x) {
        if(x == null) return false;
        return this.listeners.add(x);
    }

    @Override
    synchronized public final boolean removeListener(final Listener x) {
        return this.listeners.remove(x);
    }

See it on github at: https://github.com/ergobot/IOIOLib/blob/master/src/ioio/lib/impl/DigitalInputImpl.java  (lines 107 through 135)

In this same class add the following inside of the method "setValue":

firePropertyChanged("value",value_,value);
See it on github at: https://github.com/ergobot/IOIOLib/blob/master/src/ioio/lib/impl/DigitalInputImpl.java  (line 55)



Step Four
Create your DigitalInput in the Looper class

I'm using the HelloIOIO project as the example.  Find where your looper class starts, and declare a DigitalInput named "exampleInput".  For this example, it is directly under the line "private DigitalOutput led_;".  The line we are adding looks like this:

private DigitalInput exampleInput;

See it on github at:
https://github.com/ergobot/HelloIOIO/blob/master/src/ioio/examples/hello/MainActivity.java
(line 47)



Step Five
Add the listener in the setup method of the looper class 

Note:  For the example, we are using pin 47.

Find your setup method in the looper class, and add the following:

            // Our digital input (the pin being used for this example is pin #47)
            exampleInput = ioio_.openDigitalInput(47);
           
            exampleInput.addListener(new ioio.lib.impl.Listener("value") {
                public <T> void propertyChanged(final String p, final T oldValue,
                final T newValue) {
                                           
                        // This is where you do what you want with the old or new value
                   
                        // Write to the logcat
                        Log.v("exampleInput", "exampleInput - " + System.nanoTime() +"  oldValue = " + ((Boolean) oldValue ? 0:1) + " : newValue = " + newValue);
               
                        // or another way to write it... print it out
                        System.out.println(p + " changed: " + oldValue + " to "    + newValue);
               
                }
                });

See it on github at: https://github.com/ergobot/HelloIOIO/blob/master/src/ioio/examples/hello/MainActivity.java (lines 64 to 79)


...and there it is!  Hope it helps you.  When searching, I couldn't find an existing way to do this.  It may not be the best approach, but take it and make it your own.  If you know of a better way, put it in the comments. 



Resources:

Original idea:
https://groups.google.com/d/msg/ioio-users/aROyaAVOhAQ/UVQHx5Lmh6kJ

Listener based on stackoverflow.com answer:
http://stackoverflow.com/questions/2822901/watching-a-variable-for-changes-without-polling

Eclipse IDE (from the adt-bundle)
     link - http://developer.android.com/sdk/index.html

HelloIOIO
     original - https://github.com/ytai/ioio/tree/master/software/applications/HelloIOIO
     modified -  https://github.com/ergobot/HelloIOIO

IOIOLib
     original - https://github.com/ytai/ioio/tree/master/software/IOIOLib 
     modified - https://github.com/ergobot/IOIOLib

IOIO Mint
     adafruit - http://www.adafruit.com/products/885


Monday, April 8, 2013

Android IOIO - motor control TB6612FNG


Summary
This is an Android example on how to connect a ioio mint, the motor driver TB6612FNG, and two motors.

Shortcut
The important stuff can all be found at: https://github.com/ergobot/ioioMotorDriver
Fritzing file - "ioioMotorControl.fzz"
Fritzing export - "ioioMotorControl_export.png"
Working example

Description

The ioio is a board to help our android devices interact with sensors, motors, etc.  By using a ioio, Android device, two motors, a motor driver, and a battery, we extend and change the function of our Android device. 

Alternatives - "What's so different?"
There are alternatives.  In fact, I used an Arduino first, with all the same parts (except the ioio), in order to isolate any ioio problems.    
Unlike many alternatives, we can design/write a user interface (an Android activity) that is able to directly interact with the outside world (sensors).  We can get an Android device to interact with the physical world by building an app.  The code written for the app is the same code that is used for the ioio.  The ioio becomes an extension of the Android device.  

Bonus - Bluetooth 
The ioio can communicate with the Android device through usb or bluetooth, no extra calls required.  In the past, I've taken apart google's bluetooth example for android so that I could use bluetooth with an Arduino (bluesmirf gold). I made it work but.... it wasn't very fun.   

This example shows how to wire together all of the components needed and write a little code to get the components working.  
  

Before we begin


I'm using the following (with links): 

ioio mint - http://www.adafruit.com/products/885 (originally from http://droidalyzer.com/tech.html)

Motor Driver 1A Dual TB6612FNG - https://www.sparkfun.com/products/9457

Battery (for the motors) - In this case, I used a 2S Lipo 7.4V battery.  A 9V battery would work fine as well.

Two motors - These are toy motors.  I know these motors will work because both motors will run on the same 9v battery.   
  
A breadboard - The breadboard was used to tie the grounds together.

Connect it all together
All of the connections are easy to examine using the fritzing diagram, the picture, or the written java files.  All of these items are available at: https://github.com/ergobot/ioioMotorDriver

1. Connect the ioio mint to the motor driver.  
ioio mint pin #30 to motor driver pin PWMA
ioio mint pin #31 to motor driver pin AIN2
ioio mint pin #32 to motor driver pin AIN1
ioio mint pin #42 to motor driver pin STBY
ioio mint pin #43 to motor driver pin BIN1
ioio mint pin #44 to motor driver pin BIN2
ioio mint pin #45 to motor driver pin PWMB

2. Connect the motors
one motor connected to AO1 and AO2
the other motor connected to BO1 and BO2

3. Grounds/Power
Connect the ioio mint pin gnd to the battery negative and to the motor driver pin GND (the GND between VCC and AO1)
ioio mint pin 3.3V to the motor driver pin VCC
motor driver pin VM to the battery positive

Fritzing Diagram


Get it to work

Machines of all types need instruction(s). 

If this is your first ioio project, go through the SparkFun ioio intro tutorial first (linked in the references below).

Writing the code for this wasn't too bad.  I learned from several sites/references:
Making Android Accessories with IOIO by Simon Monk - there are awesome examples and clear instructions
http://www.jaychakravarty.com/?p=146 - This site has great information on how to do the same thing we're doing here. 
  
I'm not going to go over the code in great detail.  I looked at the two sources above and wrote the Motor.java and MotorDriver.java for simplicity.  These two classes were implemented in MainActivity.java.  
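
Motor.java and MotorDriver.java aren't reproduced in this post (they're in the repo linked above), so the class below is only a rough sketch of the kind of wrapper they represent - one motor as two direction pins plus a PWM pin - built on the standard IOIO openDigitalOutput/openPwmOutput calls.  The names and the 1 kHz PWM frequency are illustrative, not necessarily what the repo uses.

import ioio.lib.api.DigitalOutput;
import ioio.lib.api.IOIO;
import ioio.lib.api.PwmOutput;
import ioio.lib.api.exception.ConnectionLostException;

// Rough sketch of a TB6612FNG-style motor wrapper - see the ioioMotorDriver repo for the real classes.
class SimpleMotor {
    private final DigitalOutput in1;   // direction pin 1 (AIN1 or BIN1)
    private final DigitalOutput in2;   // direction pin 2 (AIN2 or BIN2)
    private final PwmOutput pwm;       // speed pin (PWMA or PWMB)

    SimpleMotor(IOIO ioio, int in1Pin, int in2Pin, int pwmPin) throws ConnectionLostException {
        in1 = ioio.openDigitalOutput(in1Pin);
        in2 = ioio.openDigitalOutput(in2Pin);
        pwm = ioio.openPwmOutput(pwmPin, 1000);  // 1 kHz PWM, an arbitrary choice for the sketch
    }

    void clockwise() throws ConnectionLostException {
        in1.write(true);
        in2.write(false);
    }

    void counterClockwise() throws ConnectionLostException {
        in1.write(false);
        in2.write(true);
    }

    void setPower(float dutyCycle) throws ConnectionLostException {
        pwm.setDutyCycle(dutyCycle);   // 0.0f (stopped) to 1.0f (full power)
    }
}

A MotorDriver along these lines would then just hold two such motors plus a DigitalOutput for the STBY pin.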

Quick example ---

1.  Declare a new MotorDriver  inside the Looper class


class Looper extends BaseIOIOLooper {
private MotorDriver motorDriver;

}

2. Inside the setup method of your looper, setup your motordriver


@Override
protected void setup() throws ConnectionLostException {
motorDriver = new MotorDriver(ioio_);
motorDriver.SetupMotorA(AIN1_PIN, AIN2_PIN, PWMA_PIN);
motorDriver.SetupMotorB(BIN1_PIN, BIN2_PIN, PWMB_PIN);
motorDriver.SetupStandBy(STBY_PIN);

}

Where AIN1_PIN, AIN2_PIN, PWMA_PIN.... etc are just the pin numbers (int type) you are using on the ioio for the motor driver


3.  In the main loop, use your motor driver


public void loop() throws ConnectionLostException {
// Example Forward movement
motorDriver.MotorA().FullPower();
motorDriver.MotorB().FullPower();
motorDriver.MotorA().Clockwise();
motorDriver.MotorB().Clockwise();
motorDriver.RefreshAll();
}

Example code: the full working example is available at https://github.com/ergobot/ioioMotorDriver


Enjoy!!!
References (anything I used to write this)

first ioio project tutorial - https://www.sparkfun.com/tutorials/280
ioio project using motor driver TB6612FNG- http://www.jaychakravarty.com/?p=146
ioio mint product (adafruit) - http://www.adafruit.com/products/885
ioio mint source (droidalyzer) - http://droidalyzer.com/tech.html