OpenCV2.1
Sunday, September 30, 2012
Episode_18_How_Labeling_Using_Contour
How to label the objects in your image:
1. Convert your image to grayscale.
2. Convert the grayscale image to binary data (threshold).
3. Filter out the noise.
4. Do the labeling (this can also be done with other image-processing techniques).
5. Get the center point (you also need some memory, e.g. a pointer to a struct, to save the data).
6. Give each object a name.
OK... let's see how I do the labeling with the cvFindContours function in OpenCV.
In this example I use bottle caps...
/* MOBILE ROBOT */
#include <stdio.h>
#include "cv.h"
#include "cxcore.h"
#include "highgui.h"

int main(int argc, char* argv[])
{
    int den;
    int nc, n;

    CvCapture* capture = cvCaptureFromCAM(0);
    IplImage* frame = cvQueryFrame(capture);
    //frame = cvLoadImage("Botol.jpg");

    CvSize sz = cvSize(frame->width & -2, frame->height & -2);
    den = 1;
    IplImage* sg   = cvCreateImage(cvSize(sz.width / den, sz.height / den), 8, 3);
    IplImage* grey = cvCreateImage(cvSize(sz.width / den, sz.height / den), 8, 1);
    IplImage* thr  = cvCreateImage(cvSize(sz.width / den, sz.height / den), IPL_DEPTH_8U, 1);
    IplImage* d1   = cvCreateImage(cvSize(sz.width / den, sz.height / den), 8, 1);
    IplImage* d2   = cvCreateImage(cvSize(sz.width / den, sz.height / den), 8, 1);

    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contours = 0;

    /* Settings for cvFindContours and drawing */
    CvScalar red  = CV_RGB(250, 0, 0);
    CvScalar blue = CV_RGB(0, 0, 250);
    CvMoments* moments = (CvMoments*)malloc(sizeof(CvMoments));
    CvPoint center;

    /* MEMORY: structure used to save the detected centers */
    struct datasementara            /* "temporary data" */
    {
        CvPoint cnt[1000];
        CvPoint lastcnt[1000];
        CvPoint savecnt[1000];
    };
    datasementara* IT = new datasementara;   /* declare the structure variable */

    /* Initialize the font */
    CvFont font;
    double hScale = 1;
    double vScale = 1;
    int lineWidth = 2;
    cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX | CV_FONT_ITALIC,
               hScale, vScale, 0, lineWidth);

    while (cvWaitKey(1) != 'q')
    {
        frame = cvQueryFrame(capture);
        //frame = cvLoadImage("Botol.jpg");
        if (!frame) break;

        cvResize(frame, sg);
        cvCvtColor(sg, grey, CV_RGB2GRAY);
        cvClearMemStorage(storage);

        cvAdaptiveThreshold(grey, thr, 255, CV_ADAPTIVE_THRESH_MEAN_C,
                            CV_THRESH_BINARY, 71, 5);
        cvNot(thr, d1);
        cvErode(d1, d1, NULL, 15);
        cvSmooth(d1, d1, 2, 9);
        cvDilate(d1, d1, NULL, 12);
        cvCopy(d1, d2);                     /* clone the image for cvFindContours */

        /* detect the bottle caps */
        nc = cvFindContours(d2, storage, &contours, sizeof(CvContour), CV_RETR_EXTERNAL);
        printf("nc = %d\n", nc);

        n = 0;
        for (CvSeq* c = contours; c != NULL; c = c->h_next)
        {
            cvDrawContours(sg, c, red, blue, 0, 2, 8);

            /* center of mass from the spatial moments */
            cvMoments(c, moments, 0);
            double m00 = cvGetSpatialMoment(moments, 0, 0);
            double m10 = cvGetSpatialMoment(moments, 1, 0);
            double m01 = cvGetSpatialMoment(moments, 0, 1);
            center.x = (int)(m10 / m00);
            center.y = (int)(m01 / m00);

            IT->cnt[n] = center;            /* save the center to memory */
            n++;
        } /* end of center detection */

        for (int i = 0; i < n; i++)
        {
            cvCircle(sg, IT->cnt[i], 2, cvScalar(0, 255, 0), 2);

            /* draw the label number next to each center */
            char jumlah[200];
            sprintf(jumlah, "%d", i);
            cvPutText(sg, jumlah, cvPoint(IT->cnt[i].x, IT->cnt[i].y),
                      &font, cvScalar(0, 255, 0));
        }

        cvShowImage("original", sg);
        cvShowImage("result", d1);
        cvShowImage("thr", thr);
    }

    cvDestroyAllWindows();              /* close the windows */
    cvReleaseCapture(&capture);         /* frames returned by cvQueryFrame must not be released directly */
    delete IT;
    free(moments);
    return 0;
}
This is one of my projects where I use labeling to detect a mobile robot.
Once the labeling is done, you can get more information, for example: center, direction, tracking, angle, distance, and so on.
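For example, once at least two centers have been labeled, the distance and heading angle between them can be computed from the points stored in IT->cnt. Here is a minimal sketch; the helper name center_relation is made up and is not part of the original program, it just reuses the OpenCV headers and the IT structure from the code above:

#include <math.h>

/* distance (pixels) and heading angle (degrees) between two labeled centers */
double center_relation(CvPoint a, CvPoint b, double* angle_deg)
{
    double dx = (double)(b.x - a.x);
    double dy = (double)(b.y - a.y);
    if (angle_deg)
        *angle_deg = atan2(dy, dx) * 180.0 / CV_PI;   /* CV_PI comes from the OpenCV headers */
    return sqrt(dx * dx + dy * dy);
}

/* usage, e.g. inside the loop above once n >= 2:
   double angle;
   double dist = center_relation(IT->cnt[0], IT->cnt[1], &angle);  */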
Tuesday, April 10, 2012
Episode_17_Implementation_of_the_Fast_Hough_Transform
Hello..!
After several months focused on my final project, I have come back to write on my blog again. Sorry for my poor English.. hehe.. (still learning)
Human Vs Machine
The Rubik's Cube is a cube-shaped puzzle game that is very popular around the world.
In this episode, I am using a Rubik's Cube to implement the Hough transform method..
Why a Rubik's Cube?
- To learn how to solve a Rubik's Cube easily
- To follow the development of computer vision technology in the world
...
Yes, Just Do It ... Sir
First, you need to convert the original image to a binary image (a small code sketch follows the images below).
Original image
Threshold image
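A binary image can be made with a plain threshold. A minimal sketch, assuming original is the loaded BGR image and 128 is just a hand-picked threshold value (not necessarily the one used here):

IplImage* gray   = cvCreateImage(cvGetSize(original), 8, 1);
IplImage* binary = cvCreateImage(cvGetSize(original), 8, 1);

cvCvtColor(original, gray, CV_BGR2GRAY);                /* color -> grayscale */
cvThreshold(gray, binary, 128, 255, CV_THRESH_BINARY);  /* grayscale -> binary */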
Using Hough Transform
- It finds objects in the image by using the edges (outlines) of the objects.
- It has a threshold parameter that determines its sensitivity for detecting edge lines (a minimal usage sketch follows the figures below).
Hough Transform
The lines we need to detect
Ratio of total lines to the threshold
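As a rough idea of how the line detection itself can be called in OpenCV 2.1, here is a minimal sketch using cvHoughLines2 in probabilistic mode. The function name detect_lines and all of the parameter values are my own assumptions, not the settings used in this project:

#include "cv.h"
#include "highgui.h"

/* draw the detected line segments from a binary edge image onto a color image */
void detect_lines(IplImage* binary, IplImage* color_out)
{
    CvMemStorage* storage = cvCreateMemStorage(0);

    /* CV_HOUGH_PROBABILISTIC returns line segments as pairs of CvPoint */
    CvSeq* lines = cvHoughLines2(binary, storage, CV_HOUGH_PROBABILISTIC,
                                 1,             /* rho resolution (pixels)             */
                                 CV_PI / 180,   /* theta resolution (radians)          */
                                 50,            /* accumulator threshold (sensitivity) */
                                 30,            /* minimum segment length              */
                                 10);           /* maximum gap inside a segment        */

    for (int i = 0; i < lines->total; i++)
    {
        CvPoint* p = (CvPoint*)cvGetSeqElem(lines, i);   /* p[0], p[1] are the endpoints */
        cvLine(color_out, p[0], p[1], CV_RGB(0, 255, 0), 2, 8, 0);
    }

    cvReleaseMemStorage(&storage);
}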
Ok LET'S ROCK with Math :)
Filter 1
- Find a pair of lines which have nearly the same length.
The lengths of the selected pair must match to within approximately 30%.
Filter 2
- Find a pair of intersecting lines.
The intersection test for the selected pair must have a denominator value greater than 0.1 (i.e. the lines are not nearly parallel).
Filter 3
- Find a pair of lines that form a right angle and whose distances are almost similar.
The pair must deviate from a right angle by less than 0.5 radians (about 30 degrees).
Filter 4
- Take the corner points detected in the image and express their coordinates in the local coordinate system of the corresponding line pair.
In the local coordinate system, either the first or the second coordinate is exactly 1/3 or 2/3 within a margin of 0.05, and neither of the coordinates is outside of the range [-0.1,1.1]
Finally!
- Rank the candidate coordinate systems according to the number of parallel line segments that support each one (a rough sketch of these filter checks follows).
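Here is a rough sketch of how the filter tests above might look in code. The helper names are made up, the thresholds (30%, 0.1, 0.5 radians) follow the text above, and the exact formulas in the real project (see the reference below) may differ:

#include <math.h>
#include "cv.h"

static double seg_length(CvPoint a, CvPoint b)
{
    double dx = b.x - a.x, dy = b.y - a.y;
    return sqrt(dx * dx + dy * dy);
}

/* Filter 1: the two segment lengths must match to within ~30% */
static int similar_length(CvPoint a0, CvPoint a1, CvPoint b0, CvPoint b1)
{
    double la = seg_length(a0, a1), lb = seg_length(b0, b1);
    return fabs(la - lb) < 0.3 * (0.5 * (la + lb));
}

/* Filter 2: the lines must really intersect; the normalized denominator of the
   intersection equations is near zero when the lines are almost parallel      */
static int not_parallel(CvPoint a0, CvPoint a1, CvPoint b0, CvPoint b1)
{
    double la = seg_length(a0, a1), lb = seg_length(b0, b1);
    double den = ((double)(a1.x - a0.x) * (b1.y - b0.y)
                - (double)(a1.y - a0.y) * (b1.x - b0.x)) / (la * lb);
    return fabs(den) > 0.1;
}

/* Filter 3: the pair must deviate from a right angle by less than 0.5 rad */
static int roughly_perpendicular(CvPoint a0, CvPoint a1, CvPoint b0, CvPoint b1)
{
    double ta = atan2((double)(a1.y - a0.y), (double)(a1.x - a0.x));
    double tb = atan2((double)(b1.y - b0.y), (double)(b1.x - b0.x));
    double d  = fmod(fabs(ta - tb), CV_PI);        /* line direction is modulo pi */
    if (d > CV_PI / 2.0) d = CV_PI - d;            /* fold into [0, pi/2]         */
    return fabs(d - CV_PI / 2.0) < 0.5;
}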
The Result
This is the image-processing pipeline.
So this is my Rubik's Cube robot.
Yeah, still a noob, but that's OK... I built it myself and I think it's enough for my studies.
How did I build it?
Other implementations I have made
Nobody is born stupid. Some people are born with less knowledge than others, and some people learn much faster and more easily than others; it's just a matter of willingness to learn or not.
SORRY IF YOU FIND A FOREIGN LANGUAGE
IT'S INDONESIAN
reference :
http://karpathy.ca/portfolio/project525.php
link for download :
https://docs.google.com/open?id=0Bzm13VWqKhTvYUI5cWtVUnhKTGc
Monday, August 22, 2011
Episode_16_RGB_Color_on_Form_Application
Sometimes we need to work with RGB colors in a Form application, or turn 3 channels into 1 channel. Difficult?? I don't think so.
Well, this is a request from my friend, who gave me the idea to write the next episode. I need your help (readers): please give me a comment if there is something wrong with my source code, so I can correct it as soon as possible. As in Episode 4, I just add a little trick.
For Blue Color.
private: System::Void timer1_Tick(System::Object^ sender, System::EventArgs^ e)
{
    frame = cvQueryFrame(capture);
    cvCvtColor(frame, img, CV_BGR2GRAY);

    /* Composes a multi-channel array from several single-channel arrays,
       or inserts a single channel into the array. */
    cvMerge(NULL, img, img, NULL, img2);

    /* replacement of cvShowImage */
    pictureBox1->Image = gcnew System::Drawing::Bitmap(img2->width, img2->height, img2->widthStep,
        System::Drawing::Imaging::PixelFormat::Format24bppRgb, (System::IntPtr)img2->imageData);
    pictureBox1->Refresh();
}

For Green Color.
private: System::Void timer1_Tick(System::Object^ sender, System::EventArgs^ e)
{
    frame = cvQueryFrame(capture);
    cvCvtColor(frame, img, CV_BGR2GRAY);

    /* Composes a multi-channel array from several single-channel arrays,
       or inserts a single channel into the array. */
    cvMerge(img, NULL, img, NULL, img2);

    /* replacement of cvShowImage */
    pictureBox1->Image = gcnew System::Drawing::Bitmap(img2->width, img2->height, img2->widthStep,
        System::Drawing::Imaging::PixelFormat::Format24bppRgb, (System::IntPtr)img2->imageData);
    pictureBox1->Refresh();
}

For Red Color.
private: System::Void timer1_Tick(System::Object^ sender, System::EventArgs^ e)
{
    frame = cvQueryFrame(capture);
    cvCvtColor(frame, img, CV_BGR2GRAY);

    /* Composes a multi-channel array from several single-channel arrays,
       or inserts a single channel into the array. */
    cvMerge(img, img, NULL, NULL, img2);

    /* replacement of cvShowImage */
    pictureBox1->Image = gcnew System::Drawing::Bitmap(img2->width, img2->height, img2->widthStep,
        System::Drawing::Imaging::PixelFormat::Format24bppRgb, (System::IntPtr)img2->imageData);
    pictureBox1->Refresh();
}

Yahuiii.. Easy, isn't it? You can make other colors by changing the cvMerge arguments; tell me what other colors you can get. OK
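If you are not using a Form, the same idea can also be tried as a small console sketch with cvSplit and cvMerge. This is only my own illustration of playing with the channels, not the Form code above:

#include "cv.h"
#include "highgui.h"

int main()
{
    CvCapture* capture = cvCaptureFromCAM(0);
    IplImage*  frame   = cvQueryFrame(capture);

    IplImage* b   = cvCreateImage(cvGetSize(frame), 8, 1);
    IplImage* g   = cvCreateImage(cvGetSize(frame), 8, 1);
    IplImage* r   = cvCreateImage(cvGetSize(frame), 8, 1);
    IplImage* out = cvCreateImage(cvGetSize(frame), 8, 3);

    while (cvWaitKey(10) != 'q')
    {
        frame = cvQueryFrame(capture);
        if (!frame) break;

        cvSplit(frame, b, g, r, NULL);       /* OpenCV camera frames are stored as B, G, R */
        cvZero(out);
        cvMerge(b, NULL, NULL, NULL, out);   /* keep only one channel; swap the arguments
                                                to experiment with other combinations       */
        cvShowImage("one channel", out);
    }

    cvReleaseCapture(&capture);
    cvDestroyAllWindows();
    return 0;
}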
Episode_15_Smart_Camera (haarcascades)
In general, we only know the function of a digital camera as taking pictures or recording images and saving the photographs to digital memory cards. But, keeping up with technology, a camera can be smarter: it can detect our face, eyes, mouth, and full body. Of course, this is very useful for identifying people quickly and accurately, for home security or other purposes.
In electrical engineering, we already know some kinds of sensors such as ultrasonic, LM35, photodiode, etc. How about using a camera as a sensor!! Well, in this episode, I will share my experiment from when I was learning to detect faces with Haar cascades. This method is just for beginners who want to detect faces easily, without using hard artificial-intelligence systems such as eigenfaces and neural networks. So what are you waiting for? Let's try. (Talk Less, Do More)
OK, first you need to copy the haarcascades data into your project before you run your program.
picture 0_haarcascades_data
Here is the source code:
/**
 * Display video from webcam and detect faces
 */
#include "stdio.h"
#include <assert.h>
#include "cv.h"
#include "highgui.h"

CvHaarClassifierCascade *cascade;
CvMemStorage *storage;
int key;

void detectFaces( IplImage *img );

int main( int argc, char** argv )
{
    CvCapture *capture;
    IplImage  *frame;
    char *filename = "haarcascade_frontalface_alt2.xml";

    /* load the classifier
       note that I put the file in the same directory with this code */
    cascade = ( CvHaarClassifierCascade* )cvLoad( filename, 0, 0, 0 );

    /* setup memory buffer; needed by the face detector */
    storage = cvCreateMemStorage( 0 );

    /* initialize camera */
    capture = cvCaptureFromCAM( 0 );
    cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, 320 );
    cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, 240 );

    /* always check */
    assert( cascade && storage && capture );

    /* create a window */
    cvNamedWindow( "video", 1 );

    while( key != 'q' ) {
        /* get a frame */
        frame = cvQueryFrame( capture );

        /* always check */
        if( !frame ) break;

        /* detect faces and display video */
        detectFaces( frame );

        /* quit if user press 'q' */
        key = cvWaitKey( 10 );
    }

    /* free memory */
    cvReleaseCapture( &capture );
    cvDestroyWindow( "video" );
    cvReleaseHaarClassifierCascade( &cascade );
    cvReleaseMemStorage( &storage );

    return 0;
}

void detectFaces( IplImage *img )
{
    int i;

    /* detect faces */
    CvSeq *faces = cvHaarDetectObjects(
        img, cascade, storage,
        1.1, 3, 0 /*CV_HAAR_DO_CANNY_PRUNNING*/,
        cvSize( 40, 40 ) );

    /* for each face found, draw a red box */
    for( i = 0 ; i < ( faces ? faces->total : 0 ) ; i++ ) {
        CvRect *r = ( CvRect* )cvGetSeqElem( faces, i );
        cvRectangle( img,
                     cvPoint( r->x, r->y ),
                     cvPoint( r->x + r->width, r->y + r->height ),
                     CV_RGB( 255, 0, 0 ), 1, 8, 0 );
    }

    /* display video */
    cvShowImage( "video", img );
}
Friday, August 19, 2011
Episode_14_Camera_calibration (Simple Augmented Reality)
Augmented Reality
How about if in this Episode 14 I use Indonesian!! Well, at least to commemorate Indonesian Independence Day.. Merdeka! Merdeka! Even though my Indonesian is just as bad.. Hehe
Do you already know about Augmented Reality?
AR itself is the combination of real and virtual objects in a real environment, running interactively in real time, with integration between objects in three dimensions, i.e. virtual objects integrated into the real world. Combining real and virtual objects is made possible by suitable display technology, interactivity is made possible through certain input devices, and good integration requires effective tracking.
Besides adding virtual objects to a real environment, AR can also potentially remove objects that are already there. Adding a virtual image layer makes it possible to remove or hide the real environment from the user's view. For example, to hide a table in a real environment, a layer representing an empty wall and floor is drawn on top of the image of the real table, hiding the real table from the user's view.
Here, just watch the video:
For the lab guys whose YouTube is blocked, just watch it on Vimeo instead.. poor you, haha
Yup... Cool, isn't it, bro..
Those were AR applications developed with ARToolkit, Flartoolkit, ARTag and Popcode. You can look them up yourself on Google, because here I will focus more on how to make a simple AR easily using OpenCV.
OK, straight to the point, here is what we need:
while(key != 'q')
{
    image = cvQueryFrame( capture );
    if( !image ) break;

    /* look for the chessboard pattern in the frame */
    int found = cvFindChessboardCorners(image, b_size, corners, &corner_count,
                                        CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS);

    cvCvtColor(image, gray, CV_BGR2GRAY);

    /* this function refines the valid group of corners on the gray image */
    cvFindCornerSubPix(gray, corners, corner_count, cvSize(11,11), cvSize(-1,-1),
                       cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));

    printf("corner_count %d \n", corner_count);

    if( corner_count == b_squares )
    {
        /* ...overlay drawing goes here; see the full code in the download file... */
    }
}
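The snippet above is only an excerpt, so it assumes some variables that are declared earlier in the program. A minimal sketch of that setup could look like this; the 7x7 board size and the other values are my assumptions, the real ones are in the download file:

#include <stdio.h>
#include "cv.h"
#include "highgui.h"

int main()
{
    int board_w = 7, board_h = 7;                  /* inner corners per row/column (assumed) */
    CvSize b_size = cvSize(board_w, board_h);
    int b_squares = board_w * board_h;
    CvPoint2D32f corners[7 * 7];                   /* one entry per expected corner */
    int corner_count = 0;
    int key = 0;

    CvCapture* capture = cvCaptureFromCAM(0);
    IplImage*  image   = cvQueryFrame(capture);
    IplImage*  gray    = cvCreateImage(cvGetSize(image), 8, 1);

    /* ... the while loop from the snippet above goes here ... */

    cvReleaseCapture(&capture);
    return 0;
}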
picture 0_SimpAr_using_OpenCV
Download File :
Thursday, August 18, 2011
Episode_13_Camera_CCTV
Another of my projects, created during my vacation time..
I'm just a newbie who wants to learn more about programming.. I hope it can be useful for you.
Required :
- OpenCV 2.1 (see Episode 2)
- Visual C++ 2009
In this project, I learned some new things, such as recording from a stereo camera and recording video.
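As an illustration of the video-recording part, here is a minimal sketch that writes the camera stream to a file with cvCreateVideoWriter. The file name, codec, and frame rate are my assumptions, not necessarily what the project uses:

#include "cv.h"
#include "highgui.h"

int main()
{
    CvCapture* capture = cvCaptureFromCAM(0);
    IplImage*  frame   = cvQueryFrame(capture);

    CvVideoWriter* writer = cvCreateVideoWriter("cctv.avi",
                                                CV_FOURCC('M', 'J', 'P', 'G'),
                                                15,                        /* fps */
                                                cvGetSize(frame), 1);

    while (cvWaitKey(10) != 'q')
    {
        frame = cvQueryFrame(capture);
        if (!frame) break;

        cvWriteFrame(writer, frame);           /* append the frame to the file */
        cvShowImage("recording", frame);
    }

    cvReleaseVideoWriter(&writer);
    cvReleaseCapture(&capture);
    cvDestroyAllWindows();
    return 0;
}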
Download File:
Labels:
Episode_13 : Camera_CCTV