Lunagariya, Jaydeep
CECS-553 Machine Vision
Spring 2014
Project Description
Many computer vision applications depend on information about the lines in an image. Manual extraction of line information from an image can be exhausting and time-consuming, especially when the image contains many lines. An automatic method is desirable, but the task is not as trivial as edge detection, since one also has to determine which edge points belong to which line. The Hough transform makes this separation possible and is the method I have used in my program for line detection.
In this project, issues regarding the Hough transform for line detection are considered. The first several sections cover the theory of the Hough transform; the final section discusses an implementation of the Hough transform for line detection and presents the resulting images. The program, images, and figures for this project were produced using MATLAB.
Project Background: Theory of the Hough Transform
The Hough transform (HT) is a powerful global method for detecting edges. It transforms between the Cartesian space and a parameter space in which a straight line (or other boundary formulation) can be defined. Line detection with the Hough transform rests on point-line duality. Consider the case where we have straight lines in an image. For every point (x_i, y_i) in that image, all the straight lines passing through that point satisfy Eq. (1) for varying values of the slope and intercept (m, c); see Fig 1.

y_i = m x_i + c    Eq. (1)
Fig 1: Lines through a point in the Cartesian domain. Fig 2: The (m, c) domain.
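Although the project itself is implemented in MATLAB, the accumulator voting implied by this point-line duality can be sketched in Python/NumPy. This is an illustrative sketch, not the project's code; the test line y = 2x + 3 and the parameter grids are made-up values. Each edge point votes for every (m, c) pair consistent with Eq. (1), so collinear points pile their votes into a single accumulator cell.

```python
import numpy as np

def hough_lines_mc(points, m_vals, c_vals):
    # Accumulator over the (m, c) parameter space of Eq. (1):
    # each edge point (x, y) votes along the line c = y - m*x.
    acc = np.zeros((len(m_vals), len(c_vals)), dtype=int)
    c_min = c_vals[0]
    c_step = c_vals[1] - c_vals[0]
    for x, y in points:
        for i, m in enumerate(m_vals):
            c = y - m * x
            j = int(round((c - c_min) / c_step))
            if 0 <= j < len(c_vals):
                acc[i, j] += 1  # one vote per (point, slope) pair
    return acc

# Edge points sampled from the line y = 2x + 3
pts = [(x, 2 * x + 3) for x in range(10)]
m_vals = np.linspace(-5, 5, 101)    # candidate slopes, step 0.1
c_vals = np.linspace(-10, 10, 201)  # candidate intercepts, step 0.1
acc = hough_lines_mc(pts, m_vals, c_vals)
i, j = np.unravel_index(np.argmax(acc), acc.shape)
# The peak cell recovers (m, c) near (2, 3) with one vote per point
```

Note that the (m, c) parameterization cannot represent vertical lines (m goes to infinity), which is why practical implementations usually switch to the (rho, theta) normal form.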
Now if we reverse our variables and look instead at the values of (m, c) as a function of the image poin...
... Input Image (B2bomber.bmp) Fig 7: Converted Gray Image
Compute the threshold value of an input image.
The histogram of the input image is computed to select a threshold value for the converted gray image. MATLAB's 'imhist(...)' function is used to generate the histogram. An appropriate threshold value is selected and then applied to the image to threshold it. Fig 8 and Fig 9 show an example of such images.
Fig 8: Histogram of the Gray Image Fig 9: Image after Thresholding
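The report selects the threshold by inspecting the histogram; the step can be sketched in Python/NumPy as follows. As an assumption (not what the report actually does), the sketch substitutes Otsu's automatic method for the manual selection, and the small synthetic image is an illustrative stand-in for the gray image.

```python
import numpy as np

def otsu_threshold(gray):
    # Pick the threshold that maximizes between-class variance
    # over the 256-bin histogram (Otsu's method; used here only
    # as a stand-in for the manual selection in the report).
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    cum_w = np.cumsum(hist)                       # pixels below t
    cum_mean = np.cumsum(hist * np.arange(256))   # intensity mass below t
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = cum_w[t - 1]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal test image: dark background, bright square object
gray = np.zeros((64, 64), dtype=np.uint8)
gray[16:48, 16:48] = 200
t = otsu_threshold(gray)
binary = gray >= t  # thresholded (binary) image
```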
Apply edge detection to the selected image using different gradient kernels (Sobel, Prewitt, and Roberts), or other methods such as Canny or zero crossings.
MATLAB's 'edge(...)' function is used to detect edges in the input image, with various options for the method argument (e.g. 'Sobel', 'Canny', 'Prewitt', 'zerocross'). An example of detected edges is shown in Fig 10.
The flow of the operation starts with capturing the image, after which the image is processed to detect the borders as well as the golf balls. Using image processing based on the RGB values of the white and non-white image, the system can distinguish whether an object is a golf ball or not. The logic of the system is created using neural networks and is programmed so that the robot can determine how far away the golf ball is and whether the object is actually a golf ball. The dimples of the golf ball are also one of the considerations used in creating the system's logic. The person can set the boundaries for the robot to move around in by laying down blue or red tape for the robot to sense. Border detection and avoidance are used to prevent the robot from leaving the prescribed area.
Retinal vessel segmentation is important for the diagnosis of numerous eye diseases and plays an important role in automatic retinal disease screening systems. Automatic segmentation of retinal vessels and characterization of morphological attributes such as width, length, tortuosity, branching pattern and angle are utilized for the diagnosis of different cardiovascular and ophthalmologic diseases. Manual segmentation of retinal blood vessels is a long and tedious task which also requires training and skill. It is commonly accepted by the medical community that automatic quantification of retinal vessels is the first step in the development of a computer-assisted diagnostic system for ophthalmic disorders. A large number of algorithms for retinal vasculature segmentation have been proposed. The algorithms can be classified as pattern recognition techniques, matched filtering, vessel tracking, mathematical morphology, multiscale approaches, and model based approaches. The first paper on retinal blood vessel segmentation appeared in 1989 by Chaudhuri et al. [21]...
This work uses diagonal and zigzag lines on the limbs and branches of the trees. These are good line types to use since they resemble nature. Curved lines are
...omated detection of lines and points in the images and the use of smart markers in reference video recordings.
The proposed multimodel segmentation was tested with almost all combinations of mass shapes and margins in CC and MLO views, and the segmented abnormal region was verified against ground-truth images in the DDSM database in which the abnormality was marked by a radiologist. Feature extraction methods and a classifier still have to be developed for a fully automated diagnostic CAD system. Further study is needed to test the algorithm on the segmentation of microcalcifications.
Feature extraction on the basis of principal lines: every palm print has several principal lines, and feature extraction based on them is quite useful for a palm print recognition system.
Large-width Gaussian masks lower the detector's sensitivity to noise, but the localization error in the detected edges also increases with the mask width, so very wide masks are not preferred. Step 2: After the initial pre-processing steps of smoothing and noise removal, the edge strength is calculated by taking the gradient of the image. For edge detection in an image, the Sobel operator first performs a 2-D spatial gradient measurement using convolution masks. The masks are of size 3×3: one is used to calculate the horizontal gradient (Gx) and the other the vertical gradient (Gy). The approximate absolute edge strength, |G| = |Gx| + |Gy|, can then be calculated at each point.
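The 3×3 gradient measurement can be sketched in Python/NumPy as follows. This is an illustrative sketch only; the 8×8 step-edge test image is a made-up example, and the loop-based filtering is written for clarity rather than speed.

```python
import numpy as np

# 3x3 Sobel masks for the horizontal (Gx) and vertical (Gy) gradients
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_gradient(img):
    # Slide both masks over the image (valid region only) and
    # approximate the absolute edge strength |G| = |Gx| + |Gy|.
    # (For these antisymmetric masks, flipping the kernel for true
    # convolution only changes the sign, which |.| removes.)
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            win = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * KX)
            gy[i, j] = np.sum(win * KY)
    return np.abs(gx) + np.abs(gy)

# Vertical step edge: edge strength peaks along the boundary columns
img = np.zeros((8, 8))
img[:, 4:] = 1.0
strength = sobel_gradient(img)
```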
The extraction of the depth map includes three parts: image block motion extraction, color segmentation, and depth-map average fusion.
Fisher discriminants find the projection line that best separates the classes of points. To identify an input test image, the projected test image is compared to each projected training image, and the test image is identified as the closest training image (Zhao, Chellappa & Phillips, 1999).
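The projection-and-nearest-neighbor identification described above can be sketched as follows. This is a minimal two-class Python/NumPy illustration; the class means, random data, and seed are assumptions, not anything from the cited work.

```python
import numpy as np

def fisher_direction(X0, X1):
    # Fisher discriminant direction w = Sw^-1 (mu1 - mu0),
    # where Sw is the pooled within-class scatter matrix.
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    return np.linalg.solve(Sw, mu1 - mu0)

def classify(test, train, labels, w):
    # Project everything onto w; identify the test sample as the
    # label of the closest projected training sample.
    proj_train = train @ w
    return labels[np.argmin(np.abs(proj_train - test @ w))]

rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], 0.3, (20, 2))   # class 0 samples
X1 = rng.normal([2, 2], 0.3, (20, 2))   # class 1 samples
w = fisher_direction(X0, X1)
train = np.vstack([X0, X1])
labels = np.array([0] * 20 + [1] * 20)
result = classify(np.array([1.9, 2.1]), train, labels, w)
```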
For the feature extraction part, authors have proposed geometric and appearance-based methods to extract facial feature points, and some authors have also stated that combining both approaches increases accuracy compared to a system that uses only one. This suggested to the project that there is no harm in applying the two approaches together.
Patil et al. (2010) [10] suggested using K-means image segmentation, provided the number of clusters is estimated accurately. They proposed a phase-congruency-based edge detection method to estimate the number of clusters. A threshold and Euclidean distance are used as the similarity measure for forming clusters, and K-means produces the final segmentation of the image. Experiments were performed in MATLAB, and the results show that the number of clusters is accurate and
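The K-means clustering step of such a pipeline can be sketched as follows. This Python/NumPy sketch is illustrative only: it replaces the phase-congruency estimate of the cluster count with a fixed k, and the two-population pixel data is made up.

```python
import numpy as np

def kmeans_gray(pixels, k, iters=20, seed=0):
    # Plain K-means on gray-level values, with Euclidean distance
    # as the similarity measure; in the cited method, k would come
    # from the phase-congruency edge-detection step.
    rng = np.random.default_rng(seed)
    centers = rng.choice(pixels, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels, centers

# Two-population test: dark (10) and bright (200) pixel values
pix = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
labels, centers = kmeans_gray(pix, k=2)
```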
It then applied the circular Hough transform to the template. This is assumed to be effective for images with high specular reflections, but since the Hough transform uses a brute-force voting approach, it is computationally intensive.
Correlation-based method: it uses richer gray-scale information. It overcomes the problems of the above method and can work with poor-quality data, but it has some problems of its own, such as the localization of points.
...zontal edges in the blurred image. The next stage is non-maximum suppression, an edge-thinning technique. The Canny operator then traces edges through thresholding. Differential edge detection can also be used to obtain edges; its result is shown in Fig 2.4.
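The non-maximum suppression stage mentioned above can be sketched as follows. This Python/NumPy sketch is an assumption-laden illustration (quantized gradient directions, a made-up ridge image), not the fragment's actual implementation.

```python
import numpy as np

def non_max_suppression(mag, angle):
    # Edge thinning: keep a pixel only if its gradient magnitude is
    # a local maximum along the (quantized) gradient direction.
    h, w = mag.shape
    out = np.zeros_like(mag)
    ang = (np.rad2deg(angle) + 180) % 180  # fold direction into [0, 180)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:      # roughly horizontal gradient
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                   # roughly 45 degrees
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                  # roughly vertical gradient
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                            # roughly 135 degrees
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]
    return out

# Thick vertical ridge: only the strongest column survives thinning
mag = np.zeros((5, 5))
mag[:, 1] = 1.0
mag[:, 2] = 3.0
mag[:, 3] = 1.0
angle = np.zeros((5, 5))  # gradient points horizontally everywhere
thin = non_max_suppression(mag, angle)
```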
or roads - you should note small markings such as trees or pathways, anything to help you