903 results for Computer vision industry
Abstract:
For the last two decades, researchers have been working on developing systems that assist drivers in the best way possible and make driving safer. Computer vision has played a crucial part in the design of these systems. With the introduction of vision techniques, various autonomous and robust real-time traffic automation systems have been designed, such as traffic monitoring, traffic-related parameter estimation and intelligent vehicles. Among these, automatic detection and recognition of road signs has become an interesting research topic: such a system can alert drivers to signs they do not recognize before passing them. The aim of this research project is to present an intelligent road sign recognition system based on a state-of-the-art technique, the Support Vector Machine. The project is an extension of the work done at the ITS research platform at Dalarna University [25]. The focus of this research work is on the recognition of the road signs under analysis. When classifying an image, its location, size and orientation in the image plane are irrelevant features, and one way to remove this ambiguity is to extract features that are invariant under these transformations. The invariant features are then fed to a Support Vector Machine for classification. The Support Vector Machine is a supervised learning machine that solves problems in higher dimensions with the help of kernel functions and is best known for classification problems.
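For illustration only, the sketch below pairs one possible choice of invariant features (Hu moments, which are invariant to translation, scale and in-plane rotation) with a kernel SVM from scikit-learn; the feature choice, file names and labels are assumptions, not the thesis's actual implementation.

```python
# Illustrative sketch only: Hu moments are one choice of invariant features;
# the thesis's exact features and data are not specified here.
import cv2
import numpy as np
from sklearn.svm import SVC

def invariant_features(image_path):
    """Return log-scaled Hu moments, invariant to translation, scale and rotation."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)  # compress dynamic range

# Hypothetical training data: image paths and their sign labels.
train_paths, train_labels = ["stop_01.png", "yield_01.png"], ["stop", "yield"]
X = np.array([invariant_features(p) for p in train_paths])

clf = SVC(kernel="rbf", gamma="scale")   # kernelized SVM classifier
clf.fit(X, train_labels)
print(clf.predict([invariant_features("unknown_sign.png")]))
```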
Abstract:
The objective of this thesis work is to propose an algorithm to detect the faces in a digital image with a complex background. A lot of work has already been done in the area of face detection, but a drawback of some face detection algorithms is their inability to detect faces with closed eyes and an open mouth. Facial features therefore form an important basis for detection. The current thesis work focuses on the detection of faces based on facial objects. The procedure is composed of three phases: a segmentation phase, a filtering phase and a localization phase. In the segmentation phase, the algorithm uses color segmentation to isolate human skin color based on its chrominance properties. In the filtering phase, Minkowski-addition-based object removal (morphological operations) is used to remove the non-skin regions. In the last phase, image processing and computer vision methods are used to find facial components in the skin regions. This method is effective at detecting a face region with closed eyes, an open mouth or a half-profile face. The experimental results demonstrate that the detection accuracy is around 85.4% and that detection is faster than with the neural network method and other techniques.
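A minimal sketch of the segmentation and filtering phases, assuming chrominance thresholds in the YCrCb space and a morphological opening/closing step (the exact thresholds and structuring element are not given in the abstract):

```python
# Illustrative sketch: skin pixels are selected by chrominance (Cr/Cb)
# thresholds, then small non-skin blobs are removed with morphological
# operations. Threshold values and the input file are assumptions.
import cv2
import numpy as np

def skin_mask(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Commonly used chrominance bounds for skin; not the thesis's exact values.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop small regions
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask

image = cv2.imread("group_photo.jpg")   # hypothetical input image
candidates = skin_mask(image)           # candidate skin regions for the localization phase
```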
Abstract:
The project introduces an application that uses computer vision for hand gesture recognition. A camera records a live video stream, from which a snapshot is taken with the help of an interface. The system is trained for each type of count gesture (one, two, three, four and five) at least once. After that, a test gesture is given to it and the system tries to recognize it. Research was carried out on a number of algorithms that could best differentiate hand gestures, and the diagonal sum algorithm was found to give the highest accuracy rate. In the preprocessing phase, a self-developed algorithm removes the background of each training gesture. The image is then converted into a binary image and the sums of all diagonal elements of the picture are taken; these sums help in differentiating and classifying the different hand gestures. Previous systems have used data gloves or markers as input to the system. The system presented here has no such constraints: the user can make hand gestures in view of the camera naturally. A completely robust hand gesture recognition system is still under heavy research and development; the implemented system serves as an extendible foundation for future work.
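A hedged sketch of the diagonal-sum idea described above, assuming pre-segmented grayscale gesture images of a common size; the resolution, file names and nearest-template classification rule are illustrative choices:

```python
# Illustrative sketch: binarize the (already background-removed) gesture image,
# sum every diagonal of the binary matrix, and label a test gesture with the
# closest stored training signature.
import cv2
import numpy as np

def diagonal_signature(gray, size=(64, 64)):
    gray = cv2.resize(gray, size)
    _, binary = cv2.threshold(gray, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    h, w = binary.shape
    # One sum per diagonal, from the lower-left corner to the upper-right.
    return np.array([np.trace(binary, offset=k) for k in range(-h + 1, w)])

def classify(test_gray, templates):
    """templates: dict mapping count label -> stored diagonal signature."""
    signature = diagonal_signature(test_gray)
    return min(templates, key=lambda label: np.linalg.norm(signature - templates[label]))

# Hypothetical usage: one training image per count gesture (1..5).
templates = {n: diagonal_signature(cv2.imread(f"count_{n}.png", 0)) for n in range(1, 6)}
print(classify(cv2.imread("test_gesture.png", 0), templates))
```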
Abstract:
Parkinson's disease (PD) is an increasingly common neurological disorder in an aging society. The motor and non-motor symptoms of PD advance with disease progression and occur with varying frequency and duration. In order to ascertain the full extent of a patient's condition, repeated assessments are necessary to adjust medical prescription. In clinical studies, symptoms are assessed using the Unified Parkinson's Disease Rating Scale (UPDRS). On one hand, subjective rating using the UPDRS relies on clinical expertise. On the other hand, it requires the physical presence of patients in clinics, which implies high logistical costs. Another limitation of clinical assessment is that observation in hospital may not accurately represent a patient's situation at home. For such reasons, the practical frequency of tracking PD symptoms may under-represent the true time scale of PD fluctuations and may result in an overall inaccurate assessment. Current technologies for at-home PD treatment are based on data-driven approaches for which the interpretation and reproduction of results are problematic. The overall objective of this thesis is to develop and evaluate unobtrusive computer methods for enabling remote monitoring of patients with PD. It investigates novel signal and image processing techniques, based on first-principles and data-driven models, for the extraction of clinically useful information from audio recordings of speech (texts read aloud) and video recordings of gait and finger-tapping motor examinations. The aim is to map between PD symptom severities estimated using the novel computer methods and the clinical ratings based on UPDRS part III (motor examination). A web-based test battery system consisting of self-assessment of symptoms and motor function tests was previously constructed for a touch-screen mobile device. A comprehensive speech framework has been developed for this device to analyze text-dependent running speech by: (1) extracting novel signal features that are able to represent PD deficits in each individual component of the speech system, (2) mapping between clinical ratings and feature estimates of speech symptom severity, and (3) classifying between UPDRS part III severity levels using speech features and statistical machine-learning tools. A novel speech processing method called cepstral separation difference showed a stronger ability to classify between speech symptom severities than existing features of PD speech. In the case of finger tapping, the recorded videos of rapid finger-tapping examinations were processed using a novel computer vision (CV) algorithm that extracts symptom information from video-based tapping signals using motion analysis of the index finger and incorporates a face detection module for signal calibration. This algorithm was able to discriminate between UPDRS part III severity levels of finger tapping with high classification rates. Further analysis was performed on novel CV-based gait features, constructed using a standard human model, to discriminate between a healthy gait and a Parkinsonian gait. The findings of this study suggest that symptom severity levels in PD can be discriminated with high accuracy by combining first-principles (features) and data-driven (classification) approaches. On one hand, the processing of audio and video recordings allows remote monitoring of speech, gait and finger-tapping examinations by clinical staff. On the other hand, the first-principles approach eases the understanding of symptom estimates for clinicians. We have demonstrated that the selected features of speech, gait and finger tapping were able to discriminate between symptom severity levels, as well as between healthy controls and PD patients, with high classification rates. The findings support the suitability of these methods as decision support tools in the context of PD assessment.
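As a purely hypothetical illustration of deriving a tapping signal from video (not the thesis's calibrated algorithm, which uses index-finger motion analysis and face detection), the following sketch measures frame-to-frame change inside a fixed region of interest; the ROI, file name and peak rule are assumptions:

```python
# Hypothetical sketch: derive a tapping signal from a finger-tapping video by
# measuring frame-to-frame change inside a fixed region of interest (ROI).
import cv2
import numpy as np

def tapping_signal(video_path, roi=(100, 100, 200, 200)):
    """Return a 1-D motion-energy signal from the ROI (x, y, width, height)."""
    x, y, w, h = roi
    capture = cv2.VideoCapture(video_path)
    previous, signal = None, []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)[y:y + h, x:x + w]
        if previous is not None:
            # Mean absolute difference approximates finger motion energy.
            signal.append(np.mean(cv2.absdiff(gray, previous)))
        previous = gray
    capture.release()
    return np.array(signal)

signal = tapping_signal("finger_tapping.mp4")   # hypothetical recording
# Count local maxima above the mean as a rough tap count.
peaks = (signal[1:-1] > signal[:-2]) & (signal[1:-1] > signal[2:]) & (signal[1:-1] > signal.mean())
print("estimated number of taps:", int(peaks.sum()))
```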
Abstract:
Background: Previous assessment methods for Parkinson gait (PG) recognition used sensors worn by the patient, which may cause discomfort. In order to avoid the stress of applying wearable sensors, computer vision (CV) based diagnostic systems for PG recognition have been proposed. The main constraints of these methods are the laboratory setup procedures: novel colored dresses for the patients were specifically designed to segment the test body from a specific colored background. Objective: To develop an image processing tool for home assessment of PG by analyzing motion cues extracted during the gait cycles. Methods: The system is based on the idea that a normal body attains equilibrium during gait by aligning the body posture with the axis of gravity. Due to rigidity in muscular tone, persons with PD fail to align their bodies with the axis of gravity; the leaned posture of PD patients appears to fall forward, whereas a normal posture remains erect throughout the gait. Patients with PD walk with a shortened stride angle (less than 15 degrees on average) and high variability in stride frequency, whereas a normal gait exhibits a constant stride frequency with an average stride angle of 45 degrees. In order to analyze PG, levodopa-responsive patients and normal controls were videotaped over several gait cycles. First, the test body is segmented in each frame of the gait video, based on pixel contrast from the background, to form a silhouette. Next, the center of gravity of this silhouette is calculated. The silhouette is then skeletonized to extract the motion cues. Two motion cues were used: the stride frequency, based on the cyclic leg motion, and the lean frequency, based on the angle between the tangent of the leaned torso and the axis of gravity. The differences in the peaks of the stride and lean frequencies between PG and normal gait were calculated using cosine similarity measurements. Results: High cosine dissimilarity was observed in the stride and lean frequencies between PG and normal gait. High variation was found in the stride intervals of PG, whereas constant stride intervals were found in the normal gait. Conclusions: We propose an algorithm as a means to eliminate laboratory constraints and discomfort during PG analysis. Installing this tool on a home computer with a webcam allows assessment of gait in the home environment.
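The final comparison step can be illustrated with a short sketch, assuming per-frame stride-angle signals are already available; the signals below are synthetic stand-ins, not data from the study:

```python
# Illustrative sketch: compare the frequency content of two gait signals
# (e.g., per-frame stride angle) with cosine similarity. The signals are
# synthetic stand-ins for cues extracted from skeletonized silhouettes.
import numpy as np

def spectrum(signal):
    """Magnitude spectrum of a zero-mean gait signal."""
    return np.abs(np.fft.rfft(signal - np.mean(signal)))

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

frames = np.arange(300) / 25.0                                    # 12 s of video at 25 fps
normal_stride = 45 * np.abs(np.sin(2 * np.pi * 1.0 * frames))     # ~45 deg, steady cadence
parkinson_stride = 12 * np.abs(np.sin(2 * np.pi * 1.3 * frames))  # shortened, different cadence

similarity = cosine_similarity(spectrum(normal_stride), spectrum(parkinson_stride))
print(f"cosine similarity of stride spectra: {similarity:.2f}")
```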
Abstract:
The national railway administrations in Scandinavia, Germany and Austria mainly resort to manual inspections to control vegetation growth along railway embankments. Manually inspecting railways is slow and time consuming. A more worrying aspect is that human observers are often unable to estimate the true cover of vegetation on railway embankments. Furthermore, human observers often disagree with each other when more than one observer is engaged in an inspection. The lack of proper techniques to identify the true cover of vegetation even results in excess usage of herbicides, seriously harming the environment and threatening the ecology. Hence, the work in this study investigated aspects of human variation and agreement in order to recommend better inspection routines. This was studied through two separate yet related investigations. First, thirteen observers were separately asked to estimate the vegetation cover in nine images acquired (in nadir view) over the railway tracks. All estimates were compared relatively, and an analysis of variance showed a significant difference in the observers' cover estimates (p<0.05). Bearing in mind the differences between the observers, a second follow-up field study on the railway tracks was initiated and investigated. Two railway segments (strata) representing different levels of vegetation were carefully selected. Five sample plots (each covering an area of one by one meter) were randomized from each stratum along the rails of the aforementioned segments, and ten images were acquired in nadir view. Three observers (with knowledge of the railway maintenance domain) were then separately asked to estimate the plant cover by visually examining the plots. Again, an analysis of variance showed a significant difference in the observers' cover estimates (p<0.05), confirming the result of the first investigation. The differences in observations were compared against a computer vision algorithm that detects the "true" cover of vegetation in a given image, defined as the amount of greenish pixels detected in each image. The results of this comparison strongly indicate that inconsistency is prevalent among the estimates reported by the observers. Hence, an automated approach using computer vision is suggested, thus turning manual inspections into objective, monitored inspections.
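A minimal sketch of detecting "greenish" pixels, assuming an excess-green index with a fixed threshold; the study's actual detector and threshold are not specified in the abstract:

```python
# Illustrative sketch of "true cover" as the fraction of greenish pixels,
# here using the excess-green index (ExG = 2g - r - b) on normalized
# chromaticities. The threshold and input file are assumptions.
import cv2
import numpy as np

def vegetation_cover(image_path, threshold=0.05):
    bgr = cv2.imread(image_path).astype(np.float32)
    total = bgr.sum(axis=2) + 1e-6
    b, g, r = bgr[..., 0] / total, bgr[..., 1] / total, bgr[..., 2] / total
    exg = 2 * g - r - b          # excess-green index per pixel
    green = exg > threshold      # greenish pixels
    return green.mean()          # cover as a fraction of the image area

print(f"estimated vegetation cover: {vegetation_cover('plot_01.jpg'):.1%}")
```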
Abstract:
This paper presents a computer-vision-based, marker-free method for gait-impairment detection in patients with Parkinson's disease (PWP). The system is based upon the idea that a normal human body attains equilibrium during gait by aligning the body posture with the axis of gravity (AOG), using the feet as the base of support. In contrast, PWP appear to be falling forward, as they are less able to align their body with the AOG due to rigid muscular tone. A normal gait exhibits periodic stride cycles with a stride angle of around 45° between the legs, whereas PWP walk with a shortened stride angle and high variability between the stride cycles. In order to analyze Parkinsonian gait (PG), subjects were videotaped over several gait cycles. The subject's body was segmented using a color-segmentation method to form a silhouette, and the silhouette was skeletonized for motion-cue extraction. The motion cues analyzed were stride cycles (based on the cyclic leg motion of the skeleton) and posture lean (based on the angle between the leaned torso of the skeleton and the AOG). Cosine similarity between an imaginary perfect gait pattern and the subjects' gait patterns produced a 100% recognition rate of PG for 4 normal controls and 3 PWP. The results suggest that the method is a promising tool for PG assessment in the home environment.
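For illustration, the geometry of the two motion cues can be sketched as below, assuming 2-D skeleton joint coordinates are available; the joint positions are made up and this is not the paper's implementation:

```python
# Illustrative geometry only: stride angle between the legs and torso lean
# relative to the axis of gravity, given 2-D skeleton joint coordinates
# (image coordinates, y pointing down). Joint values are made up.
import numpy as np

def angle_between(u, v):
    """Angle in degrees between two 2-D vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

hip, left_ankle, right_ankle = np.array([320, 240]), np.array([280, 420]), np.array([370, 415])
neck = np.array([310, 120])

stride_angle = angle_between(left_ankle - hip, right_ankle - hip)
lean_angle = angle_between(neck - hip, np.array([0, -1]))   # axis of gravity points up in the image

print(f"stride angle: {stride_angle:.1f} deg, torso lean: {lean_angle:.1f} deg")
```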
Abstract:
Point pattern matching in Euclidean spaces is one of the fundamental problems in pattern recognition, with applications ranging from computer vision to computational chemistry. Whenever two complex patterns are encoded by two sets of points identifying their key features, their comparison can be seen as a point pattern matching problem. This work proposes a single approach to both exact and inexact point set matching in Euclidean spaces of arbitrary dimension. In the case of exact matching, it is guaranteed to find an optimal solution; for inexact matching (when noise is involved), experimental results confirm the validity of the approach. We start by regarding point pattern matching as a weighted graph matching problem. We then formulate the weighted graph matching problem as one of Bayesian inference in a probabilistic graphical model. By exploiting the existence of fundamental constraints in patterns embedded in Euclidean spaces, we prove that for exact point set matching a simple graphical model is equivalent to the full model. It is possible to show that exact probabilistic inference in this simple model has polynomial time complexity with respect to the number of elements in the patterns to be matched. This gives rise to a technique that, for exact matching, provably finds a global optimum in polynomial time for any dimensionality of the underlying Euclidean space. Computational experiments comparing this technique with well-known probabilistic relaxation labeling show significant performance improvements for inexact matching. The proposed approach is significantly more robust under augmentation of the sizes of the involved patterns, and in the absence of noise the results are always perfect.
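As a point of reference only (and explicitly not the graphical-model method proposed here), a common baseline for point correspondence when the two sets are already roughly aligned is optimal assignment under Euclidean cost via the Hungarian algorithm; the points below are synthetic:

```python
# Simple baseline for point correspondence, not the thesis's method: optimal
# one-to-one assignment under Euclidean cost between two roughly aligned sets.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
pattern = rng.uniform(0, 10, size=(6, 2))                                    # key-point pattern
observed = pattern[rng.permutation(6)] + rng.normal(0, 0.05, size=(6, 2))    # shuffled + noise

cost = cdist(pattern, observed)               # pairwise Euclidean distances
rows, cols = linear_sum_assignment(cost)      # optimal matching (Hungarian algorithm)
print("matches (pattern index -> observed index):", list(zip(rows, cols)))
print("total matching cost:", cost[rows, cols].sum())
```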
Abstract:
Telecommunications worldwide have advanced by leaps and bounds in offering new technologies and standards that enable and add flexibility to the transmission and reception of information between people and the Internet. In particular, with regard to ubiquity, the use of wireless mobile communication devices, such as cell phones and PDAs (Personal Digital Assistants), has allowed companies to reach their customers anytime and anywhere. Many wireless communication standards have emerged, first in the mobile telephony industry and then in the computer and PDA industries, enabling wireless broadband data communication and mobile electronic commerce (m-commerce). In particular, the Wi-Fi standard has spread worldwide through the expansion of public wireless networks (PWLANs). As a result, telecommunications equipment manufacturers, fixed and mobile telephony operators, and even Internet access providers have shown great interest in this area, perceiving new opportunities to increase revenue through Wi-Fi technology. All of these aspects of Wi-Fi's recent history have raised questions about its future success and its real generation of sustainable competitive advantage, notwithstanding the fact that the volume of business related to this technology is growing rapidly. This work sets out to analyze the Brazilian Wi-Fi PWLAN services market, identifying the main players and the business models they practice, and comparing these models to those identified by Shubar and Lechner. The study also aims to evaluate such companies and their respective business models according to the VRIO framework developed by Barney, based on the resource-based view (RBV) of strategy.
Abstract:
Computer vision is a field that uses techniques to acquire, process, analyze and understand images of the real world in order to produce numeric or symbolic information in the form of decisions [1]. This project aims to use computer vision to build an app that analyzes a Madeira wine and characterizes it (identifies its variety) by its color; dry or sweet, young or old wines each have a specific color. The app compares histograms in order to analyze images taken of a test sample inside a special container designed for this purpose. Color analysis of a wine sample from an image captured by a smartphone can be difficult: many factors affect the captured image, such as lighting conditions and the background behind the sample container, which varies with the many positions from which the photo can be taken (capturing against a white wall differs from capturing facing the floor, for example). Using new technologies such as 3D printing, it was possible to create a prototype that controls the effect of these external factors on the captured image. The results of this experiment are good indicators for future work. Although more tests are necessary, the first tests had a success rate of 80% to 90% of correct results. This report documents the development of the project and all the techniques and steps required to execute the tests.
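A minimal sketch of the histogram-comparison step, assuming hue/saturation histograms compared by correlation; the reference file names, the grape varieties used as labels and the bin counts are illustrative assumptions:

```python
# Illustrative sketch: build hue/saturation histograms of the wine image and
# of per-variety reference images, then pick the closest one by correlation.
import cv2

def hs_histogram(image_path):
    hsv = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

# Hypothetical reference images, one per variety label.
references = {variety: hs_histogram(f"{variety}.jpg")
              for variety in ("sercial", "verdelho", "boal", "malvasia")}
sample = hs_histogram("test_sample.jpg")

scores = {variety: cv2.compareHist(sample, hist, cv2.HISTCMP_CORREL)
          for variety, hist in references.items()}
print("closest variety by color histogram:", max(scores, key=scores.get))
```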