6 results for video database system

in the Dalarna University College Electronic Archive


Relevance:

90.00%

Publisher:

Abstract:

This thesis addresses the broad subject of automatic motion detection and analysis in video surveillance image sequences. Besides proposing a new solution, several previous algorithms are evaluated, some of which turn out to be noticeably complementary. In real-time surveillance, detecting and tracking multiple objects and monitoring their activities in both outdoor and indoor environments are challenging tasks for a video surveillance system. A number of real-world problems limited the scope of this work from the beginning, namely illumination changes, moving backgrounds and shadow detection. An improved background subtraction method is followed by foreground segmentation, data evaluation, shadow detection in the scene and, finally, motion detection. The algorithm is applied to a number of practical problems to observe whether it leads to the expected solution. Several experiments were carried out in different challenging environments. The test results show that, in most of these problematic environments, the proposed algorithm produces better-quality results.
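The abstract outlines a pipeline of background subtraction, shadow detection, foreground segmentation and motion detection. As a minimal illustrative sketch only, the Python/OpenCV fragment below substitutes the library's built-in MOG2 subtractor for the thesis's improved method; the input file name, blob-area threshold and kernel size are hypothetical choices, not values from the thesis.

```python
# Illustrative background-subtraction pipeline (MOG2 stands in for the
# thesis's improved method; thresholds are arbitrary example values).
import cv2

cap = cv2.VideoCapture("surveillance.avi")                    # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                            # 255 = foreground, 127 = shadow
    mask[mask == 127] = 0                                     # suppress detected shadows
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # remove isolated noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                          # keep only sizeable moving blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("motion", frame)
    if cv2.waitKey(30) & 0xFF == 27:                          # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```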

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis project is to develop a traffic sign recognition algorithm that runs in real time. In a real-time environment, vehicles move at high speed on roads, so for a vehicle's intelligent system it becomes essential to detect, process and recognize a traffic sign approaching at high relative velocity at the right time, so that the driver can act promptly on the instructions it gives. The system assists drivers with traffic signs they did not recognize before passing them; with traffic sign recognition, the vehicle becomes aware of the traffic environment and reacts according to the situation. The objective of the project is to develop a system that can recognize traffic signs in real time. The three target parameters are the system's response time in real-time video streaming, the traffic sign recognition speed in still images, and the recognition accuracy. The system consists of three processes: traffic sign detection, traffic sign recognition and traffic sign tracking. The detection process uses physical properties of traffic signs, based on a priori knowledge, to detect road signs, and it generates the road sign image used as input to the recognition process. The recognition process is implemented using a pattern matching algorithm. The system was first tested on stationary images, where it showed on average 97% accuracy with an average processing time of 0.15 seconds for traffic sign recognition. The procedure was then applied to real-time video streaming. Finally, tracking of traffic signs was developed using blob tracking, which achieved an average recognition accuracy of 95% in real time and improved the system's average response time to 0.04 seconds. The project has been implemented in the C language using the Open Computer Vision Library.
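The thesis describes a detection stage based on a priori physical sign properties, a pattern matching recognizer and blob tracking, implemented in C with OpenCV. The fragment below is only a hypothetical Python sketch of the detect-then-match idea: a crude red-colour mask stands in for the property-based detector, and cv2.matchTemplate stands in for the pattern matching step; the HSV ranges, area cut-off and 0.7 acceptance score are invented for the example.

```python
# Hypothetical detect-then-match sketch (not the thesis's C implementation).
import cv2
import numpy as np

def detect_sign_candidates(frame):
    """Return bounding boxes of red-dominant regions as sign candidates."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    low = cv2.inRange(hsv, np.array([0, 100, 80]), np.array([10, 255, 255]))
    high = cv2.inRange(hsv, np.array([170, 100, 80]), np.array([180, 255, 255]))
    mask = cv2.bitwise_or(low, high)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]

def recognize(candidate_bgr, templates):
    """Match a candidate crop against grayscale sign templates; return the best label."""
    gray = cv2.cvtColor(candidate_bgr, cv2.COLOR_BGR2GRAY)
    best_label, best_score = None, -1.0
    for label, template in templates.items():
        resized = cv2.resize(gray, (template.shape[1], template.shape[0]))
        score = cv2.matchTemplate(resized, template, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score > 0.7 else None           # 0.7 is an example cut-off
```

In a full pipeline the recognized sign would then be handed to a tracker (the thesis uses blob tracking), so it does not need to be re-recognized in every frame.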

Relevance:

30.00%

Publisher:

Abstract:

Traffic control signs and destination boards on roadways offer significant information for drivers. Regulation signs tell drivers about speed limits, turns and so on; warning signs warn drivers of conditions ahead to help them avoid accidents; destination signs show distances and directions to various locations; service signs display the locations of hospitals, gas stations, rest areas and so on. Because the signs are so important, and because there is always some distance between a sign and the driver, the driver should be able to take in this information clearly and easily even in bad weather or other difficult situations. The idea is to develop software which collects useful information from a camera mounted at the front of a moving car, extracts the important information and finally shows it to the driver. For example, when a frame contains a destination sign board with text such as "Linkoping 50", the software should extract every character of "Linkoping 50" and compare each one with the known character data in the database; if an extracted character matches, say, "k" in the database, the characters are assembled and the destination name is output and shown to the driver. In this project C++ will be used to write the code for this software.
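As a rough illustration of the character extraction and database matching step described above, the sketch below (in Python rather than the project's C++) segments a binarised sign-board crop into character patches and compares each patch pixel-wise against a small in-memory "database" of character bitmaps; the Otsu thresholding, the 50-pixel size filter and the 16x24 patch size are assumptions made for the example.

```python
# Hypothetical character segmentation and matching sketch (the project itself uses C++).
import cv2
import numpy as np

def segment_characters(board_bgr):
    """Threshold a sign-board crop and return per-character binary patches, left to right."""
    gray = cv2.cvtColor(board_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = sorted((cv2.boundingRect(c) for c in contours), key=lambda b: b[0])
    return [binary[y:y + h, x:x + w] for x, y, w, h in boxes if w * h > 50]

def match_character(patch, char_db, size=(16, 24)):
    """Return the database character whose stored bitmap agrees with the patch
    on the most pixels; char_db maps characters to bitmaps resized to `size`."""
    patch = cv2.resize(patch, size, interpolation=cv2.INTER_NEAREST)
    scores = {ch: np.count_nonzero(patch == bitmap) for ch, bitmap in char_db.items()}
    return max(scores, key=scores.get)

# Usage idea: ''.join(match_character(p, char_db) for p in segment_characters(crop))
# would reassemble text such as "Linkoping 50" for display to the driver.
```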

Relevance:

30.00%

Publisher:

Abstract:

The project introduces an application using computer vision for hand gesture recognition. A camera records a live video stream, from which a snapshot is taken with the help of an interface. The system is trained for each of the counting hand gestures (one, two, three, four and five) at least once; after that, a test gesture is presented and the system tries to recognize it. Research was carried out on a number of algorithms that could best differentiate hand gestures, and the diagonal sum algorithm was found to give the highest accuracy rate. In the pre-processing phase, a self-developed algorithm removes the background of each training gesture. The image is then converted into a binary image and the sums along all diagonals of the picture are taken; these sums help in differentiating and classifying the different hand gestures. Previous systems have used data gloves or markers as input; this system imposes no such constraints, and the user can make hand gestures naturally in view of the camera. A completely robust hand gesture recognition system is still under heavy research and development; the implemented system serves as an extendable foundation for future work.
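The diagonal sum feature described above can be sketched in a few lines; the following is a hypothetical NumPy version that assumes the background-removed gesture has already been binarised, and the nearest-profile classification rule is an assumption for the example rather than the thesis's exact decision step.

```python
# Illustrative diagonal-sum feature for a binarised hand-gesture image.
import numpy as np

def diagonal_sums(binary_image):
    """Sum the pixels along every diagonal of the binary image
    (offsets run from the bottom-left to the top-right diagonal)."""
    h, w = binary_image.shape
    return np.array([binary_image.trace(offset=k) for k in range(-h + 1, w)])

def classify(binary_image, reference_profiles):
    """Pick the trained count gesture (1..5) whose stored diagonal-sum profile
    is closest; all profiles are assumed to have the same length."""
    features = diagonal_sums(binary_image)
    return min(reference_profiles,
               key=lambda label: np.linalg.norm(features - reference_profiles[label]))
```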

Relevance:

30.00%

Publisher:

Abstract:

Objective: To design, develop and set up a web-based system that provides clinicians with graphical visualization of the upper limb motor performance (ULMP) of Parkinson's disease (PD) patients.

Background: Sixty-five patients diagnosed with advanced PD used a test battery, implemented on a touch-screen handheld computer, in their home environments over the course of a 3-year clinical study. The test items consisted of objective measures of ULMP obtained through a set of upper limb motor tests (finger tapping and spiral drawing). For the tapping tests, patients were asked to perform alternate tapping of two buttons as fast and as accurately as possible, first using the right hand and then the left hand; the test duration was 20 seconds. For the spiral drawing test, patients traced a pre-drawn Archimedes spiral using the dominant hand, and the test was repeated 3 times per test occasion. In total, the study database consisted of symptom assessments from 10079 test occasions.

Methods: Visualization of ULMP. The web-based system is used by two neurologists to assess the performance of PD patients during the motor tests collected over the course of the study. The system employs animations, scatter plots and time series graphs to visualize the patients' ULMP for the neurologists. Performance during spiral tests is depicted by animating the three spiral drawings, allowing the neurologists to observe, in real time, accelerations, hesitations and sharp changes during the actual drawing process. Tapping performance is visualized with several types of graphs; the information presented includes the distribution of taps over the two buttons, horizontal tap distance versus time, vertical tap distance versus time, and tapping reaction time over the length of the test.

Assessments. Different scales are used by the neurologists to assess the observed impairments. For spiral drawing performance, the neurologists rated, firstly, the 'impairment' on a 0 (no impairment) to 10 (extremely severe) scale; secondly, three kinematic properties, 'drawing speed', 'irregularity' and 'hesitation', on a 0 (normal) to 4 (extremely severe) scale; and thirdly, the probable 'cause' of the impairment, choosing among Tremor, Bradykinesia/Rigidity and Dyskinesia. For tapping performance, a 0 (normal) to 4 (extremely severe) scale is used to rate four tapping properties, 'tapping speed', 'accuracy', 'fatigue' and 'arrhythmia', and then the 'global tapping severity' (GTS). To establish a common basis for assessment, one neurologist (DN) initially performed preliminary ratings by browsing the database to collect and rate at least 20 samples of each GTS level and at least 33 samples of each 'cause' category. These preliminary ratings were then reviewed by the two neurologists (DN and PG) and used as templates for the subsequent ratings. In a separate track, the system randomly selected one test occasion per patient and visualized its items, that is tapping and spiral drawings, to the two neurologists.

Statistical methods. Inter-rater agreement was assessed using the weighted Kappa coefficient. The internal consistency of the properties of the tapping and spiral drawing tests was assessed using Cronbach's α. A one-way ANOVA followed by Tukey's multiple comparisons test was used to test whether the mean scores of the tapping and spiral drawing properties differed among GTS and 'cause' categories, respectively.
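Purely as an illustration of the agreement statistics named in the Methods (weighted Kappa and Cronbach's α), a minimal sketch follows; it assumes scikit-learn and NumPy, and the helper names and input shapes are hypothetical, not taken from the study.

```python
# Illustrative computation of the inter-rater and internal-consistency statistics.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def weighted_kappa(rater_a, rater_b):
    """Linearly weighted Cohen's kappa for two raters' ordinal scores."""
    return cohen_kappa_score(rater_a, rater_b, weights="linear")

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)
```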
Results: When rating tapping graphs, the inter-rater agreements (Kappa) were as follows: GTS (0.61), 'tapping speed' (0.89), 'accuracy' (0.66), 'fatigue' (0.57) and 'arrhythmia' (0.33). The poor inter-rater agreement when assessing 'arrhythmia' may be a result of the two raters attending to different aspects of the graphs. When rating animated spirals, the two raters had very good agreement when assessing the severity of the spiral drawings, that is 'impairment' (0.85) and 'irregularity' (0.72). However, agreement was poor when assessing 'cause' (0.38) and time-related properties such as 'drawing speed' (0.25) and 'hesitation' (0.21). The tapping properties, that is 'tapping speed', 'accuracy', 'fatigue' and 'arrhythmia', had satisfactory internal consistency, with a Cronbach's α coefficient of 0.77. In general, the mean scores of the tapping properties worsened with increasing levels of GTS. The mean scores of the four properties were significantly different from each other, although only at certain levels. In contrast to the tapping properties, the kinematic properties of spirals, that is 'drawing speed', 'irregularity' and 'hesitation', had questionable internal consistency, with a coefficient of 0.66. Bradykinetic spirals were associated with more impaired speed (mean = 83.7% worse, P < 0.001) and hesitation (mean = 77.8% worse, P < 0.001) than dyskinetic spirals. These two 'cause' categories had similar mean scores for 'impairment' and 'irregularity'.

Conclusions: In contrast to current approaches used in clinical settings for the assessment of PD symptoms, this system enables clinicians to animate, easily and realistically, the ULMP of patients who remain in their homes. Dynamic access to visualized motor tests may also be useful when observing and evaluating therapy-related complications such as under- and over-medication. In the future, we foresee using these manual ratings to develop and validate computer methods that automate the assessment of the ULMP of PD patients.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To define and evaluate a computer vision (CV) method for scoring Paced Finger-Tapping (PFT) in Parkinson's disease (PD) using quantitative motion analysis of the index fingers, and to compare the obtained scores with the UPDRS (Unified Parkinson's Disease Rating Scale) finger-tapping (FT) item.

Background: Naked-eye evaluation of PFT in clinical practice gives only coarse resolution for determining PD status. In addition, sensor-based mechanisms for PFT evaluation may cause patients discomfort. In order to avoid the cost and effort of applying wearable sensors, a CV system for non-invasive PFT evaluation is introduced.

Methods: A database of 221 PFT videos from 6 PD patients was processed. The subjects were instructed to position their hands above their shoulders, beside the face, and to tap the index finger against the thumb consistently and with speed, facing a pivoted camera during recording. The videos were rated by two clinicians on symptom levels 0 to 3 using the UPDRS-FT scale. The CV method incorporates a motion analyzer and a face detector. It detects the face of the subject in each video frame and splits the frame into two images at the centre of the face rectangle. A region of interest is located in each image to detect the index-finger motion of the left and right hands, respectively. Tracking the opening and closing phases of the dominant hand's index finger produces a tapping time series, which is normalized by the face height. This normalization calibrates the amplitude of the tapping signal, which is affected by the varying distance between the camera and the subject (the farther the camera, the smaller the amplitude). A total of 15 features were classified using a K-nearest-neighbour (KNN) classifier to characterize the symptom levels of the UPDRS-FT; the target ratings provided by the raters were averaged.

Results: A 10-fold cross-validation of the KNN classified the 221 videos into the 3 symptom levels with 75% accuracy. An area under the receiver operating characteristic curve of 82.6% supports the feasibility of using the obtained features to replicate clinical assessments.

Conclusions: The system is able to track index-finger motion to estimate tapping symptoms in PD. It has certain advantages compared with other technologies for PFT evaluation (e.g. magnetic sensors, accelerometers) for improving and automating the ratings.
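A hypothetical sketch of the per-frame processing described in the Methods (face detection, splitting the frame at the centre of the face rectangle and normalizing by face height), plus the KNN call, is given below; the Haar cascade detector, its parameters and the placeholder training call are assumptions for illustration and are not taken from the study.

```python
# Illustrative per-frame processing for the paced finger-tapping analysis.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def split_at_face(frame):
    """Detect the face, split the frame into left/right halves at the centre of
    the face rectangle, and return the face height for amplitude normalization."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])      # keep the largest face
    cx = x + w // 2
    return frame[:, :cx], frame[:, cx:], h                  # left half, right half, face height

# KNN over per-video feature vectors (the study extracts 15 features per video);
# the fit/predict calls below are placeholders showing the intended shapes.
knn = KNeighborsClassifier(n_neighbors=5)
# knn.fit(train_features, averaged_ratings)   # shapes: (n_videos, 15), (n_videos,)
# predicted_level = knn.predict(test_features)
```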