164 results for Computer aided analysis, Machine vision, Video surveillance
Abstract:
The relationship between coronal knee laxity and the restraining properties of the collateral ligaments remains unknown. This study investigated correlations between the structural properties of the collateral ligaments and stress angles used in computer-assisted total knee arthroplasty (TKA), measured with an optically based navigation system. Ten fresh-frozen cadaveric knees (mean age: 81 ± 11 years) were dissected to leave the menisci, cruciate ligaments, posterior joint capsule and collateral ligaments. The resected femur and tibia were rigidly secured within a test system which permitted kinematic registration of the knee using a commercially available image-free navigation system. Frontal plane knee alignment and varus-valgus stress angles were acquired. The force applied during varus-valgus testing was quantified. Medial and lateral bone-collateral ligament-bone specimens were then prepared, mounted within a uni-axial materials testing machine, and extended to failure. Force and displacement data were used to calculate the principal structural properties of the ligaments. The mean varus laxity was 4 ± 1° and the mean valgus laxity was 4 ± 2°. The corresponding mean manual force applied was 10 ± 3 N and 11 ± 4 N, respectively. While measures of knee laxity were independent of the ultimate tensile strength and stiffness of the collateral ligaments, there was a significant correlation between the force applied during stress testing and the instantaneous stiffness of the medial (r = 0.91, p = 0.001) and lateral (r = 0.68, p = 0.04) collateral ligaments. These findings suggest that clinicians may perceive a rate of change of ligament stiffness as the end-point during assessment of collateral knee laxity.
Abstract:
Machine vision is emerging as a viable sensing approach for mid-air collision avoidance (particularly for small to medium aircraft such as unmanned aerial vehicles). In this paper, using relative entropy rate concepts, we propose and investigate a new change detection approach that uses hidden Markov model filters to sequentially detect aircraft manoeuvres from morphologically processed image sequences. Experiments using simulated and airborne image sequences illustrate the performance of our proposed algorithm in comparison to other sequential change detection approaches applied to this application.
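The core of the approach above is an HMM filter run over the processed image sequence. As a generic illustration (not the paper's implementation; the function name and the two-state example are assumptions), one recursion of the standard HMM forward filter looks like this:

```python
def hmm_filter_step(prior, A, likelihoods):
    """One recursion of an HMM forward filter: predict the state
    distribution with transition matrix A, correct it with the
    observation likelihoods, then renormalise."""
    n = len(prior)
    predicted = [sum(prior[i] * A[i][j] for i in range(n)) for j in range(n)]
    posterior = [p * l for p, l in zip(predicted, likelihoods)]
    z = sum(posterior)  # normalising constant (observation likelihood)
    return [p / z for p in posterior]

# Two hypothetical states: "straight flight" vs "manoeuvre".
prior = [0.5, 0.5]
A = [[0.9, 0.1],
     [0.1, 0.9]]
posterior = hmm_filter_step(prior, A, likelihoods=[0.9, 0.1])
```

A change-detection scheme of this kind would monitor a statistic of the filtered posterior (the paper uses relative entropy rate concepts) and declare a manoeuvre when it crosses a threshold.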
Abstract:
Multiplayer Dynamic Difficulty Adjustment (mDDA) is a method of reducing the difference in player performance and subsequent challenge in competitive multiplayer video games. As a balance between player skill and challenge experienced is necessary for optimal player experience, this experimental study investigates the effects of mDDA, and of awareness of its presence, on player performance and experience using subjective and biometric measures. Early analysis indicates that mDDA normalizes performance and challenge as expected, but awareness of its presence can reduce its effectiveness.
Abstract:
Age-related macular degeneration (AMD) affects the central vision and subsequently may lead to visual loss in people over 60 years of age. There is no permanent cure for AMD, but early detection and successive treatment may improve the visual acuity. AMD is mainly classified into dry and wet types; dry AMD is more common in the aging population. AMD is characterized by drusen, yellow pigmentation, and neovascularization. These lesions are examined through visual inspection of retinal fundus images by ophthalmologists, which is laborious, time-consuming, and resource-intensive. Hence, in this study, we have proposed an automated AMD detection system using discrete wavelet transform (DWT) and feature ranking strategies. The first four statistical moments (mean, variance, skewness, and kurtosis), energy, entropy, and Gini index-based features are extracted from DWT coefficients. We have used five feature ranking strategies (t test, Kullback–Leibler Divergence (KLD), Chernoff Bound and Bhattacharyya Distance, receiver operating characteristic curve-based, and Wilcoxon) to identify the optimal feature set. A set of supervised classifiers, namely support vector machine (SVM), decision tree, k-nearest neighbor (k-NN), Naive Bayes, and probabilistic neural network, were used to evaluate the highest performance measure using the minimum number of features in classifying normal and dry AMD classes. The proposed framework obtained an average accuracy of 93.70 %, sensitivity of 91.11 %, and specificity of 96.30 % using KLD ranking and the SVM classifier. We have also formulated an AMD Risk Index using the selected features to classify the normal and dry AMD classes with a single number. The proposed system can be used to assist clinicians and also in mass AMD screening programs.
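The per-subband features named above (moments, energy, entropy) are standard quantities computed over wavelet coefficients. As a minimal sketch of how they might be computed — the function names are assumptions, and a real pipeline would first obtain the coefficients from a DWT library such as PyWavelets:

```python
import math

def statistical_moments(coeffs):
    """First four statistical moments (mean, variance, skewness,
    kurtosis) of a 1-D list of, e.g., DWT subband coefficients."""
    n = len(coeffs)
    mean = sum(coeffs) / n
    var = sum((c - mean) ** 2 for c in coeffs) / n
    std = math.sqrt(var)
    skew = sum(((c - mean) / std) ** 3 for c in coeffs) / n if std else 0.0
    kurt = sum(((c - mean) / std) ** 4 for c in coeffs) / n if std else 0.0
    return mean, var, skew, kurt

def shannon_entropy(coeffs, eps=1e-12):
    """Entropy of the normalised energy distribution of the coefficients."""
    energies = [c * c for c in coeffs]
    total = sum(energies) or eps
    probs = [e / total for e in energies]
    return -sum(p * math.log(p + eps) for p in probs)
```

Each subband would contribute one row of such features, which the ranking strategies (t test, KLD, etc.) then order by discriminative power.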
Abstract:
This thesis addressed issues that have prevented qualitative researchers from using thematic discovery algorithms. The central hypothesis evaluated whether allowing qualitative researchers to interact with thematic discovery algorithms and incorporate domain knowledge improved their ability to address research questions and trust the derived themes. Non-negative Matrix Factorisation and Latent Dirichlet Allocation find latent themes within document collections, but these algorithms are rarely used because qualitative researchers do not trust, and cannot interact with, the themes that are automatically generated. The research determined the types of interactivity that qualitative researchers require and then evaluated interactive algorithms that matched these requirements. Theoretical contributions included the articulation of design guidelines for interactive thematic discovery algorithms, the development of an Evaluation Model and a Conceptual Framework for Interactive Content Analysis.
Abstract:
One main challenge in developing a system for visual surveillance event detection is the annotation of target events in the training data. By making use of the assumption that events of security interest are often rare compared to regular behaviours, this paper presents a novel approach using Kullback-Leibler (KL) divergence for rare event detection in a weakly supervised learning setting, where only clip-level annotation is available. It will be shown that this approach outperforms state-of-the-art methods on a popular real-world dataset, while preserving real-time performance.
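The KL divergence at the heart of this approach is a standard asymmetric measure between two discrete distributions. As a generic illustration (the histograms and threshold below are invented for the example, not taken from the paper):

```python
import math

def kl_divergence(p, q, eps=1e-10):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions, smoothed with eps to avoid log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical clip-level feature histograms:
normal_model = [0.7, 0.2, 0.1]   # distribution learned from regular clips
rare_clip    = [0.1, 0.1, 0.8]   # clip dominated by an unusual feature bin

score = kl_divergence(rare_clip, normal_model)
is_rare_event = score > 1.0      # illustrative decision threshold
```

A clip whose feature distribution diverges strongly from the model of regular behaviour receives a high score, and only a clip-level label (not pixel- or frame-level localisation) is needed to set the threshold.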
Abstract:
Power line inspection is a vital function for electricity supply companies but it involves labor-intensive and expensive procedures which are tedious and error-prone for humans to perform. A possible solution is to use an unmanned aerial vehicle (UAV) equipped with video surveillance equipment to perform the inspection. This paper considers how a small, electrically driven rotorcraft conceived for this application could be controlled by visually tracking the overhead supply lines. A dynamic model for a ducted-fan rotorcraft is presented and used to control the action of an Air Vehicle Simulator (AVS), consisting of a cable-array robot. Results show how visual data can be used to determine, and hence regulate in closed loop, the simulated vehicle’s position relative to the overhead lines.
Abstract:
Diabetic macular edema (DME) is one of the most common causes of visual loss among diabetes mellitus patients. Early detection and successive treatment may improve the visual acuity. DME is mainly graded into non-clinically significant macular edema (NCSME) and clinically significant macular edema according to the location of hard exudates in the macula region. DME can be identified by manual examination of fundus images. It is laborious and resource intensive. Hence, in this work, automated grading of DME is proposed using higher-order spectra (HOS) of Radon transform projections of the fundus images. We have used third-order cumulants and bispectrum magnitude, in this work, as features, and compared their performance. They can capture subtle changes in the fundus image. Spectral regression discriminant analysis (SRDA) reduces feature dimension, and minimum redundancy maximum relevance method is used to rank the significant SRDA components. Ranked features are fed to various supervised classifiers, viz. Naive Bayes, AdaBoost and support vector machine, to discriminate No DME, NCSME and clinically significant macular edema classes. The performance of our system is evaluated using the publicly available MESSIDOR dataset (300 images) and also verified with a local dataset (300 images). Our results show that HOS cumulants and bispectrum magnitude obtained an average accuracy of 95.56 and 94.39 % for MESSIDOR dataset and 95.93 and 93.33 % for local dataset, respectively.
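The third-order cumulant features mentioned above are sample statistics of the (zero-mean) Radon projections. As a minimal sketch of the underlying estimator — the function name is an assumption, and the paper computes these over Radon transform projections rather than an arbitrary signal:

```python
def third_order_cumulant(x, tau1, tau2):
    """Sample third-order cumulant E[x(n) x(n+tau1) x(n+tau2)]
    of a 1-D signal (e.g. one Radon projection), after mean removal."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]          # centre the signal
    upper = n - max(tau1, tau2, 0)      # valid range of n for the lags
    return sum(xc[i] * xc[i + tau1] * xc[i + tau2]
               for i in range(upper)) / upper
```

Such cumulants vanish for symmetric signals, which is why they (and the bispectrum, their frequency-domain counterpart) can capture the subtle asymmetries that exudates introduce into the projections.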
Abstract:
It is not uncommon to hear a person of interest described by their height, build, and clothing (i.e. type and colour). These semantic descriptions are commonly used by people to describe others, as they are quick to communicate and easy to understand. However, such queries are not easily utilised within intelligent video surveillance systems, as they are difficult to transform into a representation that can be utilised by computer vision algorithms. In this paper we propose a novel approach that transforms such a semantic query into an avatar in the form of a channel representation that is searchable within a video stream. We show how spatial, colour and prior information (person shape) can be incorporated into the channel representation to locate a target using a particle-filter-like approach. We demonstrate state-of-the-art performance for locating a subject in video based on a description, achieving a relative performance improvement of 46.7% over the baseline. We also apply this approach to person re-detection, and show that the approach can be used to re-detect a person in a video stream without the use of person detection.
Abstract:
Video surveillance infrastructure has been widely installed in public places for security purposes. However, live video feeds are typically monitored by human staff, making the detection of important events as they occur difficult. As such, an expert system that can automatically detect events of interest in surveillance footage is highly desirable. Although a number of approaches have been proposed, they have significant limitations: supervised approaches, which can detect a specific event, ideally require a large number of samples with the event spatially and temporally localised; while unsupervised approaches, which do not require this demanding annotation, can only detect whether an event is abnormal and not specific event types. To overcome these problems, we formulate a weakly-supervised approach using Kullback-Leibler (KL) divergence to detect rare events. The proposed approach leverages the sparse nature of the target events to its advantage, and we show that this data imbalance guarantees the existence of a decision boundary to separate samples that contain the target event from those that do not. This trait, combined with the coarse annotation used by weakly supervised learning (that only indicates approximately when an event occurs), greatly reduces the annotation burden while retaining the ability to detect specific events. Furthermore, the proposed classifier requires only a decision threshold, simplifying its use compared to other weakly supervised approaches. We show that the proposed approach outperforms state-of-the-art methods on a popular real-world traffic surveillance dataset, while preserving real time performance.
Abstract:
This paper summarises the development of a machine-readable model series for explaining Gaudi's use of ruled surface geometry in the Sagrada Familia in Barcelona, Spain. The first part discusses the modeling methods underlying the columns of the cathedral and the techniques required to translate them into built structures. The second part discusses the design and development of a tangible machine-readable model to explain column-modeling methods interactively in educational contexts such as art exhibitions. It is designed to explain the principles underlying the column design by means of physical interaction without using mathematical terms or language.
Abstract:
John Frazer's architectural work is inspired by living and generative processes. Both evolutionary and revolutionary, it explores information ecologies and the dynamics of the spaces between objects. Fuelled by an interest in the cybernetic work of Gordon Pask and Norbert Wiener, and the possibilities of the computer and the "new science" it has facilitated, Frazer and his team of collaborators have conducted a series of experiments that utilize genetic algorithms, cellular automata, emergent behaviour, complexity and feedback loops to create a truly dynamic architecture. Frazer studied at the Architectural Association (AA) in London from 1963 to 1969, and later became unit master of Diploma Unit 11 there. He was subsequently Director of Computer-Aided Design at the University of Ulster - a post he held while writing An Evolutionary Architecture in 1995 - and a lecturer at the University of Cambridge. In 1983 he co-founded Autographics Software Ltd, which pioneered microprocessor graphics. Frazer was awarded a personal chair at the University of Ulster in 1984. In Frazer's hands, architecture becomes machine-readable, formally open-ended and responsive. His work as computer consultant to Cedric Price's Generator Project of 1976 (see p84) led to the development of a series of tools and processes; these have resulted in projects such as the Calbuild Kit (1985) and the Universal Constructor (1990). These subsequent computer-orientated architectural machines are makers of architectural form beyond the full control of the architect-programmer. Frazer makes much reference to the multi-celled relationships found in nature, and their ongoing morphosis in response to continually changing contextual criteria.
He defines the elements that describe his evolutionary architectural model thus: "A genetic code script, rules for the development of the code, mapping of the code to a virtual model, the nature of the environment for the development of the model and, most importantly, the criteria for selection." In setting out these parameters for designing evolutionary architectures, Frazer goes beyond the usual notions of architectural beauty and aesthetics. Nevertheless his work is not without an aesthetic: some pieces are a frenzy of mad wire, while others have a modularity that is reminiscent of biological form. Algorithms form the basis of Frazer's designs. These algorithms determine a variety of formal results dependent on the nature of the information they are given. His work, therefore, is always dynamic, always evolving and always different. Designing with algorithms is also critical to other architects featured in this book, such as Marcos Novak (see p150). Frazer has made an unparalleled contribution to defining architectural possibilities for the twenty-first century, and remains an inspiration to architects seeking to create responsive environments. Architects were initially slow to pick up on the opportunities that the computer provides. These opportunities are both representational and spatial: computers can help architects draw buildings and, more importantly, they can help architects create varied spaces, both virtual and actual. Frazer's work was groundbreaking in this respect, and well before its time.
Abstract:
Searching for humans lost in vast stretches of ocean has always been a difficult task. This paper investigates a machine vision system that addresses this problem by exploiting the useful properties of alternate colour spaces. In particular, the paper investigates the fusion of colour information from the HSV, RGB, YCbCr and YIQ colour spaces within the emission matrix of a Hidden Markov Model tracker to enhance video based maritime target detection. The system has shown promising results. The paper also identifies challenges still needing to be met.
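The colour-space fusion above relies on standard conversions out of RGB. As one concrete example, the ITU-R BT.601 full-range RGB-to-YCbCr conversion (one of the spaces the tracker fuses) can be written as:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB (0-255) -> YCbCr conversion.
    Y carries luma; Cb/Cr carry chroma, centred on 128."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

# White maps to maximum luma with neutral chroma:
y, cb, cr = rgb_to_ycbcr(255, 255, 255)
```

Separating luma from chroma in this way is precisely what makes such spaces useful for detecting small coloured targets against sea clutter; how the per-space detections are weighted inside the HMM emission matrix is specific to the paper and not shown here.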
Abstract:
The LCADesign software package is a real-time environmental impact calculator for commercial property that works directly from the building designer's model. It enables developers, building designers, architects, engineers, builders, manufacturers and government bodies to optimise the eco-impact of a building as the design model evolves, instead of waiting months for expert analysis. By integrating with the Building Information Models (BIMs) generated by 3D computer-aided drafting, LCADesign builds eco-efficiency into the design stage and measures the environmental values and risks of materials in commercial buildings.
Abstract:
This paper discusses the preliminary findings of an ongoing research project aimed at developing a technological, operational and strategic analysis of adopting BIM in the AEC/FM (Architecture-Engineering-Construction/Facility Management) industry as a collaboration tool. Outcomes of the project will provide specifications and guidelines as well as establish industry standards for implementing BIM in practice. This research primarily focuses on BIM model servers as a collaboration platform, and hence the guidelines are aimed at enhancing collaboration capabilities. This paper reports on the findings from: (1) a critical review of the latest BIM literature and commercial applications, and (2) workshops with focus groups on changing work practice, the role of technology, and current perceptions and expectations of BIM. The layout of the case studies being undertaken is presented. These findings provide a base to develop comprehensive software specifications and national guidelines for BIM, with particular emphasis on BIM model servers as collaboration platforms.