993 results for target classification


Relevance: 30.00%

Abstract:

Gender information may serve to automatically modulate interaction to user needs, among other applications. Within the Computer Vision community, gender classification (GC) has mainly been accomplished with the facial pattern. Periocular biometrics has recently attracted researchers' attention, with successful results in the context of identity recognition. However, there is a lack of experimental evaluation of the periocular pattern for GC in the wild. The aim of this paper is to study the performance of this specific facial area on the most challenging large dataset currently available for the problem.

Relevance: 30.00%

Abstract:

Unconscious perception is commonly described as a phenomenon that is not under intentional control and relies on automatic processes. We challenge this view by arguing that some automatic processes may indeed be under intentional control, which is implemented in task-sets that define how the task is to be performed. In consequence, those prime attributes that are relevant to the task will be most effective. To investigate this hypothesis, we used a paradigm which has been shown to yield reliable short-lived priming in tasks based on semantic classification of words. This type of study uses fast, well-practised classification responses, whereby responses to targets are much less accurate if prime and target belong to different categories than if they belong to the same category. In three experiments, we investigated whether the intention to classify the same words with respect to different semantic categories had a differential effect on priming. The results suggest that this was indeed the case: priming varied with the task in all experiments. However, although participants reported not seeing the primes, they were able to classify the primes better than chance using the classification task they had used before with the targets. When a lexical task was used for discrimination in experiment 4, masked primes could not, however, be discriminated. Also, priming was as pronounced when the primes were visible as when they were invisible. The pattern of results suggests that participants had intentional control over prime processing, even if they reported not seeing the primes.

Relevance: 30.00%

Abstract:

The modified American College of Cardiology/American Heart Association (ACC/AHA) lesion morphology classification scheme has prognostic impact for early and late outcomes when bare-metal stents are used. Its value after drug-eluting stent placement is unknown. The predictive value of this lesion morphology classification system in patients treated using sirolimus-eluting stents included in the German Cypher Registry was prospectively examined. The study population included 6,755 patients treated for 7,960 lesions using sirolimus-eluting stents. Lesions were classified as type A, B1, B2, or C. Lesion type A or B1 was considered simple (35.1%), and type B2 or C, complex (64.9%). The combined end point of all deaths, myocardial infarction, or target vessel revascularization was seen in 2.6% versus 2.4% in the complex and simple groups, respectively (p = 0.62) at initial hospital discharge, with a trend for higher rates of myocardial infarction in the complex group. At the 6-month clinical follow-up and after adjusting for other independent factors, the composite of cumulative death, myocardial infarction, and target vessel revascularization was nonsignificantly different between groups (11.4% vs 11.2% in the complex and simple groups, respectively; odds ratio 1.08, 95% confidence interval 0.8 to 1.46). This was also true for target vessel revascularization alone (8.3% of the complex group, 9.0% of the simple group; odds ratio 0.87, 95% confidence interval 0.72 to 1.05). In conclusion, the modified ACC/AHA lesion morphology classification system has some value in determining early complications after sirolimus-eluting stent implantation. Clinical follow-up results at 6 months were generally favorable and cannot be adequately differentiated on the basis of this lesion morphology classification scheme.
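The registry results above are reported as odds ratios with 95% confidence intervals. As a reminder of how such figures are derived, here is a minimal sketch computing an odds ratio and its Wald confidence interval from a 2x2 table; the counts in the usage comment are hypothetical, not the registry's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = events in group 1,  b = non-events in group 1,
    c = events in group 2,  d = non-events in group 2."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) under the Wald approximation
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# e.g. odds_ratio_ci(40, 360, 30, 320)  -- hypothetical event counts
```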

Relevance: 30.00%

Abstract:

To deliver sample estimates provided with the necessary probability foundation to permit generalization from the sample data subset to the whole target population being sampled, probability sampling strategies are required to satisfy three necessary, but not sufficient, conditions: (i) all inclusion probabilities must be greater than zero in the target population to be sampled; if some sampling units have an inclusion probability of zero, then a map accuracy assessment does not represent the entire target region depicted in the map to be assessed. (ii) The inclusion probabilities must be (a) knowable for nonsampled units and (b) known for those units selected in the sample: since the inclusion probability determines the weight attached to each sampling unit in the accuracy estimation formulas, if the inclusion probabilities are unknown, so are the estimation weights. This original work presents a novel (to the best of these authors' knowledge, the first) probability sampling protocol for quality assessment and comparison of thematic maps generated from spaceborne/airborne Very High Resolution (VHR) images, where: (I) an original Categorical Variable Pair Similarity Index (CVPSI, proposed in two different formulations) is estimated as a fuzzy degree of match between a reference and a test semantic vocabulary, which may not coincide, and (II) both symbolic pixel-based thematic quality indicators (TQIs) and sub-symbolic object-based spatial quality indicators (SQIs) are estimated with a degree of uncertainty in measurement, in compliance with the well-known Quality Assurance Framework for Earth Observation (QA4EO) guidelines. Like a decision tree, any protocol (guidelines for best practice) comprises a set of rules, equivalent to structural knowledge, and an order of presentation of the rule set, known as procedural knowledge. The combination of these two levels of knowledge makes an original protocol worth more than the sum of its parts.
The several degrees of novelty of the proposed probability sampling protocol are highlighted in this paper, at the levels of understanding of both structural and procedural knowledge, in comparison with related multi-disciplinary works selected from the existing literature. In the experimental session the proposed protocol is tested for accuracy validation of preliminary classification maps automatically generated by the Satellite Image Automatic Mapper™ (SIAM™) software product from two WorldView-2 images and one QuickBird-2 image provided by DigitalGlobe for testing purposes. In these experiments, collected TQIs and SQIs are statistically valid, statistically significant, consistent across maps and in agreement with theoretical expectations, visual (qualitative) evidence and quantitative quality indexes of operativeness (OQIs) claimed for SIAM™ by related papers. As a subsidiary conclusion, the statistically consistent and statistically significant accuracy validation of the SIAM™ pre-classification maps proposed in this contribution, together with OQIs claimed for SIAM™ by related works, make the operational (automatic, accurate, near real-time, robust, scalable) SIAM™ software product eligible for opening up new inter-disciplinary research and market opportunities in accordance with the visionary goal of the Global Earth Observation System of Systems (GEOSS) initiative and the QA4EO international guidelines.
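Condition (ii) above is what makes inclusion probabilities usable as estimation weights. A minimal sketch of a Horvitz-Thompson-style weighted accuracy estimate, assuming per-unit correctness flags and known inclusion probabilities (function and variable names are illustrative, not the protocol's own):

```python
def ht_accuracy(correct_flags, inclusion_probs):
    """Weighted estimate of overall map accuracy: each sampled unit is
    weighted by the inverse of its inclusion probability, so the sample
    estimate generalizes to the whole target region."""
    if any(p <= 0 for p in inclusion_probs):
        # Condition (i): zero-probability units leave part of the map unassessed
        raise ValueError("all inclusion probabilities must be > 0")
    num = sum(c / p for c, p in zip(correct_flags, inclusion_probs))
    den = sum(1 / p for p in inclusion_probs)
    return num / den
```

Under equal inclusion probabilities this reduces to the plain sample proportion of correctly classified units.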

Relevance: 30.00%

Abstract:

Invasive vertebrate pests together with overabundant native species cause significant economic and environmental damage in the Australian rangelands. Access to artificial watering points, created for the pastoral industry, has been a major factor in the spread and survival of these pests. Existing methods of controlling watering points are mechanical and cannot discriminate between target species. This paper describes an intelligent system of controlling watering points based on machine vision technology. Initial test results clearly demonstrate proof of concept for machine vision in this application. These initial experiments were carried out as part of a 3-year project using machine vision software to manage all large vertebrates in the Australian rangelands. Concurrent work is testing the use of automated gates and innovative laneway and enclosure design. The system will have application in any habitat throughout the world where a resource is limited and can be enclosed for the management of livestock or wildlife.
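The decision logic of such a species-discriminating watering point can be sketched very simply: the machine vision classifier's output gates the automated gate. The species labels and confidence threshold below are illustrative; the paper does not specify them.

```python
def gate_action(predicted_species, confidence,
                allowed=frozenset({"cattle", "sheep"}), threshold=0.9):
    """Decide whether an automated gate at a watering point opens.
    Opens only for confidently identified permitted species; stays
    closed for pest species or uncertain detections."""
    if confidence >= threshold and predicted_species in allowed:
        return "open"
    return "closed"
```

Keeping the gate closed on low-confidence detections is the conservative choice when the cost of admitting a pest exceeds the cost of briefly excluding livestock.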

Relevance: 30.00%

Abstract:

G protein-coupled receptors (GPCRs) play important physiological roles, transducing extracellular signals into intracellular responses. Approximately 50% of all marketed drugs target a GPCR. There remains considerable interest in effectively predicting the function of a GPCR from its primary sequence.
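A common baseline for predicting protein function from primary sequence alone is amino-acid composition features fed to a classifier. A minimal sketch of that feature-extraction step (the work behind this abstract may well use richer features; this is only the simplest representative):

```python
# The 20 standard amino acids, one-letter codes
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """Fraction of each standard amino acid in a sequence: a fixed-length
    20-dimensional feature vector usable by any standard classifier."""
    seq = seq.upper()
    n = len(seq) or 1  # guard against empty input
    return [seq.count(a) / n for a in AMINO_ACIDS]
```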

Relevance: 30.00%

Abstract:

Task classification is introduced as a method for the evaluation of monitoring behaviour in different task situations. On the basis of an analysis of different monitoring tasks, a task classification system comprising four task 'dimensions' is proposed. The perceptual speed and flexibility of closure categories, which are identified with signal discrimination type, comprise the principal dimension in this taxonomy, the others being sense modality, the time course of events, and source complexity. It is also proposed that decision theory provides the most complete method for the analysis of performance in monitoring tasks. Several different aspects of decision theory in relation to monitoring behaviour are described. A method is also outlined whereby both accuracy and latency measures of performance may be analysed within the same decision theory framework. Eight experiments and an organizational study are reported. The results show that a distinction can be made between the perceptual efficiency (sensitivity) of a monitor and his criterial level of response, and that in most monitoring situations, there is no decrement in efficiency over the work period, but an increase in the strictness of the response criterion. The range of tasks exhibiting either or both of these performance trends can be specified within the task classification system. In particular, it is shown that a sensitivity decrement is only obtained for 'speed' tasks with a high stimulation rate. A distinctive feature of 'speed' tasks is that target detection requires the discrimination of a change in a stimulus relative to preceding stimuli, whereas in 'closure' tasks, the information required for the discrimination of targets is presented at the same point in time. In the final study, the specification of tasks yielding sensitivity decrements is shown to be consistent with a task classification analysis of the monitoring literature.
It is also demonstrated that the signal type dimension has a major influence on the consistency of individual differences in performance in different tasks. The results provide an empirical validation for the 'speed' and 'closure' categories, and suggest that individual differences are not completely task specific but are dependent on the demands common to different tasks. Task classification is therefore shown to enable improved generalizations to be made of the factors affecting 1) performance trends over time, and 2) the consistency of performance in different tasks. A decision theory analysis of response latencies is shown to support the view that criterion shifts are obtained in some tasks, while sensitivity shifts are obtained in others. The results of a psychophysiological study also suggest that evoked potential latency measures may provide temporal correlates of criterion shifts in monitoring tasks. Among other results, the finding that the latencies of negative responses do not increase over time is taken to invalidate arousal-based theories of performance trends over a work period. An interpretation in terms of expectancy, however, provides a more reliable explanation of criterion shifts. Although the mechanisms underlying the sensitivity decrement are not completely clear, the results rule out 'unitary' theories such as observing response and coupling theory. It is suggested that an interpretation in terms of the memory and data limitations on information processing provides the most parsimonious explanation of all the results in the literature relating to sensitivity decrement. Task classification therefore enables the refinement and selection of theories of monitoring behaviour in terms of their reliability in generalizing predictions to a wide range of tasks. It is thus concluded that task classification and decision theory provide a reliable basis for the assessment and analysis of monitoring behaviour in different task situations.
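The sensitivity/criterion distinction drawn above is the standard one from signal detection theory. A minimal sketch of how sensitivity d' and criterion c are computed from hit and false-alarm rates:

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Signal detection indices: d' measures perceptual efficiency
    (sensitivity) independently of response bias; criterion c indexes
    the strictness of the observer's response criterion."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c
```

A vigilance decrement that raises c while leaving d' flat is exactly the "stricter criterion, no loss of efficiency" pattern the thesis reports for most monitoring situations.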

Relevance: 30.00%

Abstract:

The move from Standard Definition (SD) to High Definition (HD) represents a sixfold increase in the data that needs to be processed. With expanding resolutions and evolving compression, there is a need for high performance with flexible architectures to allow for quick upgradability. Technology continues to advance in image display resolution, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, with trade-offs among processing performance (achieving specified frame rates on large image data sets), power, and cost constraints. There is a need for new architectures to keep pace with the fast innovations in video and imaging. This work contains dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance. The following outlines the contributions of the dissertation. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on FPGA. (2) We introduce a safe distance factor and develop an algorithm for occlusion occurrence detection during target tracking. A novel mean-threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. Analysis of the method is done on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient threshold approach for accurate classification.
(4) We design a gesture recognition system using a hardware/software co-simulation neural network for high speed and low memory storage requirements provided by the FPGA. We develop an innovative maximum distant algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gesture sets involved in different applications may vary; therefore, it is highly essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
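Contribution (1) globalizes the background-subtraction threshold with a running average mean. As an illustration of the general idea only (not the dissertation's exact RAMT formulation, and on 1-D pixel rows rather than full frames), a sketch:

```python
def detect_targets(frames, alpha=0.05, k=2.5):
    """Running-average background subtraction: the background model is an
    exponentially weighted mean of past frames, and the detection threshold
    adapts to the mean absolute frame difference, so it tracks changing
    indoor/outdoor conditions. alpha and k are illustrative parameters."""
    background = [float(p) for p in frames[0]]
    masks = []
    for frame in frames[1:]:
        diff = [abs(p - b) for p, b in zip(frame, background)]
        threshold = k * sum(diff) / len(diff)  # adaptive global threshold
        masks.append([d > threshold for d in diff])
        # Slowly fold the current frame into the background model
        background = [(1 - alpha) * b + alpha * p
                      for b, p in zip(background, frame)]
    return masks
```

On hardware, the same update reduces to a multiply-accumulate per pixel plus one global mean, which is what makes it attractive for an FPGA pipeline.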

Relevance: 30.00%

Abstract:

Acknowledgements The authors thank the crews, fishers, and scientists who conducted the various surveys from which data were obtained. This work was supported by the Government of South Georgia and South Sandwich Islands. Additional logistical support was provided by The South Atlantic Environmental Research Institute, with thanks to Paul Brickle. PF receives funding from the MASTS pooling initiative (The Marine Alliance for Science and Technology for Scotland), and their support is gratefully acknowledged. MASTS is funded by the Scottish Funding Council (grant reference HR09011) and contributing institutions. SF is funded by the Natural Environment Research Council, and data were provided from the British Antarctic Survey Ecosystems Long-term Monitoring and Surveys programme as part of the BAS Polar Science for Planet Earth Programme. The authors also thank the anonymous referees for their helpful suggestions on an earlier version of this manuscript.

Relevance: 30.00%

Abstract:

A two-stage hybrid model for data classification and rule extraction is proposed. The first stage uses a Fuzzy ARTMAP (FAM) classifier with Q-learning (known as QFAM) for incremental learning of data samples, while the second stage uses a Genetic Algorithm (GA) for rule extraction from QFAM. Given a new data sample, the resulting hybrid model, known as QFAM-GA, is able to provide a prediction pertaining to the target class of the data sample as well as a fuzzy if-then rule to explain the prediction. To reduce the network complexity, a pruning scheme using Q-values is applied to reduce the number of prototypes generated by QFAM. A 'don't care' technique is employed to minimize the number of input features using the GA. A number of benchmark problems are used to evaluate the effectiveness of QFAM-GA in terms of test accuracy, noise tolerance, and model complexity (number of rules and total rule length). The results are comparable with, if not better than, many other models reported in the literature. The main significance of this research is a usable and useful intelligent model (i.e., QFAM-GA) for data classification in noisy conditions with the capability of yielding a set of explanatory rules with minimum antecedents. In addition, QFAM-GA is able to maximize accuracy and minimize model complexity simultaneously. The empirical outcome positively demonstrates the potential impact of QFAM-GA in the practical environment, i.e., providing an accurate prediction with a concise justification pertaining to the prediction to the domain users, therefore allowing domain users to adopt QFAM-GA as a useful decision support tool in assisting their decision-making processes.
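Two of the mechanisms described above, Q-value pruning and 'don't care' antecedents, can be sketched independently of the QFAM-GA internals. The function names, the threshold, and the 'dont_care' sentinel below are illustrative, not the paper's API:

```python
def prune_prototypes(prototypes, q_values, q_min=0.0):
    """Drop prototypes whose learned Q-value falls below a threshold,
    reducing network complexity before rule extraction."""
    return [p for p, q in zip(prototypes, q_values) if q >= q_min]

def rule_to_text(antecedents, target_class):
    """Render a fuzzy if-then rule, skipping 'don't care' features so
    the extracted rule keeps only the minimum antecedents."""
    parts = [f"{feat} is {term}" for feat, term in antecedents.items()
             if term != "dont_care"]
    return f"IF {' AND '.join(parts)} THEN class = {target_class}"
```

Dropping 'don't care' antecedents is what shortens total rule length without touching the prediction itself.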

Relevance: 30.00%

Abstract:

Congenital vertebral malformations are common in brachycephalic "screw-tailed" dog breeds such as French bulldogs, English bulldogs, Boston terriers, and Pugs. These vertebral malformations disrupt normal vertebral column anatomy and biomechanics, potentially leading to deformity of the vertebral column and subsequent neurological dysfunction. The initial aim of this work was to determine whether the congenital vertebral malformations identified in those breeds could be translated into a radiographic classification scheme used in humans, giving an improved classification with clear and well-defined terminology, with the expectation that this would facilitate future study and clinical management in the veterinary field. Two observers who were blinded to the neurologic status of the dogs classified each vertebral malformation based on the human classification scheme of McMaster, and were able to translate them successfully into a new classification scheme for veterinary use. A further aim was to assess the nature and impact of vertebral column deformity engendered by those congenital vertebral malformations in the target breeds. As no gold standard exists in veterinary medicine for the calculation of the degree of deformity, it was elected to adapt the human equivalent, termed the Cobb angle, as a potential standard reference tool for use in veterinary practice. For the validation of the Cobb angle measurement method, a computerised semi-automatic technique was used and assessed by multiple independent observers. They observed not only that kyphosis was the most common vertebral column deformity but also that patients with such deformity were more likely to suffer from neurological deficits, particularly if their Cobb angle was above 35 degrees.
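The Cobb angle is conventionally the angle between the two most tilted vertebral endplates bounding the curve. A minimal sketch of that geometric step, assuming the endplate lines have already been extracted from the radiograph as 2-D direction vectors (the thesis's semi-automatic measurement pipeline is not reproduced here):

```python
import math

def cobb_angle(endplate_a, endplate_b):
    """Angle in degrees between two vertebral endplate lines, each given
    as a 2-D direction vector (x, y). Returns the acute angle between
    the lines, the conventional Cobb measurement."""
    ax, ay = endplate_a
    bx, by = endplate_b
    dot = ax * bx + ay * by
    na = math.hypot(ax, ay)
    nb = math.hypot(bx, by)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
    return min(angle, 180 - angle)  # lines have no direction, take acute angle

# Per the finding above, cobb_angle(a, b) > 35 would flag the deformities
# most associated with neurological deficits.
```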