Abstract:
The earliest stages of human cortical visual processing can be conceived of as the extraction of local stimulus features. However, more complex visual functions, such as object recognition, require the integration of multiple features. Recently, the neural processes underlying feature integration in the visual system have been under intensive study. A specialized mid-level stage preceding the object recognition stage has been proposed to account for the processing of contours, surfaces, shapes and configuration. This thesis consists of four experimental, psychophysical studies on human visual feature integration. Two of the studies used the classification image technique, a recently developed psychophysical reverse correlation method in which visual noise is added to near-threshold stimuli. By investigating the relationship between the random features in the noise and the observer's perceptual decision on each trial, it is possible to estimate which features of the stimuli are critical for the task. The method allows the critical features used in a psychophysical task to be visualized directly as a spatial correlation map, yielding an effective "behavioral receptive field". Visual context is known to modulate the perception of stimulus features. Some of these interactions are quite complex, and it is not known whether they reflect early or late stages of perceptual processing. The first study investigated the mechanisms of collinear facilitation, where nearby collinear Gabor flankers increase the detectability of a central Gabor. The behavioral receptive field of the mechanism mediating detection of the central Gabor was measured with the classification image method. The results show that collinear flankers extend the behavioral receptive field for the central Gabor in the direction of the flankers. The increased sensitivity at the ends of the receptive field suggests a low-level explanation for the facilitation. The second study investigated how visual features are integrated into percepts of surface brightness, using a novel variant of the classification image method with a brightness matching task. Many theories assume that perceived brightness is based on the analysis of luminance border features; here, this assumption was directly tested for the first time. The classification images show that the perceived brightness of both an illusory Craik-O'Brien-Cornsweet stimulus and a real uniform step stimulus depends solely on the border. Moreover, the spatial tuning of the features remains almost constant when the stimulus size is changed, suggesting that brightness perception is based on the output of a single spatial frequency channel. The third and fourth studies investigated global form integration in random-dot Glass patterns. In these patterns, a global form can be perceived immediately, even if only a small proportion of the random dots are paired into dipoles according to a geometrical rule. The third study measured the discrimination of orientation structure in highly coherent concentric and Cartesian (straight) Glass patterns. The results showed that global form was discriminated more efficiently in concentric patterns. The fourth study investigated how form detectability depends on the global regularity of the Glass pattern, with local structure that was either Cartesian or curved. Randomizing the local orientation deteriorated performance only for the curved pattern.
The results support the idea that curved and Cartesian patterns are processed in at least partially separate neural systems.
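As an aside for readers unfamiliar with the technique, the core computation behind a classification image can be sketched in a few lines: average the noise fields over "yes" and "no" trials and take the difference. The sketch below is a minimal illustration on simulated data, not the thesis's actual analysis pipeline; all names are illustrative.

```python
import numpy as np

def classification_image(noise_fields, responses):
    """Estimate a classification image (behavioral receptive field).

    noise_fields: (n_trials, height, width) noise added on each trial.
    responses: boolean (n_trials,), True where the observer said "yes".
    Returns mean(noise | yes) - mean(noise | no): pixels that pushed
    the decision toward "yes" show up positive.
    """
    return (noise_fields[responses].mean(axis=0)
            - noise_fields[~responses].mean(axis=0))

# Simulated observer whose decision tracks the noise energy in a central
# patch; the resulting map recovers that patch as the "receptive field".
rng = np.random.default_rng(0)
noise = rng.normal(size=(5000, 32, 32))
responses = noise[:, 12:20, 12:20].mean(axis=(1, 2)) > 0
ci = classification_image(noise, responses)
```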
Abstract:
Goals: This study aims to map the effect of interrogative function on the intonation of spontaneous and read Finnish. Earlier research shows that the most prominent feature of Finnish question intonation is an appeal to the listener. Question-word questions typically start with a high peak which is followed by falling intonation. In yes/no questions, F0 remains at a high level until the word carrying sentence stress and then falls. Final rises are mainly found in intonation clichés such as "Ai mitä?" ("What?"). These earlier results are based on read speech and enacted dialogues. In this study, questions and statements found in spontaneous dialogues were compared, and these utterances were also compared with read versions of the same utterances. Fundamental frequency values were compared using a mixed model. Contours were also grouped using auditory and visual inspection, making it possible to compare the frequencies of contour types across utterance types and speech styles. The position of questions in the F0 distribution of the whole material was also investigated. Method: The material consisted of four spontaneous dialogues and their read versions. The speakers were young adults from the Helsinki metropolitan area, four female and four male. The whole material was first divided into broad dialogue function categories arising from the material, and F0 curves were calculated for each category. After this, 277 questions and 244 statements were selected for closer inspection. Values reflecting F0 distribution and contour shape were measured from the F0 contours of these utterances. A mixed model was used to analyse the differences, with utterance type, question type, speech style and speaker gender as fixed effects. The frequencies of F0 contour types were compared using a chi-square test. Additional material came from eight young female speakers in central Finland. Results and conclusions: In the mixed model analysis, significant differences were found both between questions and statements and between spontaneous and read speech. Generally, utterance type affected the variables reflecting contour type, while speech style affected the variables reflecting F0 distribution. The effect of question type was not clearly visible. In read speech the contours resembled earlier results more closely. Speakers had different strategies for differentiating between questions and statements. In the whole material, F0 was slightly higher in questions than in statements, and the effect of dialectal background could be seen in the contour types. The results show that interrogative function affects intonation in both spontaneous and read Finnish.
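For readers unfamiliar with the statistical setup, a linear mixed model of this kind (fixed effects for utterance type, speech style and gender, with speaker as the grouping factor) could be fitted as sketched below. The data file and column names are hypothetical; this is not the study's actual analysis script.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per utterance with a measured F0
# variable; file and column names are illustrative only.
df = pd.read_csv("utterances.csv")

# Fixed effects for utterance type, speech style and speaker gender,
# with a random intercept per speaker as the grouping factor.
model = smf.mixedlm("f0_mean ~ utt_type + style + gender",
                    data=df, groups=df["speaker"])
print(model.fit().summary())
```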
Abstract:
Laboratory confirmation methods are important in the diagnosis of bovine cysticercosis, as other pathologies can produce morphologically similar lesions and lead to false identifications. We developed a probe-based real-time PCR assay to identify Taenia saginata in suspect cysts encountered at meat inspection and compared it with the traditional identification method, histology, as well as with a published nested PCR. The assay simultaneously detects T. saginata DNA and a bovine internal control using the cytochrome c oxidase subunit 1 gene of each species, and shows specificity against parasites causing lesions morphologically similar to those of T. saginata. The assay was sufficiently sensitive to detect 1 fg (Ct 35.09 +/- 0.95) of target DNA using serially diluted plasmid DNA in reactions spiked with bovine DNA, as well as in all viable and caseated positive control cysts. A loss in PCR sensitivity was observed with increasing cyst degeneration, as seen in other molecular methods. In comparison with histology, the assay offered greater sensitivity and accuracy, with 10/19 (53%) T. saginata positives detected by real-time PCR and none by histology. When the results were compared with the reference PCR, the assay was less sensitive but offered faster turnaround times and reduced contamination risk. Estimates of the assay's repeatability and reproducibility showed that it is highly reliable, with reliability coefficients greater than 0.94.
Abstract:
User-generated information such as product reviews has been booming since the advent of Web 2.0, and rich information about the reviewed products lies buried in this big data. To facilitate the identification of useful information in product reviews (e.g., of cameras), opinion mining has been proposed and widely used in recent years. As the most critical step of opinion mining, feature extraction aims to extract significant product features from review texts. However, most existing approaches only find individual features rather than identifying the hierarchical relationships between them. In this paper, we propose an approach which finds both features and feature relationships, structured as a feature hierarchy, referred to as a feature taxonomy in the remainder of the paper. Specifically, by making use of frequent patterns and association rules, we construct the feature taxonomy to profile the product at multiple levels instead of a single level, which provides more detailed information about the product. Experiments conducted on several real-world review datasets show that our proposed method identifies product features and their relations effectively.
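A toy sketch of the underlying idea: mine feature supports and pairwise co-occurrences from reviews, and attach feature B under feature A when the association rule B -> A holds with high confidence (reviews mentioning B almost always mention A). This is an illustration under simplifying assumptions, not the authors' algorithm.

```python
from collections import Counter
from itertools import combinations

def build_taxonomy(reviews, min_support=0.2, min_conf=0.9):
    """Toy taxonomy: treat each review as the set of features it
    mentions; attach B under A when the rule B -> A is confident.
    (If several parents qualify, this toy version keeps the last.)
    """
    n = len(reviews)
    single = Counter(f for r in reviews for f in r)
    pair = Counter(frozenset(p) for r in reviews
                   for p in combinations(sorted(r), 2))
    frequent = [f for f, c in single.items() if c / n >= min_support]
    parents = {}
    for a, b in combinations(sorted(frequent), 2):
        both = pair[frozenset((a, b))]
        if both / single[b] >= min_conf:    # rule b -> a
            parents[b] = a
        elif both / single[a] >= min_conf:  # rule a -> b
            parents[a] = b
    return parents  # child -> parent edges of the taxonomy

reviews = [{"camera", "lens", "zoom"}, {"camera", "battery"},
           {"camera", "lens"}, {"battery", "life"},
           {"camera", "battery", "life"}]
print(build_taxonomy(reviews))
```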
Abstract:
This correspondence considers the problem of optimally controlling the thrust steering angle of an ion-propelled spaceship so as to effect a minimum-time coplanar orbit transfer from the mean orbital distance of Earth to the mean Martian and Venusian orbital distances. The problem is modelled as a free-terminal-time optimal control problem with an unbounded control variable and with state-variable equality constraints at the final time, and is solved by the penalty function approach using the conjugate gradient algorithm. In general, the optimal solution shows a significant departure from earlier work. In particular, in the Earth-Mars transfer the optimal control during the initial phase of the flight is found to be negative, so that the spaceship moves within the Earth's orbit for a significant fraction of the total optimized transfer time. This feature of the optimal solution has not been reported by earlier investigators of this problem.
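The penalty-function approach itself is easy to sketch in isolation: replace the equality-constrained problem with an unconstrained one by adding a quadratic penalty on the constraint violation, then re-solve by conjugate gradients while increasing the penalty weight. The toy objective and constraint below stand in for the transfer-time functional and terminal orbit conditions; they are not the paper's equations of motion.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):   # stand-in objective (e.g., transfer time)
    return x[0]**2 + 2.0 * x[1]**2

def g(x):   # stand-in terminal equality constraint, g(x) = 0
    return np.array([x[0] + x[1] - 1.0])

# min f(x) s.t. g(x) = 0 becomes min f(x) + mu * ||g(x)||^2, solved by
# conjugate gradients for an increasing sequence of penalty weights mu.
x = np.zeros(2)
for mu in [1.0, 10.0, 100.0, 1000.0]:
    penalized = lambda x, mu=mu: f(x) + mu * np.sum(g(x)**2)
    x = minimize(penalized, x, method="CG").x
print(x, g(x))  # x approaches the constrained optimum as mu grows
```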
Abstract:
Downscaling from large-scale atmospheric variables simulated by general circulation models (GCMs) to station-scale hydrologic variables is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear-chain CRF. CRFs make no assumptions about the independence of observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation with the Viterbi algorithm is used to determine the most likely precipitation sequence for a given set of atmospheric input variables. Direct classification of dry/wet days as well as precipitation amounts is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation, and uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied to downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days and in wet-day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in the shape of the density function, with decreasing probability of lower precipitation and increasing probability of higher precipitation.
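The maximum a posteriori decoding step can be illustrated with a generic log-space Viterbi recursion for a linear-chain model, as sketched below; the score arrays stand in for the CRF's feature-weight sums and are not the paper's trained model.

```python
import numpy as np

def viterbi(log_emit, log_trans):
    """Most likely label sequence for a linear-chain model.

    log_emit: (T, K) per-position log scores of each of K labels
        (e.g., K = 2 for dry/wet days).
    log_trans: (K, K) log scores for label transitions.
    """
    T, K = log_emit.shape
    delta = np.zeros((T, K))          # best score ending in each label
    back = np.zeros((T, K), dtype=int)
    delta[0] = log_emit[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans + log_emit[t]
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0)
    path = [int(delta[-1].argmax())]  # backtrack from the best end label
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Example: 2 labels (dry/wet) over 5 days with toy scores.
log_emit = np.log([[.9, .1], [.6, .4], [.2, .8], [.3, .7], [.5, .5]])
log_trans = np.log([[.8, .2], [.3, .7]])
print(viterbi(np.array(log_emit), np.array(log_trans)))  # [0, 0, 1, 1, 1]
```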
Abstract:
The application of computer-aided inspection, integrated with coordinate measuring machines and laser scanners, to inspect manufactured aircraft parts via robust registration of two point datasets is a subject of active research in computational metrology. This paper presents a novel approach to automated inspection by matching shapes based on a modified iterative closest point (ICP) method to define a criterion for the acceptance or rejection of a part. The procedure improves upon existing methods by eliminating the need to construct either a tessellated or a smooth representation of the inspected part, and by removing the requirement for a priori knowledge of approximate registration and correspondence between the points representing the computer-aided design dataset and the part to be inspected. In addition, the procedure establishes a better measure of the error between the two matched datasets, with localized region-based triangulation proposed for tracking the error. The approach improves the convergence of the ICP technique with a dramatic decrease in computational effort. Experimental results obtained with both synthetic and practical data show that the method is efficient and robust, validating the algorithm and demonstrating its potential for engineering applications.
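For reference, the baseline point-to-point ICP iteration (nearest-neighbour correspondence followed by a closed-form SVD rigid transform) can be sketched as follows; the paper's modified ICP adds refinements beyond this minimal version.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=50):
    """Basic point-to-point ICP between two (N, 3) point sets:
    match each source point to its nearest target point, then solve
    for the best rigid transform via SVD (Kabsch), and repeat.
    """
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)              # nearest-neighbour matches
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t) # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
    rms = np.sqrt(np.mean(np.sum((src - target[idx])**2, axis=1)))
    return src, rms  # aligned points and residual error for accept/reject
```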
Abstract:
Non-standard finite difference methods (NSFDMs), introduced by Mickens [Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994], are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws through a novel use of the decoupled equations in characteristic variables. In the second part of the paper, the NSFDM is studied for its efficacy when applied to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature for capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers–Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791–797] recently introduced an NSFDM in conservative form. This method captures shock waves exactly, without any numerical dissipation. In this paper, the algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy for this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal on Numerical Analysis 35 (6) (1998) 2250–2271], in which the accurate NSFDM is used as the basic scheme and a localized relaxation NSFDM is used as the supporting scheme, acting like a filter. Relaxation schemes, introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications on Pure and Applied Mathematics 48 (1995) 235–276], are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term. The relaxation parameter (λ) is chosen locally on the three-point grid stencil, which makes the proposed method more efficient. The composite scheme overcomes the problem of unphysical expansion shocks and captures shock waves more accurately than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons against popular numerical methods such as the Roe and ENO schemes.
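The key structural property, conservation form, is easy to illustrate: the update must be a difference of numerical fluxes at cell interfaces. The sketch below applies a standard local Lax-Friedrichs (Rusanov) flux to the inviscid Burgers equation; it is not the paper's NSFDM or its composite variant, but it shows the conservation-form template, and its built-in dissipation is exactly the kind of mechanism that rules out entropy-violating expansion shocks.

```python
import numpy as np

# Conservation form: u_j^{n+1} = u_j^n - (dt/dx) * (F_{j+1/2} - F_{j-1/2}),
# here with a local Lax-Friedrichs flux for u_t + (u^2/2)_x = 0.
def step(u, dt, dx):
    f = 0.5 * u**2
    a = np.maximum(np.abs(u[:-1]), np.abs(u[1:]))   # local wave speed
    flux = 0.5 * (f[:-1] + f[1:]) - 0.5 * a * (u[1:] - u[:-1])
    un = u.copy()
    un[1:-1] -= dt / dx * (flux[1:] - flux[:-1])    # flux difference
    return un

x = np.linspace(0.0, 1.0, 201)
u = np.where(x < 0.5, 1.0, 0.0)   # Riemann data forming a right-moving shock
dx = x[1] - x[0]
for _ in range(200):              # CFL number 0.4
    u = step(u, dt=0.4 * dx, dx=dx)
```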
Abstract:
We describe a novel method for human activity segmentation and interpretation in surveillance applications based on Gabor filter-bank features. A complex human activity is modeled as a sequence of elementary human actions such as walking, running, jogging, boxing and hand-waving. Since the human silhouette can be modeled by a set of rectangles, the elementary human actions can be modeled as sequences of sets of rectangles with different orientations and scales. The activity segmentation is based on Gabor filter-bank features and normalized spectral clustering. The feature trajectories of each action category are learnt from training example videos using dynamic time warping. The combined segmentation and recognition processes are very efficient, as both algorithms share the same framework and the Gabor features computed for the former can be reused for the latter. We also propose a simple shadow detection technique to extract clean silhouettes, which are necessary for accurate action recognition.
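The matching step can be illustrated with a plain dynamic time warping distance between two feature trajectories, as sketched below; this is generic DTW, not the paper's full segmentation-plus-recognition pipeline.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature trajectories
    a (n, d) and b (m, d), e.g. per-frame Gabor filter-bank responses.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# A test trajectory would be assigned the label of the nearest learnt
# action template under this distance.
```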
Abstract:
The aim of this study was to develop and trial a method for monitoring the evolution of clinical reasoning in a PBL curriculum that is suitable for use in a large medical school. Termed Clinical Reasoning Problems (CRPs), the method is based on the notion that clinical reasoning depends on the identification and correct interpretation of certain critical clinical features. Each problem consists of a clinical scenario comprising presentation, history and physical examination. Based on this information, subjects are asked to nominate the two most likely diagnoses and to list the clinical features they considered in formulating them, indicating whether each feature supported or opposed the nominated diagnoses. Students at different levels of medical training completed a set of 10 CRPs as well as the Diagnostic Thinking Inventory, a self-report questionnaire designed to assess reasoning style. Responses were scored against those of a reference group of general practitioners. The results indicate that the CRPs are an easily administered, reliable and valid assessment of clinical reasoning, able to successfully monitor its development throughout medical training. Consequently, they can be employed to assess clinical reasoning skill in individual students and to evaluate how well undergraduate medical schools provide effective tuition in clinical reasoning.
Abstract:
Properties of nanoparticles are size dependent, and a model to predict particle size is of importance. Gold nanoparticles are commonly synthesized by reducing tetrachloroauric acid with trisodium citrate, a method pioneered by Turkevich et al. (Discuss. Faraday Soc. 1951, 11, 55). Data from several investigators who used this method show that when the ratio of initial concentrations of citrate to gold is varied from 0.4 to approximately 2, the final mean size of the particles formed varies by a factor of 7, while subsequent increases in the ratio hardly have any effect on the size. In this paper, a model is developed to explain this widely varying dependence. The steps that lead to the formation of particles are as follows: reduction of Au3+ in solution, disproportionation of Au+ to gold atoms and their nucleation, growth by disproportionation on the particle surface, and coagulation. Oxidation of citrate results in the formation of dicarboxy acetone, which aids nucleation but also decomposes into side products. A detailed kinetic model is developed on the basis of these steps and is combined with a population balance to predict the particle-size distribution. The model shows that, unlike the usual balance between nucleation and growth that determines particle size, it is the balance between the rate of nucleation and the degradation of dicarboxy acetone that determines the particle size in the citrate process. It is this feature that explains the unusual dependence of the mean particle size on the ratio of citrate to gold salt concentration. It is also found that coagulation plays an important role in determining the particle size at high concentrations of citrate.
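The competition the model identifies, nucleation fed by dicarboxy acetone versus its degradation, can be caricatured with a toy set of rate equations; all species, rate forms and constants below are hypothetical illustrations, not the paper's detailed kinetic or population-balance model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy rates: dicarboxy acetone (DCA) promotes nucleation but also decays;
# gold atoms either nucleate new particles or grow existing ones.
# All rate constants are hypothetical illustrations.
k_nuc, k_deg, k_grow = 5.0, 2.0, 50.0

def rhs(t, y):
    dca, au, n_particles, mass = y
    nucleation = k_nuc * dca * au
    growth = k_grow * au * n_particles
    return [-k_deg * dca,            # DCA degradation
            -nucleation - growth,    # gold atoms consumed
            nucleation,              # new particles created
            nucleation + growth]     # gold locked into particles

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1.0, 0.0, 0.0], rtol=1e-8)
dca, au, n, mass = sol.y[:, -1]
# Mass per particle is a crude proxy for size: raising k_deg cuts
# nucleation short and yields fewer, larger particles, the qualitative
# balance the model predicts.
print(f"particles: {n:.3f}, mean mass per particle: {mass / n:.3f}")
```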
Abstract:
The weighted-least-squares method, based on the Gauss-Newton minimization technique, is used for parameter estimation in water distribution networks. The parameters considered are element resistances (single and/or group resistances, Hazen-Williams coefficients, pump specifications) and consumptions (for single or multiple loading conditions). The measurements considered are nodal pressure heads, pipe flows, head losses in pipes, and consumptions/inflows. An important feature of the study is a detailed examination of how different choices of weights influence parameter estimation for error-free data, noisy data, and noisy data that includes bad data. The method is applied to three different networks, including a real-life problem.
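The Gauss-Newton weighted-least-squares iteration at the heart of such a method can be sketched generically as below; the hydraulic network solver that produces the residuals and sensitivities is assumed and not shown.

```python
import numpy as np

def gauss_newton_wls(residual, jacobian, theta0, weights, iters=20):
    """Weighted-least-squares parameter estimation by Gauss-Newton.

    residual(theta): measured minus computed values (e.g., nodal heads
        and pipe flows from a network solver, assumed provided).
    jacobian(theta): sensitivity matrix of the residuals.
    weights: one weight per measurement; the study's focus is on how
        this choice affects estimates under noisy and bad data.
    """
    theta = np.asarray(theta0, dtype=float)
    W = np.diag(weights)
    for _ in range(iters):
        r = residual(theta)
        J = jacobian(theta)
        # Normal equations: (J^T W J) dtheta = -J^T W r
        step = np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)
        theta += step
        if np.linalg.norm(step) < 1e-10:
            break
    return theta
```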
Abstract:
Motion estimation is one of the most power-hungry operations in video coding. While optimal search methods (e.g., full search) give the best quality, non-optimal methods are often used to reduce cost and power, and various algorithms that trade off quality against complexity are used in practice. Global elimination is an algorithm based on pixel averaging that reduces the complexity of motion search while keeping performance close to that of full search. We propose an adaptive version of the global elimination algorithm that extracts individual macroblock features using the Hadamard transform to optimize the search. The performance achieved is close to that of both full search and global elimination, while operational complexity, and hence power, is reduced by 30% to 45% compared to the global elimination method.
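The baseline global elimination idea, rank candidate blocks cheaply on sub-block averages and run the full SAD only on the survivors, can be sketched as below; the proposed method's adaptive, Hadamard-based feature selection is noted in a comment but omitted.

```python
import numpy as np

def block_means(block, k=4):
    """k x k sub-block averages of a 16x16 macroblock: the low-cost
    feature that global elimination uses for coarse ranking."""
    h, w = block.shape
    return block.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def global_elimination(cur_mb, ref, x, y, search=8, keep=8):
    """Two-stage search around (x, y) in the reference frame: rank
    candidates by SAD on sub-block means, then evaluate the full SAD
    only for the best `keep` survivors. The caller must keep the search
    window inside `ref`. (The adaptive variant would pick the averaging
    pattern per macroblock from Hadamard-transform features.)
    """
    cur_mb = cur_mb.astype(np.int64)        # avoid uint8 wraparound
    cur_feat = block_means(cur_mb)
    candidates = []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = ref[y + dy:y + dy + 16,
                       x + dx:x + dx + 16].astype(np.int64)
            coarse = np.abs(block_means(cand) - cur_feat).sum()
            candidates.append((coarse, dx, dy, cand))
    candidates.sort(key=lambda c: c[0])     # coarse elimination
    best = min(candidates[:keep],
               key=lambda c: np.abs(c[3] - cur_mb).sum())  # full SAD
    return best[1], best[2]                 # motion vector (dx, dy)
```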
Abstract:
A combination of chemical and thermal annealing techniques has been employed to synthesize a rarely reported nanocup structure of Mn-doped ZnO in good yield. The nanocup structures are obtained by isochronally annealing powder samples consisting of nanosheets, synthesized chemically at room temperature, in a furnace over the 200-500 degrees C temperature range for 2 h. Strong excitonic absorption in the UV and photoluminescence (PL) emission in the UV-visible regions are observed in all samples at room temperature. The sample annealed at 300 degrees C exhibits strong PL emission in the UV due to near-band-edge emission, along with very weak defect-related emission in the visible region. The synthesized samples exhibit stable optical properties over 10 months, demonstrating a unique feature of the presented technique for synthesizing nanocup structures.
Abstract:
Structural Support Vector Machines (SSVMs) have recently gained wide prominence for classifying structured and complex objects such as parse trees, image segments and part-of-speech (POS) tags. Typical learning algorithms used to train SSVMs yield model parameter vectors residing in a high-dimensional feature space. Such a vector contains many non-zero components, which often leads to slow prediction and storage issues. Hence there is a need for sparse parameter vectors containing very few non-zero components. The L1 regularizer and the elastic net regularizer have traditionally been used to obtain sparse model parameters. Although L1-regularized structural SVMs have been studied in the past, the use of the elastic net regularizer for structural SVMs has not yet been explored. In this work, we formulate the elastic net SSVM and propose a sequential alternating proximal algorithm to solve the dual formulation. We compare the proposed method with existing methods for L1-regularized structural SVMs. Experiments on large-scale benchmark datasets show that the proposed dual elastic net SSVM trained using the sequential alternating proximal algorithm scales well and yields highly sparse model parameters while achieving comparable generalization performance. The proposed algorithm is therefore a competitive way to obtain sparse model parameters with comparable generalization when elastic-net-regularized structural SVMs are used on very large datasets.
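The ingredient that produces sparsity in such proximal algorithms is the proximal operator of the elastic net penalty, which is simple enough to sketch on its own; this is a generic illustration, not the paper's sequential alternating algorithm.

```python
import numpy as np

def prox_elastic_net(v, step, l1, l2):
    """Proximal operator of the elastic net penalty
    l1 * ||w||_1 + (l2 / 2) * ||w||_2^2 with step size `step`:
    soft-thresholding followed by shrinkage. Soft-thresholding is what
    zeroes small components exactly, producing sparse parameters.
    """
    soft = np.sign(v) * np.maximum(np.abs(v) - step * l1, 0.0)
    return soft / (1.0 + step * l2)

w = np.array([0.9, -0.2, 0.05, -1.5])
print(prox_elastic_net(w, step=0.5, l1=0.3, l2=1.0))
# Small components are zeroed exactly; large ones are shrunk.
```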