924 results for error model
Abstract:
Overview: This article was published in the journal Computerised medical imaging and graphics (CMIG). The aim of this article is to register vertebrae extracted from MR images with vertebrae extracted from X-ray images of scoliotic patients, taking into account the non-rigid deformations due to the change of posture between the two modalities. To this end, a registration method based on an articulated model is proposed. The method was compared with rigid registration by computing the error on landmark points, as well as the difference between the Cobb angle before and after registration. An additional validation of the registration method presented here can be found in Appendix A. This work will serve as a first step in the fusion of MR, X-ray and TP images of the complete trunk. Thus, this article verifies hypothesis 1 described in section 3.2.1.
Abstract:
This paper presents a method based on articulated models for the registration of spine data extracted from multimodal medical images of patients with scoliosis. With the ultimate aim of developing a complete geometrical model of the torso of a scoliotic patient, this work presents a method for the registration of vertebral column data using 3D magnetic resonance images (MRI) acquired in prone position and X-ray data acquired in standing position for five patients with scoliosis. The 3D shape of the vertebrae is estimated from both image modalities for each patient, and an articulated model is used to calculate the intervertebral transformations required to align the vertebrae between the two postures. Euclidean distances between anatomical landmarks are calculated to assess multimodal registration error. Results show a decrease in the Euclidean distance using the proposed method compared to rigid registration, and more physically realistic vertebra deformations compared to thin-plate-spline (TPS) registration, thus improving alignment.
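To make the validation step concrete, here is a minimal sketch (with made-up landmark data; not the authors' code) of one per-vertebra rigid alignment and the Euclidean landmark error used to score it. The rigid fit uses the standard Kabsch least-squares method; the articulated model in the paper chains such intervertebral transforms along the spine.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch method)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))            # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(0) - r @ src.mean(0)
    return r, t

def landmark_error(a, b):
    """Mean Euclidean distance between corresponding 3D landmarks (N x 3 arrays)."""
    return np.linalg.norm(a - b, axis=1).mean()

# Hypothetical landmarks of one vertebra seen in MRI (prone) and X-ray (standing).
rng = np.random.default_rng(0)
v_mri = rng.normal(size=(6, 3))
true_r, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(true_r) < 0:                          # make it a proper rotation
    true_r[:, 0] *= -1
v_xray = v_mri @ true_r.T + np.array([0.0, 15.0, 3.0])

r, t = rigid_fit(v_mri, v_xray)
print(landmark_error(v_mri, v_xray))                   # error before alignment
print(landmark_error(v_mri @ r.T + t, v_xray))         # ~0 after alignment
```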
Abstract:
The need for reliable predictions of the solar activity cycle motivates the development of dynamo models incorporating a representation of surface processes sufficiently detailed to allow assimilation of magnetographic data. In this series of papers we present one such dynamo model, and document its behavior and properties. This first paper focuses on one of the model's key components, namely surface magnetic flux evolution. Using a genetic algorithm, we obtain best-fit parameters of the transport model by least-squares minimization of the differences between the associated synthetic synoptic magnetogram and real magnetographic data for activity cycle 21. Our fitting procedure also returns Monte Carlo-like error estimates. We show that the range of acceptable surface meridional flow profiles is in good agreement with Doppler measurements, even though the latter are not used in the fitting process. Using a synthetic database of bipolar magnetic region (BMR) emergences reproducing the statistical properties of observed emergences, we also ascertain the sensitivity of global cycle properties, such as the strength of the dipole moment and timing of polarity reversal, to distinct realizations of BMR emergence, and on this basis argue that this stochasticity represents a primary source of uncertainty for predicting solar cycle characteristics.
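As a toy illustration of the fitting strategy (not the flux-transport code itself; the model, parameters and data below are invented), the following sketch minimizes a least-squares misfit with a bare-bones genetic algorithm, the same idea used here to fit the transport model to cycle 21 magnetograms.

```python
import numpy as np

rng = np.random.default_rng(1)

def synthetic(params, t):
    """Toy stand-in for the synthetic output of a transport model."""
    amp, width = params
    return amp * np.exp(-(t / width) ** 2)

t = np.linspace(-3, 3, 200)
observed = synthetic([2.0, 1.5], t) + rng.normal(0, 0.05, t.size)   # "data"

def misfit(params):
    """Least-squares difference between model output and the data."""
    return np.sum((synthetic(params, t) - observed) ** 2)

# Bare-bones genetic algorithm: keep the fittest, mutate them, repeat.
pop = rng.uniform(0.1, 5.0, size=(40, 2))
for _ in range(100):
    scores = np.array([misfit(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]                # 10 fittest survive
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.1, (30, 2))
    pop = np.vstack([parents, np.clip(children, 0.01, 10.0)])

best = pop[np.argmin([misfit(p) for p in pop])]
print(best)   # should approach the generating parameters [2.0, 1.5]
```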
Abstract:
Embedded systems are usually designed for a single or a specified set of tasks. This specificity means the system design as well as its hardware/software development can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. This could significantly augment software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique assisting the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate/out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect sequence of machine code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram formed for the active memory bank state transitions corresponding to each bank selection instruction are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified, as in the sketch below. This validation and optimization tool can be integrated into the system development environment. It is a novel approach independent of compiler/assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine code patterns, which drastically reduces state-space creation, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture-oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices toward correct use of difficult microcontroller features in developing embedded systems.
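A minimal sketch of the redundancy idea (hypothetical instruction encoding; the actual tool works on PIC16F87X machine code and control flow graphs): track the active bank state along a straight-line path and flag bank-select operations that do not change it.

```python
def redundant_bank_switches(instructions):
    """instructions: list of (mnemonic, operand); returns indices of redundant selects."""
    active_bank = None
    redundant = []
    for i, (mnemonic, operand) in enumerate(instructions):
        if mnemonic == "BANKSEL":              # hypothetical bank-select pseudo-op
            if operand == active_bank:
                redundant.append(i)            # same bank already active
            active_bank = operand
    return redundant

program = [("BANKSEL", 1), ("MOVWF", "PORTA"),
           ("BANKSEL", 1), ("MOVWF", "TRISA"),   # redundant: bank 1 already active
           ("BANKSEL", 0), ("MOVWF", "PORTB")]
print(redundant_bank_switches(program))          # [2]
```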
Abstract:
Severe local storms, including tornadoes, damaging hail and wind gusts, frequently occur over the eastern and northeastern states of India during the pre-monsoon season (March-May). Forecasting thunderstorms is one of the most difficult tasks in weather prediction, due to their rather small spatial and temporal extension and the inherent non-linearity of their dynamics and physics. In this paper, sensitivity experiments are conducted with the WRF-NMM model to test the impact of convective parameterization schemes on simulating severe thunderstorms that occurred over Kolkata on 20 May 2006 and 21 May 2007, and the model results are validated against observations. In addition, a simulation without a convective parameterization scheme was performed for each case to determine whether the model could simulate the convection explicitly. A statistical analysis based on mean absolute error, root mean square error and correlation coefficient is performed to compare the simulated and observed data for the different convective schemes. This study shows that the prediction of thunderstorm-affected parameters is sensitive to the convective scheme. The Grell-Devenyi cloud ensemble convective scheme simulated the thunderstorm activity well in terms of time, intensity and region of occurrence of the events, compared with the other convective schemes and with the explicit simulation.
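For reference, the three verification statistics used above are straightforward to compute; a minimal sketch with made-up numbers:

```python
import numpy as np

def verification_stats(simulated, observed):
    """Mean absolute error, root-mean-square error and Pearson correlation."""
    simulated, observed = np.asarray(simulated), np.asarray(observed)
    mae = np.mean(np.abs(simulated - observed))
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    corr = np.corrcoef(simulated, observed)[0, 1]
    return mae, rmse, corr

# e.g. comparing simulated and observed values of a storm-affected parameter
print(verification_stats([29.1, 31.4, 33.0, 30.2], [28.5, 32.0, 33.5, 29.8]))
```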
Abstract:
Object recognition is complicated by clutter, occlusion, and sensor error. Since pose hypotheses are based on image feature locations, these effects can lead to false negatives and positives. In a typical recognition algorithm, pose hypotheses are tested against the image, and a score is assigned to each hypothesis. We use a statistical model to determine the score distribution associated with correct and incorrect pose hypotheses, and use binary hypothesis testing techniques to distinguish between them. Using this approach we can compare algorithms and noise models, and automatically choose values for internal system thresholds to minimize the probability of making a mistake.
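A minimal sketch of the decision step (with assumed Gaussian score distributions and an assumed prior; the paper derives these from its statistical model): choose the score threshold that minimizes the overall probability of error.

```python
import numpy as np
from scipy.stats import norm

p_correct = 0.1                         # assumed prior that a hypothesis is correct
mu_c, sd_c = 8.0, 1.5                   # assumed score model given a correct pose
mu_i, sd_i = 3.0, 2.0                   # assumed score model given an incorrect pose

thresholds = np.linspace(0, 12, 1000)
miss = norm.cdf(thresholds, mu_c, sd_c)             # correct pose scored below t
false_alarm = norm.sf(thresholds, mu_i, sd_i)       # incorrect pose scored above t
p_error = p_correct * miss + (1 - p_correct) * false_alarm
print(thresholds[np.argmin(p_error)])               # minimum-error threshold
```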
Abstract:
We describe a method for modeling object classes (such as faces) using 2D example images and an algorithm for matching a model to a novel image. The object class models are "learned" from example images that we call prototypes. In addition to the images, the pixelwise correspondences between a reference prototype and each of the other prototypes must also be provided. Thus a model consists of a linear combination of prototypical shapes and textures. A stochastic gradient descent algorithm is used to match a model to a novel image by minimizing the error between the model and the novel image. Example models are shown as well as example matches to novel images. The robustness of the matching algorithm is also evaluated. The technique can be used for a number of applications including the computation of correspondence between novel images of a certain known class, object recognition, image synthesis and image compression.
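The coefficient-fitting step can be sketched as follows (random stand-in "prototypes" and a plain gradient loop; the paper also models shape via the correspondences and uses stochastic gradient descent):

```python
import numpy as np

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(5, 64 * 64))        # 5 example images, flattened
novel = 0.6 * prototypes[0] + 0.4 * prototypes[3] # a novel image of the class

c = np.zeros(5)                                   # linear-combination coefficients
lr = 1e-5
for _ in range(2000):
    residual = c @ prototypes - novel             # model error against the novel image
    c -= lr * (prototypes @ residual)             # gradient of 0.5 * ||residual||^2
print(np.round(c, 2))                             # approaches [0.6, 0, 0, 0.4, 0]
```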
Abstract:
We describe a technique for finding pixelwise correspondences between two images by using models of objects of the same class to guide the search. The object models are 'learned' from example images (also called prototypes) of an object class. The models consist of a linear combination of prototypes. The flow fields giving pixelwise correspondences between a base prototype and each of the other prototypes must be given. A novel image of an object of the same class is matched to a model by minimizing an error between the novel image and the current guess for the closest model image. Currently, the algorithm applies to line drawings of objects. An extension to real grey level images is discussed.
Abstract:
Support Vector Machines Regression (SVMR) is a regression technique which has been recently introduced by V. Vapnik and his collaborators (Vapnik, 1995; Vapnik, Golowich and Smola, 1996). In SVMR the goodness of fit is measured not by the usual quadratic loss function (the mean square error), but by a different loss function called Vapnik's $\epsilon$-insensitive loss function, which is similar to the "robust" loss functions introduced by Huber (Huber, 1981). The quadratic loss function is well justified under the assumption of Gaussian additive noise. However, the noise model underlying the choice of Vapnik's loss function is less clear. In this paper the use of Vapnik's loss function is shown to be equivalent to a model of additive and Gaussian noise, where the variance and mean of the Gaussian are random variables. The probability distributions for the variance and mean are stated explicitly. While this work is presented in the framework of SVMR, it can be extended to justify non-quadratic loss functions in any Maximum Likelihood or Maximum A Posteriori approach. It applies not only to Vapnik's loss function, but to a much broader class of loss functions.
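For concreteness, Vapnik's loss is zero inside a tube of half-width $\epsilon$ around the prediction and grows linearly outside it; a minimal sketch:

```python
import numpy as np

def epsilon_insensitive(y_true, y_pred, eps=0.1):
    """Vapnik's epsilon-insensitive loss: zero inside the eps-tube, linear outside."""
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)

print(epsilon_insensitive(np.array([1.0, 1.05, 2.0]), np.array([1.0, 1.0, 1.0])))
# [0.  0.  0.9]  -- residuals within the tube cost nothing
```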
Abstract:
Building robust recognition systems requires a careful understanding of the effects of error in sensed features. Error in these image features results in a region of uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty region when model poses are based on matching three image and model points, for both Gaussian and bounded error in the detection of image points, and for both scaled-orthographic and perspective projection models. This result applies to objects that are fully three-dimensional, where past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute the uncertainty region when poses are based on any number of initial matches. Finally, we use these results to extend, from two-dimensional to three-dimensional objects, robust implementations of alignment, interpretation-tree search, and transformation clustering.
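The linear programming step can be illustrated with a small sketch (hypothetical constraints, not the paper's pose geometry): if the matching constraints bound a predicted feature location x by A x <= b, a linear program gives the extremal extent of the uncertainty region along any direction.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linear constraints A @ x <= b on an image location x = (u, v).
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([4.0, 1.0, 3.0, 2.0, 5.0])

def extent(direction):
    """Maximum of direction @ x over the polytope (linprog minimizes, so negate)."""
    res = linprog(-np.asarray(direction), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * 2)
    return -res.fun

print(extent([1.0, 0.0]), extent([0.0, 1.0]))   # horizontal and vertical extent
```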
Abstract:
We contribute a quantitative and systematic model to capture etch non-uniformity in deep reactive ion etch of microelectromechanical systems (MEMS) devices. Deep reactive ion etch is commonly used in MEMS fabrication where high-aspect ratio features are to be produced in silicon. It is typical for many supposedly identical devices, perhaps of diameter 10 mm, to be etched simultaneously into one silicon wafer of diameter 150 mm. Etch non-uniformity depends on uneven distributions of ion and neutral species at the wafer level, and on local consumption of those species at the device, or die, level. An ion–neutral synergism model is constructed from data obtained from etching several layouts of differing pattern opening densities. Such a model is used to predict wafer-level variation with an r.m.s. error below 3%. This model is combined with a die-level model, which we have reported previously, on a MEMS layout. The two-level model is shown to enable prediction of both within-die and wafer-scale etch rate variation for arbitrary wafer loadings.
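One common way to write an ion–neutral synergism rate (a reciprocal-sum form; the paper's exact parameterization may differ) makes the etch rate saturate on whichever species flux, ion or neutral, is locally depleted:

```python
def etch_rate(ion_flux, neutral_flux, k_ion=1.0, k_neutral=1.0):
    """Synergistic rate limited by both fluxes (reciprocal-sum combination)."""
    return 1.0 / (1.0 / (k_ion * ion_flux) + 1.0 / (k_neutral * neutral_flux))

# Doubling the abundant species barely helps; the scarce one controls the rate.
print(etch_rate(10.0, 1.0))    # ~0.91
print(etch_rate(20.0, 1.0))    # ~0.95
```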
Abstract:
Several methods have been suggested to estimate non-linear models with interaction terms in the presence of measurement error. Structural equation models eliminate measurement error bias, but require large samples. Ordinary least squares regression on summated scales, regression on factor scores and partial least squares are appropriate for small samples but do not correct measurement error bias. Two-stage least squares regression does correct measurement error bias, but the results strongly depend on the choice of instrumental variables. This article discusses the old disattenuated regression method as an alternative for correcting measurement error in small samples. The method is extended to the case of interaction terms and is illustrated on a model that examines the interaction effect of innovation and style of budget use on business performance. Alternative reliability estimates that can be used to disattenuate the estimates are discussed, and a comparison is made with the alternative methods. Methods that do not correct for measurement error bias perform very similarly, and considerably worse than disattenuated regression.
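The core correction is easy to state; a minimal bivariate sketch (simulated data, classical correction for attenuation, not the article's interaction model) shows the naive slope shrinking toward zero and the disattenuated slope recovering the true effect:

```python
import numpy as np

def disattenuated_slope(x, y, rel_x):
    """Simple regression slope corrected for measurement error in x:
    true-score variance of x is rel_x * var(x); cov(x, y) is unaffected."""
    return np.cov(x, y)[0, 1] / (rel_x * np.var(x, ddof=1))

rng = np.random.default_rng(0)
true_x = rng.normal(size=500)
y = 2.0 * true_x + rng.normal(0, 0.5, 500)
x_observed = true_x + rng.normal(0, 0.5, 500)       # error-laden measurement

rel = 1.0 / (1.0 + 0.25)                            # reliability = var(T) / var(X)
naive = np.cov(x_observed, y)[0, 1] / np.var(x_observed, ddof=1)
print(naive)                                        # attenuated, well below 2.0
print(disattenuated_slope(x_observed, y, rel))      # close to 2.0
```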
Abstract:
This paper estimates linear and non-linear error correction models for the spot prices of four types of coffee. Consistent with economic laws, evidence is found that when prices are above their equilibrium level they return to it more slowly than when they are below it. This may reflect the fact that, in the short run, it is easier for coffee-producing countries to restrict supply in order to raise prices than to expand it in order to lower them. In addition, evidence is found that the adjustment is faster when deviations from equilibrium are larger. The forecasts obtained from the non-linear and asymmetric error correction models considered in this paper offer a slight improvement over those produced by a random walk model.
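A minimal sketch of such an asymmetric error correction regression (simulated data, not the coffee series): deviations above and below equilibrium receive separate adjustment speeds, estimated by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
ect = rng.normal(size=n)                      # lagged deviation from equilibrium
dp = -0.1 * np.maximum(ect, 0) - 0.4 * np.minimum(ect, 0) + rng.normal(0, 0.1, n)

X = np.column_stack([np.maximum(ect, 0),      # adjustment when price is above
                     np.minimum(ect, 0),      # adjustment when price is below
                     np.ones(n)])
coef, *_ = np.linalg.lstsq(X, dp, rcond=None)
print(np.round(coef, 2))   # roughly [-0.10, -0.40, 0.00]: faster return from below
```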
Abstract:
Selected configuration interaction (SCI) for atomic and molecular electronic structure calculations is reformulated in a general framework encompassing all CI methods. The linked cluster expansion is used as an intermediate device to approximate CI coefficients $B_K$ of disconnected configurations (those that can be expressed as products of combinations of singly and doubly excited ones) in terms of CI coefficients of lower-excited configurations, where each $K$ is a linear combination of configuration state functions (CSFs) over all degenerate elements of $K$. Disconnected configurations up to sextuply excited ones are selected by Brown's energy formula, $\Delta E_K = (E - H_{KK}) B_K^2 / (1 - B_K^2)$, with $B_K$ determined from coefficients of singly and doubly excited configurations. The truncation energy error from disconnected configurations, $\Delta E_{dis}$, is approximated by the sum of the $\Delta E_K$ of all discarded $K$s. The remaining (connected) configurations are selected by thresholds based on natural orbital concepts. Given a model CI space $M$, a usual upper bound $E_S$ is computed by CI in a selected space $S$, and $E_M = E_S + \Delta E_{dis} + \delta E$, where $\delta E$ is a residual error which can be calculated by well-defined sensitivity analyses. An SCI calculation on the Ne ground state featuring 1077 orbitals is presented. Convergence to within near-spectroscopic accuracy (0.5 cm$^{-1}$) is achieved in a model space $M$ of $1.4 \times 10^9$ CSFs ($1.1 \times 10^{12}$ determinants) containing up to quadruply excited CSFs. Accurate energy contributions of quintuples and sextuples in a model space of $6.5 \times 10^{12}$ CSFs are obtained. The impact of SCI on various orbital methods is discussed. Since $\Delta E_{dis}$ can readily be calculated for very large basis sets without the need of a CI calculation, it can be used to estimate the orbital basis incompleteness error. A method for precise and efficient evaluation of $E_S$ is taken up in a companion paper.
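A toy sketch of the selection step (invented numbers, not a real CI calculation): evaluate Brown's formula for candidate disconnected configurations, keep those above a threshold, and accumulate the truncation error $\Delta E_{dis}$ from the discarded ones.

```python
import numpy as np

def brown_energy(e_total, h_kk, b_k):
    """Brown's formula: dE_K = (E - H_KK) * B_K**2 / (1 - B_K**2)."""
    return (e_total - h_kk) * b_k**2 / (1.0 - b_k**2)

e_total = -128.9                                  # current CI energy (toy value)
h_kk = np.array([-127.0, -126.5, -128.0])         # diagonal elements (toy values)
b_k = np.array([1e-3, 5e-5, 2e-4])                # estimated CI coefficients

de = brown_energy(e_total, h_kk, b_k)
keep = np.abs(de) > 1e-8                          # selection threshold
de_dis = de[~keep].sum()                          # truncation error estimate
print(de, keep, de_dis)
```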
Abstract:
Two wavelet-based control variable transform schemes are described and are used to model some important features of forecast error statistics for use in variational data assimilation. The first is a conventional wavelet scheme and the other is an approximation of it. Their ability to capture the position- and scale-dependent aspects of covariance structures is tested in a two-dimensional latitude-height context. This is done by comparing the covariance structures implied by the wavelet schemes with those found from the explicit forecast error covariance matrix, and with a non-wavelet-based covariance scheme used currently in an operational assimilation scheme. Qualitatively, the wavelet-based schemes show potential at modeling forecast error statistics well without giving preference to either position- or scale-dependent aspects. The degree of spectral representation can be controlled by changing the number of spectral bands in the schemes, and the least number of bands that achieves adequate results is found for the model domain used. Evidence is found of a trade-off between the localization of features in positional and spectral spaces when the number of bands is changed. By examining implied covariance diagnostics, the wavelet-based schemes are found, on the whole, to give results that are closer to diagnostics found from the explicit matrix than those from the non-wavelet scheme. Even though the nature of the covariances has the right qualities in spectral space, variances are found to be too low at some wavenumbers and vertical correlation length scales are found to be too long at most scales. The wavelet schemes are found to be good at resolving variations in position- and scale-dependent horizontal length scales, although the length scales reproduced are usually too short. The second of the wavelet-based schemes is often found to be better than the first in some important respects, but, unlike the first, it has no exact inverse transform.
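The implied-covariance diagnostic can be illustrated with a minimal sketch (a Haar wavelet and invented band variances, not the operational scheme): with a control variable transform U, the implied forecast error covariance is B = U U^T.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet transform matrix for n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 64
w = haar_matrix(n)                         # rows ordered coarse to fine
scale_std = np.repeat([3.0, 2.0, 1.5, 1.0, 0.7, 0.5, 0.3],
                      [1, 1, 2, 4, 8, 16, 32])   # invented per-band std devs
u = w.T * scale_std                        # control variable transform U = W^T D
implied_b = u @ u.T                        # implied covariance B = U U^T
print(np.sqrt(np.diag(implied_b))[:4])     # implied standard deviations
```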