908 results for Error Correction Model
Abstract:
Master's dissertation, Oncobiology - Molecular Mechanisms of Cancer, Department of Biomedical Sciences and Medicine, Universidade do Algarve, 2016
Abstract:
This paper analyzes the determinants of the intermediation margin in the Colombian financial system between 1989 and 2003. Using a dynamic estimation of the effects generated by specific variables of activity, taxes, and market structure, it tracks the financial intermediation margin over a period marked by both liberalization and crisis.
Abstract:
Assimilation of temperature observations into an ocean model near the equator often results in a dynamically unbalanced state with unrealistic overturning circulations. The way in which these circulations arise from systematic errors in the model or its forcing is discussed. A scheme is proposed, based on the theory of state augmentation, which uses the departures of the model state from the observations to update slowly evolving bias fields. Results are summarized from an experiment applying this bias correction scheme to an ocean general circulation model. They show that the method produces more balanced analyses and a better fit to the temperature observations.
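The state-augmentation idea above can be illustrated in one dimension: carry a bias variable alongside the model state and nudge it with the observation-minus-model departures. This is a minimal toy sketch, not the paper's ocean model; the gains, the relaxation dynamics, and all variable names are invented for illustration.

```python
import numpy as np

# Toy bias correction by state augmentation: the model drifts toward
# (truth + bias); each cycle, the departure of the bias-corrected state
# from the observation updates a slowly evolving bias estimate.
rng = np.random.default_rng(0)

true_state = 10.0   # steady truth
model_bias = 1.5    # systematic model error (unknown to the scheme)
gamma = 0.2         # slow bias-update gain
alpha = 0.5         # state-analysis gain

x = 8.0             # model state
b = 0.0             # bias estimate (the augmented variable)

for step in range(200):
    # biased forecast: model relaxes toward (truth + bias)
    x = x + 0.5 * ((true_state + model_bias) - x)
    obs = true_state + rng.normal(0.0, 0.1)
    departure = obs - (x - b)     # innovation after bias removal
    b = b - gamma * departure     # slowly evolving bias estimate
    x = x + alpha * departure     # state analysis

print(round(b, 2), round((x - b) - true_state, 2))
```

After the cycles, the bias estimate settles near the true model bias and the corrected state near the truth, which is the balance the paper's scheme seeks on a far larger system.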
Abstract:
Data assimilation aims to incorporate measured observations into a dynamical system model in order to produce accurate estimates of all the current (and future) state variables of the system. The optimal estimates minimize a variational principle and can be found using adjoint methods. The model equations are treated as strong constraints on the problem. In reality, the model does not represent the system behaviour exactly and errors arise due to lack of resolution and inaccuracies in physical parameters, boundary conditions and forcing terms. A technique for estimating systematic and time-correlated errors as part of the variational assimilation procedure is described here. The modified method determines a correction term that compensates for model error and leads to improved predictions of the system states. The technique is illustrated in two test cases. Applications to the 1-D nonlinear shallow water equations demonstrate the effectiveness of the new procedure.
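The idea of estimating a model-error correction term inside the variational fit can be sketched on a scalar linear model: the model x_{k+1} = m·x_k has a constant unknown error q, so we fit x_{k+1} = m·x_k + q and estimate (x0, q) jointly by least squares against noisy observations. This is only an illustration of the weak-constraint idea under invented numbers, not the paper's adjoint shallow-water code.

```python
import numpy as np

rng = np.random.default_rng(4)
m, q_true, x0_true = 0.9, 0.5, 2.0
K = 40

# truth: x_{k+1} = m * x_k + q_true, observed with noise at every step
xs = np.empty(K + 1)
xs[0] = x0_true
for k in range(K):
    xs[k + 1] = m * xs[k] + q_true
obs = xs + rng.normal(0.0, 0.05, K + 1)

# x_k = m^k * x0 + q * (1 - m^k) / (1 - m) is linear in (x0, q),
# so the variational (least-squares) estimate has a closed form.
ks = np.arange(K + 1)
A = np.column_stack([m**ks, (1 - m**ks) / (1 - m)])
(x0_hat, q_hat), *_ = np.linalg.lstsq(A, obs, rcond=None)
print(x0_hat, q_hat)
```

The recovered q plays the role of the correction term that compensates for model error; in the paper this happens inside a full nonlinear variational assimilation rather than a closed-form fit.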
Abstract:
The aim of this study was to obtain the exact value of the keratometric index (nkexact) and to clinically validate a variable keratometric index (nkadj) that minimizes this error. Methods: The nkexact value was determined by obtaining differences (ΔPc) between keratometric corneal power (Pk) and Gaussian corneal power (PcGauss) equal to 0. The nkexact was defined as the value associated with an equivalent difference in the magnitude of ΔPc for extreme values of posterior corneal radius (r2c) for each anterior corneal radius value (r1c). This nkadj was considered for the calculation of the adjusted corneal power (Pkadj). Values of r1c ∈ (4.2, 8.5) mm and r2c ∈ (3.1, 8.2) mm were considered. Differences of True Net Power with PcGauss, Pkadj, and Pk(1.3375) were calculated in a clinical sample of 44 eyes with keratoconus. Results: nkexact ranged from 1.3153 to 1.3396 and nkadj from 1.3190 to 1.3339, depending on the eye model analyzed. All the nkadj values adjusted perfectly to 8 linear algorithms. Differences between Pkadj and PcGauss did not exceed ±0.7 D (diopters). Clinically, nk = 1.3375 was not valid in any case. Pkadj and True Net Power, and Pk(1.3375) and Pkadj, were statistically different (P < 0.01), whereas no differences were found between PcGauss and Pkadj (P > 0.01). Conclusions: The use of a single value of nk for the calculation of the total corneal power in keratoconus has been shown to be imprecise, leading to inaccuracies in the detection and classification of this corneal condition. Furthermore, our study shows the relevance of corneal thickness in corneal power calculations in keratoconus.
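The two power definitions compared in the study can be worked through numerically: the single-index keratometric power Pk = (nk − 1)/r1c versus the Gaussian thick-lens power using both corneal surfaces and the corneal thickness. The radii, thickness, and refractive indices below are typical textbook values chosen for illustration, not data from the study.

```python
r1c = 7.8e-3    # anterior corneal radius (m)
r2c = 6.5e-3    # posterior corneal radius (m)
d   = 0.55e-3   # central corneal thickness (m)
n_air, n_cornea, n_aqueous = 1.000, 1.376, 1.336

# Keratometric power with the conventional single index nk = 1.3375
nk = 1.3375
p_k = (nk - n_air) / r1c

# Gaussian thick-lens power: P = P1 + P2 - (d / n_cornea) * P1 * P2
p1 = (n_cornea - n_air) / r1c            # anterior surface power
p2 = (n_aqueous - n_cornea) / r2c        # posterior surface power (negative)
p_gauss = p1 + p2 - (d / n_cornea) * p1 * p2

print(round(p_k, 2), round(p_gauss, 2))  # diopters
```

Even for this normal-cornea example the single-index power overestimates the Gaussian power by about a diopter, which is the kind of discrepancy the study quantifies for keratoconic geometries.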
Model-based procedure for scale-up of wet, overflow ball mills - Part III: Validation and discussion
Abstract:
A new ball mill scale-up procedure is developed. This procedure has been validated using seven sets of full-scale ball mill data. The largest ball mills in these data have diameters (inside liners) of 6.58 m. The procedure can predict the 80% passing size of the circuit product to within +/-6% of the measured value, with a precision of +/-11% (one standard deviation); the re-circulating load to within +/-33% of the mass-balanced value (this error margin is within the uncertainty associated with the determination of the re-circulating load); and the mill power to within +/-5% of the measured value. This procedure is applicable to the design of ball mills which are preceded by autogenous (AG) mills, semi-autogenous (SAG) mills, crushers and flotation circuits. The new procedure is more precise and more accurate than Bond's method for ball mill scale-up. This procedure contains no efficiency correction related to the mill diameter, which suggests that, within the range of mill diameters studied, milling efficiency does not vary with mill diameter. This contrasts with Bond's equation: Bond claimed that milling efficiency increases with mill diameter. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
Pectus excavatum is the most common deformity of the thorax. A minimally invasive surgical correction is commonly carried out to remodel the anterior chest wall by using an intrathoracic convex prosthesis in the substernal position. The process of prosthesis modeling and bending still remains an area of improvement. The authors developed a new system, i3DExcavatum, which can automatically model and bend the bar preoperatively based on a thoracic CT scan. This article presents a comparison between automatic and manual bending. The i3DExcavatum was used to personalize prostheses for 41 patients who underwent pectus excavatum surgical correction between 2007 and 2012. Regarding the anatomical variations, the soft-tissue thicknesses external to the ribs show that both symmetric and asymmetric patients always have asymmetric variations, by comparing the patients’ sides. It highlighted that the prosthesis bar should be modeled according to each patient’s rib positions and dimensions. The average differences between the skin and costal line curvature lengths were 84 ± 4 mm and 96 ± 11 mm, for male and female patients, respectively. On the other hand, the i3DExcavatum ensured a smooth curvature of the surgical prosthesis and was capable of predicting and simulating a virtual shape and size of the bar for asymmetric and symmetric patients. In conclusion, the i3DExcavatum allows preoperative personalization according to the thoracic morphology of each patient. It reduces surgery time and minimizes the margin error introduced by the manually bent bar, which only uses a template that copies the chest wall curvature.
Abstract:
Pectus excavatum is the most common congenital deformity of the anterior thoracic wall. Its surgical correction using the Nuss procedure consists of placing a personalized convex prosthesis in the sub-sternal position. The aim of this work is to replace the CT scan with ultrasound imaging for the pre-operative diagnosis and pre-modeling of the prosthesis, in order to avoid exposing the patient to radiation. To accomplish this, ultrasound images are acquired along an axial plane, and a rigid registration method is applied to obtain the spatial transformation between subsequent images. These images are then overlapped to reconstruct an axial plane equivalent to a CT slice. A phantom was used to conduct preliminary experiments, and the results were compared with the corresponding CT data, showing that the proposed methodology can create a valid approximation of the anterior thoracic wall, which can be used to model and bend the prosthesis.
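The core of the rigid registration step, estimating the rotation and translation between two frames, can be sketched with a Kabsch/Procrustes fit on matched 2-D points. Point correspondences are assumed given here and the data are synthetic; the paper registers the ultrasound images themselves, so this only illustrates the underlying transform estimate.

```python
import numpy as np

def rigid_fit(p, q):
    """Least-squares R, t such that q ~ p @ R.T + t (rows are points)."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)            # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T   # guard against reflections
    t = qc - R @ pc
    return R, t

# synthetic "subsequent image" displaced by a known rigid motion
rng = np.random.default_rng(5)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, 2.0])
p = rng.uniform(-1.0, 1.0, (30, 2))
q = p @ R_true.T + t_true

R_hat, t_hat = rigid_fit(p, q)
print(np.allclose(R_hat, R_true), np.allclose(t_hat, t_true))
```

With noise-free correspondences the fit recovers the motion exactly; chaining such transforms is what lets overlapping frames be stitched into one axial plane.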
Abstract:
The aim of this study was to evaluate the efficacy of the Old Way/New Way methodology (Lyndon, 1989/2000) with regard to the permanent correction of a consolidated and automated technical error in the serve of a tennis athlete (18 years old, with about 6 years of practice). Additionally, the study assessed the impact of the intervention on the athlete's psychological skills. An individualized intervention was designed using strategies aimed at producing a) a detailed analysis of the error using video images; b) an increased kinaesthetic awareness; c) a reactivation of the error memory; and d) the discrimination and generalization of the correct motor action. The athlete's psychological skills were measured with a Portuguese version of the Psychological Skills Inventory for Sports (Cruz & Viana, 1993). After the intervention, the technical error was corrected with great efficacy, and an increase in the athlete's psychological skills was verified. This study demonstrates the methodology's efficacy, which is consistent with the effects of this type of intervention in different contexts.
Abstract:
This paper discusses the fitting of a Cobb-Douglas response curve Y_i = αX_i^β with additive error, Y_i = αX_i^β + e_i, instead of the usual multiplicative error, Y_i = αX_i^β(1 + e_i). The estimation of the parameters α and β is discussed. An example is given using both types of error.
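The two error specifications lead to different estimators: multiplicative error makes the model log-linear, so ordinary least squares on the logs applies, while additive error calls for nonlinear least squares on the original scale. A minimal sketch with synthetic data (the Gauss-Newton fit below is a generic implementation, not the paper's procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 2.0, 0.6
x = np.linspace(1.0, 10.0, 50)
y = a_true * x**b_true + rng.normal(0.0, 0.05, x.size)  # additive error

# Multiplicative-error fit: log Y = log a + b log X, OLS on the logs
# (the right estimator when Y = a X^b (1 + e)).
A = np.column_stack([np.ones_like(x), np.log(x)])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
a_log, b_log = np.exp(coef[0]), coef[1]

# Additive-error fit: Gauss-Newton on the residual r = y - a x^b.
a_hat, b_hat = a_log, b_log                 # warm start from the log fit
for _ in range(50):
    f = a_hat * x**b_hat
    J = np.column_stack([x**b_hat, f * np.log(x)])  # d f / d(a, b)
    step, *_ = np.linalg.lstsq(J, y - f, rcond=None)
    a_hat += step[0]
    b_hat += step[1]

print(a_hat, b_hat)
```

Since the data here are generated with additive error, the nonlinear fit matches the data-generating assumption, while the log fit is slightly mis-specified.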
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for this attenuation, regression calibration is commonly used, which requires unbiased reference measurements. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges for the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We show how to handle excess-zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of the error adjustment is influenced by the number and forms of the covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
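The two-part idea rests on a simple factorization: a 24-hour recall is zero on non-consumption days, so the expected intake splits as E[intake] = P(consumed) × E[amount | consumed]. The toy version below has no covariates and uses invented lognormal amounts; the paper's calibration model regresses both parts on covariates, so this only illustrates the factorization.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
p_consume = 0.3                      # true consumption probability
consumed = rng.random(n) < p_consume
amount = np.where(consumed, rng.lognormal(4.0, 0.5, n), 0.0)  # excess zeroes

# Part 1: probability of a nonzero recall.
p_hat = np.mean(amount > 0)
# Part 2: mean amount on consumption days only.
mean_pos = amount[amount > 0].mean()

expected_intake = p_hat * mean_pos
true_mean = p_consume * np.exp(4.0 + 0.5**2 / 2)  # lognormal mean formula
print(expected_intake, true_mean)
```

A one-part model that averages over the zeroes directly gives the same marginal mean here, but once each part depends on covariates the two-part decomposition is what lets the zeroes and the amounts be modeled on their own terms.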
Abstract:
Comparison of donor-acceptor electronic couplings calculated within two-state and three-state models suggests that the two-state treatment can provide unreliable estimates of Vda because it neglects multistate effects. We show that in most cases accurate values of the electronic coupling in a π stack, where donor and acceptor are separated by a bridging unit, can be obtained as Ṽda = (E2 − E1)μ12/Rda + (2E3 − E1 − E2)·2μ13μ23/Rda², where E1, E2, and E3 are the adiabatic energies of the ground, charge-transfer, and bridge states, respectively, μij is the transition dipole moment between states i and j, and Rda is the distance between the planes of the donor and acceptor. In this expression, based on the generalized Mulliken-Hush approach, the first term corresponds to the coupling derived within a two-state model, whereas the second term is the superexchange correction accounting for the bridge effect. The formula is extended to bridges consisting of several subunits. The influence of the donor-acceptor energy mismatch on the excess charge distribution, adiabatic dipole and transition moments, and electronic couplings is examined. A diagnostic is developed to determine whether the two-state approach can be applied. Based on numerical results, we show that the superexchange correction considerably improves estimates of the donor-acceptor coupling derived within a two-state approach. In most cases where the two-state scheme fails, the formula gives reliable results in good agreement (within 5%) with the data of the three-state generalized Mulliken-Hush model.
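A quick numerical illustration of the two terms, reading the formula as Ṽda = (E2 − E1)μ12/Rda + (2E3 − E1 − E2)·2μ13μ23/Rda². The grouping of the fractions is our reading of the abstract's (extraction-damaged) formula, and all numbers below are invented for illustration, not data from the paper.

```python
E1, E2, E3 = 0.0, 0.5, 2.0        # adiabatic energies (eV)
mu12, mu13, mu23 = 1.0, 0.4, 0.5  # transition dipoles / e (angstrom)
Rda = 7.0                         # donor-acceptor plane separation (angstrom)

v_two_state = (E2 - E1) * mu12 / Rda                       # GMH two-state term
v_bridge = (2 * E3 - E1 - E2) * 2 * mu13 * mu23 / Rda**2   # superexchange correction
v_da = v_two_state + v_bridge
print(v_two_state, v_bridge, v_da)
```

With these numbers the bridge term contributes a sizeable fraction of the total coupling, which is the regime where the abstract says the two-state estimate becomes unreliable.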
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
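The label-flipping reduction can be demonstrated for 1-D threshold classifiers h(x) = s·sign(x − t). For a hypothesis class closed under label complement, the maximal discrepancy between the errors on the two halves equals 1 − 2·(minimum empirical error on a copy of the data with the second half's labels flipped). The toy data and the choice of hypothesis class below are ours, for illustration.

```python
import numpy as np

def erm_threshold_error(x, y):
    """Smallest 0-1 error over all thresholds t and both signs s."""
    best = 1.0
    for t in np.concatenate(([-np.inf], np.sort(x))):
        for s in (-1, 1):
            pred = np.where(x > t, s, -s)
            best = min(best, np.mean(pred != y))
    return best

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(-1.0, 1.0, n)
y = rng.choice([-1, 1], n)     # random labels: no real signal

y_flip = y.copy()
y_flip[n // 2:] *= -1          # flip the second half's labels
# ERM on the flipped sample computes the maximal discrepancy:
discrepancy = 1.0 - 2.0 * erm_threshold_error(x, y_flip)
print(discrepancy)
```

On random labels the discrepancy stays small, reflecting the low capacity of threshold classifiers; a richer class would fit the flipped labels better and incur a larger penalty.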