947 results for Residual errors
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden-target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5-mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainties being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainties being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of a significant difference between the two techniques when the ART-210 head phantom was used.
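As an illustration of the error metric reported above, here is a minimal sketch (with made-up displacement values, not the study's data) of how a 3D translational error magnitude and its mean ± SD can be computed per target:

```python
import numpy as np

# Hypothetical per-target translational displacements (mm) in x, y, z,
# e.g. as measured for four hidden targets in a phantom test.
displacements = np.array([
    [0.18, 0.10, 0.05],
    [0.20, 0.08, 0.09],
    [0.15, 0.14, 0.07],
    [0.21, 0.06, 0.10],
])

# 3D error magnitude per target: sqrt(dx^2 + dy^2 + dz^2)
magnitudes = np.linalg.norm(displacements, axis=1)

mean_error = magnitudes.mean()       # mean error magnitude (mm)
std_error = magnitudes.std(ddof=1)   # sample standard deviation (mm)
```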
Abstract:
Therapeutic drug monitoring is recommended for dose adjustment of immunosuppressive agents. The relevance of using the area under the concentration-time curve (AUC) as a biomarker for therapeutic drug monitoring of cyclosporine (CsA) in hematopoietic stem cell transplantation is supported by a growing number of studies. However, for reasons intrinsic to the way the AUC is calculated, its use in the clinical setting is impractical. Limited sampling strategies, based on regression approaches (R-LSS) or Bayesian approaches (B-LSS), are practical alternatives for satisfactory AUC estimation. For these methodologies to be applied effectively, however, their design must accommodate clinical reality, notably by requiring a minimal number of concentrations spread over a short sampling period. Moreover, particular attention should be paid to ensuring their adequate development and validation. It is also important to note that irregularity in blood sample collection times can have a non-negligible impact on the predictive performance of R-LSS; to date, this impact had not been studied. This doctoral thesis addresses these issues in order to allow precise and practical AUC estimation. The studies were carried out in the context of CsA use in pediatric patients who underwent hematopoietic stem cell transplantation. First, multiple regression and population pharmacokinetic (Pop-PK) approaches were used constructively to develop and adequately validate LSS. Then, several Pop-PK models were evaluated, keeping in mind their intended use in the context of AUC estimation.
The performance of B-LSS targeting different versions of the AUC was also studied. Finally, the impact of deviations between actual blood sampling times and the planned nominal times on the predictive performance of R-LSS was quantified using a simulation approach that considers diverse, realistic scenarios representing potential errors in the blood sampling schedule. This work first led to the development of R-LSS and B-LSS with satisfactory clinical performance that are also practical, since they involve 4 or fewer sampling points obtained within 4 hours post-dose. The Pop-PK analysis retained a two-compartment structural model with a lag time. However, the final model (notably, with covariates) did not improve B-LSS performance compared with the structural models (without covariates). In addition, we showed that B-LSS perform better for the AUC derived from simulated concentrations that exclude residual errors, which we termed the "underlying AUC", than for the observed AUC calculated directly from measured concentrations. Finally, our results showed that irregularity in blood sample collection times has an important impact on the predictive performance of R-LSS; this impact depends on the number of samples required, but even more on the duration of the sampling process involved. We also showed that sampling-time errors committed at moments when the concentration changes rapidly are those that most affect the predictive power of R-LSS. More interestingly, we highlighted that even if different R-LSS can perform similarly when based on nominal times, their tolerance to sampling-time errors can differ widely.
Indeed, adequate consideration of the impact of these errors can lead to more reliable selection and use of R-LSS. Through an in-depth investigation of various aspects underlying limited sampling strategies, this thesis provides notable methodological improvements and proposes new avenues to ensure their reliable and informed use, while promoting their suitability for clinical practice.
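The regression-based limited sampling strategy (R-LSS) idea can be sketched as a multiple linear regression mapping a few timed concentrations to the full AUC. This is a minimal illustration on synthetic data; the sampling times, coefficients, and units are hypothetical, not the thesis's validated models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: per patient, drug concentrations at three
# post-dose sampling times (e.g. 0.5 h, 2 h, 4 h) and the reference
# full AUC (illustrative units; not real cyclosporine data).
n = 40
C = rng.uniform([200, 400, 150], [600, 1200, 500], size=(n, 3))
true_coef = np.array([0.8, 2.5, 1.2])
auc_ref = 50 + C @ true_coef + rng.normal(0, 30, size=n)

# R-LSS: least-squares fit AUC ≈ b0 + b1*C(0.5h) + b2*C(2h) + b3*C(4h)
X = np.column_stack([np.ones(n), C])
coef, *_ = np.linalg.lstsq(X, auc_ref, rcond=None)

# Predict the AUC for a new patient from only three samples
c_new = np.array([1.0, 350.0, 800.0, 300.0])
auc_pred = c_new @ coef
```

In practice such a model would be validated on an independent cohort, and, as the thesis stresses, its tolerance to deviations from the nominal sampling times should be assessed before clinical use.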
Abstract:
New models for estimating bioaccumulation of persistent organic pollutants in the agricultural food chain were developed using recent improvements to plant uptake and cattle transfer models. One model, named AgriSim, was based on K_OW regressions of bioaccumulation in plants and cattle, while the other, AgriCom, was a steady-state mechanistic model. The two developed models and the European Union System for the Evaluation of Substances (EUSES), as a benchmark, were applied to four reported food chain (soil/air-grass-cow-milk) scenarios to evaluate the performance of each model simulation against the observed data. The four scenarios considered were as follows: (1) polluted soil and air, (2) polluted soil, (3) highly polluted soil surface and polluted subsurface, and (4) polluted soil and air at different mountain elevations. AgriCom reproduced observed milk bioaccumulation well for all four scenarios, as did AgriSim for scenarios 1 and 2, but EUSES only did so for scenario 1. The main causes of the deviation were the lack of the soil-air-plant pathway in EUSES and of the ambient air-plant pathway in AgriSim. Based on the results, it is recommended that the soil-air-plant and ambient air-plant pathways be calculated separately and that the K_OW regression of the transfer factor to milk used in EUSES be avoided. AgriCom satisfied these recommendations, which led to low residual errors between the simulated and observed bioaccumulation in the agricultural food chain for the four scenarios considered. It is therefore recommended that this model be incorporated into regulatory exposure assessment tools. The model uncertainty of the three models should be noted, since the simulated concentration in milk from the 5th to the 95th percentile of the uncertainty analysis often varied over two orders of magnitude. Using a measured value of soil organic carbon content was effective in reducing this uncertainty by one order of magnitude.
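A steady-state transfer chain of the kind these models represent can be sketched in a few lines; all parameter values below are hypothetical illustrations, not AgriCom's calibrated parameters:

```python
# Minimal steady-state sketch of a soil-grass-cow-milk transfer chain.
# All parameter values are hypothetical, for illustration only.

c_soil = 10.0          # pollutant concentration in soil (ng/g dry weight)
bcf_soil_grass = 0.05  # soil-to-grass bioconcentration factor (-)
grass_intake = 16.0    # cattle grass intake (kg dw/day)
tf_milk = 0.01         # transfer factor to milk (day/kg)

c_grass = c_soil * bcf_soil_grass               # ng/g dw in grass
daily_intake = c_grass * grass_intake * 1000.0  # ng/day (kg -> g conversion)
c_milk = daily_intake * tf_milk                 # ng/kg in milk
```

A mechanistic model like AgriCom would add the separate soil-air-plant and ambient air-plant pathways recommended above rather than lumping them into a single bioconcentration factor.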
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Mistreatment and self-neglect significantly increase the risk of dying in older adults. It is estimated that 1 to 2 million older adults experience elder mistreatment and self-neglect every year in the United States. Currently, there are no elder mistreatment and self-neglect assessment tools with construct validity and measurement invariance testing, and no studies have sought to identify underlying latent classes of elder self-neglect that may have differential mortality rates. Using data from 11,280 adults with Texas APS-substantiated elder mistreatment and self-neglect, 3 studies were conducted to: (1) test the construct validity and (2) the measurement invariance across gender and ethnicity of the Texas Adult Protective Services (APS) Client Assessment and Risk Evaluation (CARE) tool, and (3) identify latent classes associated with elder self-neglect. Study 1 confirmed the construct validity of the CARE tool following adjustments to the initial hypothesized CARE tool. This resulted in the deletion of 14 assessment items and a final assessment with 5 original factors and 43 items. Cross-validation for this model was achieved. Study 2 provided empirical evidence for factor-loading and item-threshold invariance of the CARE tool across gender and between African-Americans and Caucasians. The financial status domain of the CARE tool did not function properly for Hispanics and thus had to be deleted. Subsequent analyses showed factor-loading and item-threshold invariance across all 3 ethnic groups, with the exception of some residual errors. Study 3 identified 4 latent classes associated with elder self-neglect behaviors, comprising individuals with evidence of problems in the areas of (1) their environment, (2) physical and medical status, (3) multiple domains, and (4) finances.
Overall, these studies provide evidence supporting the use of the APS CARE tool for unbiased and valid investigations of mistreatment and neglect in older adults with different demographic characteristics. Furthermore, the findings support the notion that elder self-neglect may not only occur along a continuum but that differential types may exist. All of these findings have important potential implications for social and health services provided to vulnerable mistreated and neglected older adults.
Abstract:
The purpose of this thesis is the implementation of efficient grid adaptation methods based on the adjoint equations within the framework of finite volume methods (FVM) for unstructured grid solvers. The adjoint-based methodology aims at adapting grids to improve the accuracy of a functional output of interest, for example the aerodynamic drag or lift. The adjoint methodology is based on a posteriori functional error estimation using the adjoint/dual-weighted residual (DWR) method. In this method, the error in a functional output can be directly related to local residual errors of the primal solution through the adjoint variables. These variables are obtained by solving the corresponding adjoint problem for the chosen functional. The common approach to introducing the DWR method within the FVM framework involves the use of an auxiliary embedded grid. The storage of this mesh demands high computational resources, i.e. over one order of magnitude increase in memory relative to the initial problem for 3D cases. In this thesis, an alternative methodology for adapting the grid is proposed. Specifically, the DWR approach for error estimation is reformulated on a coarser mesh level, using the τ-estimation method to approximate the truncation error. Then, an output-based adaptive algorithm is designed in such a way that the basic ingredients of the standard adjoint method are retained but the computational cost is significantly reduced.
The standard and newly proposed adjoint-based adaptive methodologies have been incorporated into a flow solver commonly used in the EU aeronautical industry. The influence of different numerical settings has been investigated. The proposed method has been compared against different grid adaptation approaches, and the computational efficiency of the new method has been demonstrated on representative aeronautical test cases.
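The dual-weighted residual estimate underlying both methodologies can be written compactly; sign conventions vary between formulations, so the form below is one common statement rather than the thesis's exact notation:

```latex
% DWR output-error estimate: the error in the functional J between the
% fine-grid solution u_h and the coarse solution u_H prolonged to the
% fine grid (u_h^H) is approximated by weighting the fine-grid residual
% R_h of the prolonged solution with the discrete adjoint psi_h.
\delta J \;=\; J(u_h) - J\!\left(u_h^{H}\right)
\;\approx\; -\,\psi_h^{\top}\, R_h\!\left(u_h^{H}\right)
```

The cost saving proposed in the thesis comes from approximating the residual term on a coarser level via τ-estimation instead of storing the embedded fine grid.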
Abstract:
A technique is presented for the development of a high-precision, high-resolution Mean Sea Surface (MSS) model. The model utilises radar altimetric sea surface heights extracted from the geodetic phase of the ESA ERS-1 mission. The methodology uses a modified Le Traon et al. (1995) cubic-spline fit of dual ERS-1 and TOPEX/Poseidon crossovers for the minimisation of radial orbit error. The procedure then uses Fourier-domain processing techniques for spectral optimal interpolation of the mean sea surface in order to reduce residual errors within the model. Additionally, a multi-satellite mean sea surface integration technique is investigated to supplement the first model with additional enhanced data from the GEOSAT geodetic mission. The methodology employs a novel technique that combines the Stokes and Vening Meinesz transformations, again in the spectral domain. This allows the presentation of a new enhanced GEOSAT gravity anomaly field.
Abstract:
Spectral albedo has been measured at Dome C since December 2012 in the visible and near infrared (400-1050 nm) at sub-hourly resolution using a home-made spectral radiometer. Superficial specific surface area (SSA) has been estimated by fitting the observed albedo spectra to the analytical Asymptotic Approximation Radiative Transfer (AART) theory. The dataset includes fully-calibrated albedo and SSA that pass several quality checks, as described in the companion article. Only data for solar zenith angles less than 75° have been included, which theoretically spans the period October-March. In addition, to correct for residual errors still affecting the data after calibration, especially at solar zenith angles higher than 60°, we produced a higher-quality albedo time series as follows: in the SSA estimation process described in the companion paper, a scaling coefficient A between the observed albedo and the theoretical model predictions was introduced to cope with these errors. This coefficient thus provides a first-order estimate of the residual error. By dividing the albedo by this coefficient, we produced the "scaled fully-calibrated albedo". We strongly recommend using the latter for most applications because it generally remains in the physical range 0-1. The former albedo is provided for reference to the companion paper and because it does not depend on the SSA estimation process and its underlying assumptions.
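The "scaled fully-calibrated albedo" described above amounts to a per-spectrum division by the coefficient A; a minimal sketch with illustrative values (not the actual Dome C data):

```python
import numpy as np

# Hypothetical fully-calibrated albedo spectrum and the scaling
# coefficient A from the SSA fit (values are illustrative only).
albedo = np.array([1.02, 0.99, 0.97, 0.95])  # can exceed 1 before scaling
A = 1.03                                     # model-vs-observation scale factor

scaled_albedo = albedo / A                   # "scaled fully-calibrated albedo"
in_range = np.all((scaled_albedo >= 0.0) & (scaled_albedo <= 1.0))
```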
Abstract:
This paper examines why practitioners and researchers get different estimates of equity value when they use a discounted cash flow (DCF) model versus a residual income (RI) model. Both models are derived from the same underlying assumption -- that price is the present value of expected future net dividends discounted at the cost of equity capital -- but in practice and in research they frequently yield different estimates. We argue that the research literature devoted to comparing the accuracy of these two models is misguided; properly implemented, both models yield identical valuations for all firms in all years. We identify how prior research has applied inconsistent assumptions to the two models and show how these seemingly small errors cause surprisingly large differences in the value estimates.
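The paper's claim that consistently implemented models agree can be checked numerically. The sketch below uses hypothetical forecasts under clean-surplus accounting with a liquidating payout of final book value; both valuations coincide to machine precision:

```python
# Numerical check that a dividend-discount (DCF-style) valuation and a
# residual-income valuation agree under consistent assumptions
# (clean-surplus accounting, same forecasts, same cost of equity).
# All numbers are hypothetical.
r = 0.10                   # cost of equity
B = [100.0]                # book value, B[0] at t=0
ni = [12.0, 14.0, 16.0]    # forecast net income, t = 1..3
div = [5.0, 6.0, 7.0]      # forecast dividends, t = 1..3

# Clean surplus: B_t = B_{t-1} + NI_t - D_t
for t in range(3):
    B.append(B[t] + ni[t] - div[t])

# Dividend model: discount dividends plus a liquidating payout of
# final book value at t = 3.
v_dcf = sum(d / (1 + r) ** (t + 1) for t, d in enumerate(div))
v_dcf += B[3] / (1 + r) ** 3

# Residual income: book value plus discounted (NI_t - r * B_{t-1}).
v_ri = B[0] + sum((ni[t] - r * B[t]) / (1 + r) ** (t + 1) for t in range(3))
```

Introducing an inconsistent assumption in either model (e.g. a terminal value that violates clean surplus) breaks the equality, which is the paper's point about seemingly small implementation errors.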
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the outputs of two independent implementations in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
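The back-to-back residual idea can be illustrated with two hypothetical implementations of the same simple model, one carrying a planted coding error (the paper's observer-based structuring of the residuals is beyond this sketch):

```python
# Back-to-back testing sketch: drive two independent implementations of
# the same model with identical inputs and inspect the residual between
# their outputs. Both implementations are hypothetical; a coding error
# is deliberately planted in the second.
def model_ref(x):
    """Reference implementation: y = 2*x + 1."""
    return 2.0 * x + 1.0

def model_alt(x):
    """Independent re-implementation with a planted sign error."""
    return 2.0 * x - 1.0  # bug: should be + 1.0

inputs = [0.0, 1.0, 2.5]
residuals = [model_ref(x) - model_alt(x) for x in inputs]

# A nonzero residual flags a discrepancy; its fixed structure here
# (constant offset 2.0) points at the faulty constant term.
```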
Abstract:
The design of control, estimation, or diagnosis algorithms most often assumes that all available process variables represent the system state at the same instant of time. However, this is never true in networked systems, because of the unknown deterministic or stochastic transmission delays introduced by the communication network. During the diagnosis stage, this will often generate false alarms. Under nominal operation, the different transmission delays associated with the variables that appear in the residual computation produce discrepancies of the residuals from zero. A technique aiming at minimising the resulting false alarm rate, based on the explicit modelling of communication delays and on their best-case estimation, is proposed.
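The delay-induced residual discrepancy described above can be illustrated with a toy two-variable consistency check; the signals, the consistency relation, and the one-sample delay are all hypothetical:

```python
import numpy as np

# A static residual r(k) = y2(k) - 2*y1(k) is zero under nominal,
# delay-free operation. If y1 arrives over the network with a one-step
# transmission delay, the residual computed from the stale value drifts
# away from zero and can trigger false alarms.
k = np.arange(10)
y1 = np.sin(0.5 * k)   # transmitted process variable
y2 = 2.0 * y1          # nominally consistent second variable

r_nominal = y2 - 2.0 * y1          # exactly zero: no delay

y1_delayed = np.roll(y1, 1)        # one-sample transmission delay
y1_delayed[0] = y1[0]              # no stale value before the first sample
r_delayed = y2 - 2.0 * y1_delayed  # nonzero: delay-induced discrepancy
```

Modelling the delay explicitly, as the abstract proposes, would let the diagnosis stage discount this predictable discrepancy instead of raising an alarm.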