13 results for EQUATION-ERROR MODELS
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA provides a diagnostic of the quality of the error model, assessing the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
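As an illustration of the approach, here is a minimal sketch in Python, using ordinary PCA on discretized curves as a stand-in for FPCA; the synthetic learning set and all names (proxy, exact, the linear score map) are hypothetical, not the paper's actual models:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical learning set: n realizations, each response discretized on t points.
n, t = 200, 100
x = np.linspace(0, 1, t)
exact = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, (n, t))
proxy = 0.9 * exact + 0.05 * np.cos(2 * np.pi * x)   # cheap, biased approximation

# Discretized stand-in for FPCA: PCA on the curves, keeping most of the variance.
pca_proxy, pca_exact = PCA(n_components=5), PCA(n_components=5)
s_proxy = pca_proxy.fit_transform(proxy)
s_exact = pca_exact.fit_transform(exact)

# Error model: map proxy scores to exact scores on the learning set.
error_model = LinearRegression().fit(s_proxy, s_exact)

# Predict the exact response of a new realization from its proxy response alone.
new_proxy = proxy[:1]
pred_scores = error_model.predict(pca_proxy.transform(new_proxy))
pred_exact = pca_exact.inverse_transform(pred_scores)
```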
Abstract:
To enhance understanding of the metabolic indicators of type 2 diabetes mellitus (T2DM) disease pathogenesis and progression, the urinary metabolomes of well characterized rhesus macaques (normal or spontaneously and naturally diabetic) were examined. High-resolution ultra-performance liquid chromatography coupled with the accurate mass determination of time-of-flight mass spectrometry was used to analyze spot urine samples from normal (n = 10) and T2DM (n = 11) male monkeys. The machine-learning algorithm random forests classified urine samples as either from normal or T2DM monkeys. The metabolites important for developing the classifier were further examined for their biological significance. Random forests models had a misclassification error of less than 5%. Metabolites were identified based on accurate masses (<10 ppm) and confirmed by tandem mass spectrometry of authentic compounds. Urinary compounds significantly increased (p < 0.05) in the T2DM when compared with the normal group included glycine betaine (9-fold), citric acid (2.8-fold), kynurenic acid (1.8-fold), glucose (68-fold), and pipecolic acid (6.5-fold). When compared with the conventional definition of T2DM, the metabolites were also useful in defining the T2DM condition, and the urinary elevations in glycine betaine and pipecolic acid (as well as proline) indicated defective re-absorption in the kidney proximal tubules by SLC6A20, a Na(+)-dependent transporter. The mRNA levels of SLC6A20 were significantly reduced in the kidneys of monkeys with T2DM. These observations were validated in the db/db mouse model of T2DM. This study provides convincing evidence of the power of metabolomics for identifying functional changes at many levels in the omics pipeline.
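A hedged sketch of the classification step, using scikit-learn's random forests with the out-of-bag error as a stand-in for the reported misclassification estimate; the data below are synthetic placeholders, not the study's measurements:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical data: rows are urine samples, columns are metabolite intensities.
X = rng.lognormal(size=(21, 50))           # 10 normal + 11 T2DM monkeys
y = np.array([0] * 10 + [1] * 11)          # 0 = normal, 1 = T2DM
X[y == 1, 0] *= 9.0                        # e.g. a 9-fold elevation in one metabolite

forest = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
forest.fit(X, y)

print("OOB misclassification error:", 1 - forest.oob_score_)
# Metabolites that drive the classification, analogous to the study's candidate markers.
top = np.argsort(forest.feature_importances_)[::-1][:5]
```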
Abstract:
Background: Accelerometry has been established as an objective method for assessing physical activity behavior in large groups. The purpose of the current study was to provide a validated equation for translating accelerometer counts of the triaxial GT3X into energy expenditure in young children. Methods: Thirty-two children aged 5–9 years performed locomotor and play activities typical for their age group. Children wore a GT3X accelerometer and their energy expenditure was measured with indirect calorimetry. Twenty-one children were randomly selected to serve as the development group. A cubic 2-regression model involving separate equations for locomotor and play activities was developed on the basis of model fit. It was then validated using data from the remaining children and compared with a linear 2-regression model and a linear 1-regression model. Results: All 3 regression models produced strong correlations between predicted and measured MET values. Agreement was acceptable for the cubic model and good for both linear regression approaches. Conclusions: The current linear 1-regression model provides valid estimates of energy expenditure from ActiGraph GT3X data for 5- to 9-year-old children and shows equal or better predictive validity than a cubic or a linear 2-regression model.
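For illustration, a minimal sketch of the 1- versus 2-regression idea (a single count-to-MET equation versus separate equations per activity type); the data, coefficients, and activity flag are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Hypothetical calibration data: accelerometer counts and measured METs.
counts = rng.uniform(0, 4000, 120).reshape(-1, 1)
mets = 1.0 + 0.0009 * counts.ravel() + rng.normal(0, 0.3, 120)
is_locomotor = rng.random(120) < 0.5       # activity-type flag from the protocol

# 1-regression model: a single equation for all activities.
one_reg = LinearRegression().fit(counts, mets)

# 2-regression model: separate equations for locomotor and play activities.
loco_reg = LinearRegression().fit(counts[is_locomotor], mets[is_locomotor])
play_reg = LinearRegression().fit(counts[~is_locomotor], mets[~is_locomotor])

def predict_mets(c, locomotor):
    """Predict METs from a count value, routing by activity type."""
    model = loco_reg if locomotor else play_reg
    return model.predict([[c]])[0]
```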
Abstract:
If change over time is compared in several groups, it is important to take baseline values into account so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. A solution to this problem has recently been provided by fitting a longitudinal mixed-effects model to all data, including the baseline observations, and subsequently calculating the expected change conditional on the underlying baseline value, so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach in which a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and time-dependent covariates can be included as well. Additionally, we extend the method to adjust for baseline measurement error in other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether a joint infection with HIV-1 and hepatitis C virus leads to a slower increase in CD4 lymphocyte counts over time after the start of antiretroviral therapy.
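A minimal sketch of the kind of longitudinal mixed-effects model involved, using statsmodels; the CD4 values, effect sizes, and the coinfection flag are invented for illustration and do not reproduce the paper's extended baseline adjustment:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Hypothetical longitudinal data: CD4 counts per patient, baseline (time 0) included
# as an outcome rather than as an error-prone covariate.
n, visits = 50, 5
patient = np.repeat(np.arange(n), visits)
time = np.tile(np.arange(visits), n)                 # 0 = baseline
coinfected = np.repeat(rng.random(n) < 0.4, visits)  # HIV-1/HCV co-infection flag
u = np.repeat(rng.normal(0, 50, n), visits)          # latent patient level
cd4 = 350 + u + (30 - 10 * coinfected) * time + rng.normal(0, 25, n * visits)
data = pd.DataFrame({"cd4": cd4, "time": time,
                     "coinfected": coinfected.astype(int), "patient": patient})

# Random intercept + slope model; the time:coinfected interaction captures whether
# co-infection slows the CD4 increase after the start of therapy.
model = smf.mixedlm("cd4 ~ time * coinfected", data,
                    groups=data["patient"], re_formula="~time")
fit = model.fit()
print(fit.summary())
```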
Abstract:
This paper introduces and analyzes a stochastic search method for parameter estimation in linear regression models in the spirit of Beran and Millar [Ann. Statist. 15(3) (1987) 1131–1154]. The idea is to generate a random finite subset of a parameter space which will automatically contain points which are very close to an unknown true parameter. The motivation for this procedure comes from recent work of Dümbgen et al. [Ann. Statist. 39(2) (2011) 702–730] on regression models with log-concave error distributions.
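One simple way to realize the idea — candidate parameters drawn from fits on random subsamples, so that with many draws some candidates fall very close to the true parameter — sketched on invented data (an illustration of the general principle, not the authors' exact procedure):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical linear regression data with a non-Gaussian, log-concave error law.
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.laplace(0, 1, n)

# Stochastic search: each candidate is a least-squares fit on a small random
# subsample; a large random collection will contain near-optimal points.
def candidates(m, k=2 * p):
    for _ in range(m):
        idx = rng.choice(n, size=k, replace=False)
        yield np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]

# Select the candidate optimizing a robust criterion (here, least absolute deviations).
best = min(candidates(2000), key=lambda b: np.abs(y - X @ b).sum())
```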
Abstract:
In the last century, several mathematical models were developed to calculate blood ethanol concentrations (BAC) from the amount of ingested ethanol and vice versa. The most common one in the field of forensic sciences is Widmark's equation. A drinking experiment with 10 volunteer test persons was performed with a target BAC of 1.2 g/kg, estimated using Widmark's equation as well as Watson's factor. The ethanol concentrations in the blood were measured using headspace gas chromatography/flame ionization detection and additionally with an alcohol dehydrogenase (ADH)-based method. In a healthy 75-year-old man, a distinct discrepancy between the intended and the determined blood ethanol concentration was observed: a blood ethanol concentration of 1.83 g/kg was measured, and the man showed signs of intoxication. A possible explanation for the discrepancy is a reduction of the total body water content in older people. The incident showed that caution is advised when using the different mathematical models in aged people. When estimating ethanol concentrations, calculated results should be treated with caution because of potential discrepancies between mathematical models and biological systems.
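For reference, a worked sketch of Widmark's equation combined with Watson's total-body-water formula (male coefficients as commonly cited; all numbers illustrative only):

```python
# Widmark: C = A / (r * W) - beta * t, with C the BAC in g/kg, A the ingested
# ethanol in grams, W the body weight in kg, r the Widmark factor, and beta the
# hourly elimination rate (~0.15 g/kg/h).

def widmark_bac(alcohol_g, weight_kg, r=0.70, beta=0.15, hours=0.0):
    """BAC in g/kg via Widmark's equation."""
    return alcohol_g / (r * weight_kg) - beta * hours

def watson_r_male(age_yr, height_cm, weight_kg):
    """Widmark factor from Watson's total body water (male coefficients as
    commonly cited), assuming blood is roughly 80% water."""
    tbw = 2.447 - 0.09516 * age_yr + 0.1074 * height_cm + 0.3362 * weight_kg
    return tbw / (0.8 * weight_kg)

# Example: a 75-year-old man, 170 cm, 75 kg. Total body water declines with age,
# so the Watson-based r is smaller than the textbook value (~0.70).
r = watson_r_male(75, 170, 75)
dose_g = 1.2 * r * 75          # grams of ethanol targeting a BAC of 1.2 g/kg
```

A dose computed with a generic, age-independent r would overshoot the target BAC in an older subject, which is consistent with the reduced-body-water explanation offered above.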
Abstract:
Within the context of exoplanetary atmospheres, we present a comprehensive linear analysis of forced, damped, magnetized shallow water systems, exploring the effects of dimensionality, geometry (Cartesian, pseudo-spherical, and spherical), rotation, magnetic tension, and hydrodynamic and magnetic sources of friction. Across a broad range of conditions, we find that the key governing equations for atmospheres and quantum harmonic oscillators are identical, even when forcing (stellar irradiation), sources of friction (molecular viscosity, Rayleigh drag, and magnetic drag), and magnetic tension are included. The global atmospheric structure is largely controlled by a single key parameter that involves the Rossby and Prandtl numbers. This near-universality breaks down when either molecular viscosity or magnetic drag acts non-uniformly across latitude or a poloidal magnetic field is present, suggesting that these effects will introduce qualitative changes to the familiar chevron-shaped feature witnessed in simulations of atmospheric circulation. We also find that hydrodynamic and magnetic sources of friction have dissimilar phase signatures and affect the flow in fundamentally different ways, implying that using Rayleigh drag to mimic magnetic drag is inaccurate. We exhaustively lay down the theoretical formalism (dispersion relations, governing equations, and time-dependent wave solutions) for a broad suite of models. In all situations, we derive the steady state of an atmosphere, which is relevant to interpreting infrared phase and eclipse maps of exoplanetary atmospheres. We elucidate a pinching effect that confines the atmospheric structure near the equator. Our suite of analytical models may be used to develop physical intuition and as a reference point for three-dimensional magnetohydrodynamic simulations of atmospheric circulation.
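For orientation, the quantum-harmonic-oscillator form alluded to here is the dimensionless stationary Schrödinger equation (a standard result; identifying y with scaled latitude and λ with the combined forcing, friction, and rotation parameters is a sketch of the correspondence, not the paper's exact notation):

\[
\frac{\mathrm{d}^2 \Psi}{\mathrm{d} y^2} + \left( \lambda - y^2 \right) \Psi = 0,
\qquad
\Psi_n(y) \propto H_n(y)\, e^{-y^2/2} \quad \text{for } \lambda = 2n + 1,
\]

with \(H_n\) the Hermite polynomials; the Gaussian envelope is what produces equatorially confined solutions.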
Abstract:
We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior) and solutions for the temperature-pressure profiles. Generally, the problem is mathematically under-determined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat and the properties of scattering both in optical and infrared. Finally, we derive generalized expressions for the total, net, outgoing and incoming fluxes in the convective regime.
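The "2/3" mentioned here is Milne's classical grey-atmosphere result, quoted for reference (the standard textbook form, not the paper's generalized expression):

\[
T^4(\tau) = \frac{3}{4}\, T_{\mathrm{eff}}^4 \left( \tau + \frac{2}{3} \right),
\]

so that \(T(\tau) = T_{\mathrm{eff}}\) exactly at \(\tau = 2/3\); stellar irradiation, internal heat, and non-isotropic scattering all shift this photospheric value, as the abstract notes.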
Abstract:
The diversity of European culture is reflected in its healthcare training programs. In intensive care medicine (ICM), the differences in national training programs were so marked that it was unlikely that they could produce specialists of equivalent skills. The Competency-Based Training in Intensive Care Medicine in Europe (CoBaTrICE) program was established in 2003 as a Europe-based worldwide collaboration of national training organizations to create core competencies for ICM using consensus methodologies to establish common ground. The group's professional and research ethos created a social identity that facilitated change. The program was easily adaptable to different training structures and incorporated the voice of patients and relatives. The CoBaTrICE program has now been adopted by 15 European countries, with another 12 countries planning to adopt the training program, and is currently available in nine languages, including English. ICM is now recognized as a primary specialty in Spain, Switzerland, and the UK. There are still wide variations in structures and processes of training in ICM across Europe, although there has been agreement on a set of common program standards. The combination of a common "product specification" for an intensivist with persisting variation in the educational context in which competencies are delivered provides a rich source of research inquiry. Pedagogic research in ICM could usefully focus on the interplay between educational interventions, healthcare systems and delivery, and patient outcomes: for example, whether competency-based programs are associated with lower error rates, whether communication skills training is associated with greater patient and family satisfaction, how multisource feedback might best be used to improve reflective learning and teamworking, or whether increasing the proportion of specialists trained in acute care in the hospital at weekends results in better patient outcomes.
Abstract:
Accurate three-dimensional (3D) models of lumbar vertebrae are required for image-based 3D kinematic analysis. MRI or CT datasets are frequently used to derive 3D models but have the disadvantage of being expensive, time-consuming, or involving ionizing radiation (e.g., CT acquisition). In this chapter, we present an alternative technique that can reconstruct a scaled 3D lumbar vertebral model from a single two-dimensional (2D) lateral fluoroscopic image and a statistical shape model. Cadaveric studies were conducted to verify the reconstruction accuracy by comparing the surface models reconstructed from a single lateral fluoroscopic image to ground-truth data from 3D CT segmentation. A mean reconstruction error between 0.7 and 1.4 mm was found.
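A minimal sketch of reconstruction with a statistical shape model: optimize the mode coefficients and a scale factor so that the projected model matches the 2D view. The orthographic projection, synthetic shape model, and all names are hypothetical simplifications of the actual fluoroscopic setup:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Hypothetical statistical shape model: mean vertebral surface + PCA modes.
n_pts, n_modes = 500, 10
mean_shape = rng.normal(size=(n_pts, 3))
modes = rng.normal(size=(n_modes, n_pts, 3)) * 0.1

def instance(b, scale):
    """Scaled shape-model instance: scale * (mean + sum_i b_i * mode_i)."""
    return scale * (mean_shape + np.tensordot(b, modes, axes=1))

def project_lateral(pts):
    """Lateral fluoroscopic view approximated as an orthographic projection."""
    return pts[:, [0, 2]]          # drop the out-of-plane axis

# Stand-in for the 2D contour extracted from the fluoroscopic image.
target_2d = project_lateral(instance(rng.normal(size=n_modes), 1.05))

def residual(params):
    b, scale = params[:n_modes], params[n_modes]
    return (project_lateral(instance(b, scale)) - target_2d).ravel()

fit = least_squares(residual, x0=np.r_[np.zeros(n_modes), 1.0])
reconstructed = instance(fit.x[:n_modes], fit.x[n_modes])
```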
Abstract:
We calculate the all-loop anomalous dimensions of current operators in λ-deformed σ-models. For the isotropic integrable deformation and for a semi-simple group G, we compute the anomalous dimensions using two different methods. In the first, we use the all-loop effective action; in the second, we employ perturbation theory along with the Callan–Symanzik equation, in conjunction with a duality-type symmetry shared by these models. Furthermore, using CFT techniques, we compute the all-loop anomalous dimension of bilinear currents for the isotropic deformation case and a general G. Finally, we work out the anomalous dimension matrix for the cases of anisotropic SU(2) and of the two couplings corresponding to the splitting of a group G into a symmetric coset G/H and a subgroup H.
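For orientation, the Callan–Symanzik equation invoked in the second method has the generic form (the standard version, not the paper's specific notation):

\[
\left( \mu \frac{\partial}{\partial \mu} + \beta(\lambda) \frac{\partial}{\partial \lambda} + n\, \gamma(\lambda) \right) G^{(n)}(x_1, \ldots, x_n; \mu, \lambda) = 0,
\]

where β governs the running of the deformation coupling with the scale μ and γ is the anomalous dimension entering the n-point correlators of the current operators.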
Abstract:
OBJECTIVES
To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data transformed to triangulated surface data.
METHODS
Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland–Altman difference plots were used for analyses.
RESULTS
There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D<0.17 mm), as expected, followed by the AC and BZ superimpositions, which presented similar levels of accuracy (D<0.5 mm). 3P and 1Z were the least accurate superimpositions (0.79
Resumo:
Chironomid-temperature inference models based on North American, European and combined surface-sediment training sets were compared to assess the overall reliability of their predictions. Between 67 and 76 of the major chironomid taxa in each data set showed a unimodal response to July temperature, whereas between 5 and 22 of the common taxa showed a sigmoidal response. July temperature optima were highly correlated among the training sets, but the correlations for other taxon parameters, such as tolerances and weighted averaging partial least squares (WA-PLS) and partial least squares (PLS) regression coefficients, were much weaker. PLS, weighted averaging, WA-PLS, and the Modern Analogue Technique all provided useful and reliable temperature inferences. Although jack-knifed error statistics suggested that two-component WA-PLS models had the highest predictive power, intercontinental tests suggested that other inference models performed better. The various models were able to provide good July temperature inferences even where no good or close modern analogues for the fossil chironomid assemblages existed. When the models were applied to fossil Lateglacial assemblages from North America and Europe, the inferred rates and magnitude of July temperature change varied among models. All models, however, revealed similar patterns of Lateglacial temperature change. Depending on the model used, the inferred Younger Dryas July temperature decrease ranged between 2.5 and 6°C.
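As a reference for the weighted-averaging (WA) family of inference models compared here, a minimal sketch (synthetic training set; the deshrinking step and the partial-least-squares extension are omitted):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical training set: taxon abundances in surface sediments plus the
# observed July temperature at each lake.
n_lakes, n_taxa = 100, 30
temps = rng.uniform(5, 18, n_lakes)
abund = rng.random((n_lakes, n_taxa))

# Weighted averaging: a taxon's temperature optimum is the abundance-weighted
# mean of the temperatures of the lakes where it occurs.
optima = (abund * temps[:, None]).sum(axis=0) / abund.sum(axis=0)

def infer_temperature(sample_abund):
    """WA inference for a fossil assemblage (deshrinking omitted)."""
    return (sample_abund * optima).sum() / sample_abund.sum()

fossil = rng.random(n_taxa)    # stand-in for a fossil chironomid assemblage
print(infer_temperature(fossil))
```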