946 results for: Mean square analysis
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
The aim of this study was to determine the most informative sampling time(s) providing a precise prediction of tacrolimus area under the concentration-time curve (AUC). Fifty-four concentration-time profiles of tacrolimus from 31 adult liver transplant recipients were analyzed. Each profile contained 5 tacrolimus whole-blood concentrations (predose and 1, 2, 4, and 6 or 8 hours postdose), measured using liquid chromatography-tandem mass spectrometry. Concentrations at 5 and 6 hours were interpolated for each profile, and 54 values of AUC(0-6) were calculated using the trapezoidal rule. The best sampling times were then determined using limited sampling strategies and sensitivity analysis. Linear mixed-effects modeling was performed to estimate regression coefficients of equations incorporating each concentration-time point (C0, C1, C2, C4, interpolated C5, and interpolated C6) as a predictor of AUC(0-6). Predictive performance was evaluated by assessment of the mean error (ME) and root mean square error (RMSE). Limited sampling strategy (LSS) equations with C2, C4, and C5 provided similar results for prediction of AUC(0-6) (R² = 0.869, 0.844, and 0.832, respectively). These 3 time points were superior to C0 in the prediction of AUC. The ME was similar for all time points; the RMSE was smallest for C2, C4, and C5. The highest sensitivity index was determined to be 4.9 hours postdose at steady state, suggesting that this time point provides the most information about the AUC(0-12). The results from limited sampling strategies and sensitivity analysis supported the use of a single blood sample at 5 hours postdose as a predictor of both AUC(0-6) and AUC(0-12). A jackknife procedure was used to evaluate the predictive performance of the model, and this demonstrated that collecting a sample at 5 hours after dosing could be considered the optimal sampling time for predicting AUC(0-6).
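As a concrete illustration of the pipeline this abstract describes, the sketch below computes AUC(0-6) by the trapezoidal rule and evaluates a single-point LSS equation with ME and RMSE; the concentration values and regression coefficients are hypothetical placeholders, not the study's data.

```python
import numpy as np

def auc_trapezoid(times, concs):
    """AUC by the linear trapezoidal rule, as used for AUC(0-6)."""
    times = np.asarray(times, dtype=float)
    concs = np.asarray(concs, dtype=float)
    return float(np.sum(np.diff(times) * (concs[1:] + concs[:-1]) / 2.0))

def me_rmse(predicted, observed):
    """Mean error (bias) and root mean square error (precision)."""
    err = np.asarray(predicted) - np.asarray(observed)
    return err.mean(), np.sqrt((err ** 2).mean())

# Illustrative profile (values are made up, not study data):
# whole-blood tacrolimus (ng/mL) at 0, 1, 2, 4, 5, and 6 h postdose.
times = [0, 1, 2, 4, 5, 6]
concs = [5.1, 14.8, 11.2, 8.0, 7.1, 6.4]
auc_0_6 = auc_trapezoid(times, concs)

# A single-point LSS equation has the form AUC(0-6) = a + b * C5;
# the coefficients below are placeholders, not the study's estimates.
a, b = 10.0, 5.5
auc_predicted = a + b * concs[4]
```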
Abstract:
This paper gives a review of recent progress in the design of numerical methods for computing the trajectories (sample paths) of solutions to stochastic differential equations. We give a brief survey of the field, focusing on application areas where approximations to strong solutions are important, particularly computational biology, and give the necessary analytical tools for understanding some of the important concepts associated with stochastic processes. We present the stochastic Taylor series expansion as the fundamental mechanism for constructing effective numerical methods, give general results that relate local and global order of convergence, and mention the Magnus expansion as a mechanism for designing methods that preserve the underlying structure of the problem. We also present various classes of explicit and implicit methods for strong solutions, based on the underlying structure of the problem. Finally, we discuss implementation issues relating to maintaining the Brownian path, efficient simulation of stochastic integrals, and variable-step-size implementations based on various types of control.
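The simplest strong method obtained by truncating the stochastic Taylor expansion is Euler-Maruyama, with strong order 1/2; a minimal sketch, with illustrative drift and diffusion for geometric Brownian motion:

```python
import numpy as np

def euler_maruyama(f, g, x0, t_end, n_steps, rng):
    """Strong (pathwise) approximation of dX = f(X)dt + g(X)dW.
    Euler-Maruyama is the lowest-order truncation of the stochastic
    Taylor expansion, with strong order 1/2."""
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        x[n + 1] = x[n] + f(x[n]) * dt + g(x[n]) * dW
    return x

# Example: geometric Brownian motion dX = mu*X dt + sigma*X dW.
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: 0.5 * x, lambda x: 0.2 * x,
                      x0=1.0, t_end=1.0, n_steps=1000, rng=rng)
```

Storing the Brownian increments alongside the path is what allows variable-step-size refinement to reuse the same trajectory, one of the implementation issues the review discusses.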
Abstract:
Background: Lean bodyweight (LBW) has been recommended for scaling drug doses. However, the current methods for predicting LBW are inconsistent at extremes of size and could be misleading with respect to interpreting weight-based regimens. Objective: The objective of the present study was to develop a semi-mechanistic model to predict fat-free mass (FFM) from subject characteristics in a population that includes extremes of size. FFM is considered to closely approximate LBW. There are several reference methods for assessing FFM, whereas there are no reference standards for LBW. Patients and methods: A total of 373 patients (168 male, 205 female) were included in the study. These data arose from two populations. Population A (index dataset) contained anthropometric characteristics, FFM estimated by dual-energy X-ray absorptiometry (DXA, a reference method) and bioelectrical impedance analysis (BIA) data. Population B (test dataset) contained the same anthropometric measures and FFM data as population A, but excluded BIA data. The patients in population A had a wide range of age (18-82 years), bodyweight (40.7-216.5 kg) and BMI values (17.1-69.9 kg/m²). Patients in population B had BMI values of 18.7-38.4 kg/m². A two-stage semi-mechanistic model to predict FFM was developed from the demographics of population A. For stage 1, a model was developed to predict impedance, and for stage 2, a model that incorporated the predicted impedance was used to predict FFM. These two models were combined to provide an overall model to predict FFM from patient characteristics. The developed model for FFM was externally evaluated by predicting into population B. Results: The semi-mechanistic model to predict impedance incorporated sex, height and bodyweight. The developed model provided a good prediction of impedance for both males and females (r² = 0.78, mean error [ME] = 2.30 × 10⁻³, root mean square error [RMSE] = 51.56 [approximately 10% of the mean]). The final model for FFM incorporated sex, height and bodyweight. The developed model for FFM provided good predictive performance for both males and females (r² = 0.93, ME = -0.77, RMSE = 3.33 [approximately 6% of the mean]). In addition, the model accurately predicted the FFM of subjects in population B (r² = 0.85, ME = -0.04, RMSE = 4.39 [approximately 7% of the mean]). Conclusions: A semi-mechanistic model has been developed to predict FFM (and therefore LBW) from easily accessible patient characteristics. This model has been prospectively evaluated and shown to have good predictive performance.
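The final model is widely cited in the form FFM = c1·WT/(c2 + c3·BMI) with sex-specific coefficients; the sketch below uses the commonly quoted published values, which should be verified against the original paper before any use:

```python
def fat_free_mass(sex, weight_kg, height_m):
    """Semi-mechanistic FFM model of the form
    FFM = c1 * WT / (c2 + c3 * BMI).
    Coefficients are the values commonly cited from this study
    (assumption - verify against the original publication)."""
    bmi = weight_kg / height_m ** 2
    if sex == "male":
        return 9.27e3 * weight_kg / (6.68e3 + 216.0 * bmi)
    return 9.27e3 * weight_kg / (8.78e3 + 244.0 * bmi)

# Illustrative call only, not study data.
print(fat_free_mass("female", 70.0, 1.65))
```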
Abstract:
Recently the Balanced method was introduced as a class of quasi-implicit methods for solving stiff stochastic differential equations. We examine asymptotic and mean-square stability for several implementations of the Balanced method and give a generalized result for the mean-square stability region of any Balanced method. We also investigate the optimal implementation of the Balanced method with respect to strong convergence.
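For the scalar linear test equation dX = λX dt + μX dW used in mean-square stability analysis, one step of the Balanced method can be written in closed form; the control choice C_n = |λ|h + |μ||ΔW_n| below is one common implementation, not necessarily the optimal one the abstract investigates:

```python
import numpy as np

def balanced_step(x, h, dW, lam, mu):
    """One step of the Balanced (quasi-implicit) method for
    dX = lam*X dt + mu*X dW:
      X_{n+1} = X_n + lam*X_n*h + mu*X_n*dW + C_n*(X_n - X_{n+1}),
    solved in closed form in the scalar case, with the common
    control choice C_n = |lam|*h + |mu|*|dW|."""
    c = abs(lam) * h + abs(mu) * abs(dW)
    return x + (lam * x * h + mu * x * dW) / (1.0 + c)

# Illustrative stiff setting: lam = -50, mu = 1, step h = 0.01.
rng = np.random.default_rng(1)
x, h = 1.0, 0.01
for _ in range(100):
    x = balanced_step(x, h, rng.normal(0.0, np.sqrt(h)), -50.0, 1.0)
```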
Abstract:
An investigator may also wish to select a small subset of the X variables which gives the best prediction of the Y variable. In this case, the question is how many variables should the regression equation include? One method would be to calculate the regression of Y on every subset of the X variables and choose the subset that gives the smallest mean square deviation from the regression. Most investigators, however, prefer to use a 'stepwise multiple regression' procedure. There are two forms of this analysis, called the 'step-up' (or 'forward') method and the 'step-down' (or 'backward') method. This Statnote illustrates the use of stepwise multiple regression with reference to the scenario introduced in Statnote 24, viz., the influence of climatic variables on the growth of the crustose lichen Rhizocarpon geographicum (L.) DC.
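A minimal sketch of the 'step-up' (forward) form, greedily adding the X variable that most reduces the residual mean square; production stepwise procedures typically also apply F-to-enter/F-to-remove criteria:

```python
import numpy as np

def forward_stepwise(X, y, max_vars):
    """'Step-up' selection: at each step, add the predictor that
    most reduces the residual mean square of the fitted regression."""
    n, p = X.shape
    selected = []
    for _ in range(max_vars):
        best, best_rms = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            cols = selected + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            rms = resid @ resid / (n - len(cols) - 1)
            if rms < best_rms:
                best, best_rms = j, rms
        if best is None:
            break
        selected.append(best)
    return selected
```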
Abstract:
The internal optics of the recent models of the Shin-Nippon SRW-5000 autorefractor (also marketed as the Grand Seiko WV-500) have been modified by the manufacturer so that the infrared measurement ring has been replaced by pairs of horizontal and vertical infrared bars, on either side of fixation. The binocular, open field-of-view, allowing the accommodative state to be objectively monitored while a natural environment is viewed, has made the SRW-5000 a valuable tool in further understanding the nature of the oculomotor response. It is shown that the root-mean-square of model eye measures was least (0.017 ± 0.002 D) when the separation of the horizontal measurement bars was averaged twice. The separation of the horizontal bars changes by 3.59 pixels/dioptre (r² = 0.99), allowing continuous on-line analysis of the refractive state at up to 60 Hz temporal resolution to an accuracy of <0.001 D, with pupils >3 mm. The pupil edge is not obscured in the diagonal axis by the measurement bars, unlike the ring of the original optics, so in the newer model pupil size can be measured simultaneously at the same rate with a resolution of <0.001 mm. The measurements of accommodation and pupil size are relatively unaffected by eccentricity of viewing up to ±10° from the visual axis and instrument focusing inaccuracies over a range of 10 mm towards the eye and 5 mm away from the eye. The resolution and temporal properties of the analysis are therefore ideal for the simultaneous measurement of dynamic accommodation and pupil responses. © 2004 The College of Optometrists.
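Since the calibration above is a simple linear relation, converting a measured change in bar separation to a change in refractive state is a one-line computation; a sketch using the reported slope (the function name is ours):

```python
PIXELS_PER_DIOPTRE = 3.59  # reported calibration slope (r² = 0.99)

def refraction_change(delta_separation_px):
    """Convert a change in horizontal-bar separation (pixels)
    into a change in refractive state (dioptres)."""
    return delta_separation_px / PIXELS_PER_DIOPTRE
```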
Abstract:
The impact of whole body vibration (a vibration stimulus mechanically transferred to the body) on muscular activity and neuromuscular response has been widely studied, but without a standard protocol and using different kinds of exercises and parameters. In this study, we investigated how whole body vibration (WBV) treatments affect the electromyographic signal of the rectus femoris during static and dynamic squat exercises. The aim was to identify the squat exercise characteristics useful for maximizing neuromuscular activation and hence improving training efficacy. Fourteen healthy volunteers performed both static and dynamic squat exercises with and without vibration treatments. Surface electromyographic signals of the rectus femoris were recorded during the whole exercise and processed to reduce artifacts and to extract root mean square values. Paired t-test results demonstrated an increase in root mean square values with vibration (p < 0.05) of 63% in static and 108% in dynamic squat exercises. For each exercise, subjects rated their perceived exertion according to the Borg scale, but there were no significant changes in perceived exertion between exercises with and without vibration. Finally, analysis of the electromyographic signals identified the static squat with WBV treatment as the exercise with the highest neuromuscular response. © 2012 IEEE.
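For context, the root mean square value extracted from the surface EMG is a standard amplitude measure of muscle activation; a minimal sketch of a sliding-window RMS (signal and window length are illustrative):

```python
import numpy as np

def moving_rms(emg, window):
    """Root mean square of a surface EMG signal over a sliding
    window - a standard measure of muscle activation level."""
    emg = np.asarray(emg, dtype=float)
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(emg ** 2, kernel, mode="valid"))

# Illustrative use: 1 s of fake EMG at 1 kHz, 100 ms RMS window.
rng = np.random.default_rng(2)
rms_envelope = moving_rms(rng.normal(0.0, 0.1, 1000), window=100)
```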
Abstract:
OBJECTIVE: To analyze differences in the variables associated with severity of suicidal intent and in the main factors associated with intent when comparing younger and older adults. DESIGN: Observational, descriptive cross-sectional study. SETTING: Four general hospitals in Madrid, Spain. PARTICIPANTS: Eight hundred seventy suicide attempts by 793 subjects split into two groups: 18-54 year olds and subjects older than 55 years. MEASUREMENTS: The authors tested the factorial latent structure of suicidal intent through multigroup confirmatory factor analysis for categorical outcomes and performed statistical tests of invariance across age groups using the DIFFTEST procedure. Then, they tested a multiple indicators-multiple causes (MIMIC) model including different covariates regressed on the latent factor "intent" and performed two separate MIMIC models for younger and older adults to test for differential patterns. RESULTS: Older adults had higher suicidal intent than younger adults (z = 2.63, p = 0.009). The final model for the whole sample showed a relationship of intent with previous attempts, support, mood disorder, personality disorder, substance-related disorder, and schizophrenia and other psychotic disorders. The model showed an adequate fit (χ²(12) = 22.23, p = 0.035; comparative fit index = 0.986; Tucker-Lewis index = 0.980; root mean square error of approximation = 0.031; weighted root mean square residual = 0.727). All covariates had significant weights in the younger group, but in the older group, only previous attempts and mood disorders were significantly related to intent severity. CONCLUSIONS: The pattern of variables associated with suicidal intent varies with age. Recognition and treatment of geriatric depression may be the most effective measure to prevent suicidal behavior in older adults.
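The reported fit can be cross-checked from the χ² statistic via the standard formula RMSEA = sqrt(max(χ² − df, 0) / (df·(N − 1))). Using the reported χ²(12) = 22.23 and assuming N = 870 (the number of attempts) reproduces the published 0.031:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation:
    RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Reported fit: chi2(12) = 22.23; N = 870 attempts (assumption).
print(round(rmsea(22.23, 12, 870), 3))  # -> 0.031, matching the paper
```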
Abstract:
In this study, an Atomic Force Microscopy (AFM) roughness analysis was performed on non-commercial Nitinol alloys with Electropolished (EP) and Magneto-Electropolished (MEP) surface treatments, and on commercially available stents, by measuring Root-Mean-Square (RMS) roughness, Average Roughness (Ra), and Surface Area (SA) values at various scan areas, ranging from 800 × 800 nm to 115 × 115 µm on the alloy surfaces and from 800 × 800 nm to 40 × 40 µm on the commercial stents. Results showed that NiTi-Ta 10 wt% with an EP surface treatment yielded the highest overall roughness, while the NiTi-Cu 10 wt% alloy had the lowest roughness when analyzed over 115 × 115 µm. Scanning Electron Microscopy (SEM) and Energy Dispersive Spectroscopy (EDS) analysis revealed unique surface morphologies for the surface-treated alloys, as well as an aggregation of the ternary elements Cr and Cu at grain boundaries in MEP- and EP-treated and non-surface-treated alloys. Such surface micro-patterning on ternary Nitinol alloys could increase cellular adhesion and accelerate surface endothelialization of endovascular stents, thus reducing the likelihood of in-stent restenosis, and could provide insight into hemodynamic flow regimes and the corrosion behavior of implantable devices influenced by such surface micro-patterns.
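A minimal sketch of how RMS and Ra roughness are computed from an AFM height map (the input array is hypothetical; both measures are taken about the mean plane):

```python
import numpy as np

def roughness(heights):
    """RMS roughness and average roughness (Ra) of an AFM height
    map, computed as deviations about the mean plane."""
    z = np.asarray(heights, dtype=float)
    dev = z - z.mean()
    rms = np.sqrt((dev ** 2).mean())
    ra = np.abs(dev).mean()
    return rms, ra

# Illustrative 3x3 height map (nm), not measured data.
rms, ra = roughness([[1.2, 0.8, 1.1],
                     [0.9, 1.0, 1.3],
                     [1.1, 0.7, 1.0]])
```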
Abstract:
In 2010, the American Association of State Highway and Transportation Officials (AASHTO) released a safety analysis software system known as SafetyAnalyst. SafetyAnalyst implements the empirical Bayes (EB) method, which requires the use of Safety Performance Functions (SPFs). The system is equipped with a set of national default SPFs, and the software calibrates the default SPFs to represent the agency's safety performance. However, it is recommended that agencies generate agency-specific SPFs whenever possible. Many investigators support the view that agency-specific SPFs represent the agency data better than the national default SPFs calibrated to agency data. Furthermore, it is believed that the crash trends in Florida differ from those in the states whose data were used to develop the national default SPFs. In this dissertation, Florida-specific SPFs were developed using the 2008 Roadway Characteristics Inventory (RCI) data and crash and traffic data from 2007-2010 for both total and fatal-and-injury (FI) crashes. The data were randomly divided into two sets, one for calibration (70% of the data) and another for validation (30% of the data). The negative binomial (NB) model was used to develop the Florida-specific SPFs for each of the subtypes of roadway segments, intersections, and ramps, using the calibration data. Statistical goodness-of-fit tests were performed on the calibrated models, which were then validated using the validation data set. The results were compared in order to assess the transferability of the Florida-specific SPF models. The default SafetyAnalyst SPFs were calibrated to Florida data by adjusting the national default SPFs with local calibration factors. The performance of the Florida-specific SPFs and the SafetyAnalyst default SPFs calibrated to Florida data was then compared using a number of methods, including visual plots and statistical goodness-of-fit tests. The plots of SPFs against the observed crash data were used to compare the prediction performance of the two models. Three goodness-of-fit measures, the mean absolute deviance (MAD), the mean square prediction error (MSPE), and the Freeman-Tukey R² (R²FT), were also used for comparison in order to identify the better-fitting model. The results showed that the Florida-specific SPFs yielded better prediction performance than the national default SPFs calibrated to Florida data. The performance of the Florida-specific SPFs was further compared with that of full SPFs, which include both traffic and geometric variables, in two major applications of SPFs, i.e., crash prediction and identification of high-crash locations. The results showed that both SPF models yielded very similar performance in both applications. These empirical results support the use of the flow-only SPF models adopted in SafetyAnalyst, which require much less effort to develop than full SPFs.
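A sketch of a flow-only NB SPF of the kind described, fit with statsmodels and scored with MAD and MSPE; the traffic and crash data below are made-up placeholders, and the NB dispersion is held at the statsmodels default rather than estimated as in the dissertation:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical segment data: annual average daily traffic (AADT),
# segment length (miles), and observed crash counts.
aadt = np.array([5000, 12000, 8000, 20000, 15000])
length = np.array([0.5, 1.2, 0.8, 2.0, 1.5])
crashes = np.array([2, 9, 4, 18, 11])

# Flow-only SPF of the SafetyAnalyst form:
# crashes = exp(b0 + b1*ln(AADT)) * length  (length as an offset).
X = sm.add_constant(np.log(aadt))
model = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(),
               offset=np.log(length)).fit()
predicted = model.predict(X, offset=np.log(length))

# Goodness-of-fit measures used to compare the SPF models.
mad = np.abs(crashes - predicted).mean()
mspe = ((crashes - predicted) ** 2).mean()
```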
Abstract:
Based on the quantitative analysis of diatom assemblages preserved in 274 surface sediment samples recovered in the Pacific, Atlantic, and western Indian sectors of the Southern Ocean, we have defined a new reference database for quantitative estimation of late-middle Pleistocene Antarctic sea ice fields using the transfer function technique. Detrended Correspondence Analysis (DCA) of the diatom data set points to a unimodal distribution of the diatom assemblages. Canonical Correspondence Analysis (CCA) indicates that winter sea ice (WSI), but also summer sea surface temperature (SSST), represent the most prominent environmental variables controlling the spatial species distribution. To test the applicability of transfer functions for sea ice reconstruction in terms of concentration and occurrence probability, we applied four different methods, the Imbrie and Kipp Method (IKM), the Modern Analog Technique (MAT), Weighted Averaging (WA), and Weighted Averaging Partial Least Squares (WAPLS), using logarithm-transformed diatom data and satellite-derived (1981-2010) sea ice data as a reference. The best performance for IKM was obtained using a subset of 172 samples with 28 diatom taxa/taxa groups, quadratic regression, and a three-factor model (IKM-D172/28/3q), resulting in root mean square errors of prediction (RMSEP) of 7.27% and 11.4% for WSI and summer sea ice (SSI) concentration, respectively. MAT estimates were calculated with different numbers of analogs (4, 6) using a 274-sample/28-taxa reference data set (MAT-D274/28/4an, -6an), resulting in RMSEPs ranging from 5.52% (4an) to 5.91% (6an) for WSI and from 8.93% (4an) to 9.05% (6an) for SSI. WA and WAPLS performed less well with the D274 data set compared to MAT, achieving WSI concentration RMSEPs of 9.91% with WA and 11.29% with WAPLS, which favors the use of IKM and MAT. The application of IKM and MAT to the surface sediment data revealed strong relations to the satellite-derived winter and summer sea ice fields. Sea ice reconstructions performed on an Atlantic and a Pacific Southern Ocean sediment core, both documenting sea ice variability over the past 150,000 years (MIS 1 - MIS 6), resulted in similar glacial/interglacial trends of IKM- and MAT-based sea ice estimates. On average, however, the IKM estimates display smaller WSI and slightly higher SSI concentrations and probabilities, at lower variability, than MAT. This pattern results from the different estimation techniques: IKM integrates the WSI and SSI signals into a single factor assemblage, whereas MAT selects specific individual samples and thus stays closer to the original diatom database and the variability it contains. Compared with the estimation of WSI, reconstruction of past SSI variability remains weaker. Combined with diatom-based estimates, the abundance and flux patterns of biogenic opal represent an additional indicator of WSI and SSI extent.
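A minimal sketch of the MAT step under common conventions (squared chord distance between relative-abundance spectra, unweighted mean over the k closest analogs; the paper's exact settings may differ):

```python
import numpy as np

def mat_estimate(fossil, modern, env, k=4):
    """Modern Analog Technique: estimate an environmental value
    (e.g. winter sea ice concentration) for a fossil diatom sample
    as the mean over its k closest modern analogs, using the
    squared chord distance between relative-abundance spectra."""
    fossil = np.asarray(fossil, dtype=float)   # taxa proportions
    modern = np.asarray(modern, dtype=float)   # samples x taxa
    d = ((np.sqrt(modern) - np.sqrt(fossil)) ** 2).sum(axis=1)
    nearest = np.argsort(d)[:k]
    return np.asarray(env, dtype=float)[nearest].mean()
```

The RMSEPs quoted above come from applying exactly this kind of estimator to the modern reference samples and comparing against the satellite-derived sea ice values.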
Abstract:
The Indian winter monsoon (IWM) is a key component of the seasonally changing monsoon system that affects the densely populated regions of South Asia. Cold winds originating in high northern latitudes provide a link between continental-scale Northern Hemisphere climate and the tropics. Western Disturbances (WD) associated with the IWM play a critical role in the climate and hydrology of northern India and the western Himalaya region. It is vital to understand the mechanisms and teleconnections that influence IWM variability in order to better predict changes in future climate. Here we present a study of regionally calibrated winter (January) temperatures and corresponding IWM intensities, based on a planktic foraminiferal record with near-biennial (2.55-year) resolution. Over the last ~250 years, IWM intensity gradually weakened, based on the long-term trend of the reconstructed January temperatures. Furthermore, the results indicate that the IWM is connected on interannual to decadal time scales to climate variability of the tropical and extratropical Pacific, via the El Niño Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO). However, our findings suggest that this relationship began to decouple at the beginning of the 20th century. Cross-spectral analysis revealed that several distinct decadal-scale phases of colder climate, and accordingly more intense winter monsoon, centered at ~1800, ~1890, and ~1930, can be linked to changes in the North Atlantic Oscillation (NAO).
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Because of its vast extent, the Canadian North presents several logistical challenges to the profitable exploitation of its mineral resources. Remote Predictive Mapping (RPM) aims to facilitate the localization of mineral deposits by producing maps of geological potential. Elevation data are required to generate these maps, but the data currently available north of the 60th parallel are not optimal, mainly because they are derived from contour lines with a variable contour interval and elevations recorded to the nearest metre. At the same time, it is essential to know the vertical accuracy of elevation data in order to use them appropriately, taking into account the constraints tied to that accuracy. The project presented here addresses these two problems in order to improve the quality of elevation data and help refine the predictive mapping carried out by RPM in the Canadian North, for a study area located in the Northwest Territories. The first objective was to produce control points allowing a precise evaluation of the vertical accuracy of elevation data. The second objective was to produce an improved elevation model for the study area. The thesis first presents a filtering method for the Global Land and Surface Altimetry Data (GLA14) of the ICESat mission (Ice, Cloud and land Elevation Satellite). The filtering is based on a series of indicators computed from information available in the GLA14 data and from terrain conditions. These indicators make it possible to eliminate potentially contaminated elevation points. Points are thus filtered according to the quality of the computed attitude, signal saturation, instrument noise, atmospheric conditions, slope, and the number of echoes. The thesis then describes a method for producing improved Digital Surface Models (DSMs) by stereo-radargrammetry (SRG) with Radarsat-2 (RS-2). The first part of the adopted methodology consists of the stereo-restitution of DSMs from pairs of RS-2 images, without control points. The accuracy of the preliminary DSMs produced in this way is computed from the control points obtained by filtering the GLA14 data, and is analyzed as a function of the combinations of incidence angles used for the stereo-restitution. Selections of preliminary DSMs are then assembled to produce five DSMs, each covering the entire study area. These DSMs are analyzed to identify the optimal selection for the area of interest. The indicators selected for the filtering method were validated as effective and complementary, with the exception of the indicator based on the signal-to-noise ratio, which was redundant with the gain-based indicator. Otherwise, each indicator filtered out points exclusively. The filtering method reduced the root mean square error on elevation by 19% when compared with the Canadian Digital Elevation Data (CDED). Despite a 69% rejection rate after filtering, the initial density of the GLA14 data preserved a homogeneous spatial distribution. Among the 136 preliminary DSMs analyzed, no combination of incidence angles of the acquired RS-2 images could be identified as ideal for SRG, owing to the large variability of the vertical accuracies.
The analysis did indicate, however, that images should ideally be acquired at temperatures below 0°C to minimize radiometric disparities between scenes. The results also confirmed that slope is the main factor influencing the accuracy of DSMs produced by SRG. The best vertical accuracy, 4 m, was achieved by assembling configurations with the same look direction. Opposite-look configurations, however, in addition to yielding an accuracy of the same order (5 m), reduced the number of images used by 30% relative to the number initially acquired. Consequently, the use of opposite-look images could make SRG projects more efficient by shortening the acquisition period. The elevation data produced could in turn help improve RPM results, increase the performance of the Canadian mining industry and, ultimately, improve the quality of life of the citizens of the Canadian North.
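A minimal sketch of the vertical-accuracy evaluation described above: the RMSE of DSM elevations against control points (the arrays below are hypothetical stand-ins for filtered GLA14 footprints and co-located DSM elevations):

```python
import numpy as np

def vertical_rmse(dsm_elev, control_elev):
    """Vertical accuracy of a DSM as the root mean square error of
    its elevations against control-point elevations (e.g. filtered
    ICESat GLA14 footprints)."""
    err = (np.asarray(dsm_elev, dtype=float)
           - np.asarray(control_elev, dtype=float))
    return float(np.sqrt((err ** 2).mean()))

# Hypothetical values, for illustration only.
print(vertical_rmse([312.4, 287.9, 301.2], [310.0, 290.1, 300.5]))
```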