Abstract:
In Part One, the foundations of Bayesian inference are reviewed and the technicalities of the Bayesian method are illustrated. Part Two applies the Bayesian meta-analysis program, the Confidence Profile Method (CPM), to clinical trial data and evaluates the merits of using Bayesian meta-analysis for overviews of clinical trials. The Bayesian meta-analysis produced results similar to the classical results because of the large sample size, combined with the input of a non-preferential prior probability distribution. These results were anticipated by the explanation in Part One of the mechanics of the Bayesian approach.
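The convergence described here can be seen in a toy conjugate normal-normal calculation. The sketch below uses made-up numbers and is not the CPM itself; it only shows that, under a diffuse ("non-preferential") prior, the posterior mean collapses onto the classical estimate as the sample grows.

```python
import numpy as np

def posterior(prior_mean, prior_var, y_bar, sigma, n):
    """Posterior mean and variance of an effect under a conjugate normal model."""
    prior_prec = 1.0 / prior_var      # precision carried by the prior
    data_prec = n / sigma**2          # precision carried by the data
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * y_bar)
    return post_mean, post_var

y_bar, sigma = 0.35, 1.0              # hypothetical observed effect and sampling SD
for n in (10, 100, 10_000):
    m, _ = posterior(prior_mean=0.0, prior_var=100.0, y_bar=y_bar, sigma=sigma, n=n)
    print(f"n={n:6d}  posterior mean={m:.4f}  classical estimate={y_bar:.4f}")
# As n grows, the prior's contribution vanishes and the Bayesian and
# classical answers agree, matching the behaviour reported above.
```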
Abstract:
The primary objective of this study was to determine whether permeation rates change when limited-use protective fabrics undergo repeated exposure and wash cycles. The null hypothesis was that no substantial change in permeation takes place after the test material is subjected to repeated contact with a strong acid or base and has undergone repeated wash cycles. The materials tested were DuPont Tychem® CPF 3 and CPF 4 fabrics. The challenge chemicals were ninety-eight percent sulfuric acid and fifty percent sodium hydroxide. Permeation testing was conducted using ASTM F739-99a, Standard Test Method for Resistance of Protective Clothing Materials to Permeation by Liquids or Gases Under Conditions of Continuous Contact. No change in the permeation rate of either challenge chemical was detected for the CPF 3 or CPF 4 limited-use protective fabrics after repeated exposure and wash cycles. Certain unexposed areas of the fabric did suffer structural degradation unrelated to chemical exposure, which may be attributable to multiple washings.
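The abstract does not report which statistical comparison was used to judge the null hypothesis; a paired test like the sketch below is one conventional way to make such a comparison, shown here with placeholder permeation rates rather than the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder permeation rates (ug/cm^2/min) for the same swatches measured
# when new and again after repeated exposure/wash cycles; the study's raw
# data are not reproduced here.
rate_new = np.array([0.42, 0.38, 0.45, 0.40, 0.44])
rate_cycled = np.array([0.41, 0.39, 0.46, 0.40, 0.43])

t_stat, p_value = stats.ttest_rel(rate_new, rate_cycled)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A large p-value is consistent with the null hypothesis of no substantial
# change in permeation after repeated exposure and washing.
```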
Abstract:
With most clinical trials, missing data present a statistical problem in evaluating a treatment's efficacy. Many methods are commonly used to handle missing data; however, these methods leave room for bias to enter the study. This thesis was a secondary analysis of data from TIME, a phase 2 randomized clinical trial conducted to evaluate the safety and the effect of administration timing of bone marrow mononuclear cells (BMMNC) for subjects with acute myocardial infarction (AMI). We evaluated the effect of missing data by comparing the variance inflation factor (VIF) of the effect of therapy between all subjects and only subjects with complete data. Using the general linear model, an unbiased solution for the VIF of the treatment's efficacy was derived with the weighted least squares method to incorporate missing data. Two groups were identified from the TIME data: 1) all subjects and 2) subjects with complete data (baseline and follow-up measurements). After the general solution for the VIF was found, it was migrated to Excel 2010 to evaluate data from TIME. The resulting values from the two groups were compared to assess the effect of missing data. The VIF values from the TIME study were considerably lower in the group with missing data. By design, we varied the correlation factor in order to evaluate the VIFs of both groups. As the correlation factor increased, the VIF values increased at a faster rate in the group with only complete data. Furthermore, while varying the correlation factor, we also varied the number of subjects with missing data to see how missing data affect the VIF. When the number of subjects with only baseline data was increased, the VIF values of the complete-data group rose significantly faster, while the group with missing data saw a steady and consistent increase in the VIF. The same was seen when we varied the number of subjects with follow-up data only. In essence, the VIFs increased steadily when missing data were not ignored; when missing data were ignored, as in our comparison group, the VIF values increased sharply as the correlation increased.
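As a rough illustration of a VIF comparison of this kind (a Monte Carlo toy, not the thesis's analytical weighted-least-squares solution; all numbers are made up), the sketch below contrasts the variance inflation of a complete-case estimate of a follow-up mean with an estimator that also uses every subject's baseline.

```python
import numpy as np

rng = np.random.default_rng(42)

def vif_by_simulation(rho, n=400, f_miss=0.4, reps=4000):
    """VIF = Var(estimator with missing follow-ups) / Var(estimator, full data).
    Toy bivariate-normal model: baseline x observed for everyone, follow-up y
    missing completely at random for a fraction f_miss of subjects."""
    full, cc, avail = [], [], []
    for _ in range(reps):
        x = rng.normal(size=n)
        y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
        obs = rng.random(n) >= f_miss                 # is follow-up observed?
        full.append(y.mean())                         # reference: no data missing
        cc.append(y[obs].mean())                      # complete cases only
        # Regression estimator: uses every baseline to adjust the CC mean.
        b = np.polyfit(x[obs], y[obs], 1)[0]
        avail.append(y[obs].mean() + b * (x.mean() - x[obs].mean()))
    v_full = np.var(full)
    return np.var(cc) / v_full, np.var(avail) / v_full

for rho in (0.2, 0.5, 0.8):
    vif_cc, vif_all = vif_by_simulation(rho)
    print(f"rho={rho}: VIF complete-case={vif_cc:.2f}, all-available={vif_all:.2f}")
# The gap between the two estimators widens as the baseline/follow-up
# correlation grows, echoing the comparison described above.
```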
Abstract:
Maximizing data quality may be especially difficult in trauma-related clinical research. Strategies are needed to improve data quality and to assess the impact of data quality on clinical prediction models. This study had two objectives. The first was to compare missing data between two multi-center trauma transfusion studies: a retrospective study (RS) using medical chart data with minimal data quality review, and the PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study with standardized quality assurance. The second was to assess the impact of missing data on clinical prediction algorithms by evaluating blood transfusion prediction models using PROMMTT data. RS (2005-06) and PROMMTT (2009-10) investigated trauma patients receiving ≥ 1 unit of red blood cells (RBC) at ten Level I trauma centers. Missing data were compared for 33 variables collected in both studies using mixed effects logistic regression (including random intercepts for study site). Massive transfusion (MT) patients received ≥ 10 RBC units within 24 h of admission. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation based on the multivariate normal distribution. A sensitivity analysis was conducted to estimate the upper and lower bounds of correct classification under best and worst case assumptions about the missing data. Most variables (17/33, 52%) had <1% missing data in both RS and PROMMTT. Of the remaining variables, 50% demonstrated less missingness in PROMMTT, 25% had less missingness in RS, and 25% were similar between studies. Missing percentages for MT prediction variables in PROMMTT ranged from 2.2% (heart rate) to 45% (respiratory rate). For variables missing >1%, study site was associated with missingness (all p≤0.021). Survival time predicted missingness for 50% of RS variables and 60% of PROMMTT variables. Complete case proportions for the MT models ranged from 41% to 88%. Complete case analysis and multiple imputation produced similar correct classification results. Sensitivity analysis upper-lower bound ranges for the three MT models were 59-63%, 36-46%, and 46-58%. Prospective collection of ten-fold more variables with data quality assurance reduced overall missing data. Study site and patient survival were associated with missingness, suggesting that data were not missing completely at random and that complete case analysis may lead to biased results. Evaluating clinical prediction model accuracy may be misleading in the presence of missing data, especially with many predictor variables. The proposed sensitivity analysis, estimating correct classification under best case (upper bound) and worst case (lower bound) scenarios, may be more informative than multiple imputation, which produced results similar to complete case analysis.
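The bounding idea behind that sensitivity analysis can be sketched in a few lines. The indicator data below are invented and the names are placeholders, not PROMMTT fields; the sketch only shows how best/worst-case assumptions bound the correct classification percentage.

```python
import numpy as np

# pred_correct[i] is 1/0 for patients whose prediction could be evaluated;
# np.nan marks patients with a missing predictor (hypothetical values).
pred_correct = np.array([1, 0, 1, np.nan, 1, np.nan, 0, 1, np.nan, 1], dtype=float)

observed = ~np.isnan(pred_correct)
n = len(pred_correct)
n_correct_obs = np.nansum(pred_correct)

complete_case = n_correct_obs / observed.sum()    # ignores missing rows
upper = (n_correct_obs + (~observed).sum()) / n   # best case: all missing correct
lower = n_correct_obs / n                         # worst case: all missing wrong

print(f"complete-case {complete_case:.0%}, bounds [{lower:.0%}, {upper:.0%}]")
# The width of [lower, upper] grows with the amount of missing data, which is
# what makes the bounds informative about the reliability of the model's
# reported accuracy.
```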
Abstract:
The climate of Marine Isotope Stage (MIS) 11, the interglacial roughly 400,000 years ago, is investigated for four time slices: 416, 410, 400, and 394 ka. The overall picture is that MIS 11 was a relatively warm interglacial in comparison to the preindustrial period, with Northern Hemisphere (NH) summer temperatures early in MIS 11 (416-410 ka) warmer than preindustrial, though winters were cooler. Later in MIS 11, especially around 400 ka, conditions were cooler in the NH summer, mainly at high latitudes. Climate changes simulated by the models were driven mainly by insolation changes, with the exception of two local feedbacks that amplified them: in the NH high latitudes, reductions in sea ice cover led to winter warming early in MIS 11, and in the tropics, monsoon changes led to stronger climate variations than latitudinal mean insolation change alone would suggest. The results support a northward expansion of trees at the expense of grasses in the high northern latitudes early during MIS 11, especially in northern Asia and North America.
Long-term time-series of mesozooplankton biomass in the Gulf of Naples, data for the years 2005-2006
Abstract:
The Weddell Gyre plays a crucial role in the regulation of climate by transferring heat into the deep ocean through deep and bottom water mass formation. However, our understanding of Weddell Gyre water mass properties is limited to regions of data availability, primarily along the Prime Meridian. The aim here is to provide a dataset of upper water column properties for the entire Weddell Gyre. Objective mapping was applied to Argo float data to produce spatially gridded, time-composite maps of temperature and salinity for fixed pressure levels ranging from 50 to 2000 dbar, as well as temperature, salinity, and pressure at the level of the sub-surface temperature maximum. While the data are currently too limited to incorporate time into the gridded structure, they are extensive enough to produce maps of the entire region for three time-composite periods (2002-2005, 2006-2009, and 2010-2013), which can be used to assess how representative, at the gyre scale, conclusions drawn from data collected along standard RV transect lines are. The time-composite data sets are provided as netCDF files, one for each time period. Mapped fields of conservative temperature, absolute salinity, and potential density are provided for 41 vertical pressure levels; these variables, together with pressure, are also provided at the level of the sub-surface temperature maximum. The corresponding mapping errors are included in the netCDF files. Further details, such as the variable units and the structure of the corresponding data arrays (latitude × longitude × vertical pressure level), are given in the global attributes. In addition, all files ending in "_potTpSal" provide mapped fields of potential temperature and practical salinity.
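A minimal sketch of how one of these files might be inspected with xarray follows. The file name and variable names are assumptions for illustration; the actual names, units, and array ordering are documented in each file's global attributes, as the description notes.

```python
import xarray as xr

# Placeholder file name; one netCDF file exists per time-composite period.
ds = xr.open_dataset("weddell_gyre_2002-2005.nc")
print(ds)  # list variables, dimensions, and global attributes

# Hypothetical variable names: conservative temperature on the 500 dbar
# level, together with its objective-mapping error.
ct = ds["conservative_temperature"].sel(pressure=500)
err = ds["conservative_temperature_error"].sel(pressure=500)

# Screen out grid points where the mapping error is largest before plotting.
ct.where(err < err.quantile(0.9)).plot()
```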