67 results for success models comparison
Abstract:
PURPOSE: Few studies compare the variabilities that characterize environmental (EM) and biological monitoring (BM) data. Indeed, comparing their respective variabilities can help to identify the best strategy for evaluating occupational exposure. The objective of this study is to quantify the biological variability associated with 18 bio-indicators currently used in work environments. METHOD: Intra-individual (BV(intra)), inter-individual (BV(inter)), and total biological variability (BV(total)) were quantified using validated physiologically based toxicokinetic (PBTK) models coupled with Monte Carlo simulations. Two environmental exposure profiles with different levels of variability were considered (GSD of 1.5 and 2.0). RESULTS: PBTK models coupled with Monte Carlo simulations were successfully used to predict the biological variability of biological exposure indicators. The predicted values follow a lognormal distribution, characterized by GSD ranging from 1.1 to 2.3. Our results show that there is a link between biological variability and the half-life of bio-indicators, since BV(intra) and BV(total) both decrease as the biological indicator half-lives increase. BV(intra) is always lower than the variability in the air concentrations. On an individual basis, this means that the variability associated with the measurement of biological indicators is always lower than the variability characterizing airborne levels of contaminants. For a group of workers, BM is less variable than EM for bio-indicators with half-lives longer than 10-15 h. CONCLUSION: The variability data obtained in the present study can be useful in the development of BM strategies for exposure assessment and can be used to calculate the number of samples required for guiding industrial hygienists or medical doctors in decision-making.
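The PBTK-plus-Monte-Carlo approach described above can be sketched in a few lines. The following Python snippet substitutes a hypothetical one-compartment steady-state model for the full PBTK models, with illustrative parameter distributions that are not taken from the study; it only shows how a biomarker GSD emerges from combined environmental and biological variability:

```python
import numpy as np

rng = np.random.default_rng(42)
n_workers = 10_000

# Illustrative environmental exposure: lognormal air concentrations, GSD = 2.0
gsd_air = 2.0
air = rng.lognormal(mean=np.log(1.0), sigma=np.log(gsd_air), size=n_workers)

# Illustrative inter-individual biology in a one-compartment surrogate model:
# steady-state biomarker level ~ exposure * uptake / clearance
uptake = rng.lognormal(mean=np.log(1.0), sigma=0.2, size=n_workers)
clearance = rng.lognormal(mean=np.log(1.0), sigma=0.3, size=n_workers)
biomarker = air * uptake / clearance

# Geometric standard deviation of the simulated biomarker distribution
gsd_bio = np.exp(np.log(biomarker).std())
print(f"GSD of simulated biomarker levels: {gsd_bio:.2f}")
```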
Abstract:
Biological monitoring of occupational exposure is characterized by important variability, due both to variability in the environment and to biological differences between workers. A quantitative description and understanding of this variability is important for a dependable application of biological monitoring. This work describes this variability, using a toxicokinetic model, for a large range of chemicals for which biological reference values exist. A toxicokinetic compartmental model describing both the parent compound and its metabolites was used. For each chemical, compartments were given physiological meaning. Models were elaborated based on physiological, physicochemical, and biochemical data when available, and on half-lives and central compartment concentrations when not available. Fourteen chemicals were studied (arsenic, cadmium, carbon monoxide, chromium, cobalt, ethylbenzene, ethylene glycol monomethyl ether, fluorides, lead, mercury, methyl isobutyl ketone, pentachlorophenol, phenol, and toluene), representing 20 biological indicators. Occupational exposures were simulated using Monte Carlo techniques with realistic distributions of both individual physiological parameters and exposure conditions. Resulting biological indicator levels were then analyzed to identify the contribution of environmental and biological variability to total variability. Comparison of predicted biological indicator levels with biological exposure limits showed high correlation for 19 out of 20 indicators. Variability associated with changes in exposure levels (GSD of 1.5 and 2.0) is shown to be mainly influenced by the kinetics of the biological indicator: for short half-lives (less than 7 hr), the estimated variability is very similar to the environmental variability, whereas for longer half-lives it decreases. Thus, with regard to variability, we can conclude that, for the 14 chemicals modeled, biological monitoring would be preferable to air monitoring. [Supplementary materials are available for this article. Go to the publisher's online edition of the Journal of Occupational and Environmental Hygiene for the following free supplemental resources: tables detailing the CBTK models for all 14 chemicals and the symbol nomenclature that was used.]
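A minimal sketch of the variance decomposition underlying the intra-/inter-individual split analyzed above, on synthetic log-scale data (the worker and day-to-day standard deviations are assumptions for illustration, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_days = 200, 20

# Synthetic log-biomarker: worker-specific mean plus day-to-day fluctuation
between = rng.normal(0.0, 0.4, size=(n_workers, 1))      # inter-individual
within = rng.normal(0.0, 0.3, size=(n_workers, n_days))  # intra-individual
log_bio = between + within

# One-way variance decomposition on the log scale (approximate)
var_within = log_bio.var(axis=1, ddof=1).mean()
var_between = log_bio.mean(axis=1).var(ddof=1)
print("GSD intra:", np.exp(np.sqrt(var_within)))
print("GSD inter:", np.exp(np.sqrt(var_between)))
print("GSD total:", np.exp(np.sqrt(var_within + var_between)))
```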
Abstract:
Combined positron emission tomography and computed tomography (PET/CT) scanners play a major role in medicine for in vivo imaging in an increasing number of diseases in oncology, cardiology, neurology, and psychiatry. With the advent of short-lived radioisotopes other than 18F and newer scanners, there is a need to optimize radioisotope activity and acquisition protocols, as well as to compare scanner performances on an objective basis. The Discovery-LS (D-LS) was among the first clinical PET/CT scanners to be developed and has been extensively characterized with the older National Electrical Manufacturers Association (NEMA) NU 2-1994 standards. At the time of publication of the latest version of the standards (NU 2-2001), which has been adapted for whole-body imaging under clinical conditions, more recent models from the same manufacturer, i.e., the Discovery-ST (D-ST) and Discovery-STE (D-STE), were commercially available. We report on the full characterization of the D-LS in both the two- and three-dimensional acquisition modes according to the latest NEMA NU 2-2001 standards (spatial resolution, sensitivity, count rate performance, accuracy of count losses and random coincidence correction, and image quality), as well as a detailed comparison with the newer, widely used D-ST, whose characteristics have already been published.
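Among the NU 2-2001 figures of merit, count rate performance is usually summarized by the noise-equivalent count rate. A sketch of the standard formula is below; the count rates are made-up numbers, not D-LS or D-ST measurements, and k depends on how randoms are estimated (k = 1 for a smoothed randoms estimate, k = 2 for delayed-window subtraction):

```python
def necr(trues, scatter, randoms, k=2):
    """Noise-equivalent count rate: NECR = T^2 / (T + S + k*R)."""
    return trues**2 / (trues + scatter + k * randoms)

# Purely illustrative count rates, in kcps
print(f"NECR = {necr(trues=60.0, scatter=30.0, randoms=25.0):.1f} kcps")
```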
Abstract:
BACKGROUND: Women with diabetes mellitus have an increased risk of cardiovascular disease (CVD) mortality, and current treatment guidelines consider diabetes to be equivalent to existing CVD, but few data exist about the relative importance of these risk factors for total and cause-specific mortality in older women. METHODS: We studied 9704 women aged ≥65 years enrolled in a prospective cohort study (Study of Osteoporotic Fractures) over a mean follow-up of 13 years and compared all-cause, CVD, and coronary heart disease (CHD) mortality among non-diabetic women without and with a prior history of CVD at baseline and diabetic women without and with a prior history of CVD. Diabetes mellitus and prior CVD (history of angina, myocardial infarction, or stroke) were defined as self-report of physician diagnoses. Cause of death was adjudicated from death certificates and medical records when available (>95% of deaths confirmed). Ascertainment of vital status was 99% complete. Log-rank tests for the rates of death and multivariate Cox proportional hazards models adjusted for age, smoking, physical activity, systolic blood pressure, waist girth, and education were used to compare mortality among the four groups, with non-diabetic women without CVD as the referent group. Results are reported as adjusted hazard ratios (HR) with 95% confidence intervals (CI). RESULTS: At baseline, mean age was 71.7±5.3 years; 7.0% of women reported diabetes mellitus and 14.5% reported prior CVD. 4257 women died during follow-up; 36.6% of deaths were attributed to CVD. The incidence of CVD death per 1000 person-years was 9.9 and 21.6 among non-diabetic women without and with CVD, respectively, and 23.8 and 33.3 among diabetic women without and with CVD, respectively. Compared to non-diabetic women without prior CVD, the risk of CVD mortality was elevated among both non-diabetic women with CVD (HR=1.82, CI: 1.60-2.07, P<0.001) and diabetic women without prior CVD (HR=2.24, CI: 1.87-2.69, P<0.001). CVD mortality was highest among diabetic women with CVD (HR=3.41, CI: 2.61-4.45, P<0.001). Compared to non-diabetic women with CVD, diabetic women without prior CVD had a significantly higher adjusted HR for total and CVD mortality (P<0.001 and P<0.05, respectively). CHD mortality did not differ significantly between non-diabetic women with CVD and diabetic women without prior CVD. CONCLUSION: Older diabetic women without prior CVD have a higher risk of all-cause and CVD mortality and a similar risk of CHD mortality compared to non-diabetic women with pre-existing CVD. For older women, these data support the equivalence of prior CVD and diabetes mellitus in current guidelines for the prevention of CVD.
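A sketch of the kind of adjusted survival model reported above, using the lifelines library; the file name and column names are hypothetical stand-ins for the Study of Osteoporotic Fractures data:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical layout: follow-up time, death indicator, dummy variables for
# the three non-referent groups, and the adjustment covariates.
df = pd.read_csv("cohort.csv")
covariates = ["cvd_only", "dm_only", "dm_and_cvd", "age", "smoking",
              "physical_activity", "systolic_bp", "waist_girth", "education"]

cph = CoxPHFitter()
cph.fit(df[["years", "died"] + covariates],
        duration_col="years", event_col="died")
cph.print_summary()  # hazard ratios with 95% CIs per group and covariate
```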
Abstract:
Background: The type of anesthesia to be used for total hip arthroplasty (THA) is still a matter of debate. We compared the occurrence of per- and post-anesthesia incidents in patients receiving either general (GA) or regional anesthesia (RA). Methods: We used data from 29 hospitals, routinely collected in the Anaesthesia Databank Switzerland register between January 2001 and December 2003, and fitted multi-level logistic regression models. Results: There were more per- and post-anesthesia incidents under GA than under RA (35.1% vs 32.7%, n = 3191, and 23.1% vs 19.4%, n = 3258, respectively). In multi-level logistic regression analysis, RA was significantly associated with a lower incidence of per-anesthetic problems, especially hypertension, compared with GA. During the post-anesthetic period, RA was also less associated with pain. Conversely, RA was more associated with post-anesthetic hypotension, especially with the epidural technique. In addition, age and ASA status were more strongly associated with incidents under GA than under RA, whereas male gender was more strongly associated with per-anesthetic problems under RA than under GA. Although increased age (>67 years), male gender, and ASA status were linked with the choice of RA, this choice also depended on hospital practices after we adjusted for the other variables. Conclusions: Compared to RA, GA was associated with an increased proportion of per- and post-anesthesia incidents. Although this study is only observational, it is rooted in daily practice. Whereas RA might be routinely proposed, GA might be indicated because of contraindications to RA, patients' preferences, or other surgical or anaesthesiology-related reasons. Finally, the choice of a type of anesthesia seems to depend on local practices that may differ between hospitals.
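A sketch of a two-level logistic model with a random intercept per hospital, which is one way to implement the multi-level analysis described above; the variable names and the use of statsmodels' variational Bayes mixed GLM are assumptions, not the register's actual code:

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("anesthesia_register.csv")  # hypothetical layout

# Fixed effects for anesthesia type and patient factors,
# plus a random intercept per hospital (variance component).
model = BinomialBayesMixedGLM.from_formula(
    "incident ~ regional + age + male + asa_class",
    {"hospital": "0 + C(hospital)"},
    df)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```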
Abstract:
BACKGROUND: Multiple risk prediction models have been validated in all-age patients presenting with acute coronary syndrome (ACS) and treated with percutaneous coronary intervention (PCI); however, they have not been validated specifically in the elderly. METHODS: We calculated the GRACE (Global Registry of Acute Coronary Events) score, the logistic EuroSCORE, the AMIS (Acute Myocardial Infarction Swiss registry) score, and the SYNTAX (Synergy between Percutaneous Coronary Intervention with TAXUS and Cardiac Surgery) score in a consecutive series of 114 patients ≥75 years presenting with ACS and treated with PCI within 24 hours of hospital admission. Patients were stratified according to score tertiles and analysed retrospectively by comparing the lower/mid tertiles as an aggregate group with the upper tertile group. The primary endpoint was 30-day mortality. Secondary endpoints were the composite of death and major adverse cardiovascular events (MACE) at 30 days, and 1-year MACE-free survival. Model discrimination was assessed using the area under the receiver operating characteristic curve (AUC). RESULTS: Thirty-day mortality was higher in the upper tertile compared with the aggregate lower/mid tertiles according to the logistic EuroSCORE (42% vs 5%; odds ratio [OR] = 14, 95% confidence interval [CI] = 4-48; p <0.001; AUC = 0.79), the GRACE score (40% vs 4%; OR = 17, 95% CI = 4-64; p <0.001; AUC = 0.80), the AMIS score (40% vs 4%; OR = 16, 95% CI = 4-63; p <0.001; AUC = 0.80), and the SYNTAX score (37% vs 5%; OR = 11, 95% CI = 3-37; p <0.001; AUC = 0.77). CONCLUSIONS: In elderly patients presenting with ACS and referred for PCI within 24 hours of admission, the GRACE score, the EuroSCORE, the AMIS score, and the SYNTAX score predicted 30-day mortality. The predictive value of the clinical scores was improved by using them in combination.
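A sketch of the tertile stratification and discrimination analysis described in the methods; the file and column names are hypothetical:

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("acs_cohort.csv")  # one row per patient, assumed layout

for score in ["grace", "euroscore", "amis", "syntax"]:
    tertile = pd.qcut(df[score], q=3, labels=[0, 1, 2])
    upper = tertile == 2  # upper tertile vs aggregated lower/mid tertiles
    print(score,
          f"mortality {df.loc[upper, 'died_30d'].mean():.0%}",
          f"vs {df.loc[~upper, 'died_30d'].mean():.0%},",
          f"AUC = {roc_auc_score(df['died_30d'], df[score]):.2f}")
```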
Abstract:
OBJECTIVE: To evaluate the quantitative antibiogram as an epidemiological tool for the prospective typing of methicillin-resistant Staphylococcus aureus (MRSA), and to compare it with ribotyping. METHODS: The method is based on multivariate analysis of inhibition zone diameters of antibiotics in disk diffusion tests. Five antibiotics were used (erythromycin, clindamycin, cotrimoxazole, gentamicin, and ciprofloxacin). Ribotyping was performed using seven restriction enzymes (EcoRV, HindIII, KpnI, PstI, EcoRI, SfuI, and BamHI). SETTING: 1,000-bed tertiary university medical center. RESULTS: During a 1-year period, 31 patients were found to be infected or colonized with MRSA. Cluster analysis of antibiogram data showed nine distinct antibiotypes. Four antibiotypes were isolated from multiple patients (2, 4, 7, and 13 patients, respectively); five additional antibiotypes were isolated from the remaining five patients. When analyzed with respect to the epidemiological data, the method was found to be equivalent to ribotyping. Among 206 staff members who were screened, six were carriers of MRSA. Both typing methods identified concordant MRSA types in staff members and in the patients under their care. CONCLUSIONS: The quantitative antibiogram was found to be equivalent to ribotyping as an epidemiological tool for typing of MRSA in our setting. Thus, this simple, rapid, and readily available method appears to be suitable for the prospective surveillance and control of MRSA in hospitals that do not have molecular typing facilities and in which MRSA isolates are not uniformly resistant or susceptible to the antibiotics tested.
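The quantitative antibiogram amounts to multivariate clustering of inhibition-zone diameters. A sketch under assumed inputs (a CSV of zone diameters in mm, one row per isolate, one column per antibiotic; the cluster count is illustrative):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import zscore

# Rows = isolates; columns = zone diameters (mm) for erythromycin,
# clindamycin, cotrimoxazole, gentamicin, and ciprofloxacin.
diameters = np.loadtxt("zone_diameters.csv", delimiter=",")  # assumed file

# Standardize each antibiotic, then cluster isolates hierarchically
Z = linkage(zscore(diameters, axis=0), method="ward")
antibiotypes = fcluster(Z, t=9, criterion="maxclust")  # e.g., nine antibiotypes
print(antibiotypes)
```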
Abstract:
Developmental constraints have been postulated to limit the space of feasible phenotypes and thus shape animal evolution. These constraints have been suggested to be strongest during either early or mid-embryogenesis, corresponding to the early conservation model or the hourglass model, respectively. Conflicting results have been reported, but recent studies of animal transcriptomes have favored the hourglass model. Studies usually report descriptive statistics calculated for all genes over all developmental time points. This introduces dependencies between the sets of compared genes and may lead to biased results. Here we overcome this problem using an alternative modular analysis. We used the Iterative Signature Algorithm to identify distinct modules of genes co-expressed specifically in consecutive stages of zebrafish development. We then performed a detailed comparison of several gene properties between modules, allowing for a less biased and more powerful analysis. Notably, our analysis corroborated the hourglass pattern at the regulatory level, with sequences of regulatory regions being most conserved for genes expressed in mid-development, but not at the level of gene sequence, age, or expression, in contrast to some previous studies. The early conservation model was supported by gene duplication and gene birth, which were rarest for genes expressed in early development. Finally, for all gene properties, we observed the least conservation for genes expressed in late development or in the adult, consistent with both models. Overall, with the modular approach, we showed that different levels of molecular evolution follow different patterns of developmental constraints. Thus both models are valid, but with respect to different genomic features.
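A minimal numpy sketch of the Iterative Signature Algorithm used above: starting from a random gene set, it alternates between scoring stages against the current genes and re-scoring genes against the selected stages until a module stabilizes. The thresholds and seeding fraction are illustrative assumptions:

```python
import numpy as np

def isa_module(expr, t_gene=2.0, t_cond=2.0, n_iter=50, seed=0):
    """Sketch of one ISA run on a z-scored genes x stages matrix."""
    rng = np.random.default_rng(seed)
    genes = rng.random(expr.shape[0]) < 0.05  # random seed gene set
    conds = np.zeros(expr.shape[1], dtype=bool)
    for _ in range(n_iter):
        # Score stages by mean expression over the current gene set
        cond_score = expr[genes].mean(axis=0)
        conds = cond_score > t_cond * cond_score.std()
        if not conds.any():
            break
        # Score genes by mean expression over the selected stages
        gene_score = expr[:, conds].mean(axis=1)
        genes = gene_score > t_gene * gene_score.std()
        if not genes.any():
            break
    return genes, conds  # one candidate co-expression module
```

In practice the algorithm is run from many random seeds, and recurring gene/stage sets are kept as modules.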
Abstract:
The instrumental variable method (referred to as Mendelian randomization when the instrument is a genetic variant) was initially developed to infer a causal effect of a risk factor on some outcome of interest in a linear model. Adapting this method to nonlinear models, however, is known to be problematic. In this paper, we consider the simple case where the genetic instrument, the risk factor, and the outcome are all binary. We compare via simulations the usual two-stage estimate of a causal odds ratio and its adjusted version with a recently proposed estimate in the context of a clinical trial with noncompliance. In contrast to the former two, we confirm that the latter is (under some conditions) a valid estimate of a causal odds ratio defined in the subpopulation of compliers, and we propose its use in the context of Mendelian randomization. By analogy with a clinical trial with noncompliance, compliers are those individuals for whom the presence/absence of the risk factor X is determined by the presence/absence of the genetic variant Z (i.e., for whom we would observe X = Z whatever the alleles randomly received at conception). We also recall and illustrate the huge variability of instrumental variable estimates when the instrument is weak (i.e., with a low percentage of compliers, as is typically the case with genetic instruments, for which this proportion is frequently smaller than 10%): the inter-quartile range of our simulated estimates was up to 18 times higher than with a conventional (e.g., intention-to-treat) approach. We thus conclude that the need to find stronger instruments is probably as important as the need to develop a methodology that allows consistent estimation of a causal odds ratio.
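The "usual two-stage estimate" discussed above can be reproduced in a few lines. A simulation sketch with an assumed 10% complier fraction and made-up effect sizes (stage 1 regresses X on Z; stage 2 fits a logistic model of Y on the fitted values):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000

# Binary instrument Z and a weak instrument: ~10% compliers (for whom X = Z)
z = rng.binomial(1, 0.3, n)
complier = rng.binomial(1, 0.10, n).astype(bool)
x = np.where(complier, z, rng.binomial(1, 0.5, n))            # binary risk factor
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * x))))      # binary outcome

# Usual two-stage estimate of a causal odds ratio
x_hat = sm.OLS(x, sm.add_constant(z)).fit().fittedvalues
fit = sm.Logit(y, sm.add_constant(x_hat)).fit(disp=0)
print("two-stage causal OR estimate:", np.exp(fit.params[1]))
```

Repeating this over many simulated datasets would exhibit the wide inter-quartile range the authors report for weak instruments.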
Abstract:
Aim: Climatic niche modelling of species and community distributions implicitly assumes strong and constant climatic determinism across geographic space. This assumption has, however, never been tested. We tested it by assessing how stacked species distribution models (S-SDMs) perform in predicting plant species assemblages along elevation. Location: Western Swiss Alps. Methods: Using robust presence-absence data, we first assessed the ability of topo-climatic S-SDMs to predict plant assemblages in a study area encompassing a 2800 m wide elevation gradient. We then assessed the relationships among several evaluation metrics and trait-based tests of community assembly rules. Results: The standard errors of individual SDMs decreased significantly towards higher elevations. Overall, the S-SDMs overpredicted richness far more than they underpredicted it and could not reproduce the humpback richness curve along elevation. Overprediction was greater at low and mid-range elevations in absolute values but greater at high elevations when standardised by the actual richness. Looking at species composition, the evaluation metrics accounting for both the presence and absence of species (overall prediction success and kappa) or focusing on correctly predicted absences (specificity) increased with increasing elevation, while the metrics focusing on correctly predicted presences (Jaccard index and sensitivity) decreased. The best overall evaluation - as driven by specificity - occurred at high elevation, where species assemblages were shown to be under significant environmental filtering of small plants. In contrast, the decreased overall accuracy in the lowlands was associated with functional patterns representing any type of assembly rule (environmental filtering, limiting similarity or null assembly). Main Conclusions: Our study reveals interesting patterns of change in S-SDM errors with changes in assembly rules along elevation. Yet, significant levels of assemblage prediction error occurred throughout the gradient, calling for further improvement of SDMs, e.g., by adding key environmental filters that act at fine scales and developing approaches to account for variations in the influence of predictors along environmental gradients.
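The site-level evaluation metrics named above can be computed directly from binary predicted and observed species vectors; a sketch on a toy 10-species assemblage:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def assemblage_metrics(observed, predicted):
    """S-SDM evaluation at one plot from binary species vectors."""
    tn, fp, fn, tp = confusion_matrix(observed, predicted, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),   # correctly predicted presences
        "specificity": tn / (tn + fp),   # correctly predicted absences
        "jaccard": tp / (tp + fp + fn),
        "kappa": cohen_kappa_score(observed, predicted),
        "overall": (tp + tn) / len(observed),
        "richness_error": int(predicted.sum()) - int(observed.sum()),  # > 0 = overprediction
    }

obs = np.array([1, 1, 0, 0, 1, 0, 0, 0, 1, 0])
pred = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
print(assemblage_metrics(obs, pred))
```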
Abstract:
Exploratory and confirmatory factor analyses reported in the French technical manual of the WISC-IV provide evidence supporting a structure with four indices: Verbal Comprehension (VCI), Perceptual Reasoning (PRI), Working Memory (WMI), and Processing Speed (PSI). Although the WISC-IV is more attuned to contemporary theory, it is still not in complete accordance with the dominant theory: the Cattell-Horn-Carroll (CHC) theory of cognitive ability. This study was designed to determine whether the French WISC-IV is better described by the four-factor solution or whether an alternative model based on the CHC theory is more appropriate. The intercorrelation matrix reported in the French technical manual was submitted to confirmatory factor analysis. A comparison of competing models suggests that a model based on the CHC theory fits the data better than the current WISC-IV structure. It appears that the French WISC-IV in fact measures six factors: crystallized intelligence (Gc), fluid intelligence (Gf), short-term memory (Gsm), processing speed (Gs), quantitative knowledge (Gq), and visual processing (Gv). We recommend that clinicians interpret the subtests of the French WISC-IV in relation to this CHC model in addition to the four indices.
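A schematic sketch of this kind of model comparison with the semopy package; the subtest names and factor assignments below are hypothetical simplifications (the actual CHC specification includes Gq and further subtests, and the study fitted the manual's intercorrelation matrix rather than raw scores):

```python
import pandas as pd
import semopy

data = pd.read_csv("wisc_subtests.csv")  # hypothetical raw subtest scores

four_factor = """
VCI =~ similarities + vocabulary + comprehension
PRI =~ block_design + picture_concepts + matrix_reasoning
WMI =~ digit_span + letter_number
PSI =~ coding + symbol_search
"""

chc_style = """
Gc =~ similarities + vocabulary + comprehension
Gf =~ matrix_reasoning + picture_concepts
Gv =~ block_design + picture_completion
Gsm =~ digit_span + letter_number
Gs =~ coding + symbol_search
"""

for name, desc in [("four-factor", four_factor), ("CHC-style", chc_style)]:
    model = semopy.Model(desc)
    model.fit(data)
    print(name, semopy.calc_stats(model)[["chi2", "CFI", "RMSEA"]])
```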
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
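As an illustration of the data-driven prediction-with-uncertainty theme reviewed above, a sketch using a Gaussian process regressor on synthetic coordinates and activity values (all numbers are made up; the review itself also covers neural networks and stochastic simulations):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, size=(200, 2))                     # sampling locations
activity = np.sin(coords[:, 0] / 20) + rng.normal(0, 0.1, 200)  # synthetic signal

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel())
gp.fit(coords, activity)

# Prediction map plus a per-location uncertainty map over a regular grid
grid = np.array([[x, y] for x in range(0, 101, 10) for y in range(0, 101, 10)])
mean, std = gp.predict(grid, return_std=True)
print(mean.shape, std.mean())
```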
Abstract:
Several methods and algorithms have recently been proposed that allow for the systematic evaluation of simple neuron models from intracellular or extracellular recordings. Models built in this way generate good quantitative predictions of the future activity of neurons under temporally structured current injection. It is, however, difficult to compare the advantages of the various models and algorithms, since each model is designed for a different set of data. Here, we report on one of the first attempts to establish a benchmark test that permits a systematic comparison of methods and performances in predicting the activity of rat cortical pyramidal neurons. We present early submissions to the benchmark test and discuss implications for the design of future tests and of simple neuron models.
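Benchmarks of this kind typically score predicted spike trains with a coincidence factor. A sketch of one common variant (after the Kistler/Jolivet definition; the tolerance window and the chance-correction term follow that convention, but treat this as an assumption rather than the benchmark's exact scoring code):

```python
import numpy as np

def gamma_factor(model_spikes, data_spikes, delta=0.004):
    """Coincidence factor between two spike trains (times in seconds)."""
    n_coinc = sum(np.any(np.abs(data_spikes - t) <= delta)
                  for t in model_spikes)
    n_model, n_data = len(model_spikes), len(data_spikes)
    duration = max(model_spikes.max(), data_spikes.max())
    rate_model = n_model / duration
    expected = 2 * rate_model * delta * n_data        # chance coincidences
    norm = 1 - 2 * rate_model * delta
    return (n_coinc - expected) / (0.5 * norm * (n_model + n_data))

# Toy usage with made-up spike times
model = np.array([0.100, 0.520, 0.940, 1.310])
data = np.array([0.103, 0.522, 0.950, 1.600])
print(f"gamma = {gamma_factor(model, data):.2f}")
```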
Abstract:
PURPOSE OF REVIEW: HIV targets primary CD4(+) T cells. The virus depends on the physiological state of its target cells for efficient replication, and, in turn, viral infection perturbs the cellular state significantly. Identifying the virus-host interactions that drive these dynamic changes is important for a better understanding of viral pathogenesis and persistence. The present review focuses on experimental and computational approaches to study the dynamics of viral replication and latency. RECENT FINDINGS: It was recently shown that only a fraction of the inducible latently infected reservoirs are successfully induced upon stimulation in ex vivo models, while additional rounds of stimulation allow reactivation of more latently infected cells. This highlights the potential role of treatment duration and timing as important factors for successful reactivation of latently infected cells. The dynamics of HIV productive infection and latency have been investigated using transcriptome and proteome data. The cellular activation state has been shown to be a major determinant of viral reactivation success. Mathematical models of latency have been used to explore the dynamics of latent viral reservoir decay. SUMMARY: Timing is an important component of biological interactions. Temporal analyses covering aspects of the viral life cycle are essential for gathering a comprehensive picture of HIV interaction with the host cell and untangling the complexity of latency. Understanding the dynamic changes tipping the balance between success and failure of HIV particle production might be key to eradicating the viral reservoir.
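The reservoir-decay models mentioned above are often simple exponential decays. A sketch using the commonly cited ~44-month half-life of the latently infected reservoir under suppressive therapy (the initial size is an arbitrary illustration):

```python
def reservoir_size(t_years, n0=1e6, half_life_months=44.0):
    """Exponential decay of the latent reservoir size."""
    return n0 * 0.5 ** (t_years * 12.0 / half_life_months)

for years in (1, 5, 10, 20):
    print(f"{years:>2} y: {reservoir_size(years):.3g} cells")
```

At this decay rate, a million-cell reservoir would take on the order of 70 years to fall below a single cell, which illustrates why the summary above stresses reactivation of latently infected cells rather than waiting for natural decay.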