81 results for finite and infinitesimal models
Abstract:
BACKGROUND: Estimates of the decrease in CD4(+) cell counts in untreated patients with human immunodeficiency virus (HIV) infection are important for patient care and public health. We analyzed CD4(+) cell count decreases in the Cape Town AIDS Cohort and the Swiss HIV Cohort Study. METHODS: We used mixed-effects models and joint models that allowed for the correlation between CD4(+) cell count decreases and survival and stratified analyses by the initial cell count (50-199, 200-349, 350-499, and 500-750 cells/μL). Results are presented as the mean decrease in CD4(+) cell count with 95% confidence intervals (CIs) during the first year after the initial CD4(+) cell count. RESULTS: A total of 784 South African (629 nonwhite) and 2030 Swiss (218 nonwhite) patients with HIV infection contributed 13,388 CD4(+) cell counts. Decreases in CD4(+) cell count were steeper in white patients, patients with higher initial CD4(+) cell counts, and older patients. Decreases ranged from a mean of 38 cells/μL (95% CI, 24-54 cells/μL) in nonwhite patients from the Swiss HIV Cohort Study 15-39 years of age with an initial CD4(+) cell count of 200-349 cells/μL to a mean of 210 cells/μL (95% CI, 143-268 cells/μL) in white patients in the Cape Town AIDS Cohort ≥40 years of age with an initial CD4(+) cell count of 500-750 cells/μL. CONCLUSIONS: Among both patients from Switzerland and patients from South Africa, CD4(+) cell count decreases were greater in white patients with HIV infection than they were in nonwhite patients with HIV infection.
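A hedged sketch of the kind of slope estimate behind these figures: the study fits mixed-effects and joint models, but a minimal stand-in is to fit a least-squares slope to each patient's CD4(+) series and average within a stratum. All patient data below are invented for illustration.

```python
# Crude stand-in for the mixed-effects analysis: fit a least-squares slope
# (cells/uL per year) to each patient's CD4(+) series, then average the
# slopes within a stratum. Patient data below are invented.

def slope(times, counts):
    """Ordinary least-squares slope of counts against times."""
    n = len(times)
    tbar = sum(times) / n
    cbar = sum(counts) / n
    num = sum((t - tbar) * (c - cbar) for t, c in zip(times, counts))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

def mean_first_year_decline(patients):
    """Mean decline (positive = falling CD4) across per-patient slopes."""
    slopes = [slope(t, c) for t, c in patients]
    return -sum(slopes) / len(slopes)

# Hypothetical patients: (years since initial count, CD4 counts in cells/uL)
patients = [
    ([0.0, 0.5, 1.0], [420, 390, 360]),  # slope -60 cells/uL per year
    ([0.0, 0.5, 1.0], [510, 480, 430]),  # slope -80 cells/uL per year
]
print(round(mean_first_year_decline(patients), 1))  # 70.0
```

Unlike this sketch, a mixed-effects model pools information across patients and a joint model additionally corrects for informative dropout through survival, which is why the paper's stratum means are not simple slope averages.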
Abstract:
OBJECTIVE: Hierarchical modeling has been proposed as a solution to the multiple exposure problem. We estimate associations between metabolic syndrome and different components of antiretroviral therapy using both conventional and hierarchical models. STUDY DESIGN AND SETTING: We use discrete time survival analysis to estimate the association between metabolic syndrome and cumulative exposure to 16 antiretrovirals from four drug classes. We fit a hierarchical model where the drug class provides a prior model of the association between metabolic syndrome and exposure to each antiretroviral. RESULTS: One thousand two hundred and eighteen patients were followed for a median of 27 months, with 242 cases of metabolic syndrome (20%) at a rate of 7.5 cases per 100 patient-years. Metabolic syndrome was more likely to develop in patients exposed to stavudine, but was less likely to develop in those exposed to atazanavir. The estimate for exposure to atazanavir increased from a hazard ratio of 0.06 per 6 months' use in the conventional model to 0.37 in the hierarchical model (or from 0.57 to 0.81 when using spline-based covariate adjustment). CONCLUSION: These results are consistent with trials that show the disadvantage of stavudine and advantage of atazanavir relative to other drugs in their respective classes. The hierarchical model gave more plausible results than the equivalent conventional model.
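The moderating effect of the hierarchical model can be illustrated with a toy shrinkage calculation: a drug's estimate is pulled toward its drug-class mean in proportion to how noisy it is. The class mean, the variances, and the normal-normal posterior formula below are illustrative assumptions, not quantities from the paper's fitted model.

```python
import math

# Toy illustration of hierarchical shrinkage: a drug's log hazard ratio is
# pulled toward its drug-class mean, weighted by the ratio of between-drug
# spread (tau2) to total uncertainty. class_mean_log, se2 and tau2 are
# invented assumptions, not quantities from the paper.

def shrink(estimate, class_mean, se2, tau2):
    """Normal-normal posterior mean: weighted average of drug and class."""
    w = tau2 / (tau2 + se2)  # weight on the drug's own estimate
    return w * estimate + (1 - w) * class_mean

hr_conventional = 0.06           # extreme conventional-model hazard ratio
class_mean_log = math.log(0.9)   # assumed class-average log hazard ratio
shrunk_log = shrink(math.log(hr_conventional), class_mean_log, se2=1.0, tau2=0.5)
print(round(math.exp(shrunk_log), 2))  # moderated toward the class mean
```

The qualitative point matches the abstract: an implausibly extreme conventional estimate is moderated once the drug class supplies a prior.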
Abstract:
High-resolution and highly precise age models for recent lake sediments (last 100–150 years) are essential for quantitative paleoclimate research. These are particularly important for sedimentological and geochemical proxies, where transfer functions cannot be established and calibration must be based upon the relation of sedimentary records to instrumental data. High-precision dating for the calibration period is most critical as it determines directly the quality of the calibration statistics. Here, as an example, we compare radionuclide age models obtained on two high-elevation glacial lakes in the Central Chilean Andes (Laguna Negra: 33°38′S/70°08′W, 2,680 m a.s.l. and Laguna El Ocho: 34°02′S/70°19′W, 3,250 m a.s.l.). We show the different numerical models that produce accurate age-depth chronologies based on 210Pb profiles, and we explain how to obtain reduced age-error bars at the bottom part of the profiles, i.e., typically around the end of the 19th century. In order to constrain the age models, we propose a method with five steps: (i) sampling at irregularly-spaced intervals for 226Ra, 210Pb and 137Cs depending on the stratigraphy and microfacies, (ii) a systematic comparison of numerical models for the calculation of 210Pb-based age models: constant flux constant sedimentation (CFCS), constant initial concentration (CIC), constant rate of supply (CRS) and sediment isotope tomography (SIT), (iii) numerical constraining of the CRS and SIT models with the 137Cs chronomarker of AD 1964, and (iv) step-wise cross-validation with independent diagnostic environmental stratigraphic markers of known age (e.g., volcanic ash layers, historical floods and earthquakes). In both examples, we also use airborne pollutants such as spheroidal carbonaceous particles (reflecting the history of fossil fuel emissions), excess atmospheric Cu deposition (reflecting the production history of a large local Cu mine), and turbidites related to historical earthquakes.
Our results show that the SIT model constrained with the 137Cs AD 1964 peak performs best over the entire chronological profile (last 100–150 years) and yields the smallest standard deviations for the sediment ages. Such precision is critical for the calibration statistics, and ultimately, for the quality of the quantitative paleoclimate reconstruction. The systematic comparison of CRS and SIT models also helps to validate the robustness of the chronologies in different sections of the profile. Although surprisingly poorly known and under-explored in paleolimnological research, the SIT model has great potential in paleoclimatological reconstructions based on lake sediments.
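Of the numerical models compared in step (ii), the CRS model has a particularly compact form: the age at a given depth is t = (1/λ) ln(A₀/A), where A is the cumulative unsupported 210Pb inventory below that depth and A₀ the total inventory. A minimal sketch with invented layer inventories:

```python
import math

LAMBDA = math.log(2) / 22.26  # 210Pb decay constant (half-life ~22.26 yr)

def crs_ages(layer_inventories):
    """Constant Rate of Supply (CRS) ages at the bottom of each layer.

    layer_inventories: unsupported 210Pb inventory per layer, top to
    bottom (e.g. Bq/m^2). CRS age: t = (1/lambda) * ln(A0 / A_below),
    where A_below is the inventory remaining beneath the layer.
    """
    total = sum(layer_inventories)
    ages, below = [], total
    for inv in layer_inventories:
        below -= inv  # unsupported inventory remaining beneath this layer
        if below <= 0:
            ages.append(float("inf"))  # base of the datable profile
        else:
            ages.append(math.log(total / below) / LAMBDA)
    return ages

# Invented inventories for four layers, surface first:
print([round(a, 1) for a in crs_ages([40.0, 25.0, 15.0, 10.0])[:3]])
# [18.9, 41.1, 70.6] years before coring
```

The rapidly growing error toward the base of the profile, where `A_below` approaches zero, is exactly why the abstract stresses constraining CRS and SIT with the 137Cs AD 1964 chronomarker.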
Abstract:
SUMMARY A recent systematic review demonstrated that, overall, orthodontic treatment might result in a small worsening of periodontal status. The aim of this retrospective study was to test the hypothesis that a change of mandibular incisor inclination promotes development of labial gingival recessions. One hundred and seventy-nine subjects who met the following inclusion criteria were selected: age 11-14 years at start of orthodontic treatment (TS), bonded retainer placed immediately after treatment (T₀), dental casts and lateral cephalograms available pre-treatment (TS), post-treatment (T₀), 2 years post-treatment (T₂), and 5 years post-treatment (T₅). Depending on the change of lower incisor inclination during treatment (ΔInc_Incl), the sample was divided into three groups: Retro (N = 34; ΔInc_Incl ≤ -1 degree), Stable (N = 22; ΔInc_Incl > -1 degree and ≤1 degree), and Pro (N = 123; ΔInc_Incl > 1 degree). Clinical crown heights of mandibular incisors and the presence of gingival recessions in this region were assessed on plaster models. Fisher's exact tests, one-way analysis of variance, and regression models were used for analysis of inter-group differences. The mean increase in clinical crown height (T₀ to T₅) of the mandibular incisors ranged from 0.6 to 0.91 mm across the Retro, Stable, and Pro groups; the difference was not significant (P = 0.534). At T₅, gingival recessions were present in 8.8, 4.5, and 16.3 per cent of patients from the Retro, Stable, and Pro groups, respectively. The difference was not significant (P = 0.265). The change of lower incisor inclination during treatment did not affect development of labial gingival recessions in this patient group.
Abstract:
OBJECTIVES This study sought to validate the Logistic Clinical SYNTAX (Synergy Between Percutaneous Coronary Intervention With Taxus and Cardiac Surgery) score in patients with non-ST-segment elevation acute coronary syndromes (ACS), in order to further legitimize its clinical application. BACKGROUND The Logistic Clinical SYNTAX score allows for an individualized prediction of 1-year mortality in patients undergoing contemporary percutaneous coronary intervention. It is composed of a "Core" Model (anatomical SYNTAX score, age, creatinine clearance, and left ventricular ejection fraction) and an "Extended" Model (composed of 6 additional clinical variables), and has previously been cross-validated in 7 contemporary stent trials (>6,000 patients). METHODS One-year all-cause death was analyzed in 2,627 patients undergoing percutaneous coronary intervention from the ACUITY (Acute Catheterization and Urgent Intervention Triage Strategy) trial. Mortality predictions from the Core and Extended Models were studied with respect to discrimination, that is, separation of those with and without 1-year all-cause death (assessed by the concordance [C] statistic), and calibration, that is, agreement between observed and predicted outcomes (assessed with validation plots). Decision curve analyses, which weight the harms (false positives) against benefits (true positives) of using a risk score to make mortality predictions, were undertaken to assess clinical usefulness. RESULTS In the ACUITY trial, the median SYNTAX score was 9.0 (interquartile range 5.0 to 16.0); approximately 40% of patients had 3-vessel disease, 29% diabetes, and 85% underwent drug-eluting stent implantation. Validation plots confirmed agreement between observed and predicted mortality.
The Core and Extended Models demonstrated substantial improvements in the discriminative ability for 1-year all-cause death compared with the anatomical SYNTAX score in isolation (C-statistics: SYNTAX score: 0.64, 95% confidence interval [CI]: 0.56 to 0.71; Core Model: 0.74, 95% CI: 0.66 to 0.79; Extended Model: 0.77, 95% CI: 0.70 to 0.83). Decision curve analyses confirmed the increasing ability to correctly identify patients who would die at 1 year with the Extended Model versus the Core Model versus the anatomical SYNTAX score, over a wide range of thresholds for mortality risk predictions. CONCLUSIONS Compared to the anatomical SYNTAX score alone, the Core and Extended Models of the Logistic Clinical SYNTAX score more accurately predicted individual 1-year mortality in patients presenting with non-ST-segment elevation acute coronary syndromes undergoing percutaneous coronary intervention. These findings support the clinical application of the Logistic Clinical SYNTAX score.
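The decision curve analyses described above rest on the net-benefit formula, which credits true positives and debits false positives at an exchange rate set by the risk threshold p_t. A small sketch with invented risks and outcomes:

```python
# Net benefit at risk threshold p_t, the quantity plotted in a decision
# curve: NB = TP/N - (FP/N) * p_t / (1 - p_t). Risks and outcomes below
# are invented; the study computed this for predicted 1-year mortality.

def net_benefit(risks, outcomes, threshold):
    n = len(risks)
    tp = sum(1 for r, y in zip(risks, outcomes) if r >= threshold and y == 1)
    fp = sum(1 for r, y in zip(risks, outcomes) if r >= threshold and y == 0)
    return tp / n - (fp / n) * threshold / (1 - threshold)

risks    = [0.02, 0.10, 0.30, 0.60, 0.05, 0.40]  # invented predicted risks
outcomes = [0,    0,    1,    1,    0,    0]     # 1 = died within a year
print(round(net_benefit(risks, outcomes, threshold=0.20), 3))  # 0.292
```

Plotting this quantity over a range of thresholds, for each of the SYNTAX score, Core Model, and Extended Model, yields the decision curves the abstract refers to.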
Abstract:
Background: WHO's 2013 revisions to its Consolidated Guidelines on antiretroviral drugs recommend routine viral load monitoring, rather than clinical or immunological monitoring, as the preferred monitoring approach on the basis of clinical evidence. However, HIV programmes in resource-limited settings require guidance on the most cost-effective use of resources in view of other competing priorities such as expansion of antiretroviral therapy coverage. We assessed the cost-effectiveness of alternative patient monitoring strategies. Methods: We evaluated a range of monitoring strategies, including clinical, CD4 cell count, and viral load monitoring, alone and together, at different frequencies and with different criteria for switching to second-line therapies. We used three independently constructed and validated models simultaneously. We estimated costs on the basis of resource use projected in the models and associated unit costs; we quantified impact as disability-adjusted life years (DALYs) averted. We compared alternatives using incremental cost-effectiveness analysis. Findings: All models show that clinical monitoring delivers significant benefit compared with a hypothetical baseline scenario with no monitoring or switching. Regular CD4 cell count monitoring confers a benefit over clinical monitoring alone, at an incremental cost that makes it affordable in more settings than viral load monitoring, which is currently more expensive. Viral load monitoring without CD4 cell count every 6–12 months provides the greatest reductions in morbidity and mortality, but incurs a high cost per DALY averted, resulting in lost opportunities to generate health gains if implemented instead of increasing antiretroviral therapy coverage or expanding antiretroviral therapy eligibility.
Interpretation: The priority for HIV programmes should be to expand antiretroviral therapy coverage, first at CD4 cell counts lower than 350 cells per μL, and then at counts lower than 500 cells per μL, using lower-cost clinical or CD4 monitoring. At current costs, viral load monitoring should be considered only after high antiretroviral therapy coverage has been achieved. Point-of-care technologies and other factors reducing costs might make viral load monitoring more affordable in future. Funding: Bill & Melinda Gates Foundation, WHO.
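The incremental cost-effectiveness comparison behind this ranking reduces to ratios of extra cost to extra DALYs averted between adjacent strategies. The sketch below uses invented per-patient numbers purely to show the arithmetic:

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra DALY
# averted when stepping up from one monitoring strategy to the next.
# All per-patient costs and DALYs below are invented for illustration.

def icer(new, old):
    (cost_new, dalys_new), (cost_old, dalys_old) = new, old
    return (cost_new - cost_old) / (dalys_new - dalys_old)

# Hypothetical strategies: (cost per patient, DALYs averted per patient)
clinical   = (100.0, 1.00)
cd4        = (160.0, 1.10)
viral_load = (320.0, 1.15)

print(round(icer(cd4, clinical)))    # cost per DALY of adding CD4 counts
print(round(icer(viral_load, cd4)))  # far higher cost per DALY for VL
```

The pattern in the toy numbers mirrors the finding: each step up buys smaller health gains at sharply rising incremental cost, so the dearer strategy only makes sense once cheaper options are exhausted.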
Abstract:
Low self-esteem and depression are strongly related, but there is not yet consistent evidence on the nature of the relation. Whereas the vulnerability model states that low self-esteem contributes to depression, the scar model states that depression erodes self-esteem. Furthermore, it is unknown whether the models are specific for depression or whether they are also valid for anxiety. We evaluated the vulnerability and scar models of low self-esteem and depression, and low self-esteem and anxiety, by meta-analyzing the available longitudinal data (covering 77 studies on depression and 18 studies on anxiety). The mean age of the samples ranged from childhood to old age. In the analyses, we used a random-effects model and examined prospective effects between the variables, controlling for prior levels of the predicted variables. For depression, the findings supported the vulnerability model: The effect of self-esteem on depression (β = -.16) was significantly stronger than the effect of depression on self-esteem (β = -.08). In contrast, the effects between low self-esteem and anxiety were relatively balanced: Self-esteem predicted anxiety with β = -.10, and anxiety predicted self-esteem with β = -.08. Moderator analyses were conducted for the effect of low self-esteem on depression; these suggested that the effect is not significantly influenced by gender, age, measures of self-esteem and depression, or time lag between assessments. If future research supports the hypothesized causality of the vulnerability effect of low self-esteem on depression, interventions aimed at increasing self-esteem might be useful in reducing the risk of depression.
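A random-effects meta-analysis of the kind described can be sketched with the DerSimonian–Laird estimator (a common choice; the specific estimator is an assumption here, and the effects and variances below are invented, not the review's data):

```python
# DerSimonian-Laird random-effects pooling: estimate the between-study
# variance tau2 from Cochran's Q, then pool with inverse-variance weights
# that include tau2. Effects and variances are invented.

def dersimonian_laird(effects, variances):
    w = [1.0 / v for v in variances]
    ybar = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - ybar) ** 2 for wi, y in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    wstar = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * y for wi, y in zip(wstar, effects)) / sum(wstar)
    return pooled, tau2

effects   = [-0.30, -0.05, -0.16]  # e.g. prospective effects of self-esteem
variances = [0.004, 0.003, 0.005]  # on depression, from three toy studies
pooled, tau2 = dersimonian_laird(effects, variances)
print(round(pooled, 3), round(tau2, 4))
```

When tau2 is positive, as here, the pooled estimate gives relatively more weight to smaller studies than a fixed-effect analysis would, which is the defining behavior of the random-effects model the review uses.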
Abstract:
With improving clinical CT scanning technology, the accuracy of CT-based finite element (FE) models of the human skeleton may be improved by an enhanced description of apparent level bone mechanical properties. Micro-finite element (μFE) modeling can be used to study the apparent elastic behavior of human cancellous bone. In this study, samples from the femur, radius and vertebral body were investigated to evaluate the predictive power of morphology–elasticity relationships and to compare them across different anatomical regions. μFE models of 701 trabecular bone cubes with a side length of 5.3 mm were analyzed using kinematic boundary conditions. Based on the FE results, four morphology–elasticity models using bone volume fraction as well as full, limited or no fabric information were calibrated for each anatomical region. The 5-parameter Zysset–Curnier model using full fabric information showed excellent predictive power, with coefficients of determination (adjusted r²) of 0.98, 0.95, and 0.94 for the femur, radius, and vertebra data, respectively, with mean total norm errors between 14 and 20%. A constant orthotropy model and a constant transverse isotropy model, where the elastic anisotropy is defined by the model parameters, yielded coefficients of determination between 0.90 and 0.98 with total norm errors between 16 and 25%. Neglecting fabric information and using an isotropic model led to adjusted r² between 0.73 and 0.92 with total norm errors between 38 and 49%. A comparison of the model regressions revealed minor but significant (p<0.01) differences for the fabric–elasticity model parameters calibrated for the different anatomical regions. The proposed models and identified parameters can be used in future studies to compute the apparent elastic properties of human cancellous bone for homogenized FE models.
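The contrast between the fabric-based and fabric-free models can be made concrete with the simplest case: an isotropic power law relating apparent modulus to bone volume fraction alone. The parameters below are generic placeholders for illustration, not the study's calibrated values:

```python
# Isotropic power-law relation between apparent modulus and bone volume
# fraction, the no-fabric special case discussed above: E = E0 * (BV/TV)^k.
# E0 (tissue-scale modulus) and exponent k are invented placeholders.

def apparent_modulus(bvtv, e0=10_000.0, k=2.0):
    """Apparent Young's modulus (MPa) from bone volume fraction BV/TV."""
    return e0 * bvtv ** k

for bvtv in (0.10, 0.20, 0.40):
    print(f"BV/TV={bvtv:.2f}  E={apparent_modulus(bvtv):.0f} MPa")
```

A fabric-based model such as Zysset–Curnier replaces this single scalar with a full orthotropic stiffness tensor whose axes follow the measured fabric tensor, which is what recovers the extra predictive power the abstract reports.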
Abstract:
BACKGROUND: Research on comorbidity of psychiatric disorders identifies broad superordinate dimensions as underlying structure of psychopathology. While a syndrome-level approach informs diagnostic systems, a symptom-level approach is more likely to represent the dimensional components within existing diagnostic categories. It may capture general emotional, cognitive or physiological processes as underlying liabilities of different disorders and thus further develop dimensional-spectrum models of psychopathology. METHODS: Exploratory and confirmatory factor analyses were used to examine the structure of psychopathological symptoms assessed with the Brief Symptom Inventory in two outpatient samples (n=3171), including several correlated-factors and bifactor models. The preferred models were correlated with DSM-diagnoses. RESULTS: A model containing eight correlated factors for depressed mood, phobic fear, aggression, suicidal ideation, nervous tension, somatic symptoms, information processing deficits, and interpersonal insecurity, as well as a bifactor model, fit the data best. Distinct patterns of correlations with DSM-diagnoses identified a) distress-related disorders, i.e., mood disorders, PTSD, and personality disorders, which were associated with all correlated factors as well as the underlying general distress factor; b) anxiety disorders with more specific patterns of correlations; and c) disorders defined by behavioural or somatic dysfunctions, which were characterised by non-significant or negative correlations with most factors. CONCLUSIONS: This study identified emotional, somatic, cognitive, and interpersonal components of psychopathology as transdiagnostic psychopathological liabilities. These components can contribute to a more accurate description and taxonomy of psychopathology, may serve as phenotypic constructs for further aetiological research, and can inform the development of tailored general and specific interventions to treat mental disorders.
Abstract:
Methane is an important greenhouse gas, responsible for about 20% of the warming induced by long-lived greenhouse gases since pre-industrial times. By reacting with hydroxyl radicals, methane reduces the oxidizing capacity of the atmosphere and generates ozone in the troposphere. Although most sources and sinks of methane have been identified, their relative contributions to atmospheric methane levels are highly uncertain. As such, the factors responsible for the observed stabilization of atmospheric methane levels in the early 2000s, and the renewed rise after 2006, remain unclear. Here, we construct decadal budgets for methane sources and sinks between 1980 and 2010, using a combination of atmospheric measurements and results from chemical transport models, ecosystem models, climate chemistry models and inventories of anthropogenic emissions. The resultant budgets suggest that data-driven approaches and ecosystem models overestimate total natural emissions. We build three contrasting emission scenarios, which differ in fossil fuel and microbial emissions, to explain the decadal variability in atmospheric methane levels detected, here and in previous studies, since 1985. Although uncertainties in emission trends do not allow definitive conclusions to be drawn, we show that the observed stabilization of methane levels between 1999 and 2006 can potentially be explained by decreasing-to-stable fossil fuel emissions, combined with stable-to-increasing microbial emissions. We show that a rise in natural wetland emissions and fossil fuel emissions probably accounts for the renewed increase in global methane levels after 2006, although the relative contribution of these two sources remains uncertain.
Abstract:
This paper is concerned with the modelling of storage configurations for intermediate products in process industries. Those models form the basis of algorithms for scheduling chemical production plants. Different storage capacity settings (unlimited, finite, and no intermediate storage), storage homogeneity settings (dedicated and shared storage), and storage time settings (unlimited, finite, and no wait) are considered. We discuss a classification of storage constraints in batch scheduling and show how those constraints can be integrated into a general production scheduling model that is based on the concept of cumulative resources.
Abstract:
Scholars have increasingly theorized, and debated, the decision by states to create and delegate authority to international courts, as well as the subsequent autonomy and behavior of those courts, with principal–agent and trusteeship models disagreeing on the nature and extent of states’ influence on international judges. This article formulates and tests a set of principal–agent hypotheses about the ways in which, and the conditions under which, member states are able to use their powers of judicial nomination and appointment to influence the endogenous preferences of international judges. The empirical analysis surveys the record of all judicial appointments to the Appellate Body (AB) of the World Trade Organization over a 15-year period. We present a view of an AB appointment process that, far from representing a pure search for expertise, is deeply politicized and offers member-state principals opportunities to influence AB members ex ante and possibly ex post. We further demonstrate that the AB nomination process has become progressively more politicized over time as member states, responding to earlier and controversial AB decisions, became far more concerned about judicial activism and more interested in the substantive opinions of AB candidates, systematically championing candidates whose views on key issues most closely approached their own, and opposing candidates perceived to be activist or biased against their substantive preferences. Although specific to the WTO, our theory and findings have implications for the judicial politics of a large variety of global and regional international courts and tribunals.
Abstract:
Digital technologies have profoundly changed not only the ways we create, distribute, access, use and re-use information but also many of the governance structures we had in place. Overall, "older" institutions at all governance levels have grappled with, and often failed to master, the multi-faceted and multi-directional issues of the Internet. Regulatory entrepreneurs have yet to discover and fully mobilize the potential of digital technologies as an influential factor impacting upon the regulability of the environment and as a potential regulatory tool in themselves. At the same time, we have seen a deterioration of some public spaces and lower prioritization of public objectives, when strong private commercial interests are at play, such as most tellingly in the field of copyright. Less tangibly, private ordering has taken hold and captured, through contracts, spaces previously regulated by public law. Code embedded in technology often replaces law. Non-state action has in general proliferated and put serious pressure upon conventional state-centered, command-and-control models. Under the conditions of this "messy" governance, the provision of key public goods, such as freedom of information, has been made difficult or is indeed jeopardized. The grand question is how we can navigate this complex multi-actor, multi-issue space and secure the attainment of fundamental public interest objectives. This is also the question that Ian Brown and Chris Marsden seek to answer with their book, Regulating Code, recently published under the "Information Revolution and Global Politics" series of MIT Press. This book review critically assesses the bold effort by Brown and Marsden.
Abstract:
OBJECTIVES: The aim of the study was to assess whether prospective follow-up data within the Swiss HIV Cohort Study can be used to predict patients who stop smoking; or among smokers who stop, those who start smoking again. METHODS: We built prediction models first using clinical reasoning ('clinical models') and then by selecting from numerous candidate predictors using advanced statistical methods ('statistical models'). Our clinical models were based on literature that suggests that motivation drives smoking cessation, while dependence drives relapse in those attempting to stop. Our statistical models were based on automatic variable selection using additive logistic regression with component-wise gradient boosting. RESULTS: Of 4833 smokers, 26% stopped smoking, at least temporarily, because among those who stopped, 48% started smoking again. The predictive performance of our clinical and statistical models was modest. A basic clinical model for cessation, with patients classified into three motivational groups, was nearly as discriminatory as a constrained statistical model with just the most important predictors (the ratio of nonsmoking visits to total visits, alcohol or drug dependence, psychiatric comorbidities, recent hospitalization and age). A basic clinical model for relapse, based on the maximum number of cigarettes per day prior to stopping, was not as discriminatory as a constrained statistical model with just the ratio of nonsmoking visits to total visits. CONCLUSIONS: Predicting smoking cessation and relapse is difficult, so that simple models are nearly as discriminatory as complex ones. Patients with a history of attempting to stop and those known to have stopped recently are the best candidates for an intervention.
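Component-wise gradient boosting, the selection method behind the statistical models, updates only the single best-fitting covariate per iteration, which is what produces sparse models. A minimal sketch on invented data (a real analysis would use richer base-learners and cross-validated stopping):

```python
import math

# Sketch of component-wise gradient boosting for a logistic model: each
# iteration fits every covariate separately to the negative gradient
# (working residuals) and updates only the best one. Data, step size and
# number of steps are invented for illustration.

def boost(X, y, steps=200, nu=0.1):
    n, p = len(X), len(X[0])
    beta = [0.0] * (p + 1)  # intercept, then one coefficient per covariate
    for _ in range(steps):
        eta = [beta[0] + sum(b * x for b, x in zip(beta[1:], row)) for row in X]
        resid = [yi - 1.0 / (1.0 + math.exp(-e)) for yi, e in zip(y, eta)]
        best_j, best_coef, best_gain = 0, 0.0, -1.0
        for j in range(p):  # least-squares fit of each covariate alone
            sxx = sum(row[j] ** 2 for row in X)
            sxy = sum(row[j] * r for row, r in zip(X, resid))
            coef = sxy / sxx
            if coef * sxy > best_gain:  # squared-error reduction
                best_j, best_coef, best_gain = j, coef, coef * sxy
        beta[0] += nu * sum(resid) / n      # small intercept correction
        beta[best_j + 1] += nu * best_coef  # update only the winner
    return beta

# Invented data: only the first covariate actually drives the outcome.
X = [[1.0, 0.2], [2.0, 0.1], [3.0, 0.4], [4.0, 0.3], [0.5, 0.2], [3.5, 0.1]]
y = [0, 0, 1, 1, 0, 1]
beta = boost(X, y)
```

Because uninformative covariates rarely win an iteration, their coefficients stay at or near zero, yielding the constrained models with only a handful of predictors that the abstract describes.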
An Early-Warning System for Hypo-/Hyperglycemic Events Based on Fusion of Adaptive Prediction Models
Abstract:
Introduction: Early warning of future hypoglycemic and hyperglycemic events can improve the safety of type 1 diabetes mellitus (T1DM) patients. The aim of this study is to design and evaluate a hypoglycemia/hyperglycemia early warning system (EWS) for T1DM patients under sensor-augmented pump (SAP) therapy. Methods: The EWS is based on the combination of data-driven online adaptive prediction models and a warning algorithm. Three modeling approaches have been investigated: (i) autoregressive (ARX) models, (ii) autoregressive with an output correction module (cARX) models, and (iii) recurrent neural network (RNN) models. The warning algorithm performs postprocessing of the models' outputs and issues alerts if upcoming hypoglycemic/hyperglycemic events are detected. Fusion of the cARX and RNN models, due to their complementary prediction performances, resulted in the hybrid autoregressive with an output correction module/recurrent neural network (cARN)-based EWS. Results: The EWS was evaluated on 23 T1DM patients under SAP therapy. The ARX-based system achieved hypoglycemic (hyperglycemic) event prediction with median values of accuracy of 100.0% (100.0%), detection time of 10.0 (8.0) min, and daily false alarms of 0.7 (0.5). The respective values for the cARX-based system were 100.0% (100.0%), 17.5 (14.8) min, and 1.5 (1.3) and, for the RNN-based system, were 100.0% (92.0%), 8.4 (7.0) min, and 0.1 (0.2). The hybrid cARN-based EWS outperformed both, with 100.0% (100.0%) prediction accuracy, detection 16.7 (14.7) min in advance, and 0.8 (0.8) daily false alarms. Conclusion: Combined use of cARX and RNN models for the development of an EWS outperformed the single use of each model, achieving accurate and prompt event prediction with few false alarms, thus providing increased safety and comfort.
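In the spirit of the ARX models above, a minimal sketch: fit an order-2 autoregressive predictor to a CGM history by least squares and raise a warning if the prediction crosses a hypoglycemia threshold. The synthetic glucose trace and the 70 mg/dL threshold are illustrative assumptions; the paper's system also uses exogenous inputs and updates its fit recursively online.

```python
# Order-2 ARX-style predictor: y[t] = a*y[t-1] + b*y[t-2], with (a, b)
# fitted by least squares on the recent CGM history, plus a simple
# hypoglycemia warning rule. Trace and threshold are invented.

def fit_ar2(series):
    """Least-squares fit of y[t] = a*y[t-1] + b*y[t-2] (normal equations)."""
    rows = [(series[t - 1], series[t - 2], series[t])
            for t in range(2, len(series))]
    s11 = sum(x1 * x1 for x1, x2, y in rows)
    s12 = sum(x1 * x2 for x1, x2, y in rows)
    s22 = sum(x2 * x2 for x1, x2, y in rows)
    r1 = sum(x1 * y for x1, x2, y in rows)
    r2 = sum(x2 * y for x1, x2, y in rows)
    det = s11 * s22 - s12 * s12
    return (r1 * s22 - r2 * s12) / det, (s11 * r2 - s12 * r1) / det

# Synthetic CGM trace (mg/dL), one sample every 5 min, falling steadily:
glucose = [180 - 4 * t for t in range(12)]
a, b = fit_ar2(glucose)
predicted = a * glucose[-1] + b * glucose[-2]  # one-step-ahead forecast
alert = predicted < 70.0  # assumed hypoglycemia warning threshold
print(predicted, alert)   # 132.0 False
```

On this perfectly linear trace the fit recovers the extrapolation coefficients (a, b) = (2, -1) exactly; on real CGM data the coefficients drift, which is why the paper's models adapt online and why the cARX variant adds an output correction module.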