Abstract:
BACKGROUND: Research on the comorbidity of psychiatric disorders identifies broad superordinate dimensions as the underlying structure of psychopathology. While a syndrome-level approach informs diagnostic systems, a symptom-level approach is more likely to represent the dimensional components within existing diagnostic categories. It may capture general emotional, cognitive, or physiological processes as underlying liabilities of different disorders and thus further develop dimensional-spectrum models of psychopathology. METHODS: Exploratory and confirmatory factor analyses were used to examine the structure of psychopathological symptoms assessed with the Brief Symptom Inventory in two outpatient samples (n=3171), including several correlated-factors and bifactor models. The preferred models were correlated with DSM diagnoses. RESULTS: A model containing eight correlated factors for depressed mood, phobic fear, aggression, suicidal ideation, nervous tension, somatic symptoms, information processing deficits, and interpersonal insecurity, as well as a bifactor model, fit the data best. Distinct patterns of correlations with DSM diagnoses identified a) distress-related disorders, i.e., mood disorders, PTSD, and personality disorders, which were associated with all correlated factors as well as the underlying general distress factor; b) anxiety disorders, with more specific patterns of correlations; and c) disorders defined by behavioural or somatic dysfunctions, which were characterised by non-significant or negative correlations with most factors. CONCLUSIONS: This study identified emotional, somatic, cognitive, and interpersonal components of psychopathology as transdiagnostic psychopathological liabilities. These components can contribute to a more accurate description and taxonomy of psychopathology, may serve as phenotypic constructs for further aetiological research, and can inform the development of tailored general and specific interventions to treat mental disorders.
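For readers unfamiliar with the method, the sketch below shows how a correlated-factors solution like the eight-factor model above can be estimated in Python with the factor_analyzer package. The file name, item count, and factor count simply mirror the abstract's design; they are illustrative assumptions, not the study's data or code.

```python
# Exploratory factor analysis with correlated (obliquely rotated) factors.
# 'bsi_items.csv' is a hypothetical table: rows = patients, columns = the
# 53 Brief Symptom Inventory items. Eight factors mirror the abstract.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("bsi_items.csv")

fa = FactorAnalyzer(n_factors=8, rotation="oblimin")  # oblique => factors may correlate
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))           # item-by-factor loading matrix
print(fa.get_factor_variance())    # variance explained per factor
```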
Abstract:
Methane is an important greenhouse gas, responsible for about 20% of the warming induced by long-lived greenhouse gases since pre-industrial times. By reacting with hydroxyl radicals, methane reduces the oxidizing capacity of the atmosphere and generates ozone in the troposphere. Although most sources and sinks of methane have been identified, their relative contributions to atmospheric methane levels are highly uncertain. As such, the factors responsible for the observed stabilization of atmospheric methane levels in the early 2000s, and the renewed rise after 2006, remain unclear. Here, we construct decadal budgets for methane sources and sinks between 1980 and 2010, using a combination of atmospheric measurements and results from chemical transport models, ecosystem models, climate chemistry models and inventories of anthropogenic emissions. The resultant budgets suggest that data-driven approaches and ecosystem models overestimate total natural emissions. We build three contrasting emission scenarios, which differ in fossil fuel and microbial emissions, to explain the decadal variability in atmospheric methane levels detected, here and in previous studies, since 1985. Although uncertainties in emission trends do not allow definitive conclusions to be drawn, we show that the observed stabilization of methane levels between 1999 and 2006 can potentially be explained by decreasing-to-stable fossil fuel emissions, combined with stable-to-increasing microbial emissions. We show that a rise in natural wetland emissions and fossil fuel emissions probably accounts for the renewed increase in global methane levels after 2006, although the relative contribution of these two sources remains uncertain.
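As a rough illustration of the budget logic described above, a one-box model balances total emissions against a first-order sink. The sketch below uses round, textbook-scale numbers (burden of about 4950 Tg, lifetime of about 9 yr); these are illustrative assumptions, not the paper's budget terms.

```python
# One-box methane budget: the burden grows when emissions exceed the
# first-order sink (mainly OH oxidation). All numbers are round
# illustrative values, not the decadal budgets from the study.
def burden_change_tg_per_yr(burden_tg, emissions_tg_per_yr, lifetime_yr=9.0):
    sink = burden_tg / lifetime_yr
    return emissions_tg_per_yr - sink

# Near-balance reproduces the kind of stabilization seen in 1999-2006:
print(burden_change_tg_per_yr(4950.0, 550.0))  # ~0 Tg/yr -> stable levels
# A modest emission rise tips the budget toward renewed growth:
print(burden_change_tg_per_yr(4950.0, 570.0))  # +20 Tg/yr -> rising levels
```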
Abstract:
Scholars have increasingly theorized, and debated, the decision by states to create and delegate authority to international courts, as well as the subsequent autonomy and behavior of those courts, with principal–agent and trusteeship models disagreeing on the nature and extent of states' influence on international judges. This article formulates and tests a set of principal–agent hypotheses about the ways in which, and the conditions under which, member states are able to use their powers of judicial nomination and appointment to influence the endogenous preferences of international judges. The empirical analysis surveys the record of all judicial appointments to the Appellate Body (AB) of the World Trade Organization over a 15-year period. We present a view of an AB appointment process that, far from representing a pure search for expertise, is deeply politicized and offers member-state principals opportunities to influence AB members ex ante and possibly ex post. We further demonstrate that the AB nomination process has become progressively more politicized over time as member states, responding to earlier and controversial AB decisions, became far more concerned about judicial activism and more interested in the substantive opinions of AB candidates, systematically championing candidates whose views on key issues most closely approached their own, and opposing candidates perceived to be activist or biased against their substantive preferences. Although specific to the WTO, our theory and findings have implications for the judicial politics of a large variety of global and regional international courts and tribunals.
Abstract:
Digital technologies have profoundly changed not only the ways we create, distribute, access, use and re-use information but also many of the governance structures we had in place. Overall, "older" institutions at all governance levels have grappled with, and often failed to master, the multi-faceted and multi-directional issues of the Internet. Regulatory entrepreneurs have yet to discover and fully mobilize the potential of digital technologies as an influential factor impacting upon the regulability of the environment and as a potential regulatory tool in themselves. At the same time, we have seen a deterioration of some public spaces and lower prioritization of public objectives when strong private commercial interests are at play, most tellingly in the field of copyright. Less tangibly, private ordering has taken hold and captured, through contracts, spaces previously regulated by public law. Code embedded in technology often replaces law. Non-state action has in general proliferated and put serious pressure upon conventional state-centered, command-and-control models. Under the conditions of this "messy" governance, the provision of key public goods, such as freedom of information, has been made difficult or is indeed jeopardized. The grand question is how we can navigate this complex multi-actor, multi-issue space and secure the attainment of fundamental public interest objectives. This is also the question that Ian Brown and Chris Marsden seek to answer with their book, Regulating Code, recently published in the "Information Revolution and Global Politics" series of MIT Press. This book review critically assesses the bold effort by Brown and Marsden.
Abstract:
OBJECTIVES: The aim of the study was to assess whether prospective follow-up data within the Swiss HIV Cohort Study can be used to predict patients who stop smoking, or, among smokers who stop, those who start smoking again. METHODS: We built prediction models first using clinical reasoning ('clinical models') and then by selecting from numerous candidate predictors using advanced statistical methods ('statistical models'). Our clinical models were based on literature suggesting that motivation drives smoking cessation, while dependence drives relapse in those attempting to stop. Our statistical models were based on automatic variable selection using additive logistic regression with component-wise gradient boosting. RESULTS: Of 4833 smokers, 26% stopped smoking, at least temporarily, but among those who stopped, 48% started smoking again. The predictive performance of our clinical and statistical models was modest. A basic clinical model for cessation, with patients classified into three motivational groups, was nearly as discriminatory as a constrained statistical model with just the most important predictors (the ratio of nonsmoking visits to total visits, alcohol or drug dependence, psychiatric comorbidities, recent hospitalization and age). A basic clinical model for relapse, based on the maximum number of cigarettes per day prior to stopping, was not as discriminatory as a constrained statistical model with just the ratio of nonsmoking visits to total visits. CONCLUSIONS: Predicting smoking cessation and relapse is difficult, so simple models are nearly as discriminatory as complex ones. Patients with a history of attempting to stop and those known to have stopped recently are the best candidates for an intervention.
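The study's statistical models were built in this spirit: component-wise gradient boosting fits one single-predictor base learner per step, so variable selection happens automatically. The sketch below is a minimal Python reimplementation of that idea on synthetic data; it is not the authors' code (such analyses are typically run with R's mboost), and all data and settings are invented.

```python
# Component-wise gradient boosting for additive logistic regression.
# Each step fits every single-predictor least-squares base learner to the
# negative gradient and keeps only the best one, so variable selection
# happens automatically. Synthetic-data sketch, not the study's code.
import numpy as np

def cw_logit_boost(X, y, n_steps=200, nu=0.1):
    n, p = X.shape
    Xc = (X - X.mean(0)) / X.std(0)                    # standardize predictors
    F = np.full(n, np.log(y.mean() / (1 - y.mean())))  # offset: base-rate log-odds
    coefs = np.zeros(p)
    for _ in range(n_steps):
        prob = 1.0 / (1.0 + np.exp(-F))
        r = y - prob                                   # negative gradient of logistic loss
        betas = Xc.T @ r / n                           # OLS slope per standardized predictor
        sse = ((r[:, None] - Xc * betas) ** 2).sum(0)
        j = np.argmin(sse)                             # best-fitting single predictor
        F += nu * betas[j] * Xc[:, j]
        coefs[j] += nu * betas[j]
    return coefs                                       # nonzero entries = selected variables

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                         # 20 candidate predictors
logit = 1.5 * X[:, 0] - 1.0 * X[:, 3]                  # only predictors 0 and 3 matter
y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(float)
print(np.nonzero(cw_logit_boost(X, y))[0])             # should be dominated by 0 and 3
```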
An Early-Warning System for Hypo-/Hyperglycemic Events Based on Fusion of Adaptive Prediction Models
Abstract:
Introduction: Early warning of future hypoglycemic and hyperglycemic events can improve the safety of type 1 diabetes mellitus (T1DM) patients. The aim of this study is to design and evaluate a hypoglycemia/hyperglycemia early warning system (EWS) for T1DM patients under sensor-augmented pump (SAP) therapy. Methods: The EWS is based on the combination of data-driven online adaptive prediction models and a warning algorithm. Three modeling approaches have been investigated: (i) autoregressive (ARX) models, (ii) autoregressive models with an output correction module (cARX), and (iii) recurrent neural network (RNN) models. The warning algorithm performs postprocessing of the models' outputs and issues alerts if upcoming hypoglycemic/hyperglycemic events are detected. Fusion of the cARX and RNN models, owing to their complementary prediction performances, resulted in the hybrid cARX/RNN (cARN)-based EWS. Results: The EWS was evaluated on 23 T1DM patients under SAP therapy. The ARX-based system achieved hypoglycemic (hyperglycemic) event prediction with median values of accuracy of 100.0% (100.0%), detection time of 10.0 (8.0) min, and daily false alarms of 0.7 (0.5). The respective values for the cARX-based system were 100.0% (100.0%), 17.5 (14.8) min, and 1.5 (1.3), and, for the RNN-based system, 100.0% (92.0%), 8.4 (7.0) min, and 0.1 (0.2). The hybrid cARN-based EWS outperformed both, with 100.0% (100.0%) prediction accuracy, detection 16.7 (14.7) min in advance, and 0.8 (0.8) daily false alarms. Conclusion: Combined use of the cARX and RNN models for the development of an EWS outperformed the single use of each model, achieving accurate and prompt event prediction with few false alarms, thus providing increased safety and comfort.
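A minimal sketch of the ARX-plus-warning idea described above: fit an autoregressive model to recent CGM readings by least squares, iterate it forward, and raise an alert when the forecast crosses a glycemic threshold. The model order, horizon, and the 70/250 mg/dL thresholds are illustrative assumptions rather than the paper's tuned values, and the online adaptation, output correction, and RNN fusion steps are omitted.

```python
# ARX prediction + threshold warning, heavily simplified.
import numpy as np

def fit_arx(glucose, order=6):
    """Fit g[t] ~ a1*g[t-1] + ... + ap*g[t-p] by least squares."""
    rows = [glucose[t - order:t][::-1] for t in range(order, len(glucose))]
    A, b = np.array(rows), glucose[order:]
    return np.linalg.lstsq(A, b, rcond=None)[0]

def predict_ahead(history, coefs, steps=6):
    """Iterate the model 'steps' samples ahead (30 min at 5-min CGM sampling)."""
    h = list(history)
    for _ in range(steps):
        recent = h[::-1][:len(coefs)]        # most recent value first
        h.append(float(np.dot(coefs, recent)))
    return h[-1]

def check_alarm(forecast, hypo=70.0, hyper=250.0):   # mg/dL, illustrative
    if forecast <= hypo:
        return "HYPOGLYCEMIA WARNING"
    if forecast >= hyper:
        return "HYPERGLYCEMIA WARNING"
    return None

# Two hours of falling 5-min CGM readings; the forecast should drift low.
g = np.array([180, 172, 165, 157, 150, 142, 135, 127, 120, 112, 105, 98,
              92, 87, 82, 78, 75, 72, 70, 68, 66, 64, 63, 62], dtype=float)
print(check_alarm(predict_ahead(g, fit_arx(g))))
```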
Abstract:
The pregnane X receptor (PXR) has been postulated to play a role in the metabolism of α-tocopherol owing to the up-regulation of hepatic cytochrome P450 (P450) 3A in human cell lines and murine models after α-tocopherol treatment. However, in vivo studies confirming the role of PXR in α-tocopherol metabolism in humans present significant difficulties and have not been performed. PXR-humanized (hPXR), wild-type, and Pxr-null mouse models were used to determine whether α-tocopherol metabolism is influenced by species-specific differences in PXR function in vivo. No significant difference in the concentration of the major α-tocopherol metabolites was observed among the hPXR, wild-type, and Pxr-null mice through mass spectrometry-based metabolomics. Gene expression analysis revealed significantly increased expression of Cyp3a11 as well as several other P450s only in wild-type mice, suggesting species specificity for α-tocopherol activation of PXR. A luciferase reporter assay confirmed activation of mouse PXR by α-tocopherol. Analysis of the Cyp2c family of genes revealed increased expression of Cyp2c29, Cyp2c37, and Cyp2c55 in wild-type, hPXR, and Pxr-null mice, which suggests PXR-independent induction of Cyp2c gene expression. This study revealed that α-tocopherol is a partial agonist of PXR and that PXR is necessary for Cyp3a induction by α-tocopherol. The implications of a novel role for α-tocopherol in Cyp2c gene regulation are also discussed.
Abstract:
Denmark and Switzerland are small and successful countries with exceptionally content populations. However, they have very different political institutions and economic models. Both have followed the general tendency in the West toward economic convergence, and both have managed to stay on top. They share a strong liberal tradition, but otherwise their economic strategies differ: a welfare state model for Denmark and a safe haven model for Switzerland. The Danish welfare state is tax-based, while expenditures for social welfare are insurance-based in Switzerland. The political institutions are a multiparty unicameral system in Denmark, and a permanent coalition system with many referenda and strong local government in Switzerland. Both approaches have managed to ensure smoothly working political power-sharing and economic systems that allocate resources in a fairly efficient way. To date, both countries have also managed to adapt their economies to changes in the external environment with a combination of stability and flexibility.
Abstract:
The Youngest Toba Tuff (YTT, erupted ca. 74 ka) is a distinctive and widespread tephra marker across south and southeast Asia. The climatic, human and environmental consequences of the YTT eruption are widely debated. Although a considerable body of geochemical data is available for this unit, there has been no systematic study of the variability of the ash geochemistry. Intrinsic (magmatic) and extrinsic (post-depositional) chemical variations carry fundamental information regarding the petrogenesis of the magma, the distribution of the tephra and the interaction between the ash and the receiving environment. Considering the importance of the geochemistry of the YTT for stratigraphic correlations and eruptive models, quantifying and interpreting such variations is central to the YTT debate. Here we collate all published geochemical data on the YTT glass, including analyses from 68 sites described in the literature and three new samples. Two principal sources of chemical variation are investigated: (i) compositional zonation of the magma reservoir, and (ii) post-depositional alteration. Post-depositional leaching is responsible for up to ca. 11% differences in Na2O/K2O and ca. 1% differences in SiO2/Al2O3 ratios in YTT glass from marine sites. Continental tephras are 2% higher in Na2O/K2O and 3% higher in SiO2/Al2O3 with respect to the marine tephras. We interpret such post-depositional glass alteration as related to seawater-induced alkali migration in marine environments, or to site-specific water pH. Crystal fractionation and consequent magmatic differentiation, which produced the order-of-magnitude variations in trace element concentrations reported in the literature, also produced major element differences in the YTT glass. FeO/Al2O3 ratios vary by about 50%, which is analytically significant. These variations represent magmatic fractionation involving Fe-bearing phases. We also compared major element concentrations in YTT and Oldest Toba Tuff (OTT) ash samples, to identify potential compositional differences that could constrain the stratigraphic identity of the Morgaon ash (Western India); no differences between the OTT and YTT samples were observed.
Abstract:
Karst aquifers are known for their wide distribution of water transfer velocities. Given this, a multiple geochemical tracer approach seems particularly well suited to provide a meaningful assessment of groundwater flows, but the choice of suitable tracers is essential. In this study, several common tracers in karst aquifers, such as physicochemical parameters, major ions, stable isotopes, and δ13C, as well as more specific dating tracers (14C, 3H, 3H–3He, CFC-12, SF6, 85Kr, and 39Ar), were used in a fractured karstic carbonate aquifer located in Burgundy (France). The information carried by each tracer and the best sampling strategy are compared on the basis of geochemical monitoring carried out during several recharge events and over longer time periods (months to years). The results demonstrate that at the seasonal and recharge-event time scales, the variability of concentrations is low for most tracers due to the broad spectrum of groundwater mixing. The tracers traditionally used for the study of karst aquifers, i.e., physicochemical parameters and major ions, efficiently describe hydrological processes such as direct and delayed recharge, but must be monitored at short time steps during recharge events to be fully exploited. From stable isotope, tritium, and Cl− contents, the proportion of fast direct recharge through the largest porosity was estimated using a binary mixing model. The use of tracers such as CFC-12, SF6, and 85Kr in karst aquifers provides additional information, notably an estimate of apparent age, but they require good preliminary knowledge of the karst system to interpret the results suitably. The CFC-12 and SF6 methods efficiently determine the apparent age of baseflow, but it is preferable to sample the groundwater during the recharge event. Furthermore, these methods rest on different assumptions, such as regional enrichment in atmospheric SF6, excess air, and flow models, among others. 85Kr and 39Ar concentrations can potentially provide a more direct estimation of groundwater residence time. Conversely, the 3H–3He method is inefficient for dating in this karst aquifer due to 3He degassing.
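The binary mixing estimate mentioned above is a two-end-member mass balance: the sampled tracer content is a lever-rule mixture of baseflow and fresh event water. A minimal sketch, with invented δ18O end-member values purely for illustration:

```python
# Two-end-member (binary) mixing model: solve
#   c_sample = f * c_event + (1 - f) * c_baseflow   for f,
# the fraction of fast direct recharge. End members are invented.
def fast_recharge_fraction(c_sample, c_baseflow, c_event):
    return (c_sample - c_baseflow) / (c_event - c_baseflow)

# Hypothetical d18O (permil): baseflow -8.5, event rain -11.0, spring -9.1
print(fast_recharge_fraction(-9.1, -8.5, -11.0))  # 0.24 -> ~24% fast recharge
```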
Abstract:
The use of complementary and alternative medicine (CAM) has increased over the past two decades in Europe. Nonetheless, research investigating the evidence to support its use remains limited. The CAMbrella project, funded by the European Commission, aimed to develop a strategic research agenda, starting by systematically evaluating the state of CAM in the EU. CAMbrella involved 9 work packages covering issues such as the definition of CAM; its legal status, provision and use in the EU; and a synthesis of international research perspectives. Based on the work package reports, we developed a strategic and methodologically robust research roadmap based on expert workshops, a systematic Delphi-based process and a final consensus conference. The CAMbrella project suggests six core areas for research to examine the potential contribution of CAM to the health care challenges faced by the EU. These areas include evaluating the prevalence of CAM use in Europe; EU citizens' needs and attitudes regarding CAM; the safety of CAM; the comparative effectiveness of CAM; the effects of meaning and context on CAM outcomes; and different models for integrating CAM into existing health care systems. CAM research should use methods generally accepted in the evaluation of health services, including comparative effectiveness studies and mixed-methods designs. A research strategy is urgently needed, ideally led by a European CAM coordinating research office dedicated to fostering systematic communication between EU governments, the public, charitable and industry funders, researchers and other stakeholders. A European Centre for CAM should also be established to monitor and further a coordinated research strategy with sufficient funds to commission and promote high-quality, independent research focusing on the public's health needs and pan-European collaboration. There is a disparity between the highly prevalent use of CAM in Europe and the solid knowledge available about it. A strategic approach to CAM research should be established to investigate the identified gaps in knowledge and to address upcoming health care challenges.
Abstract:
BACKGROUND Cyclooxygenase-2 (COX-2) is a key enzyme in the synthesis of pro-inflammatory prostaglandins, and 5-lipoxygenase (5-LO) is the major source of leukotrienes. Their role in IBD has been demonstrated in humans and animal models, but not in dogs with canine chronic enteropathies (CCE). HYPOTHESIS COX-2 and 5-LO are upregulated in dogs with CCE. ANIMALS Fifteen healthy control dogs (HCD), 10 dogs with inflammatory bowel disease (IBD), and 15 dogs with food-responsive diarrhea (FRD). METHODS Prospective study. mRNA expression of COX-2, 5-LO, IL-1β, IL-4, IL-6, TNF, IL-10 and TGF-β was evaluated by quantitative real-time RT-PCR in duodenal and colonic biopsies before and after treatment. RESULTS COX-2 expression in the colon was significantly higher in IBD and FRD before and after treatment (all P < .01). IL-1β was higher in FRD in the duodenum after treatment (P = .021). TGF-β expression was significantly higher in the duodenum of HCD compared to FRD/IBD before treatment (both P < .001) and IBD after treatment (P = .012). There were no significant differences among groups or within groups before and after treatment for IL-4, IL-6, TNF, and IL-10. There was a significant correlation between COX-2 and IL-1β in the duodenum and colon before treatment in FRD and IBD, whereas 5-LO correlated better with IL-6 and TNF. IL-10 and TGF-β were usually correlated. CONCLUSIONS AND CLINICAL IMPORTANCE COX-2 is upregulated in IBD and FRD, whereas IL-1β and TGF-β seem to be important pro- and anti-inflammatory cytokines, respectively. The use of dual COX/5-LO inhibitors could be an interesting alternative in the treatment of CCE.
Abstract:
N. Bostrom’s simulation argument and two additional assumptions imply that we are likely to live in a computer simulation. The argument is based upon the following assumption about the workings of realistic brain simulations: The hardware of a computer on which a brain simulation is run bears a close analogy to the brain itself. To inquire whether this is so, I analyze how computer simulations trace processes in their targets. I describe simulations as fictional, mathematical, pictorial, and material models. Even though the computer hardware does provide a material model of the target, this does not suffice to underwrite the simulation argument because the ways in which parts of the computer hardware interact during simulations do not resemble the ways in which neurons interact in the brain. Further, there are computer simulations of all kinds of systems, and it would be unreasonable to infer that some computers display consciousness just because they simulate brains rather than, say, galaxies.
Abstract:
Background Complete-pelvis segmentation in antero-posterior pelvic radiographs is required to create a patient-specific three-dimensional pelvis model for surgical planning and postoperative assessment in image-free navigation of total hip arthroplasty (THA). Methods A fast and robust framework for accurately segmenting the complete pelvis is presented, consisting of two consecutive modules. In the first module, a three-stage method was developed to delineate the left hemi-pelvis based on statistical appearance and shape models. To handle complex pelvic structures, anatomy-specific information processing techniques were employed. As the input to the second module, the delineated left hemi-pelvis was then reflected about an estimated symmetry line of the radiograph to initialize the right hemi-pelvis segmentation. The right hemi-pelvis was segmented by the same three-stage method. Results Two experiments, conducted on 143 and 40 AP radiographs respectively, demonstrated a mean segmentation accuracy of 1.61±0.68 mm. A clinical study investigating the postoperative assessment of acetabular cup orientation based on the proposed framework revealed an average accuracy of 1.2°±0.9° and 1.6°±1.4° for anteversion and inclination, respectively. Delineation of each radiograph takes less than one minute. Conclusions Although further validation is needed, the preliminary results suggest the clinical applicability of the proposed framework for image-free THA.
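The mirroring step in the second module amounts to reflecting the left-side result about the estimated symmetry line. A toy sketch, assuming the symmetry line has already been estimated and is vertical at x = x0 (the real framework estimates it from the radiograph, and the contour would come from the statistical-model stage):

```python
# Reflect a delineated left hemi-pelvis contour about a vertical symmetry
# line x = x0 to initialize the right hemi-pelvis segmentation.
import numpy as np

def reflect_about_vertical_line(points, x0):
    """points: (N, 2) array of (x, y) contour coordinates in pixels."""
    mirrored = points.copy()
    mirrored[:, 0] = 2.0 * x0 - mirrored[:, 0]  # x -> 2*x0 - x, y unchanged
    return mirrored

left_contour = np.array([[120.0, 80.0], [135.0, 95.0], [150.0, 130.0]])
right_init = reflect_about_vertical_line(left_contour, x0=256.0)  # 512-px image
print(right_init)
```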
Abstract:
Four different literature parameterizations for the formation and evolution of urban secondary organic aerosol (SOA) frequently used in 3-D models are evaluated using a 0-D box model representing the Los Angeles metropolitan region during the California Research at the Nexus of Air Quality and Climate Change (CalNex) 2010 campaign. We constrain the model predictions with measurements from several platforms and compare predictions with particle- and gas-phase observations from the CalNex Pasadena ground site. That site provides a unique opportunity to study aerosol formation close to anthropogenic emission sources with limited recirculation. The model SOA formed only from the oxidation of VOCs (V-SOA) is insufficient to explain the observed SOA concentrations, even when using SOA parameterizations with multi-generation oxidation that produce much higher yields than have been observed in chamber experiments, or when increasing yields to their upper-limit estimates accounting for recently reported losses of vapors to chamber walls. The coupled Weather Research and Forecasting–Community Multiscale Air Quality (WRF-CMAQ) model (version 5.0.1) provides excellent predictions of secondary inorganic particle species but underestimates the observed SOA mass by a factor of 25 when an older VOC-only parameterization is used, which is consistent with many previous model–measurement comparisons for pre-2007 anthropogenic SOA modules in urban areas. Including SOA from primary semi-volatile and intermediate-volatility organic compounds (P-S/IVOCs) following the parameterizations of Robinson et al. (2007), Grieshop et al. (2009), or Pye and Seinfeld (2010) improves model–measurement agreement for mass concentration. The results from the three parameterizations show large differences (e.g., a factor of 3 in SOA mass) and are not well constrained, underscoring the current uncertainties in this area. Our results strongly suggest that precursors other than VOCs, such as P-S/IVOCs, are needed to explain the observed SOA concentrations in Pasadena. All the recent parameterizations overpredict urban SOA formation at long photochemical ages (3 days) compared to observations from multiple sites, which can lead to problems in regional and especially global modeling. However, reducing IVOC emissions by one-half in the model to better match recent IVOC measurements improves SOA predictions at these long photochemical ages. Among the explicitly modeled VOCs, the precursor compounds that contribute the greatest SOA mass are methylbenzenes. Measured polycyclic aromatic hydrocarbons (naphthalenes) contribute 0.7% of the modeled SOA mass. The amounts of SOA mass from diesel vehicles, gasoline vehicles, and cooking emissions are estimated to be 16–27%, 35–61%, and 19–35%, respectively, depending on the parameterization used, which is consistent with the observed fossil fraction of urban SOA, 71 ± 3%. The relative contribution of each source is uncertain by almost a factor of 2 depending on the parameterization used. In-basin biogenic VOCs are predicted to contribute only a few percent to SOA. A regional SOA background of approximately 2.1 μg m-3 is also present due to the long-distance transport of highly aged OA, likely with a substantial contribution from regional biogenic SOA. The percentage of SOA from diesel vehicle emissions is the same, within the estimated uncertainty, as reported in previous work that analyzed the weekly cycles in OA concentrations (Bahreini et al., 2012; Hayes et al., 2013).
However, the modeling work presented here suggests a strong anthropogenic source of modern carbon in SOA, due to cooking emissions, which was not accounted for in those previous studies and which is higher on weekends. Lastly, this work adapts a simple two-parameter model to predict SOA concentration and O/C from urban emissions. This model successfully predicts SOA concentration, and the optimal parameter combination is very similar to that found for Mexico City. This approach provides a computationally inexpensive method for predicting urban SOA in global and climate models. We estimate pollution SOA to account for 26 Tg yr-1 of SOA globally, or 17% of global SOA, one third of which is likely to be non-fossil.
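The abstract's two-parameter urban-SOA model is not spelled out here, but parameterizations of this kind typically describe SOA formed per unit of excess CO rising with photochemical age toward an asymptote. The sketch below shows that generic functional form; both the form and the parameter values are assumptions for illustration, not the paper's fitted model.

```python
# Generic two-parameter urban SOA aging curve (illustrative only):
# SOA per unit excess CO approaches an asymptote A at rate k.
import numpy as np

def soa_per_excess_co(age_h, A=80.0, k=0.05):
    """A: asymptotic yield (ug m-3 ppmv-1, assumed); k: rate (1/h, assumed)."""
    return A * (1.0 - np.exp(-k * age_h))

for age in (6, 12, 24, 72):   # hours of photochemical aging
    print(f"{age:>3} h: {soa_per_excess_co(age):.1f} ug m-3 per ppmv CO")
```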