938 results for implied volatility function models
Abstract:
Background The relevance of mitochondrial dysfunction to the pathogenesis of multiple organ dysfunction and failure in sepsis is controversial. This focused review evaluates the evidence for impaired mitochondrial function in sepsis. Design Review of original studies in experimental sepsis animal models and clinical studies on mitochondrial function in sepsis. In vitro studies solely on cells and tissues were excluded. PubMed was searched for articles published between 1964 and July 2012. Results Data from animal experiments (rodents and pigs) and from clinical studies of septic critically ill patients and human volunteers were included. A clear pattern of sepsis-related changes in mitochondrial function is missing in all species. The wide range of sepsis models, lengths of experiments, presence or absence of fluid resuscitation, and methods used to measure mitochondrial function may contribute to the contradictory findings. A consistent finding was the high variability of mitochondrial function, even under control conditions and between organs. Conclusion Mitochondrial function in sepsis is highly variable, organ specific and changes over the course of sepsis. Patients who will die from sepsis may be more affected than survivors. Nevertheless, the current data from mostly young and otherwise healthy animals do not support the view that mitochondrial dysfunction is the common denominator for multiple organ failure in severe sepsis and septic shock. Whether this holds when underlying comorbidities are present, especially in older patients, should be addressed in further studies.
Abstract:
Since European settlement, there has been a dramatic increase in the density, cover and distribution of woody plants in former grassland and open woodland. There is a widespread belief that shrub encroachment is synonymous with declines in ecosystem functions, and often it is associated with landscape degradation or desertification. Indeed, this decline in ecosystem functioning is considered to be driven largely by the presence of the shrubs themselves. This prevailing paradigm has been the basis for an extensive program of shrub removal, based on the view that it is necessary to reinstate the original open woodland or grassland structure from which shrublands are thought to have been derived. We review existing scientific evidence, particularly focussed on eastern Australia, to question the notion that shrub encroachment leads to declines in ecosystem functions. We then summarise this scientific evidence into two conceptual models aimed at optimising landscape management to maximise the services provided by shrub-encroached areas. The first model seeks to reconcile the apparent conflicts between the patch- and landscape-level effects of shrubs. The second model identifies the ecosystem services derived from different stages of shrub encroachment. We also examined six ecosystem services provided by shrublands (biodiversity, soil C, hydrology, nutrient provision, grass growth and soil fertility) by using published and unpublished data. We demonstrated the following: (1) shrub effects on ecosystems are strongly scale-, species- and environment-dependent and, therefore, no standardised management should be applied to every case; (2) overgrazing dampens the generally positive effect of shrubs, leading to the misleading relationship between encroachment and degradation; (3) woody encroachment per se does not hinder any of the functions or services described above, rather it enhances many of them; (4) no single shrub-encroachment state (including grasslands without shrubs) will maximise all services; rather, the provision of ecosystem goods and services by shrublands requires a mixture of different states; and (5) there has been little rigorous assessment of the long-term effectiveness of removal and no evidence that this improves land condition in most cases. Our review provides the basis for an improved, scientifically based understanding and management of shrublands, so as to balance the competing goals of providing functional habitats, maintaining soil processes and sustaining pastoral livelihoods.
Abstract:
Mathematical models of disease progression predict disease outcomes and are useful epidemiological tools for planners and evaluators of health interventions. The R package gems is a tool that simulates disease progression in patients and predicts the effect of different interventions on patient outcome. Disease progression is represented by a series of events (e.g., diagnosis, treatment and death), displayed in a directed acyclic graph. The vertices correspond to disease states and the directed edges represent events. The package gems allows simulations based on a generalized multistate model that can be described by a directed acyclic graph with continuous transition-specific hazard functions. The user can specify an arbitrary hazard function and its parameters. The model includes parameter uncertainty, does not need to be a Markov model, and may take the history of previous events into account. Applications are not limited to the medical field and extend to other areas where multistate simulation is of interest. We provide a technical explanation of the multistate models used by gems, explain the functions of gems and their arguments, and show a sample application.
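gems itself is an R package; purely as an illustration of the underlying idea described above (simulating transitions over a directed acyclic graph with transition-specific hazards), here is a minimal Python sketch. The three-state model, the state names and the constant hazard rates are invented for this example and are not part of the gems API.

```python
import random

# Hypothetical DAG: diagnosis -> treatment -> death, plus a direct
# diagnosis -> death edge. Constant hazards (exponential waiting times)
# keep the sketch short; gems itself accepts arbitrary hazard functions.
MODEL = {
    "diagnosis": {"treatment": 0.10, "death": 0.02},  # events per month
    "treatment": {"death": 0.01},
    "death": {},
}

def simulate_patient(rng=random):
    """Walk one patient through the DAG; return the (state, time) path."""
    t, state = 0.0, "diagnosis"
    path = [(state, 0.0)]
    while MODEL[state]:
        # Competing risks: draw a waiting time for every outgoing edge
        # and take the earliest event.
        draws = {nxt: rng.expovariate(rate) for nxt, rate in MODEL[state].items()}
        state = min(draws, key=draws.get)
        t += draws[state]
        path.append((state, round(t, 2)))
    return path

if __name__ == "__main__":
    for _ in range(3):
        print(simulate_patient())
```

Repeating such walks for many simulated patients, and re-drawing the hazard parameters each time, yields outcome distributions that reflect parameter uncertainty, which is the mode of use the abstract describes.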
Abstract:
Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20–30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks, however, are purely descriptive models and thus are conceptually unable to predict fundamental features of iEEG time series, e.g., in the context of therapeutic brain stimulation. In this paper we present first steps towards overcoming the limitations of functional network analysis, by showing that its results are implied by a simple predictive model of time-sliced iEEG time series. More specifically, we learn distinct graphical models (so-called Chow–Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks based on thresholding of the absolute-value Pearson correlation coefficient (CC) matrix. The networks thus obtained are then compared, using various measures, to those derived in the classical way from the empirical CC matrix. In the high-threshold limit we find (a) an excellent agreement between the two networks and (b) key features of periictal networks as previously reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation, setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL trees as simple, spatially predictive models for periictal iEEG data. Moreover, we suggest straightforward generalizations of the CL approach for modeling the temporal features of iEEG signals as well.
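As a hedged sketch of the pipeline described above (not the authors' code), the following Python fragment learns a Chow–Liu tree for continuous signals, using the fact that under a Gaussian assumption pairwise mutual information is a monotone function of the correlation, and builds the classical functional network by thresholding the absolute CC matrix. The channel count, threshold and injected dependency are invented for the demo.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_edges(X):
    """Chow-Liu tree for signals X (channels x samples).

    Gaussian assumption: I(i, j) = -0.5 * log(1 - rho_ij^2), so the
    maximum-MI spanning tree can be built from the correlations alone.
    """
    C = np.corrcoef(X)
    mi = -0.5 * np.log(1.0 - np.clip(C**2, 0.0, 1.0 - 1e-12))
    np.fill_diagonal(mi, 0.0)
    tree = minimum_spanning_tree(-mi)   # negate: scipy finds a *minimum* tree
    rows, cols = tree.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))

def functional_network(X, threshold=0.7):
    """Classical functional network: threshold the absolute CC matrix."""
    A = np.abs(np.corrcoef(X))
    np.fill_diagonal(A, 0.0)
    return A >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((6, 1000))   # 6 hypothetical iEEG channels
    X[1] += 0.8 * X[0]                   # inject one spatial dependency
    print("CL tree edges:", chow_liu_edges(X))
    print("thresholded CC network:\n", functional_network(X).astype(int))
```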
Abstract:
The quantification of CO2 emissions from anthropogenic land use and land use change (eLUC) is essential to understand the drivers of the atmospheric CO2 increase and to inform climate change mitigation policy. Reported values in synthesis reports are commonly derived from different approaches (observation-driven bookkeeping and process modelling), but recent work has emphasized that inconsistencies between methods may imply substantial differences in eLUC estimates. However, a consistent quantification is lacking, and no concise modelling protocol for the separation of primary and secondary components of eLUC has been established. Here, we review differences between eLUC quantification methods and apply an Earth System Model (ESM) of Intermediate Complexity to quantify them. We find that the magnitude of effects due merely to conceptual differences between ESM-based and offline vegetation model-based quantifications is ~20% for today. Under a future business-as-usual scenario, differences tend to increase further, owing to slowing land conversion rates and an increasing impact of altered environmental conditions on land-atmosphere fluxes. We establish how coupled Earth System Models may be applied to separate the secondary component fluxes of eLUC arising from the replacement of potential C sinks/sources and from the land use feedback, and we show that secondary fluxes derived from offline vegetation models are conceptually and quantitatively identical to neither of these, nor to their sum. Therefore, we argue that synthesis studies should resort to the "least common denominator" of the different methods, following the bookkeeping approach where only primary land use emissions are quantified under the assumption of constant environmental boundary conditions.
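To make the "least common denominator" concrete: a bookkeeping estimate of primary land use emissions only needs carbon densities of the land cover types and the areas converted, with environmental boundary conditions held constant. The densities and areas below are invented illustrative numbers, not values from the study.

```python
# Carbon densities per land cover type (tC/ha); illustrative only.
CARBON_DENSITY = {"forest": 150.0, "cropland": 5.0, "pasture": 7.0}

def primary_eluc(conversions):
    """Committed primary emissions (tC) summed over (src, dst, area_ha)."""
    return sum(area * (CARBON_DENSITY[src] - CARBON_DENSITY[dst])
               for src, dst, area in conversions)

if __name__ == "__main__":
    flux = primary_eluc([("forest", "cropland", 1.0e6),
                         ("forest", "pasture", 0.5e6)])
    print(f"primary eLUC: {flux / 1e6:.1f} MtC")
```

The secondary components discussed in the abstract (replaced sinks/sources and the land use feedback) are precisely what this scheme leaves out; quantifying them requires coupled ESM simulations.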
Abstract:
67P/Churyumov-Gerasimenko (67P) is a Jupiter-family comet and the object of investigation of the European Space Agency mission Rosetta. This report presents the first full 3D simulation results for 67P's neutral gas coma. In this study we include results from a direct simulation Monte Carlo method, from a hydrodynamic code, and from a purely geometric calculation which computes the total illuminated surface area of the nucleus. All models include the triangulated 3D shape model of 67P as well as realistic illumination and shadowing conditions. The basic concept is the assumption that the illumination conditions on the nucleus are the main driver of the comet's gas activity. As a consequence, the total production rate of 67P varies as a function of solar insolation. The best agreement between the models and the data is achieved when gas fluxes on the night side are in the range of 7% to 10% of the maximum flux, accounting for contributions from the most volatile components. To validate the output of our numerical simulations we compare the results of all three models to in situ gas number density measurements from the ROSINA COPS instrument. With all three models we are able to reproduce the overall features of these local neutral number density measurements for the period between early August 2014 and January 1, 2015. Some details in the measurements are not reproduced and warrant further investigation and refinement of the models. However, the overall assumption that illumination conditions on the nucleus are at least an important driver of the gas activity is validated by the models. According to our simulation results, the total production rate of 67P was constant between August and November 2014, at about 1 × 10²⁶ molecules s⁻¹.
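A minimal sketch of the purely geometric ingredient, under the abstract's assumption that local gas flux scales with solar illumination and that the night side emits a fixed fraction of the peak flux. The octahedron stand-in nucleus and all numbers are placeholders, not the 67P shape model, and self-shadowing between facets is ignored.

```python
import numpy as np

def facet_fluxes(vertices, triangles, sun_dir, night_frac=0.07):
    """Relative flux per facet: cosine of the solar zenith angle,
    floored at night_frac on the night side (7-10% in the abstract)."""
    v0, v1, v2 = (vertices[triangles[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)          # outward normals, length 2A
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    normals = cross / (2.0 * areas[:, None])
    mu = normals @ sun_dir                      # illumination cosine
    return areas, np.maximum(mu, night_frac)

if __name__ == "__main__":
    # A single octahedron as a stand-in triangulated "nucleus".
    verts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                      [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
    tris = np.array([[0, 2, 4], [2, 1, 4], [1, 3, 4], [3, 0, 4],
                     [2, 0, 5], [1, 2, 5], [3, 1, 5], [0, 3, 5]])
    areas, flux = facet_fluxes(verts, tris, np.array([1.0, 0.0, 0.0]))
    print("total relative production:", float((areas * flux).sum()))
```

Scaling the summed, illumination-weighted area by a surface production law is what ties the total production rate to solar insolation as stated above.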
Abstract:
The treatment of infectious diseases affecting osseointegrated implants in function has become a demanding issue in implant dentistry. Since the early 1990s, preclinical data from animal studies have provided important insights into the etiology, pathogenesis and therapy of peri-implant diseases. Established lesions in animals have shown many features in common with those found in human biopsy material. The current review focuses on animal studies, employing different models to induce peri-implant mucositis and peri-implantitis.
Abstract:
The ultimate goals of periodontal therapy remain the complete regeneration of those periodontal tissues lost to the destructive inflammatory-immune response, or to trauma, with tissues that possess the same structure and function, and the re-establishment of a sustainable health-promoting biofilm from one characterized by dysbiosis. This volume of Periodontology 2000 discusses the multiple facets of a transition from the therapeutic empiricism of the late 1960s toward regenerative therapies founded on a clearer understanding of the biophysiology of normal structure and function. This introductory article provides an overview of the requirements of appropriate in vitro laboratory models (e.g. cell culture), of preclinical (i.e. animal) models and of human studies for periodontal wound and bone repair. Laboratory studies may provide valuable fundamental insights into the basic mechanisms involved in wound repair and regeneration, but they also suffer from a unidimensional and simplistic approach that does not account for the complexities of the in vivo situation, in which multiple cell types and interactions all contribute to definitive outcomes. Therefore, such laboratory studies require validatory research employing preclinical models specifically designed to demonstrate proof-of-concept efficacy, preliminary safety and adaptation to human disease scenarios. Small animal models provide the most economical and logistically feasible preliminary approaches, but their outcomes do not necessarily translate to larger animal or human models. The advantages and limitations of all periodontal-regeneration models need to be carefully considered when planning investigations, to ensure that the optimal design is adopted to answer the specific research question posed. Future challenges lie in the areas of stem cell research, scaffold design, cell delivery and the choice of growth factors, along with research to ensure appropriate gingival coverage in order to prevent gingival recession during the healing phase.
Abstract:
Many of the clinical manifestations of hyperthyroidism are due to the ability of thyroid hormones to alter myocardial contractility and cardiovascular hemodynamics, leading to cardiovascular impairment. In contrast, recent studies also highlight potential beneficial effects of thyroid hormone administration in the clinical or preclinical treatment of diseases such as atherosclerosis, obesity and diabetes, or as a new therapeutic approach in demyelinating disorders. In these contexts, and with a view to developing thyroid hormone-based therapeutic strategies, it is important to analyze undesirable secondary effects on the heart. Animal models of experimentally induced hyperthyroidism therefore represent important tools for investigating and monitoring changes in cardiac function. In the present study we use high-field cardiac MRI to monitor longitudinally the effects of prolonged thyroid hormone (triiodothyronine) administration, focusing on murine left ventricular function. Using a 9.4 T small horizontal-bore animal scanner, cinematographic MRI was used to analyze changes in ejection fraction, wall thickening, systolic index and fractional shortening. Cardiac MRI investigations were performed after sustained cycles of triiodothyronine administration and treatment arrest in adolescent (8-week-old) and adult (24-week-old) female C57BL/6N mice. Three weeks of triiodothyronine supplementation led to an impairment of cardiac performance, with a decline in ejection fraction, wall thickening, systolic index and fractional shortening in both age groups, but to a greater extent in the adolescent mice. After a 3-week cessation of hormonal treatment, however, only the young mice were able to partly restore cardiac performance; the adult mice lacked this recovery potential, indicating the presence of a chronically developed heart pathology.
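The cine-MRI readouts named above have standard textbook definitions; the study's exact analysis pipeline is not given in the abstract. A quick sketch with invented example values:

```python
def ejection_fraction(edv, esv):
    """EF (%) from end-diastolic and end-systolic LV volumes."""
    return 100.0 * (edv - esv) / edv

def fractional_shortening(edd, esd):
    """FS (%) from end-diastolic and end-systolic LV diameters."""
    return 100.0 * (edd - esd) / edd

if __name__ == "__main__":
    print(f"EF = {ejection_fraction(60.0, 25.0):.1f} %")    # volumes in uL
    print(f"FS = {fractional_shortening(3.8, 2.5):.1f} %")  # diameters in mm
```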
Abstract:
CONTEXT Hyperthyroidism is an established risk factor for atrial fibrillation (AF), but information concerning the association with variation within the normal range of thyroid function, and concerning subgroups at risk, is lacking. OBJECTIVE This study aimed to investigate the association between normal thyroid function and AF prospectively and to explore potential differential risk patterns. DESIGN, SETTING, AND PARTICIPANTS From the Rotterdam Study we included 9166 participants ≥45 y with TSH and/or free T4 (FT4) measurements and AF assessment (1997-2012; median follow-up, 6.8 y), with 399 prevalent and 403 incident AF cases. MAIN OUTCOME MEASURES Outcome measures were 3-fold: 1) hazard ratios (HRs) for the risk of incident AF from Cox proportional-hazards models, 2) 10-year absolute risks taking the competing risk of death into account, and 3) the discriminative ability gained by adding FT4 to the CHARGE-AF simple model, an established prediction model for AF. RESULTS Higher FT4 levels were associated with higher risks of AF (HR, 1.63; 95% confidence interval, 1.19-2.22) when comparing the highest with the lowest quartile. Absolute 10-year risks increased with higher FT4 from 1% to 9% in participants ≤65 y and from 6% to 12% in subjects ≥65 y. Discrimination of the prediction model improved when adding FT4 to the simple model (c-statistic, 0.722 vs 0.729; P = .039). TSH levels were not associated with AF. CONCLUSIONS There is an increased risk of AF with higher FT4 levels within the normal range, especially in younger subjects. Adding FT4 to the simple model slightly improved the discrimination of risk prediction.
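A hedged sketch of the core analysis above (a Cox proportional-hazards model with FT4 as predictor) using the lifelines library; the synthetic data, column names and effect sizes are invented and bear no relation to the Rotterdam Study data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # pip install lifelines

# Synthetic cohort: follow-up time (years), AF event indicator, FT4 (pmol/L).
rng = np.random.default_rng(1)
n = 2000
ft4 = rng.normal(15.0, 2.0, n)
hazard = 0.01 * np.exp(0.08 * (ft4 - 15.0))   # built-in FT4 effect (invented)
t_event = rng.exponential(1.0 / hazard)
df = pd.DataFrame({
    "years": np.minimum(t_event, 15.0),       # administrative censoring
    "af": (t_event < 15.0).astype(int),
    "ft4": ft4,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="af")
print(cph.summary[["coef", "exp(coef)", "p"]])     # exp(coef) = hazard ratio
print("c-statistic:", round(cph.concordance_index_, 3))
```

The reported c-statistic comparison (0.722 vs 0.729) corresponds to fitting the prediction model with and without FT4 and comparing concordance, with the caveat that the actual study additionally accounted for the competing risk of death.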
Abstract:
We present a novel surrogate model-based global optimization framework allowing a large number of function evaluations. The method, called SpLEGO, is based on a multi-scale expected improvement (EI) framework relying on both sparse and local Gaussian process (GP) models. First, a bi-objective approach relying on a global sparse GP model is used to determine potential next sampling regions. Local GP models are then constructed within each selected region. The method subsequently employs the standard expected improvement criterion to deal with the exploration-exploitation trade-off within the selected local models, leading to a decision on where to perform the next function evaluation(s). The potential of our approach is demonstrated using the so-called Sparse Pseudo-input GP as the global model. The algorithm is tested on four benchmark problems, whose number of starting points ranges from 10² to 10⁴. Our results show that SpLEGO is effective and capable of solving problems with a large number of starting points, and it even provides significant advantages when compared with state-of-the-art EI algorithms.
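The expected improvement criterion used within the selected local models is standard; below is a minimal implementation, given the posterior mean and standard deviation of any GP model (the local GP construction and the sparse global stage of SpLEGO are not reproduced here).

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Standard EI for minimization at candidate points.

    mu, sigma : GP posterior mean and standard deviation
    f_best    : best (lowest) function value observed so far
    """
    sigma = np.maximum(sigma, 1e-12)     # avoid division by zero
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Example: pick the candidate with the largest EI.
mu = np.array([0.2, -0.1, 0.05])
sigma = np.array([0.3, 0.05, 0.6])
print(int(np.argmax(expected_improvement(mu, sigma, f_best=0.0))))
```

EI is large where the predicted mean is promising (exploitation) or the uncertainty is high (exploration), which is exactly the trade-off the method delegates to the local models.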
Abstract:
OBJECTIVES Chewing efficiency may be evaluated using a cohesive specimen, especially in elderly or dysphagic patients. The aim of this study was to evaluate three two-coloured chewing gums for a colour-mixing ability test and to validate a new purpose-built software (ViewGum©). METHODS Dentate participants (dentate group) and edentulous patients with mandibular two-implant overdentures (IOD group) were recruited. First, the dentate group chewed three different types of two-coloured gum (gum1-gum3) for 5, 10, 20, 30 and 50 chewing cycles. Subsequently, the number of chewing cycles with the highest intra- and inter-rater agreement was determined visually by applying a scale (SA) and opto-electronically (ViewGum©, Bland-Altman analysis). The ViewGum© software semi-automatically determines the variance of hue (VOH); inadequate mixing presents with a larger VOH than complete mixing. Secondly, the dentate group and the IOD group were compared. RESULTS The dentate group comprised 20 participants (10 female, 30.3±6.7 years); the IOD group comprised 15 participants (10 female, 74.6±8.3 years). Intra-rater and inter-rater agreement (SA) was very high at 20 chewing cycles (95.00-98.75%). Gums 1-3 showed different colour-mixing characteristics as a function of the number of chewing cycles: gum1 showed a logarithmic association, whereas gum2 and gum3 demonstrated more linear behaviour. However, the number of chewing cycles could be predicted in all specimens from VOH (all p<0.0001, mixed linear regression models). Both analyses proved discriminative with respect to dental state. CONCLUSION ViewGum© proved to be a reliable and discriminative tool for the opto-electronic assessment of chewing efficiency, provided an elastic specimen is chewed for 20 cycles, and can be recommended for the evaluation of chewing efficiency in clinical and research settings. CLINICAL SIGNIFICANCE Chewing is a complex function of the oro-facial structures and the central nervous system. The application of the proposed assessments of chewing function in geriatrics or special care dentistry could help visualise oro-functional or dental comorbidities in dysphagic patients or those suffering from protein-energy malnutrition.
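The abstract does not spell out ViewGum's algorithm, so the following is only a hedged sketch of the idea behind a variance-of-hue (VOH) index: an unmixed two-coloured specimen shows two well-separated hue clusters (large VOH), whereas a fully mixed one shows a single blended hue (small VOH). Hue is an angle, so a circular variance is used; the toy images are invented.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def variance_of_hue(rgb):
    """Circular variance of hue for an RGB image with values in [0, 1]."""
    hue = rgb_to_hsv(rgb.reshape(-1, 3))[:, 0]        # hue in [0, 1)
    angles = 2.0 * np.pi * hue
    resultant = np.abs(np.mean(np.exp(1j * angles)))  # mean resultant length
    return 1.0 - resultant                            # 0 = single hue

if __name__ == "__main__":
    unmixed = np.zeros((10, 10, 3))
    unmixed[:, :5] = [1.0, 0.0, 0.0]                  # red half
    unmixed[:, 5:] = [0.0, 0.0, 1.0]                  # blue half
    mixed = np.full((10, 10, 3), [0.5, 0.0, 0.5])     # uniform blend
    print("unmixed VOH:", round(variance_of_hue(unmixed), 3))
    print("mixed   VOH:", round(variance_of_hue(mixed), 3))
```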
Abstract:
We present recent improvements in the modeling of the disruption of strength-dominated bodies using the Smoothed Particle Hydrodynamics (SPH) technique. The improvements include an updated strength model and a friction model, which are successfully tested by comparison with laboratory experiments. In the modeling of catastrophic disruptions of asteroids, a comparison between the old and new strength models shows no significant deviation for targets which are initially non-porous, fully intact and of homogeneous structure (such as the targets used in the study by Benz and Asphaug, 1999). However, in many cases (e.g. initially partly or fully damaged targets and rubble-pile structures) we find it crucial that friction is taken into account and that the material has a pressure-dependent shear strength. Our investigations of the catastrophic disruption threshold Q*_D as a function of target properties, for target sizes up to a few 100 km, show that a fully damaged target modeled without friction has a Q*_D which is significantly (5-10 times) smaller than when friction is included. When the energy dissipation due to compaction (pore crushing) is taken into account as well, the targets become even stronger (Q*_D is increased by a factor of 2-3). On the other hand, cohesion is found to have a negligible effect at large scales and is only important at scales ≲1 km. Our results show the relative effects of strength, friction and porosity on the outcome of collisions among small (≲1000 km) bodies. These results will be used in a future study to improve existing scaling laws for the outcome of collisions (e.g. Leinhardt and Stewart, 2012).
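The abstract does not give the friction model's functional form; as a loudly hypothetical sketch, a pressure-dependent shear strength of the Lundborg type commonly used in such SPH codes rises with pressure via a friction coefficient and saturates at a limiting strength (all parameter values below are invented):

```python
def shear_strength(p, y0=1.0e4, y_lim=3.5e9, mu=1.0):
    """Lundborg-type yield strength (Pa): rises with pressure, saturates.

    p     : pressure (Pa)
    y0    : cohesion (strength at zero pressure)
    y_lim : limiting strength at high pressure
    mu    : coefficient of internal friction
    """
    return y0 + mu * p / (1.0 + mu * p / (y_lim - y0))

if __name__ == "__main__":
    for p in (0.0, 1.0e6, 1.0e9, 1.0e12):
        print(f"p = {p:9.2e} Pa -> Y = {shear_strength(p):9.2e} Pa")
```

A damaged, cohesionless material corresponds to y0 ≈ 0, where strength comes from friction alone; this is the regime in which the abstract reports the 5-10-fold increase of Q*_D.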
Abstract:
Despite the strong increase in observational data on extrasolar planets, the processes that led to the formation of these planets are still not well understood. However, thanks to the large number of extrasolar planets that have been discovered, it is now possible to look at the planets as a population that puts statistical constraints on theoretical formation models. A method that uses these constraints is planetary population synthesis, in which synthetic planetary populations are generated and compared to the actual population. The key element of the population synthesis method is a global model of planet formation and evolution. These models directly predict observable planetary properties based on properties of the natal protoplanetary disc, linking two important classes of astrophysical objects. To do so, global models build on the simplified results of many specialized models that each address one specific physical mechanism. We thoroughly review the physics of the sub-models included in global formation models. The sub-models can be classified as models describing the protoplanetary disc (of gas and solids), those that describe one (proto)planet (its solid core, gaseous envelope and atmosphere), and finally those that describe the interactions (orbital migration and N-body interaction). We compare the approaches taken in different global models, discuss the links between specialized and global models, and identify physical processes that require improved descriptions in future work. We then briefly address important results of planetary population synthesis, such as the planetary mass function and the mass-radius relationship. With these statistical results, the global effects of physical mechanisms occurring during planet formation and evolution become apparent, and the specialized models describing them can be put to the observational test. Owing to their nature as meta-models, global models depend on the results of specialized models, and therefore on the development of the field of planet formation theory as a whole. Because there are important uncertainties in this theory, it is likely that global models will undergo significant modifications in the future. Despite these limitations, global models can already yield many testable predictions. With future global models addressing the geophysical characteristics of the synthetic planets, it should eventually become possible to make predictions about the habitability of planets based on their formation and evolution.
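As a purely structural sketch (invented names, toy physics) of how a global model composes the sub-model classes listed above, with the disc, the planets and their interactions advanced together:

```python
from dataclasses import dataclass

@dataclass
class Disc:
    gas_mass: float     # evolved by viscous accretion / photoevaporation
    solid_mass: float   # evolved by drift, growth, accretion onto planets

@dataclass
class Planet:
    core_mass: float      # solid core
    envelope_mass: float  # gaseous envelope
    orbit_au: float       # changed by orbital migration

def evolve(disc, planets, dt):
    """One toy time step coupling the sub-models (illustrative only)."""
    for p in planets:
        accreted = min(0.01 * dt, disc.solid_mass)  # toy core growth
        p.core_mass += accreted
        disc.solid_mass -= accreted
        p.orbit_au *= 1.0 - 0.001 * dt              # toy inward migration
    disc.gas_mass *= 1.0 - 0.01 * dt                # toy disc dispersal

if __name__ == "__main__":
    disc = Disc(gas_mass=10.0, solid_mass=0.1)
    planets = [Planet(core_mass=0.01, envelope_mass=0.0, orbit_au=5.0)]
    for _ in range(100):
        evolve(disc, planets, dt=1.0)
    print(disc, planets[0])
```

Population synthesis then draws many initial disc properties, runs such a forward model for each, and compares the resulting synthetic population to the observed one.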
Abstract:
We propose a way to incorporate non-tariff barriers (NTBs) into the four workhorse models of the modern trade literature within computable general equilibrium (CGE) models. CGE models feature intermediate linkages and thus allow us to study global value chains (GVCs). We show that the Ethier-Krugman monopolistic competition model, the Melitz firm heterogeneity model and the Eaton-Kortum model can each be defined as an Armington model with generalized marginal costs, generalized trade costs and a demand externality. As is already known in the literature, in both the Ethier-Krugman model and the Melitz model generalized marginal costs are a function of the amount of factor input bundles. In the Melitz model generalized marginal costs are also a function of the price of the factor input bundles: lower factor prices raise the number of firms that can enter the market profitably (the extensive margin), reducing the generalized marginal costs of a representative firm. For the same reason the Melitz model features a demand externality: in a larger market more firms can enter. We implement the different models in a CGE setting with multiple sectors, intermediate linkages, non-homothetic preferences and detailed data on trade costs. We find the largest welfare effects from trade cost reductions in the Melitz model. We also employ the Melitz model to mimic changes in NTBs with a fixed-cost character by analysing the effect of changes in fixed trade costs. While we work here with a model calibrated to the GTAP database, the methods developed can also be applied to CGE models based on the WIOD database.
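As a hedged illustration of the common Armington structure referred to above (notation invented here, not taken from the paper), the CES import share of goods from origin i in destination j can be written with generalized marginal costs c_i and generalized trade costs tau_ij as

```latex
% CES Armington import share with generalized costs (illustrative):
s_{ij} \;=\; \frac{\left(\tau_{ij}\, c_i\right)^{1-\sigma}}
                  {\sum_k \left(\tau_{kj}\, c_k\right)^{1-\sigma}}
```

The three workhorse models then differ only in how c_i, tau_ij and the demand externality are built from primitives, e.g. with c_i depending on the amount (and, in the Melitz case, the price) of the factor input bundles.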