996 results for Conceptual Modeling


Relevance:

20.00%

Publisher:

Abstract:

We study the asymmetric and dynamic dependence between financial assets and demonstrate, from the perspective of risk management, the economic significance of dynamic copula models. First, we construct stock and currency portfolios sorted on different characteristics (ex ante beta, coskewness, cokurtosis and order flows), and find substantial evidence of dynamic evolution between the high beta (respectively, coskewness, cokurtosis and order flow) portfolios and the low beta (coskewness, cokurtosis and order flow) portfolios. Second, using three different dependence measures, we show the presence of asymmetric dependence between these characteristic-sorted portfolios. Third, we use a dynamic copula framework based on Creal et al. (2013) and Patton (2012) to forecast the portfolio Value-at-Risk of long-short (high minus low) equity and FX portfolios. We use several widely used univariate and multivariate VaR models for the purpose of comparison. Backtesting our methodology, we find that the asymmetric dynamic copula models provide more accurate forecasts, in general, and, in particular, perform much better during the recent financial crises, indicating the economic significance of incorporating dynamic and asymmetric dependence in risk management.
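The VaR backtesting step can be illustrated with a minimal sketch: compute an empirical (historical-simulation) VaR for a synthetic high-minus-low portfolio and check its violation rate. This is a simplified stand-in for the dynamic copula forecasts described above; the return series and all parameter values are invented for illustration.

```python
import numpy as np

# Synthetic daily returns for high- and low-beta portfolios
# (illustrative stand-ins for the characteristic-sorted portfolios).
rng = np.random.default_rng(0)
high = rng.normal(0.0005, 0.02, 2500)
low = rng.normal(0.0003, 0.01, 2500)
long_short = high - low                     # high-minus-low portfolio

def historical_var(returns, alpha=0.01):
    """Empirical 99% VaR: the loss exceeded on a fraction alpha of days."""
    return -np.quantile(returns, alpha)

def violation_rate(returns, var):
    """Backtest: fraction of days whose loss exceeded the VaR level."""
    return float(np.mean(-returns > var))

var99 = historical_var(long_short)
rate = violation_rate(long_short, var99)    # close to 0.01 in-sample
```

A dynamic copula model would replace the in-sample empirical quantile with a one-day-ahead forecast of the joint distribution of the two portfolio legs; the backtest logic stays the same.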

Relevance:

20.00%

Publisher:

Abstract:

We investigate the dynamic and asymmetric dependence structure between equity portfolios from the US and UK. We demonstrate the statistical significance of dynamic asymmetric copula models in modelling and forecasting market risk. First, we construct "high-minus-low" equity portfolios sorted on beta, coskewness, and cokurtosis. We find substantial evidence of dynamic and asymmetric dependence between characteristic-sorted portfolios. Second, we consider a dynamic asymmetric copula model by combining the generalized hyperbolic skewed t copula with the generalized autoregressive score (GAS) model to capture both the multivariate non-normality and the dynamic and asymmetric dependence between equity portfolios. We demonstrate its usefulness by evaluating the forecasting performance of Value-at-Risk and Expected Shortfall for the high-minus-low portfolios. From back-testing, we find consistent and robust evidence that our dynamic asymmetric copula model provides the most accurate forecasts, indicating the importance of incorporating the dynamic and asymmetric dependence structure in risk management.

Relevance:

20.00%

Publisher:

Abstract:

PECUBE is a three-dimensional thermal-kinematic code capable of solving the heat production-diffusion-advection equation under a temporally varying surface boundary condition. It was initially developed to assess the effects of time-varying surface topography (relief) on low-temperature thermochronological datasets. Thermochronometric ages are predicted by tracking the time-temperature histories of rock-particles ending up at the surface and by combining these with various age-prediction models. In the decade since its inception, the PECUBE code has been under continuous development as its use became wider and addressed different tectonic-geomorphic problems. This paper describes several major recent improvements in the code, including its integration with an inverse-modeling package based on the Neighborhood Algorithm, the incorporation of fault-controlled kinematics, several different ways to address topographic and drainage change through time, the ability to predict subsurface (tunnel or borehole) data, prediction of detrital thermochronology data and a method to compare these with observations, and the coupling with landscape-evolution (or surface-process) models. Each new development is described together with one or several applications, so that the reader and potential user can clearly assess and make use of the capabilities of PECUBE. We end with describing some developments that are currently underway or should take place in the foreseeable future.

Relevance:

20.00%

Publisher:

Abstract:

This paper discusses current strategies for the development of AIDS vaccines which allow immunization to disturb the natural course of HIV at different stages of its life cycle. Mathematical models describing the main biological phenomena (i.e. virus- and vaccine-induced T4 cell growth; virus- and vaccine-induced activation of latently infected T4 cells; incremental changes in immune response as infection progresses; antibody-dependent enhancement and neutralization of infection) and allowing for different vaccination strategies serve as a background for computer simulations. The mathematical models reproduce updated information on the behavior of immune cells, antibody concentrations and free viruses. The results point to some controversial outcomes of an AIDS vaccine, such as an early increase in virus concentration among vaccinated when compared to nonvaccinated individuals.
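The kind of within-host dynamics such models describe can be sketched with the classic target-cell-limited ODE model (healthy T4 cells, infected cells, free virus) integrated with forward Euler. This is a generic textbook model, not the paper's vaccine model, and every parameter value below is illustrative; with these numbers the basic reproduction number beta*p*(s/d)/(delta*c) = 2, so infection establishes.

```python
# Classic target-cell-limited model of within-host viral dynamics:
#   dT/dt = s - d*T - beta*T*V      (healthy T4 cells)
#   dI/dt = beta*T*V - delta*I      (productively infected cells)
#   dV/dt = p*I - c*V               (free virus)
# Forward-Euler integration; all parameter values are illustrative.
def simulate(days=500, dt=0.01, s=10.0, d=0.01,
             beta=5e-5, delta=0.5, p=100.0, c=5.0):
    T, I, V = 1000.0, 0.0, 1e-3
    for _ in range(int(days / dt)):
        dT = s - d * T - beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dT * dt
        I += dI * dt
        V += dV * dt
    return T, I, V

T_end, I_end, V_end = simulate()   # T4 cells depleted, virus persists
```

A vaccine term would enter as an extra source of immune activation or antibody-mediated clearance; the simulation scheme is unchanged.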

Relevance:

20.00%

Publisher:

Abstract:

Computational modeling has become a widely used tool for unraveling the mechanisms of higher level cooperative cell behavior during vascular morphogenesis. However, experimenting with published simulation models or adding new assumptions to those models can be daunting for novice and even for experienced computational scientists. Here, we present a step-by-step, practical tutorial for building cell-based simulations of vascular morphogenesis using the Tissue Simulation Toolkit (TST). The TST is a freely available, open-source C++ library for developing simulations with the two-dimensional cellular Potts model, a stochastic, agent-based framework to simulate collective cell behavior. We will show the basic use of the TST to simulate and experiment with published simulations of vascular network formation. Then, we will present step-by-step instructions and explanations for building a recent simulation model of tumor angiogenesis. Demonstrated mechanisms include cell-cell adhesion, chemotaxis, cell elongation, haptotaxis, and haptokinesis.
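The core loop of the cellular Potts model — Metropolis copy attempts driven by contact energies and an area constraint — can be sketched for a single cell in a medium on a small periodic grid. The TST implements this in C++; the Python below is only a toy illustration with made-up parameter values.

```python
import numpy as np

# Toy 2D cellular Potts model: one cell (id 1) in medium (id 0).
rng = np.random.default_rng(1)
L = 16
grid = np.zeros((L, L), dtype=int)
grid[6:10, 6:10] = 1                      # initial 4x4 cell, area 16
J = {(0, 1): 16.0, (1, 0): 16.0, (0, 0): 0.0, (1, 1): 0.0}
TARGET_AREA, LAM, TEMP = 16, 1.0, 8.0
NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def total_energy(g):
    """Hamiltonian: adhesion over neighbor pairs + area constraint."""
    e = 0.0
    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (0, 1)):   # each pair counted once
                e += J[(g[x, y], g[(x + dx) % L, (y + dy) % L])]
    return e + LAM * (int((g == 1).sum()) - TARGET_AREA) ** 2

def copy_attempt(g):
    """Metropolis step: try to copy a neighbor's id into a random site."""
    x, y = int(rng.integers(L)), int(rng.integers(L))
    dx, dy = NBRS[int(rng.integers(4))]
    src = g[(x + dx) % L, (y + dy) % L]
    if src == g[x, y]:
        return
    old, before = g[x, y], total_energy(g)
    g[x, y] = src
    dH = total_energy(g) - before
    if dH > 0 and rng.random() >= np.exp(-dH / TEMP):
        g[x, y] = old                      # reject: revert the copy

for _ in range(3000):
    copy_attempt(grid)
```

Chemotaxis, haptotaxis, and cell elongation enter as additional terms in the energy difference dH; a production implementation computes dH locally rather than re-evaluating the full Hamiltonian.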

Relevance:

20.00%

Publisher:

Abstract:

Åknes is an active complex large rockslide of approximately 30–40 Mm³ located within the Proterozoic gneisses of western Norway. The observed surface displacements indicate that this rockslide is divided into several blocks moving in different directions at velocities of between 3 and 10 cm year⁻¹. Because of regional safety issues and economic interests, this rockslide has been extensively monitored since 2004. The understanding of the deformation mechanism is crucial for the implementation of a viable monitoring system. Detailed field investigations and the analysis of a digital elevation model (DEM) indicate that the movements and the block geometry are controlled by the main schistosity (S1) in gneisses, folds, joints and regional faults. Such complex slope deformations use pre-existing structures, but also result in new failure surfaces and deformation zones, like preferential rupture in fold-hinge zones. Our interpretation provides a consistent conceptual three-dimensional (3D) model for the movements measured by various methods that is crucial for numerical stability modelling. In addition, this reinterpretation of the morphology confirms that in the past several rockslides occurred from the Åknes slope. They may be related to scars propagating along the vertical foliation in fold hinges. Finally, a model of the evolution of the Åknes slope is presented.

Relevance:

20.00%

Publisher:

Abstract:

1. Species distribution modelling is used increasingly in both applied and theoretical research to predict how species are distributed and to understand attributes of species' environmental requirements. In species distribution modelling, various statistical methods are used that combine species occurrence data with environmental spatial data layers to predict the suitability of any site for that species. While the number of data sharing initiatives involving species' occurrences in the scientific community has increased dramatically over the past few years, various data quality and methodological concerns related to using these data for species distribution modelling have not been addressed adequately. 2. We evaluated how uncertainty in georeferences and associated locational error in occurrences influence species distribution modelling using two treatments: (1) a control treatment where models were calibrated with original, accurate data and (2) an error treatment where data were first degraded spatially to simulate locational error. To incorporate error into the coordinates, we moved each coordinate with a random number drawn from the normal distribution with a mean of zero and a standard deviation of 5 km. We evaluated the influence of error on the performance of 10 commonly used distributional modelling techniques applied to 40 species in four distinct geographical regions. 3. Locational error in occurrences reduced model performance in three of these regions; relatively accurate predictions of species distributions were possible for most species, even with degraded occurrences. Two species distribution modelling techniques, boosted regression trees and maximum entropy, were the best performing models in the face of locational errors. The results obtained with boosted regression trees were only slightly degraded by errors in location, and the results obtained with the maximum entropy approach were not affected by such errors. 4. Synthesis and applications. 
To use the vast array of occurrence data that exists currently for research and management relating to the geographical ranges of species, modellers need to know the influence of locational error on model quality and whether some modelling techniques are particularly robust to error. We show that certain modelling techniques are particularly robust to a moderate level of locational error and that useful predictions of species distributions can be made even when occurrence data include some error.
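The error treatment described above can be sketched as follows, assuming occurrence coordinates in a projected system measured in km (function and variable names are hypothetical, and the example points are invented):

```python
import numpy as np

# Error treatment from the study design: jitter each coordinate with
# N(0, 5 km) noise to simulate locational error in occurrence records.
def degrade(coords_km, sd_km=5.0, seed=0):
    rng = np.random.default_rng(seed)
    return coords_km + rng.normal(0.0, sd_km, size=coords_km.shape)

occurrences = np.array([[100.0, 250.0],
                        [102.5, 248.0],
                        [98.7, 251.2]])               # (x, y) in km
noisy = degrade(occurrences)
shift = np.linalg.norm(noisy - occurrences, axis=1)   # per-record displacement
```

Models calibrated on `noisy` versus `occurrences` can then be compared on identical evaluation data to isolate the effect of locational error.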

Relevance:

20.00%

Publisher:

Abstract:

The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties; (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model for the network controlling CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented into the software GINsim, which enables the definition, the analysis, and the simulation of logical regulatory graphs.
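Asynchronous updating and attractor identification can be illustrated on a toy two-gene mutual-inhibition switch — a hypothetical example, far smaller than the CD4+ T cell model analysed with GINsim, but showing the state-transition-graph construction in miniature:

```python
from itertools import product

# Toy asynchronous logical model: a two-gene mutual-inhibition switch.
rules = {
    "a": lambda s: int(not s["b"]),   # a is inhibited by b
    "b": lambda s: int(not s["a"]),   # b is inhibited by a
}

def async_successors(state):
    """Asynchronous updating: change one component at a time."""
    succs = []
    for gene, f in rules.items():
        target = f(state)
        if target != state[gene]:
            nxt = dict(state)
            nxt[gene] = target
            succs.append(nxt)
    return succs

states = [dict(zip(rules, bits)) for bits in product((0, 1), repeat=len(rules))]
stg = {tuple(sorted(s.items())): [tuple(sorted(t.items()))
                                  for t in async_successors(s)]
       for s in states}
# Stable states (fixed points) have no outgoing transitions; here the
# two attractors of the switch are (a=0, b=1) and (a=1, b=0).
stable = [s for s, succs in stg.items() if not succs]
```

For larger models the full enumeration above explodes exponentially, which is exactly what the priority classes, model reduction, and graph compression methods described in the abstract address.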

Relevance:

20.00%

Publisher:

Abstract:

Species delimitation has been invigorated as a discipline in systematics by an influx of new character sets, analytical methods, and conceptual advances. We use genetic data from 68 markers, combined with distributional, bioclimatic, and coloration information, to hypothesize boundaries of evolutionarily independent lineages (species) within the widespread and highly variable nominal fire ant species Solenopsis saevissima, a member of a species group containing invasive pests as well as species that are models for ecological and evolutionary research. Our integrated approach uses diverse methods of analysis to sequentially test whether populations meet specific operational criteria (contingent properties) for candidacy as morphologically cryptic species, including genetic clustering, monophyly, reproductive isolation, and occupation of distinctive niche space. We hypothesize that nominal S. saevissima comprises at least 4-6 previously unrecognized species, including several pairs whose parapatric distributions implicate the development of intrinsic premating or postmating barriers to gene flow. Our genetic data further suggest that regional genetic differentiation in S. saevissima has been influenced by hybridization with other nominal species occurring in sympatry or parapatry, including the quite distantly related Solenopsis geminata. The results of this study illustrate the importance of employing different classes of genetic data (coding and noncoding regions and nuclear and mitochondrial DNA [mtDNA] markers), different methods of genetic data analysis (tree-based and non-tree based methods), and different sources of data (genetic, morphological, and ecological data) to explicitly test various operational criteria for species boundaries in clades of recently diverged lineages, while warning against over reliance on any single data type (e.g., mtDNA sequence variation) when drawing inferences.

Relevance:

20.00%

Publisher:

Abstract:

Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. Traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models are defined a priori by taking into account the sample size and prior knowledge of variables that influence exposure levels. The Akaike information criterion is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, interpreted as the probability of the model being the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors and estimate multimodel-averaged effects of determinants. The whole approach is illustrated with the analysis of a data set of 1500 volatile organic compound exposure levels collected by the Institute for work and health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels that incorporates the notion that several models can be supported by the data and permits evaluation, to a certain extent, of model selection uncertainty, which is seldom mentioned in current practice.
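The Akaike-weight computation is simple: w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), where Δ_i = AIC_i − min AIC. A minimal sketch, with AIC values invented for illustration:

```python
import math

# Akaike weights (Burnham & Anderson): relative likelihood of each
# model in the a priori model set, normalized to sum to one.
def akaike_weights(aics):
    deltas = [a - min(aics) for a in aics]        # AIC differences
    rel = [math.exp(-0.5 * d) for d in deltas]    # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

weights = akaike_weights([310.2, 312.4, 318.9])   # invented AIC values
# A multimodel-averaged effect is then sum(w_i * estimate_i) over models.
```

Ranking models by weight and summing weights across models containing a given predictor yields the "relative influence" of that predictor mentioned in the abstract.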

Relevance:

20.00%

Publisher:

Abstract:

We study the properties of the well known Replicator Dynamics when applied to a finitely repeated version of the Prisoners' Dilemma game. We characterize the behavior of such dynamics under strongly simplifying assumptions (i.e. only 3 strategies are available) and show that the basin of attraction of defection shrinks as the number of repetitions increases. After discussing the difficulties involved in trying to relax the 'strongly simplifying assumptions' above, we approach the same model by means of simulations based on genetic algorithms. The resulting simulations describe a behavior of the system very close to the one predicted by the replicator dynamics without imposing any of the assumptions of the mathematical model. Our main conclusion is that mathematical and computational models are good complements for research in social sciences. Indeed, while computational models are extremely useful to extend the scope of the analysis to complex scenarios hard to analyze mathematically, formal models can be useful to verify and to explain the outcomes of computational models.
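Discrete-time replicator dynamics for a 3-strategy repeated Prisoners' Dilemma can be sketched as below. The strategy set (ALLC, ALLD, TFT), the payoff values (T=5, R=3, P=1, S=0) and the starting mix are illustrative assumptions, not necessarily those of the paper.

```python
import numpy as np

# Total payoffs over an n-round repeated Prisoners' Dilemma for
# ALLC (always cooperate), ALLD (always defect), TFT (tit-for-tat).
def payoff_matrix(n, T=5.0, R=3.0, P=1.0, S=0.0):
    """Payoff to the row strategy against the column strategy."""
    return np.array([
        [n * R, n * S,           n * R],            # ALLC
        [n * T, n * P,           T + (n - 1) * P],  # ALLD
        [n * R, S + (n - 1) * P, n * R],            # TFT
    ])

def replicator(x, A, steps=2000):
    """Discrete replicator map: x_i <- x_i * f_i / (mean fitness)."""
    for _ in range(steps):
        f = A @ x
        x = x * f / (x @ f)
    return x

A = payoff_matrix(n=10)
x = replicator(np.array([0.30, 0.40, 0.30]), A)  # ALLC, ALLD, TFT shares
# From this particular starting mix, defection is driven out over time.
```

Sweeping the initial shares over a grid and recording which runs end in defection gives the basin-of-attraction measurements discussed in the abstract.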

Relevance:

20.00%

Publisher:

Abstract:

Aim: The relative effectiveness of different methods of prevention of HIV transmission is a subject of debate that is renewed with the integration of each new method. The relative weight of values and evidence in decision-making is not always clearly defined. Debate is often confused, as the proponents of different approaches address the issue at different levels of implementation. This paper defines and delineates the successive levels of analysis of effectiveness, and proposes a conceptual framework to clarify debate. Method / Issue: Initially inspired by work on contraceptive effectiveness, a first version of the conceptual framework was published in 1993 with definition of the Condom Effectiveness Matrix (Spencer, 1993). The framework has since incorporated and further developed the distinction between efficacy and effectiveness and has been applied to HIV prevention in general. Three levels are defined: theoretical effectiveness (ThE), use-effectiveness (UseE) and population use-effectiveness (PopUseE). For example, abstinence and faithfulness, as proposed in the ABC strategy, have relatively high theoretical effectiveness but relatively low effectiveness at subsequent levels of implementation. The reverse is true of circumcision. Each level is associated with specific forms of scientific enquiry and associated research questions: basic and clinical sciences with ThE; clinical and social sciences with UseE; epidemiology and social, economic and political sciences with PopUseE. Similarly, the focus of investigation moves from biological organisms, to the individual at the physiological and then psychological, social and ecological level, and finally takes as perspective populations and societies as a whole. The framework may be applied to analyse issues concerning any prevention approach.
Hence, regarding consideration of HIV treatment as a means of prevention, examples of issues at each level would be: ThE: achieving adequate viral suppression and non-transmission to partners; UseE: facility and degree of adherence to treatment and medical follow-up; PopUseE: perceived validity of strategy, feasibility of achieving adequate population coverage. Discussion: Use of the framework clarifies the questions that need to be addressed at all levels in order to improve effectiveness. Furthermore, the interconnectedness and complementary nature of research from the different scientific disciplines and the relative contribution of each become apparent. The proposed framework could bring greater rationality to the prevention effectiveness debate and facilitate communication between stakeholders.