118 results for Theory of economic-mathematical models
Abstract:
In this paper we provide a connection between the geometrical properties of the attractor of a chaotic dynamical system and the distribution of extreme values. We show that the extremes of so-called physical observables are distributed according to the classical generalized Pareto distribution and derive explicit expressions for the scaling and the shape parameter. In particular, we derive that the shape parameter does not depend on the chosen observables, but only on the partial dimensions of the invariant measure on the stable, unstable, and neutral manifolds. The shape parameter is negative and is close to zero when high-dimensional systems are considered. This result agrees with what was derived recently using the generalized extreme value approach. Combining the results obtained using such physical observables with the properties of the extremes of distance observables, it is possible to derive estimates of the partial dimensions of the attractor along the stable and the unstable directions of the flow. Moreover, by writing the shape parameter in terms of moments of the extremes of the considered observable and by using linear response theory, we relate the sensitivity to perturbations of the shape parameter to the sensitivity of the moments, of the partial dimensions, and of the Kaplan–Yorke dimension of the attractor. Preliminary numerical investigations provide encouraging results on the applicability of the theory presented here. The results presented here do not apply to all combinations of Axiom A systems and observables, but the breakdown seems to be related to very special geometrical configurations.
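As a concrete illustration of the peaks-over-threshold picture underlying this result, here is a minimal Python sketch, not taken from the paper: the Lorenz-63 system, the choice of the x coordinate as the physical observable, the 99th-percentile threshold, and all numerical settings are our own assumptions. It fits a generalized Pareto distribution to threshold exceedances of an observable along a chaotic trajectory; for a low-dimensional attractor the fitted shape parameter should come out small and negative, in line with the statement above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import genpareto

def lorenz(t, s, sigma=10.0, rho=28.0, b=8.0 / 3.0):
    # classical Lorenz-63 vector field (a stand-in chaotic system)
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - b * z]

# long trajectory on the attractor, discarding an initial transient
t = np.linspace(0.0, 200.0, 200_000)
sol = solve_ivp(lorenz, (0.0, 200.0), [1.0, 1.0, 1.0], t_eval=t, rtol=1e-8)
obs = sol.y[0][20_000:]          # assumed "physical observable": the x coordinate

# peaks over threshold: exceedances above a high quantile of the observable
u = np.quantile(obs, 0.99)
exceedances = obs[obs > u] - u   # note: serially dependent; a careful analysis
                                 # would decluster the exceedances first
shape, _, scale = genpareto.fit(exceedances, floc=0.0)
print(f"fitted GPD shape: {shape:.3f}, scale: {scale:.3f} "
      "(shape expected small and negative)")
```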
Abstract:
We evaluate the ability of process-based models to reproduce observed global mean sea-level change. When the models are forced by changes in natural and anthropogenic radiative forcing of the climate system and anthropogenic changes in land-water storage, the average of the modelled sea-level change for the periods 1900–2010, 1961–2010 and 1990–2010 is about 80%, 85% and 90% of the observed rise. The modelled rate of rise is over 1 mm yr⁻¹ prior to 1950, decreases to less than 0.5 mm yr⁻¹ in the 1960s, and increases to 3 mm yr⁻¹ by 2000. When observed regional climate changes are used to drive a glacier model and an allowance is included for an ongoing adjustment of the ice sheets, the modelled sea-level rise is about 2 mm yr⁻¹ prior to 1950, similar to the observations. The model results encompass the observed rise, and the model average is within 20% of the observations (about 10% when the observed ice-sheet contributions since 1993 are added), increasing confidence in future projections for the 21st century. The increased rate of rise since 1990 is not part of a natural cycle but a direct response to increased radiative forcing (both anthropogenic and natural), which will continue to grow with ongoing greenhouse gas emissions.
Abstract:
We compare five general circulation models (GCMs) which have recently been used to study hot extrasolar planet atmospheres (BOB, CAM, IGCM, MITgcm, and PEQMOD), under three test cases useful for assessing model convergence and accuracy. Such a broad, detailed intercomparison has not been performed thus far for the study of extrasolar planets. The models considered all solve the traditional primitive equations, but employ different numerical algorithms or grids (e.g., pseudospectral and finite volume, with the latter separately in longitude–latitude and ‘cubed-sphere’ grids). The test cases are chosen to cleanly address specific aspects of the behaviors typically reported in hot extrasolar planet simulations: 1) steady state, 2) nonlinearly evolving baroclinic wave, and 3) response to fast-timescale thermal relaxation. When initialized with a steady jet, all models maintain the steadiness, as they should, except for MITgcm on the cubed-sphere grid. Very good agreement is obtained for a baroclinic wave evolving from an initial instability, but only in the pseudospectral models. However, exact numerical convergence is still not achieved across the pseudospectral models: amplitudes and phases are observably different. When subject to a typical ‘hot-Jupiter’-like forcing, all five models show quantitatively different behavior, although qualitatively similar, time-variable, quadrupole-dominated flows are produced. Hence, as has been advocated in several past studies, specific quantitative predictions (such as the location of large vortices and hot regions) by GCMs should be viewed with caution. Overall, in the tests considered here, the pseudospectral models in pressure coordinates (PEBOB and PEQMOD) perform the best and MITgcm on the cubed-sphere grid performs the worst.
Abstract:
After the “European” experience of BSE and further food safety crises, consumer trust is playing an increasingly important role in political and marketing decision making. This also relates to the area of consumer acceptance of GM food. This paper integrates consumer trust with the theory of planned behavior and a stated choice model to gain a more complete picture of consumer decision making. Preliminary results indicate that when GM products offer practical benefits to consumers, acceptance may increase considerably. Furthermore, both trust and perceived benefits contribute significantly to explaining the level of acceptance.
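A stated choice model of the kind combined with trust measures here is commonly estimated as a conditional logit derived from a random-utility model. The Python sketch below is purely illustrative and not the authors' specification: the simulated attributes (standing in for, say, price and a practical GM benefit), the sample sizes, and the taste parameters are all our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, J, K = 500, 3, 2                  # respondents, alternatives, attributes
X = rng.normal(size=(N, J, K))       # hypothetical attribute levels per alternative
beta_true = np.array([-1.0, 0.8])    # assumed tastes (e.g. price, perceived benefit)
utility = X @ beta_true + rng.gumbel(size=(N, J))   # random-utility model
y = utility.argmax(axis=1)           # each respondent picks the best alternative

def neg_loglik(beta):
    v = X @ beta                     # systematic utilities
    logp = v - np.logaddexp.reduce(v, axis=1, keepdims=True)  # logit probabilities
    return -logp[np.arange(N), y].sum()

fit = minimize(neg_loglik, np.zeros(K), method="BFGS")
print("estimated taste parameters:", fit.x)   # should be near beta_true
```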
Abstract:
Second language acquisition researchers often face particular challenges when attempting to generalize study findings to the wider learner population. For example, language learners constitute a heterogeneous group, and it is not always clear how a study’s findings may generalize to other individuals who may differ in terms of language background and proficiency, among many other factors. In this paper, we provide an overview of how mixed-effects models can be used to help overcome these and other issues in the field of second language acquisition. We outline the benefits of mixed-effects models and give a practical example of how a mixed-effects analysis can be conducted. Mixed-effects models provide second language researchers with a powerful statistical tool for the analysis of a variety of different types of data.
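As a hedged sketch of the kind of practical example the paper describes, the following Python code fits a random-intercept mixed-effects model with statsmodels; the dataset (reaction times, a per-learner proficiency score, and a participant grouping factor) is entirely simulated and only stands in for real second language data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_items = 40, 20
participant = np.repeat(np.arange(n_subj), n_items)
proficiency = np.repeat(rng.normal(size=n_subj), n_items)        # per-learner score
subj_intercept = np.repeat(rng.normal(0, 0.5, n_subj), n_items)  # random intercepts
rt = 6.0 - 0.3 * proficiency + subj_intercept + rng.normal(0, 0.4, n_subj * n_items)

df = pd.DataFrame({"rt": rt, "proficiency": proficiency, "participant": participant})

# fixed effect of proficiency, random intercept for each participant;
# statsmodels also supports random slopes via the re_formula argument
model = smf.mixedlm("rt ~ proficiency", df, groups=df["participant"]).fit()
print(model.summary())
```

In practice one would often also want crossed random effects for items; lme4 in R handles these directly, while statsmodels can express such designs through variance components (vc_formula).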
Abstract:
The human gut is a complex ecosystem occupied by a diverse microbial community. Modulation of this microbiota impacts health and disease. The definitive way to investigate the impact of a dietary intervention on the gut microbiota is a human trial. However, human trials are expensive and can be difficult to control; thus, initial screening is desirable. A range of in vitro and in vivo models allows useful information to be gathered before a human trial becomes necessary. This review discusses the benefits and limitations of these approaches.
Abstract:
This paper investigates the feasibility of using approximate Bayesian computation (ABC) to calibrate and evaluate complex individual-based models (IBMs). As ABC evolves, various versions are emerging, but here we only explore the most accessible version, rejection-ABC. Rejection-ABC involves running models a large number of times, with parameters drawn randomly from their prior distributions, and then retaining the simulations closest to the observations. Although well established in some fields, whether ABC will work with ecological IBMs is still uncertain. Rejection-ABC was applied to an existing 14-parameter earthworm energy budget IBM for which the available data consist of body mass growth and cocoon production in four experiments. ABC was able to narrow the posterior distributions of seven parameters, estimating credible intervals for each. ABC’s accepted values produced slightly better fits than the literature values did. The accuracy of the analysis was assessed using cross-validation and coverage, currently the best available tests. Of the remaining seven parameters, ABC revealed that three were correlated with other parameters, while the other four were found to be not estimable given the data available. It is often desirable to compare models to see whether all component modules are necessary. Here we used ABC model selection to compare the full model with a simplified version which removed the earthworm’s movement and much of the energy budget. We show that inclusion of the energy budget is necessary for a good fit to the data. We show how our methodology can inform future modelling cycles, and briefly discuss how more advanced versions of ABC may be applicable to IBMs. We conclude that ABC has the potential to represent uncertainty in model structure, parameters and predictions, and to embed the often complex process of optimizing an IBM’s structure and parameters within an established statistical framework, thereby making the process more transparent and objective.
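The rejection-ABC loop described above (draw parameters from their priors, simulate, keep the runs closest to the data) fits in a few lines. The Python sketch below is a toy stand-in, not the earthworm IBM: the logistic growth "model", the uniform priors, the Euclidean distance, and the acceptance quota are all our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta):
    # toy stand-in for the IBM: a logistic growth curve with rate r and capacity k
    r, k = theta
    t = np.arange(20.0)
    return k / (1 + (k - 1) * np.exp(-r * t))

observed = simulate((0.4, 8.0)) + rng.normal(0, 0.2, 20)     # synthetic "data"

n_draws, n_keep = 50_000, 500
# draw parameters from (assumed) uniform priors
thetas = np.column_stack([rng.uniform(0.05, 1.0, n_draws),   # prior on r
                          rng.uniform(1.0, 20.0, n_draws)])  # prior on k
# distance between each simulation and the observations
dist = np.array([np.linalg.norm(simulate(th) - observed) for th in thetas])
# rejection step: retain the simulations closest to the data
posterior = thetas[np.argsort(dist)[:n_keep]]
print("posterior means (r, k):", posterior.mean(axis=0))
print("95% credible interval for r:", np.percentile(posterior[:, 0], [2.5, 97.5]))
```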
Abstract:
We develop a transaction cost economics theory of the family firm, building upon the concepts of family-based asset specificity, bounded rationality, and bounded reliability. We argue that the prosperity and survival of family firms depend on the absence of a dysfunctional bifurcation bias. The bifurcation bias is an expression of bounded reliability, reflected in the de facto asymmetric treatment of family vs. nonfamily assets (especially human assets). We propose that absence of bifurcation bias is critical to fostering reliability in family business functioning. Our study ends the unproductive divide between the agency and stewardship perspectives of the family firm, which offer conflicting accounts of this firm type's functioning. We show that the predictions of the agency and stewardship perspectives can be usefully reconciled when focusing on how family firms address the bifurcation bias or fail to do so.
Abstract:
In a recent paper, Feng and Sidorov show that for β ∈ (1, (1+√5)/2) the set of β-expansions grows exponentially for every x ∈ (0, 1/(β−1)). In this paper we study this growth rate further. We also consider the set of β-expansions from a dimension theory perspective.
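To make the growth statement concrete: a 0-1 digit string a_1…a_n is the prefix of some β-expansion of x exactly when the shifted remainders T_a(y) = βy − a stay in [0, 1/(β−1)] at every step, so the admissible prefixes can be counted by branching. The Python sketch below (β = 1.5 and x = 0.7 are arbitrary illustrative choices, with β inside (1, (1+√5)/2)) exhibits the exponential growth.

```python
def count_prefixes(x, beta, n):
    """Count length-n digit strings (a_1, ..., a_n) in {0,1}^n that extend
    to a beta-expansion x = sum_i a_i * beta**(-i)."""
    M = 1.0 / (beta - 1.0)          # a remainder is extendable iff it lies in [0, M]
    level = [x]
    for _ in range(n):
        nxt = []
        for y in level:
            for a in (0, 1):
                z = beta * y - a    # shift map: peel off one digit
                if 0.0 <= z <= M:
                    nxt.append(z)
        level = nxt
    return len(level)

beta, x = 1.5, 0.7                  # illustrative choices; beta < (1 + 5**0.5) / 2
for n in range(1, 15):
    print(n, count_prefixes(x, beta, n))   # the counts grow exponentially in n
```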
Abstract:
We present an account of semantic representation that focuses on distinct types of information from which word meanings can be learned. In particular, we argue that there are at least two major types of information from which we learn word meanings. The first is what we call experiential information: data derived both from our sensory-motor interactions with the outside world and from our experience of our own inner states, particularly our emotions. The second type of information is language-based: in particular, it is derived from the general linguistic context in which words appear. The paper spells out this proposal, summarizes research supporting this view, and presents new predictions emerging from this framework.
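The language-based route can be made concrete with a toy distributional model. The Python sketch below is ours, not the authors': the four-sentence corpus and the sentence-level context window are purely illustrative. It builds word vectors from co-occurrence counts and compares them by cosine similarity, the simplest instance of learning from the linguistic contexts in which words appear.

```python
import numpy as np
from itertools import combinations

# a tiny illustrative corpus; real models use billions of words
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
M = np.zeros((len(vocab), len(vocab)))

# count co-occurrences within each sentence (a crude "linguistic context")
for s in corpus:
    for w1, w2 in combinations(s.split(), 2):
        M[idx[w1], idx[w2]] += 1
        M[idx[w2], idx[w1]] += 1

def cosine(a, b):
    u, v = M[idx[a]], M[idx[b]]
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

print("cat ~ dog:   ", round(cosine("cat", "dog"), 3))     # similar contexts
print("cat ~ cheese:", round(cosine("cat", "cheese"), 3))  # less similar
```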
Abstract:
Phylogenetic comparative methods are increasingly used to give new insights into the dynamics of trait evolution in deep time. For continuous traits the core of these methods is a suite of models that attempt to capture evolutionary patterns by extending the Brownian constant-variance model. However, the properties of these models are often poorly understood, which can lead to the misinterpretation of results. Here we focus on one of these models: the Ornstein–Uhlenbeck (OU) model. We show that the OU model is frequently incorrectly favoured over simpler models when using likelihood ratio tests, and that many studies fitting this model use datasets that are small and prone to this problem. We also show that very small amounts of error in datasets can have profound effects on the inferences derived from OU models. Our results suggest that simulating fitted models and comparing them with empirical results is critical when fitting OU and other extensions of the Brownian model. We conclude by making recommendations for best practice in fitting OU models in phylogenetic comparative analyses, and for interpreting the parameters of the OU model.
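The failure mode described, likelihood ratio tests favouring OU over Brownian motion on datasets that are too small, can be reproduced in a deliberately simplified, non-phylogenetic setting. In the Python sketch below, everything is our own assumption: a single time series with unit time steps stands in for trait values on a tree, and a naive two-degree-of-freedom chi-square cutoff is used even though the null hypothesis sits on the boundary of the OU parameter space.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, chi2

rng = np.random.default_rng(3)

def bm_loglik(x):
    d = np.diff(x)
    s2 = d.var()                                       # ML estimate of the BM rate
    return norm.logpdf(d, 0.0, np.sqrt(s2)).sum()

def ou_negloglik(p, x):
    log_a, theta, log_s = p                            # optimise on log scales
    a, s = np.exp(log_a), np.exp(log_s)
    m = theta + (x[:-1] - theta) * np.exp(-a)          # exact OU transition mean
    v = s**2 * (1 - np.exp(-2 * a)) / (2 * a)          # ... and variance (dt = 1)
    return -norm.logpdf(x[1:], m, np.sqrt(v)).sum()

n_sim, n_obs = 200, 20                                 # small samples, as in the text
favour_ou = 0
for _ in range(n_sim):
    x = np.cumsum(rng.normal(0.0, 1.0, n_obs))         # true model: Brownian motion
    fit = minimize(ou_negloglik, [np.log(0.5), x.mean(), 0.0],
                   args=(x,), method="Nelder-Mead")
    lr = 2 * (-fit.fun - bm_loglik(x))                 # likelihood ratio statistic
    favour_ou += lr > chi2.ppf(0.95, df=2)             # naive 2-df cutoff
print("fraction of pure-BM datasets where the LRT favours OU:", favour_ou / n_sim)
```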
Abstract:
In this paper we prove some connections between the growth of a function and its Mellin transform and apply these to study an explicit example in the theory of Beurling primes.