Abstract:
The high computational cost of calculating the radiative heating rates in numerical weather prediction (NWP) and climate models requires that calculations are made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the resulting cloud-radiation feedback. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (optically thick). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme uses a simple radiative transfer calculation with only one or two monochromatic calculations representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating-rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find that a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
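The incremental idea can be sketched in a toy form: a cheap calculation over only the optically thin, cloud-sensitive terms supplies the heating-rate increment due to cloud changes, which is added to the last full calculation. All function names and the linear cloud response below are hypothetical stand-ins, not the Met Office scheme:

```python
def thin_terms(cloud_fraction):
    """Cheap stand-in for the optically thin bands: strongly cloud-dependent
    (a toy linear response, purely illustrative)."""
    return 0.8 - 0.5 * cloud_fraction  # K/day, toy units

def thick_terms():
    """Stand-in for the optically thick bands: insensitive to cloud changes."""
    return 0.2  # K/day, toy units

def full_heating(cloud_fraction):
    """Stand-in for the expensive full correlated-k calculation."""
    return thin_terms(cloud_fraction) + thick_terms()

def incremental_heating(cloud_now, cloud_ref, heating_ref):
    """Incremental time-stepping: add the thin-band increment caused by cloud
    changes since the last full call to the stored full heating rate."""
    return heating_ref + thin_terms(cloud_now) - thin_terms(cloud_ref)
```

In this toy the increment recovers the full calculation exactly because the thick terms are cloud-independent; in a real radiation code the thin-band approximation would introduce a small error between full calls.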
Abstract:
The present work describes a new tool that helps bidders improve their competitive bidding strategies. It is an easy-to-use graphical tool that brings more complex decision-analysis techniques to the field of competitive bidding. The tool moves away from previous bidding models, which attempt to describe the result of an auction or a tender process by studying each possible bidder with probability density functions. As an illustration, the tool is applied to three practical cases. Theoretical and practical conclusions on the great potential breadth of application of the tool are also presented.
Abstract:
The contraction of a species’ distribution range, which results from the extirpation of local populations, generally precedes its extinction. Therefore, understanding drivers of range contraction is important for conservation and management. Although there are many processes that can potentially lead to local extirpation and range contraction, three main null models have been proposed: demographic, contagion, and refuge. The first two models postulate that the probability of local extirpation for a given area depends on its relative position within the range; but these models generate distinct spatial predictions because they assume either a ubiquitous (demographic) or a clinal (contagion) distribution of threats. The third model (refuge) postulates that extirpations are determined by the intensity of human impacts, leading to heterogeneous spatial predictions potentially compatible with those made by the other two null models. A few previous studies have explored the generality of some of these null models, but we present here the first comprehensive evaluation of all three models. Using descriptive indices and regression analyses we contrast the predictions made by each of the null models using empirical spatial data describing range contraction in 386 terrestrial vertebrates (mammals, birds, amphibians, and reptiles) distributed across the world. Observed contraction patterns do not consistently conform to the predictions of any of the three models, suggesting that these may not be adequate null models to evaluate range contraction dynamics among terrestrial vertebrates. Instead, our results support alternative null models that account for both relative position and intensity of human impacts. These new models provide a better multifactorial baseline to describe range contraction patterns in vertebrates.
This general baseline can be used to explore how additional factors influence contraction, and ultimately extinction for particular areas or species as well as to predict future changes in light of current and new threats.
Abstract:
Human body thermoregulation models have been widely used in the fields of human physiology and thermal comfort. However, there are few studies on methods for evaluating these models. This paper summarises the existing evaluation methods and critically analyses their flaws. On that basis, a method for evaluating the accuracy of human body thermoregulation models is proposed. The new evaluation method contributes to the development of such models and validates their accuracy both statistically and empirically, allowing the accuracy of different models to be compared. Furthermore, the new method is not only suitable for evaluating human body thermoregulation models, but can in principle also be applied to evaluating the accuracy of population-based models in other research fields.
Abstract:
Inspired by the commercial desires of global brands and retailers to access the lucrative green consumer market, carbon is increasingly being counted and made knowable at the mundane sites of everyday production and consumption, from the carbon footprint of a plastic kitchen fork to that of an online bank account. Despite the challenges of counting and making commensurable the global warming impact of a myriad of biophysical and societal activities, this desire to communicate a product or service's carbon footprint has sparked complicated carbon calculative practices and enrolled actors at literally every node of multi-scaled and vastly complex global supply chains. Against this landscape, this paper critically analyzes the counting practices that create the ‘e’ in ‘CO2e’. It is shown that central to these practices are a series of tools, models and databases which, building upon previous work (Eden, 2012; Star and Griesemer, 1989), we conceptualize here as ‘boundary objects’. By enrolling everyday actors from farmers to consumers, these objects abstract and stabilize greenhouse gas emissions from their messy material and social contexts into units of CO2e which can then be translated along a product's supply chain, thereby establishing a new currency of ‘everyday supply chain carbon’. However, in making all greenhouse gas-related practices commensurable, and in enrolling and stabilizing the transfer of information between multiple actors, these objects oversee a process of simplification reliant upon, and subject to, a multiplicity of approximations, assumptions, errors, discrepancies and/or omissions. Further, the outcomes of these tools are subject to the politicized and commercial agendas of the worlds they attempt to link, with each boundary actor inscribing different meanings to a product's carbon footprint in accordance with their specific subjectivities, commercial desires and epistemic framings.
It is therefore shown that how a boundary object transforms greenhouse gas emissions into units of CO2e is the outcome of distinct ideologies regarding ‘what’ a product's carbon footprint is and how it should be made legible. These politicized decisions, in turn, inform specific reduction activities and ultimately advance distinct, specific and increasingly durable transition pathways to a low carbon society.
Abstract:
The aim of this study was to assess and improve the accuracy of biotransfer models for the organic pollutants (PCBs, PCDD/Fs, PBDEs, PFCAs, and pesticides) into cow’s milk and beef used in human exposure assessment. Metabolic rate in cattle is known as a key parameter for this biotransfer; however, few experimental data and no simulation methods are currently available. In this research, metabolic rate was estimated using existing QSAR biodegradation models of microorganisms (BioWIN) and fish (EPI-HL and IFS-HL). This simulated metabolic rate was then incorporated into the mechanistic cattle biotransfer models (RAIDAR, ACC-HUMAN, OMEGA, and CKow). The goodness-of-fit tests showed that the RAIDAR, ACC-HUMAN, and OMEGA model performances were significantly improved using either of the QSARs when comparing the new model outputs to observed data. The CKow model is the only one that separates the processes in the gut and liver. This model showed the lowest residual error of all the models tested when the BioWIN model was used to represent the ruminant metabolic process in the gut and the two fish QSARs were used to represent the metabolic process in the liver. Our testing included EUSES and CalTOX, which are KOW-regression models that are widely used in regulatory assessment. New regressions based on the simulated rate of the two metabolic processes are also proposed as an alternative to KOW-regression models for a screening risk assessment. The modified CKow model is more physiologically realistic, but has equivalent usability to existing KOW-regression models for estimating cattle biotransfer of organic pollutants.
Abstract:
Phylogenetic comparative methods are increasingly used to give new insights into the dynamics of trait evolution in deep time. For continuous traits the core of these methods is a suite of models that attempt to capture evolutionary patterns by extending the Brownian constant variance model. However, the properties of these models are often poorly understood, which can lead to the misinterpretation of results. Here we focus on one of these models – the Ornstein Uhlenbeck (OU) model. We show that the OU model is frequently incorrectly favoured over simpler models when using Likelihood ratio tests, and that many studies fitting this model use datasets that are small and prone to this problem. We also show that very small amounts of error in datasets can have profound effects on the inferences derived from OU models. Our results suggest that simulating fitted models and comparing with empirical results is critical when fitting OU and other extensions of the Brownian model. We conclude by making recommendations for best practice in fitting OU models in phylogenetic comparative analyses, and for interpreting the parameters of the OU model.
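For illustration, the Ornstein-Uhlenbeck process discussed above can be simulated with its exact Gaussian transition density; as the attraction strength goes to zero it reduces to the Brownian constant-variance model. The function name and parameter values are our own illustrative choices, not taken from the study:

```python
import numpy as np

def simulate_ou(x0, alpha, theta, sigma, dt, n, rng):
    """Simulate an Ornstein-Uhlenbeck path using the exact discretization.
    alpha: strength of attraction toward the optimum theta;
    sigma: noise scale.  With alpha = 0 this is plain Brownian motion."""
    x = np.empty(n + 1)
    x[0] = x0
    decay = np.exp(-alpha * dt)
    if alpha > 0:
        # innovation s.d. consistent with the stationary variance sigma^2/(2*alpha)
        sd = sigma * np.sqrt((1.0 - decay**2) / (2.0 * alpha))
    else:
        sd = sigma * np.sqrt(dt)
    for i in range(n):
        x[i + 1] = theta + (x[i] - theta) * decay + sd * rng.standard_normal()
    return x
```

Simulating from a fitted model in this way and comparing with the empirical data is the kind of adequacy check the abstract recommends.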
Abstract:
Academic writing has a tendency to be turgid and impenetrable. This is not only anathema to communication between academics, but also a major barrier to advancing construction industry development. Clarity in our communication is a prerequisite to effective collaboration with industry. An exploration of what it means to be an academic in a University is presented in order to provide a context for a discussion on how academics might collaborate with industry to advance development. There are conflicting agendas that pull the academic in different directions: peer group recognition, institutional success and industry development. None can be achieved without the others, which results in the need for a careful balancing act. While academics search for better understandings and provisional explanations within the context of conceptual models, industry seeks the practical application of new ideas, whether the ideas come from research or experience. Universities have a key role to play in industry development and in economic development.
Abstract:
This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error over a recent data window and to apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever performs better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
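The constrained combination step admits a closed-form solution via a standard Lagrange-multiplier treatment; the sketch below is a generic least-squares implementation under that assumption, not the authors' code, and the function name and array shapes are illustrative:

```python
import numpy as np

def combine_predictions(P, y):
    """Least-squares combination weights with a sum-to-one constraint.
    P: (n, M) matrix of recent predictions from the M selected sub-models;
    y: (n,) vector of recent targets.
    Solves  min_w ||y - P w||^2  subject to  sum(w) = 1
    in closed form via a Lagrange multiplier."""
    A = P.T @ P              # normal-equation matrix
    b = P.T @ y
    ones = np.ones(P.shape[1])
    Ainv_b = np.linalg.solve(A, b)
    Ainv_1 = np.linalg.solve(A, ones)
    # multiplier chosen so the weights sum to one
    lam = (1.0 - ones @ Ainv_b) / (ones @ Ainv_1)
    return Ainv_b + lam * Ainv_1
```

At each step the weights can be recomputed cheaply from the sliding window, since only an M-by-M system is solved.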
Abstract:
Objective: To develop yardsticks for assessment of dental arch relationship in young individuals with repaired complete bilateral cleft lip and palate appropriate to different stages of dental development. Participants: Eleven cleft team orthodontists from five countries worked on the projects for 4 days. A total of 776 sets of standardized plaster models from 411 patients with operated complete bilateral cleft lip and palate were available for the exercise. Statistics: The interexaminer reliability was calculated using weighted kappa statistics. Results: The interrater weighted kappa scores were between .74 and .92, which is in the "good" to "very good" categories. Conclusions: Three bilateral cleft lip and palate yardsticks for different developmental stages of the dentition were made: one for the deciduous dentition (6-year-olds' yardstick), one for early mixed dentition (9-year-olds' yardstick), and one for early permanent dentition (12-year-olds' yardstick).
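The interexaminer reliability statistic used here, weighted kappa, can be computed as follows. This is a generic implementation with quadratic (or linear) disagreement weights, not the authors' code:

```python
import numpy as np

def weighted_kappa(r1, r2, k, weights="quadratic"):
    """Cohen's weighted kappa for two raters scoring items on categories 0..k-1.
    Disagreements are penalised by |i-j|/(k-1), squared for quadratic weights."""
    O = np.zeros((k, k))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()                                   # observed proportions
    E = np.outer(O.sum(axis=1), O.sum(axis=0))     # chance-expected proportions
    i, j = np.indices((k, k))
    d = np.abs(i - j) / (k - 1)
    w = d**2 if weights == "quadratic" else d      # disagreement weights
    return 1.0 - (w * O).sum() / (w * E).sum()
```

Perfect agreement yields 1.0; values near the .74-.92 range reported above fall in standard "good" to "very good" bands.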
Abstract:
We present here new results of two-dimensional hydrodynamical simulations of the great eruption of the 1840s and the minor eruption of the 1890s suffered by the massive star eta Carinae (eta Car). The two bipolar nebulae commonly known as the Homunculus and the little Homunculus (LH) were formed from the interaction of these eruptive events with the underlying stellar wind. We assume here an interacting, non-spherical multiple-phase wind scenario to explain the shape and the kinematics of both Homunculi, but adopt a more realistic parametrization of the phases of the wind. During the 1890s eruptive event, the outflow speed decreased for a short period of time. This fact suggests that the LH is formed when the eruption ends, from the impact of the post-outburst eta Car wind (that follows the 1890s event) with the eruptive flow (rather than by the collision of the eruptive flow with the pre-outburst wind, as claimed in previous models; Gonzalez et al.). Our simulations reproduce quite well the shape and the observed expansion speed of the large Homunculus. The LH (which is embedded within the large Homunculus) becomes Rayleigh-Taylor unstable and develops filamentary structures that resemble the spatial features observed in the polar caps. In addition, we find that the interior cavity between the two Homunculi is partially filled by material that is expelled during the decades following the great eruption. This result may be connected with the observed double-shell structure in the polar lobes of the eta Car nebula. Finally, as in previous work, we find the formation of tenuous, equatorial, high-speed features that seem to be related to the observed equatorial skirt of eta Car.
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depends on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may influence the follow-up of the components in time, which consequently might contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, varying exhaustively the quantities associated with the method. Our results have shown that even in the most challenging tests, the cross-entropy method was able to find the correct parameters within a 1 per cent level. Even for a non-precessing jet, our optimization method successfully pointed out the lack of precession.
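The cross-entropy method for continuous optimization can be sketched generically: sample candidate parameter vectors from a Gaussian, keep the elite fraction with the lowest objective, and refit the sampling distribution to the elites until it concentrates on a minimum. The function name and default settings below are ours, and the objective is a stand-in for a fit between a precession model and jet-component offsets:

```python
import numpy as np

def cross_entropy_minimize(f, mean, std, n_samples=100, n_elite=10,
                           n_iter=50, rng=None):
    """Gaussian cross-entropy method for continuous multi-extremal optimization.
    f: objective to minimise (e.g. model-vs-data residual);
    mean, std: initial sampling distribution over the parameters."""
    rng = rng or np.random.default_rng()
    mean, std = np.asarray(mean, float), np.asarray(std, float)
    for _ in range(n_iter):
        samples = rng.normal(mean, std, size=(n_samples, mean.size))
        scores = np.apply_along_axis(f, 1, samples)
        elite = samples[np.argsort(scores)[:n_elite]]   # best candidates
        # refit the sampling distribution to the elite set
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return mean
```

Because the whole elite population is refitted each iteration, the method copes with multiple local minima better than single-start gradient descent, which is why it suits noisy precession fits.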
Abstract:
Upper-mantle seismic anisotropy has been extensively used to infer both present and past deformation processes at lithospheric and asthenospheric depths. Analysis of shear-wave splitting (mainly from core-refracted SKS phases) provides information regarding upper-mantle anisotropy. We present average measurements of fast-polarization directions at 21 new sites in poorly sampled regions of intra-plate South America, such as northern and northeastern Brazil. Despite sparse data coverage for the South American stable platform, consistent orientations are observed over hundreds of kilometers. Over most of the continent, the fast-polarization direction tends to be close to the absolute plate motion direction given by the hotspot reference model HS3-NUVEL-1A. A previous global comparison of the SKS fast-polarization directions with flow models of the upper mantle showed relatively poor correlation on the continents, which was interpreted as evidence for a large contribution of "frozen" anisotropy in the lithosphere. For the South American plate, our data indicate that one of the reasons for the poor correlation may have been the relatively coarse model of lithospheric thicknesses. We suggest that improved models of upper-mantle flow that are based on more detailed lithospheric thicknesses in South America may help to explain most of the observed anisotropy patterns.
Abstract:
Early American crania show a different morphological pattern from the one shared by late Native Americans. Although the origin of the diachronic morphological diversity seen on the continents is still debated, the distinct morphology of early Americans is well documented and widely dispersed. This morphology has been described extensively for South America, where larger samples are available. Here we test the hypotheses that the morphology of Early Americans results from retention of the morphological pattern of Late Pleistocene modern humans and that the occupation of the New World precedes the morphological differentiation that gave rise to recent Eurasian and American morphology. We compare Early American samples with European Upper Paleolithic skulls, the East Asian Zhoukoudian Upper Cave specimens and a series of 20 modern human reference crania. Canonical Analysis and Minimum Spanning Tree were used to assess the morphological affinities among the series, while Mantel and Dow-Cheverud tests based on Mahalanobis Squared Distances were used to test different evolutionary scenarios. Our results show strong morphological affinities among the early series irrespective of geographical origin, which together with the matrix analyses results favor the scenario of a late morphological differentiation of modern humans. We conclude that the geographic differentiation of modern human morphology is a late phenomenon that occurred after the initial settlement of the Americas. Am J Phys Anthropol 144:442-453, 2011.
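The Mahalanobis squared distances underlying the Mantel and Dow-Cheverud tests can be computed as follows; this is a generic sketch, not the authors' implementation:

```python
import numpy as np

def mahalanobis_sq(x, y, cov):
    """Squared Mahalanobis distance between mean vectors x and y under the
    (pooled) covariance matrix cov: d' * cov^-1 * d.  Unlike plain Euclidean
    distance, it accounts for the scale and correlation of the traits."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(d @ np.linalg.solve(cov, d))
```

With an identity covariance the measure reduces to the squared Euclidean distance; correlated craniometric traits change the distances and hence the affinity structure the tests operate on.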
Abstract:
Leiopelma hochstetteri is an endangered New Zealand frog now confined to isolated populations scattered across the North Island. A better understanding of its past, current and predicted future environmental suitability will contribute to its conservation, which is in jeopardy due to human activities, feral predators, disease and climate change. Here we use ecological niche modelling with all known occurrence data (N = 1708) and six determinant environmental variables to elucidate current, pre-human and future environmental suitability of this species. Comparison among independent runs, subfossil records and a clamping method allows validation of the models. Many areas identified as currently suitable do not host any known populations. This apparent discrepancy could be explained by several non-exclusive hypotheses: the areas have not been adequately surveyed and undiscovered populations still remain; the model is oversimplistic; the species' sensitivity to fragmentation and small population size; biotic interactions; historical events. An additional outcome is that apparently suitable but frog-less areas could be targeted for future translocations. Surprisingly, pre-human conditions do not differ markedly, highlighting the possibility that the range of the species was broadly fragmented before human arrival. Nevertheless, some populations, particularly on the west of the North Island, may have disappeared as a result of human-mediated habitat modification. Future conditions are marked by higher temperatures, which are predicted to be favourable to the species. However, such a virtual gain in suitable range will probably not benefit the species given the highly fragmented nature of existing habitat and the low dispersal ability of this species.