21 results for Kaldor trade model
in CentAUR: Central Archive University of Reading - UK
Abstract:
In this study, the mechanisms leading to the El Niño peak and demise are explored through a coupled general circulation model ensemble approach evaluated against observations. The results here suggest that the timing of the peak and demise for intense El Niño events is highly predictable, as the evolution of the coupled system is strongly driven by a southward shift of the intense equatorial Pacific westerly anomalies during boreal winter. In fact, this systematic late-year shift drives an intense eastern Pacific thermocline shallowing, constraining a rapid El Niño demise in the following months. This wind shift results from a southward displacement in winter of the central Pacific warmest SSTs in response to the seasonal evolution of solar insolation. In contrast, the intensity of this seasonal feedback mechanism and its impact on the coupled system are significantly weaker in moderate El Niño events, resulting in a less pronounced thermocline shallowing. This shallowing transfers the coupled system into an unstable state in spring but is not sufficient to systematically constrain the equatorial Pacific evolution toward a rapid El Niño termination. However, for some moderate events, the occurrence of intense easterly wind anomalies in the eastern Pacific during that period initiates a rapid surge of cold SSTs leading to La Niña conditions. In other cases, weaker trade winds combined with a slightly deeper thermocline allow the coupled system to maintain a broad warm phase evolving through the entire spring and summer and a delayed El Niño demise, an evolution that is similar to the prolonged 1986/87 El Niño event. La Niña events also show a similar tendency to peak in boreal winter, with characteristics and mechanisms mainly symmetric to those described for moderate El Niño cases.
Abstract:
To gain a new perspective on the interaction of the Atlantic Ocean and the atmosphere, the relationship between the atmospheric and oceanic meridional energy transports is studied in a version of HadCM3, the U.K. Hadley Centre's coupled climate model. The correlation structure of the energy transports in the atmosphere and Atlantic Ocean as a function of latitude, and the cross correlation between the two systems, are analyzed. The processes that give rise to the correlations are then elucidated using regression analyses. In northern midlatitudes, the interannual variability of the Atlantic Ocean energy transport is dominated by Ekman processes. Anticorrelated zonal winds in the subtropics and midlatitudes, particularly associated with the North Atlantic Oscillation (NAO), drive anticorrelated meridional Ekman transports. Variability in the atmospheric energy transport is associated with changes in the stationary waves, but is only weakly related to the NAO. Nevertheless, atmospheric driving of the oceanic Ekman transports is responsible for a bipolar pattern in the correlation between the atmosphere and Atlantic Ocean energy transports. In the Tropics, the interannual variability of the Atlantic Ocean energy transport is dominated by an adjustment of the tropical ocean to coastal upwelling induced along the Venezuelan coast by a strengthening of the easterly trade winds. Variability in the atmospheric energy transport is associated with a cross-equatorial meridional overturning circulation that is only weakly associated with variability in the trade winds along the Venezuelan coast. In consequence, there is only very limited correlation between the atmosphere and Atlantic Ocean energy transports in the Tropics of HadCM3.
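The lag cross-correlation analysis described in this abstract is a standard diagnostic; the following is a minimal sketch of how anomaly series for the atmospheric and Atlantic Ocean transports at one latitude could be cross-correlated. The array names and synthetic data are hypothetical stand-ins, not the HadCM3 output used in the study.

```python
import numpy as np

def lag_correlation(x, y, max_lag=5):
    """Correlation between anomaly series x and y at lags -max_lag..+max_lag.

    A positive lag means x leads y. Inputs are equal-length 1-D anomaly
    series (hypothetical stand-ins for the modelled energy transports)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    corr = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        corr[lag] = float(np.corrcoef(a, b)[0, 1])
    return corr

# Synthetic, anticorrelated anomalies as placeholders for the atmospheric and
# Atlantic Ocean transports at one latitude (echoing an Ekman-driven bipolar pattern).
rng = np.random.default_rng(0)
atm = rng.standard_normal(100)
ocn = -0.6 * atm + 0.5 * rng.standard_normal(100)
print(lag_correlation(atm, ocn, max_lag=3))
```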
Abstract:
The modelled El Niño-mean state-seasonal cycle interactions in 23 coupled ocean-atmosphere GCMs, including the recent IPCC AR4 models, are assessed and compared to observations and theory. The models show a clear improvement over previous generations in simulating the tropical Pacific climatology. Systematic biases still include too strong a mean and seasonal cycle of the trade winds. El Niño amplitude is shown to be an inverse function of the mean trade winds, in agreement with the observed shift of 1976 and with theoretical studies. El Niño amplitude is further shown to be an inverse function of the relative strength of the seasonal cycle. When most of the energy is within the seasonal cycle, little is left for inter-annual signals, and vice versa. An interannual coupling strength (ICS) is defined and its relation with the modelled El Niño frequency is compared to that predicted by theoretical models. An assessment of the modelled El Niño in terms of SST mode (S-mode) or thermocline mode (T-mode) shows that most models are locked into an S-mode and that only a few models exhibit a hybrid mode, as in observations. It is concluded that several basic El Niño-mean state-seasonal cycle relationships proposed by either theory or analysis of observations seem to be reproduced by CGCMs. This is especially true for the amplitude of El Niño and is less clear for its frequency. Most of these relationships, first established for the pre-industrial control simulations, hold for the double and quadruple CO2 stabilized scenarios. The models that exhibit the largest El Niño amplitude change in these greenhouse gas (GHG) increase scenarios are those that exhibit a mode change towards a T-mode (either from S-mode to hybrid or hybrid to T-mode). This follows the observed 1976 climate shift in the tropical Pacific, and supports the still-debated finding of studies that associated this shift with increased GHGs. In many respects, these models are also among those that best simulate the tropical Pacific climatology (ECHAM5/MPI-OM, GFDL-CM2.0, GFDL-CM2.1, MRI-CGM2.3.2, UKMO-HadCM3). Results from this large subset of models suggest the likelihood of increased El Niño amplitude in a warmer climate, though there is considerable spread of El Niño behaviour among the models, and the changes in the subsurface thermocline properties that may be important for El Niño change could not be assessed. There are no clear indications of an El Niño frequency change with increased GHGs.
Abstract:
Using mixed logit models to analyse choice data is common but requires ex ante specification of the functional forms of preference distributions. We make the case for greater use of bounded functional forms and propose the use of the Marginal Likelihood, calculated using Bayesian techniques, as a single measure of model performance across non-nested mixed logit specifications. Using this measure leads to very different rankings of model specifications compared to alternative rule-of-thumb measures. The approach is illustrated using data from a choice experiment regarding GM food types, which provides insights into the recent WTO dispute between the EU and the US, Canada and Argentina, and into whether labelling and trade regimes should be based on the production process or on product composition.
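To illustrate the contrast between bounded and unbounded mixing distributions, here is a minimal simulated-likelihood sketch for a binary mixed logit. All data and names are hypothetical, and the Bayesian Marginal Likelihood calculation used in the paper is not reproduced; only the role of the assumed mixing distribution is shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_loglik(X, y, draw_coef, n_draws=500):
    """Simulated log-likelihood of a binary mixed logit.

    X: (N, K) attribute differences between the alternative and a reference.
    y: (N,) 1 if the alternative was chosen, else 0.
    draw_coef: function returning an (n_draws, K) matrix of coefficient draws
               from the assumed mixing distribution."""
    betas = draw_coef(n_draws)                      # (R, K) taste draws
    util = X @ betas.T                              # (N, R) utilities per draw
    p = 1.0 / (1.0 + np.exp(-util))                 # choice probability per draw
    p_choice = np.where(y[:, None] == 1, p, 1 - p)  # probability of the observed choice
    return np.log(p_choice.mean(axis=1)).sum()      # average over draws, sum over observations

# Hypothetical data: one price-like attribute with strictly non-positive true tastes
N = 400
X = rng.normal(size=(N, 1))
beta_true = rng.uniform(-2.0, 0.0, size=N)
y = (rng.random(N) < 1 / (1 + np.exp(-beta_true * X[:, 0]))).astype(int)

unbounded = lambda R: rng.normal(-1.0, 0.6, size=(R, 1))   # normal mixing (unbounded support)
bounded   = lambda R: rng.uniform(-2.0, 0.0, size=(R, 1))  # uniform mixing (bounded support)

print("normal mixing :", simulated_loglik(X, y, unbounded))
print("uniform mixing:", simulated_loglik(X, y, bounded))
```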
Abstract:
This article examines shock persistence in agricultural and industrial output in India. Drawing on the dual economy literature, the linkages between the sectors through the terms of trade are emphasised. However, different dual economy models make differing assumptions about whether particular variables are endogenous or exogenous, and this distinction is crucial in explaining the pattern of shock persistence. Using annual data for 1955-95, our results show that shocks to both output series are permanent, while those to the terms of trade are transient.
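The permanent-versus-transient distinction is typically assessed with unit-root tests. Below is a minimal sketch using the augmented Dickey-Fuller test from statsmodels; the abstract does not state the exact procedure used, and the series here are synthetic placeholders rather than the 1955-95 Indian data.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)

# Placeholder annual series: a random walk (shocks persist) and a stationary AR(1)
# (shocks die out), standing in for output and the terms of trade respectively.
output = np.cumsum(rng.standard_normal(41))
terms_of_trade = np.zeros(41)
for t in range(1, 41):
    terms_of_trade[t] = 0.5 * terms_of_trade[t - 1] + rng.standard_normal()

for name, series in [("output", output), ("terms of trade", terms_of_trade)]:
    stat, pvalue, *_ = adfuller(series, autolag="AIC")
    verdict = "transient (unit root rejected)" if pvalue < 0.05 else "permanent (unit root not rejected)"
    print(f"{name}: ADF stat={stat:.2f}, p={pvalue:.3f} -> shocks look {verdict}")
```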
Abstract:
Question: What are the key physiological and life-history trade-offs responsible for the evolution of different suites of plant traits (strategies) in different environments?
Experimental methods: Common-garden experiments were performed on physiologically realistic model plants, evolved in contrasting environments, in computer simulations. This allowed the identification of the trade-offs that resulted in different suites of traits (strategies). The environments considered were: resource rich, low disturbance (competitive); resource poor, low disturbance (stressed); resource rich, high disturbance (disturbed); and stressed environments containing herbivores (grazed).
Results: In disturbed environments, plants increased reproduction at the expense of ability to compete for light and nitrogen. In competitive environments, plants traded off reproductive output and leaf production for vertical growth. In stressed environments, plants traded off vertical growth and reproductive output for nitrogen acquisition, contradicting Grime's (2001) theory that slow-growing, competitively inferior strategies are selected in stressed environments. The contradiction is partly resolved by incorporating herbivores into the stressed environment, which selects for increased investment in defence, at the expense of competitive ability and reproduction.
Conclusion: Our explicit modelling of trade-offs produces rigorous testable explanations of observed associations between suites of traits and environments.
Abstract:
Design management research usually deals with the processes within the professional design team and yet, in the UK, the volume of the total project information produced by the specialist trade contractors equals or exceeds that produced by the design team. There is a need to understand the scale of this production task and to plan and manage it accordingly. The model of the process on which the plan is to be based, while generic, must be sufficiently robust to cover the majority of instances. An approach using design elements, in sufficient depth that tools for a predictive model of the process could be developed, is described. The starting point is that each construction element and its components have a generic sequence of design activities. Specific requirements tailor the element's application to the building. There are then constraints produced by interaction with other elements. Therefore, the selection of a component within one element may impose a set of constraints that will affect the choice of other design elements. Thus, a design decision can be seen as an interrelated element-constraint-element (ECE) sub-net. To illustrate this approach, an example of the process within precast concrete cladding has been used.
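The element-constraint-element idea can be pictured as a small constraint graph. The sketch below uses plain dictionaries; the element and constraint names are invented for illustration and are not taken from the precast concrete cladding example.

```python
# An ECE sub-net as an adjacency structure: choosing a component for one
# design element imposes constraints that affect other elements.
ece_subnet = {
    ("cladding panel", "structural frame"): ["fixing positions", "tolerance on frame line"],
    ("cladding panel", "window assembly"): ["opening size", "joint and seal detail"],
    ("structural frame", "foundation"): ["load path", "setting-out grid"],
}

def constraints_on(element, subnet):
    """List every constraint in which the given design element participates."""
    found = []
    for (a, b), constraints in subnet.items():
        if element in (a, b):
            found.extend(constraints)
    return found

print(constraints_on("cladding panel", ece_subnet))
```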
Abstract:
There is growing concern with reducing greenhouse gas emissions all over the world. The U.K. has recently set targets, in its Post Copenhagen Report on Climate Change, of a 34% reduction in emissions by 2020 and an 80% reduction by 2050 relative to 1990 levels. In practice, Life Cycle Cost (LCC) and Life Cycle Assessment (LCA) tools have been introduced to the construction industry in order to achieve this. However, there is a clear disconnection between costs and environmental impacts over the life cycle of a built asset when these two tools are used. In addition, changes in Information and Communication Technologies (ICTs) have changed the way information is represented; in particular, information is fed more easily and distributed more quickly to different stakeholders through tools such as Building Information Modelling (BIM), yet little consideration has been given to incorporating LCC and LCA and maximising their use within the BIM environment. The aim of this paper is to propose the development of a model-based LCC and LCA tool to support sustainable building design decisions for clients, architects and quantity surveyors, so that an optimal investment decision can be made by studying the trade-off between costs and environmental impacts. Finally, an application framework is proposed as future work, showing how the proposed model can be incorporated into the BIM environment in practice.
Abstract:
Background: Efficient gene expression involves a trade-off between (i) premature termination of protein synthesis; and (ii) readthrough, where the ribosome fails to dissociate at the terminal stop. Sense codons that are similar in sequence to stop codons are more susceptible to nonsense mutation, and are also likely to be more susceptible to transcriptional or translational errors causing premature termination. We therefore expect this trade-off to be influenced by the number of stop codons in the genetic code. Although genetic codes are highly constrained, stop codon number appears to be their most volatile feature.
Results: In the human genome, codons readily mutable to stops are underrepresented in coding sequences. We construct a simple mathematical model based on the relative likelihoods of premature termination and readthrough. When readthrough occurs, the resultant protein has a tail of amino acid residues incorrectly added to the C-terminus. Our results depend strongly on the number of stop codons in the genetic code. When the code has more stop codons, premature termination is relatively more likely, particularly for longer genes. When the code has fewer stop codons, the length of the tail added by readthrough will, on average, be longer, and thus more deleterious. Comparative analysis of taxa with a range of stop codon numbers suggests that genomes whose code includes more stop codons have shorter coding sequences.
Conclusions: We suggest that the differing trade-offs presented by alternative genetic codes may result in differences in genome structure. More speculatively, multiple stop codons may mitigate readthrough, counteracting the disadvantage of a higher rate of nonsense mutation. This could help explain the puzzling overrepresentation of stop codons in the canonical genetic code and most variants.
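A toy numerical illustration, not the authors' model, of how the premature-termination/readthrough trade-off can depend on stop codon number: with more stop-like codons, a random error is more likely to create a premature stop in a long gene, while the expected readthrough tail, modelled here as a geometric wait for the next in-frame stop, gets shorter. All rates below are invented for illustration.

```python
# Toy trade-off: probability of premature termination vs expected readthrough tail
# as a function of the number of stop codons. Illustrative assumptions only:
# - per-codon chance of an error creating a premature stop scales with n_stop / 64
# - tail length after readthrough ~ geometric with per-codon stop probability n_stop / 64
def expected_costs(n_stop, gene_length_codons, error_rate=1e-4):
    p_stop_like = n_stop / 64.0
    p_premature = 1.0 - (1.0 - error_rate * p_stop_like) ** gene_length_codons
    expected_tail = 1.0 / p_stop_like          # mean codons appended after readthrough
    return p_premature, expected_tail

for n_stop in (1, 2, 3, 4):
    p_pre, tail = expected_costs(n_stop, gene_length_codons=500)
    print(f"{n_stop} stop codon(s): P(premature stop) ~ {p_pre:.4f}, "
          f"mean readthrough tail ~ {tail:.0f} codons")
```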
Abstract:
Government and institutionally-driven ‘good practice transfer’ initiatives are consistently presented as a means to enhance construction firm and industry performance. Two implicit tenets of these initiatives appear to be: knowledge embedded in good practice will transfer automatically; and the potential of implementing good practice will be capitalised upon regardless of the context where it is to be used. The validity of these tenets is increasingly being questioned and, concurrently, more nuanced knowledge-production understandings are being developed which recognise and incorporate context-specificity. This research contributes to this growing, more critical agenda by examining the actual benefits accrued from good practice transfer from the perspective of a small specialist trade contracting firm. A concept model for successful good practice transfer is developed from a single longitudinal case study within a small heating and plumbing firm. The concept model consists of five key variables: environment, strategy, people, technology, and organisation of work. The key findings challenge the implicit assumptions prevailing in the existing literature and support a contingency approach which argues that successful good practice transfer is not just a matter of adopting a practice and mechanistically inserting it into the firm, but requires addressing ‘behavioural’ aspects. For successful good practice transfer, small specialist trade contracting firms need to develop and operationalise organisational slack and mechanisms for scanning external stimuli and absorbing knowledge. They also need to formulate and communicate client-driven external strategies; to motivate and educate people at all levels; to possess internal or accessible complementary skills and knowledge; to have ‘soft focus’ immediate/mid-term benefits at a project level; and to embed good practice in current work practices.
Abstract:
We present a model of market participation in which the presence of non-negligible fixed costs leads to random censoring of the traditional double-hurdle model. Fixed costs arise when household resources must be devoted a priori to the decision to participate in the market. These costs, usually of time, are manifested in non-negligible minimum-efficient supplies and in a supply correspondence that requires modification of the traditional Tobit regression. The costs also complicate econometric estimation of household behavior. These complications are overcome by application of the Gibbs sampler. The algorithm thus derived provides robust estimates of the fixed-costs double-hurdle model. The model and procedures are demonstrated in an application to milk-market participation in the Ethiopian highlands.
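The paper's Gibbs sampler handles the full fixed-costs double-hurdle model; as a simpler building block, a textbook data-augmentation Gibbs sampler for a standard Tobit regression censored at zero looks like the sketch below. This is a known technique under weak priors, not the paper's algorithm, and the data are hypothetical.

```python
import numpy as np
from scipy.stats import truncnorm, invgamma

def tobit_gibbs(y, X, n_iter=2000, seed=3):
    """Data-augmentation Gibbs sampler for a Tobit model censored at zero.

    Latent y* is drawn from a truncated normal for censored observations, then
    (beta, sigma^2) are drawn from standard conjugate posteriors under weak priors.
    A textbook sketch, not the fixed-costs double-hurdle algorithm of the paper."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    censored = y <= 0
    beta = np.zeros(k)
    sigma2 = 1.0
    ystar = y.astype(float).copy()
    XtX_inv = np.linalg.inv(X.T @ X)
    draws = []
    for _ in range(n_iter):
        # 1. Impute latent outcomes for censored observations: y* | . ~ N(Xb, s2), truncated above at 0
        mu_c = X[censored] @ beta
        sd = np.sqrt(sigma2)
        upper = (0.0 - mu_c) / sd
        ystar[censored] = truncnorm.rvs(-np.inf, upper, loc=mu_c, scale=sd, random_state=rng)
        # 2. beta | . ~ N((X'X)^-1 X'y*, s2 (X'X)^-1)
        beta_hat = XtX_inv @ X.T @ ystar
        beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
        # 3. sigma^2 | . ~ Inv-Gamma(n/2, RSS/2)
        resid = ystar - X @ beta
        sigma2 = invgamma.rvs(a=n / 2.0, scale=resid @ resid / 2.0, random_state=rng)
        draws.append(np.r_[beta, sigma2])
    return np.array(draws)

# Hypothetical censored data standing in for market-participation outcomes
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = np.maximum(X @ np.array([-0.5, 1.0]) + rng.normal(size=300), 0.0)
print(tobit_gibbs(y, X, n_iter=500)[250:].mean(axis=0))  # posterior means after burn-in
```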
Abstract:
We use Hasbrouck's (1991) vector autoregressive model for prices and trades to empirically test and assess the role played by the waiting time between consecutive transactions in the process of price formation. We find that as the time duration between transactions decreases, the price impact of trades, the speed of price adjustment to trade‐related information, and the positive autocorrelation of signed trades all increase. This suggests that times when markets are most active are times when there is an increased presence of informed traders; we interpret such markets as having reduced liquidity.
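Hasbrouck's full specification includes contemporaneous trade effects and, in this paper, interactions with transaction durations; as a minimal sketch of the general idea, a plain bivariate VAR of quote-price changes and signed trades can be fitted with statsmodels. The variable names and simulated data below are hypothetical.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(5)

# Hypothetical transaction-level data: signed trades x_t (+1 buy, -1 sell) and
# quote-midpoint revisions r_t that respond to trades with a lag.
n = 2000
x = np.sign(rng.standard_normal(n))
r = np.zeros(n)
for t in range(1, n):
    r[t] = 0.3 * x[t - 1] + 0.1 * r[t - 1] + 0.05 * rng.standard_normal()

data = pd.DataFrame({"return": r, "signed_trade": x})
res = VAR(data).fit(maxlags=5, ic="aic")   # lag order chosen by AIC
print(res.summary())

# Impulse responses trace the price impact of a trade innovation over time
irf = res.irf(10)
print(irf.orth_irfs[:, 0, 1])              # response of returns to a signed-trade shock
```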
Abstract:
For an increasing number of applications, mesoscale modelling systems now aim to better represent urban areas. The complexity of processes resolved by urban parametrization schemes varies with the application. The concept of fitness-for-purpose is therefore critical for both the choice of parametrizations and the way in which the scheme should be evaluated. A systematic and objective model response analysis procedure (Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm) is used to assess the fitness of the single-layer urban canopy parametrization implemented in the Weather Research and Forecasting (WRF) model. The scheme is evaluated regarding its ability to simulate observed surface energy fluxes and the sensitivity to input parameters. Recent amendments are described, focussing on features which improve its applicability to numerical weather prediction, such as a reduced and physically more meaningful list of input parameters. The study shows a high sensitivity of the scheme to parameters characterizing roof properties in contrast to a low response to road-related ones. Problems in partitioning of energy between turbulent sensible and latent heat fluxes are also emphasized. Some initial guidelines to prioritize efforts to obtain urban land-cover class characteristics in WRF are provided.
Abstract:
An extensive off-line evaluation of the Noah/Single Layer Urban Canopy Model (Noah/SLUCM) urban land-surface model is presented using data from 15 sites to assess (1) the ability of the scheme to reproduce the surface energy balance observed in a range of urban environments, including seasonal changes, and (2) the impact of increasing complexity of input parameter information. Model performance is found to be most dependent on representation of vegetated surface area cover; refinement of other parameter values leads to smaller improvements. Model biases in net all-wave radiation and trade-offs between turbulent heat fluxes are highlighted using an optimization algorithm. Here we use the Urban Zones to characterize Energy partitioning (UZE) as the basis to assign default SLUCM parameter values. A methodology (FRAISE) to assign sites (or areas) to one of these categories based on surface characteristics is evaluated. Using three urban sites from the Basel Urban Boundary Layer Experiment (BUBBLE) dataset, an independent evaluation of the model performance with the parameter values representative of each class is performed. The scheme copes well with both seasonal changes in the surface characteristics and intra-urban heterogeneities in energy flux partitioning, with RMSE performance comparable to similar state-of-the-art models for all fluxes, sites and seasons. The potential of the methodology for high-resolution atmospheric modelling application using the Weather Research and Forecasting (WRF) model is highlighted. This analysis supports the recommendations that (1) three classes are appropriate to characterize the urban environment, and (2) that the parameter values identified should be adopted as default values in WRF.