16 results for Applied general equilibrium
in Aston University Research Archive
Abstract:
I model the forward premium in the U.K. gilt-edged market over the period 1982–96 using a two-factor general equilibrium model of the term structure of interest rates. The model permits the decomposition of the forward premium into separate components representing interest rate expectations, the risk premia associated with each of the underlying factors, and terms capturing the direct impact of the variances of the factors on the shape of the forward curve.
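As a point of reference for the decomposition, the textbook two-factor Gaussian (Vasicek-type) term structure model gives it in closed form. The parameterization below (mean-reversion rates κ_i, volatilities σ_i, and market prices of risk λ_i for independent factors with short rate r_t = x_{1,t} + x_{2,t}) is an illustrative assumption, not necessarily the paper's model, and the sign of the premium terms depends on the convention adopted for λ_i:

```latex
f(t,T) = \underbrace{\mathbb{E}_t[r_T]}_{\text{expectations}}
       + \sum_{i=1}^{2}\underbrace{\frac{\lambda_i\sigma_i}{\kappa_i}\bigl(1-e^{-\kappa_i\tau}\bigr)}_{\text{factor-}i\text{ risk premium}}
       - \sum_{i=1}^{2}\underbrace{\frac{\sigma_i^{2}}{2\kappa_i^{2}}\bigl(1-e^{-\kappa_i\tau}\bigr)^{2}}_{\text{variance (convexity) terms}},
\qquad \tau = T - t.
```

The forward premium is then f(t,T) − E_t[r_T], i.e. the sum of the risk-premium and variance terms.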
Abstract:
This paper explores optimal biofuel subsidies in a general equilibrium trade model. The focus is on the production of biofuels such as corn-based ethanol, which diverts corn from use as food. In the small-country case, when the tax on crude is not available as a policy option, a second-best biofuel subsidy may or may not be positive. In the large-country case, the twin objectives of pollution reduction and terms-of-trade improvement justify a combination of crude tax and biofuel subsidy for the food exporter. Finally, we show that when both nations engage in biofuel policies, the terms-of-trade effects encourage the Nash equilibrium subsidy to be positive (negative) for the food exporting (importing) nation. © 2013 John Wiley & Sons Ltd.
Abstract:
We use two general equilibrium models to explain why changes in the external economic environment result in pro-cyclical aggregate dividend payout behavior. Both models that we consider endogenize low elasticity of investment. The first model incorporates capital adjustment costs, while the second one assumes that risk-averse managers maximize their own objective function rather than shareholder wealth. We show that, while both models generate pro-cyclical aggregate dividends, a feature consistent with the observed business-cycle pattern of payouts from well-diversified portfolios, the second model provides a more likely explanation for this effect. Our findings emphasize the importance of incorporating agency conflicts when considering the relationship between the external economic environment and the financial behavior of businesses.
Abstract:
A periodic density functional theory method using the B3LYP hybrid exchange-correlation potential is applied to the Prussian blue analogue RbMn[Fe(CN)6] to evaluate the suitability of the method for studying, and predicting, the photomagnetic behavior of Prussian blue analogues and related materials. The method allows correct description of the equilibrium structures of the different electronic configurations with regard to the cell parameters and bond distances. In agreement with the experimental data, the calculations show that the low-temperature phase (LT; Fe²⁺ (t₂g⁶, S = 0)–CN–Mn³⁺ (t₂g³eg¹, S = 2)) is the stable phase at low temperature, rather than the high-temperature phase (HT; Fe³⁺ (t₂g⁵, S = 1/2)–CN–Mn²⁺ (t₂g³eg², S = 5/2)). Additionally, the method gives an estimate of 143 J mol⁻¹ K⁻¹ for the entropy difference between the HT and LT phases. Our calculations are compared with experimental data from the literature and from our calorimetric and X-ray photoelectron spectroscopy measurements on the Rb0.97Mn[Fe(CN)6]0.98·1.03H2O compound, and in general satisfactory agreement is obtained. The method also predicts the metastable nature of the electronic configuration of the high-temperature phase, a necessary condition for photoinducing that phase at low temperatures. It gives a photoactivation energy of 2.36 eV, which is in agreement with the photoinduced demagnetization produced by a green laser.
Abstract:
A total pressure apparatus has been developed to measure vapour-liquid equilibrium data on binary mixtures at atmospheric and sub-atmospheric pressures. The method rapidly yields isothermal data. Only the total pressure is measured, as a direct function of the synthetically prepared liquid-phase composition; the vapour-phase composition is deduced through the Gibbs-Duhem relationship. The need to analyse either phase is thus eliminated, removing the errors introduced by sampling and analysis. The essential requirements are that the pure components be degassed completely, since any deficiency in degassing introduces errors into the measured pressures, and that the central apparatus be absolutely leak-tight, since any leakage of air into or out of the apparatus produces erroneous pressure readings. The apparatus was commissioned by measuring the saturated vapour pressures of degassed water and ethanol as functions of temperature. The pressure-temperature data measured for degassed water were compared directly with data in the literature, with good agreement; pressure-temperature data were similarly measured for ethanol, methanol and cyclohexane and, where possible, compared directly with literature data. The good agreement between the pure-component data of this work and those in the literature demonstrates, first, that a satisfactory degassing procedure was achieved and, second, that the pressure-temperature measurements are consistent for any one component; since this holds for several components, the temperature and pressure measurements are self-consistent and of sufficient accuracy, with the precision of the separate means of measuring pressure and temperature proving compatible. The liquid mixtures studied were ethanol-water, methanol-water and ethanol-cyclohexane. The total pressure was measured as the composition inside the equilibrium cell was varied at a set temperature, giving P-T-x data sets for each mixture at a range of temperatures. A standard fitting package from the literature was used to reduce the raw data and yield y-values, completing the x-y-P-T data sets. A consistency test could not be applied to the P-T-x data sets, as no y-values were obtained during the experimental measurements. In general, satisfactory agreement was found between the data of this work and those available in the literature. For some runs discrepancies were observed, and further work is recommended to eliminate the problems identified.
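To make the data-reduction step concrete, here is a minimal Python sketch of a Barker-type reduction under simplifying assumptions: an ideal vapour phase, a two-parameter Margules activity model standing in for the thesis's fitting package, and invented illustrative numbers rather than measured data.

```python
# Barker-type P-T-x reduction sketch: fit Margules parameters to total
# pressure, then back out vapour compositions y (no phase sampling needed).
import numpy as np
from scipy.optimize import least_squares

def margules_gamma(x1, A12, A21):
    """Activity coefficients from the two-parameter Margules equation."""
    x2 = 1.0 - x1
    ln_g1 = x2**2 * (A12 + 2.0 * (A21 - A12) * x1)
    ln_g2 = x1**2 * (A21 + 2.0 * (A12 - A21) * x2)
    return np.exp(ln_g1), np.exp(ln_g2)

def total_pressure(x1, A12, A21, p1_sat, p2_sat):
    """Modified Raoult's law; ideal vapour phase assumed for simplicity."""
    g1, g2 = margules_gamma(x1, A12, A21)
    return x1 * g1 * p1_sat + (1.0 - x1) * g2 * p2_sat

# Illustrative isothermal P-x data (kPa); real values would come from the cell.
x1 = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
P_meas = np.array([38.2, 52.1, 59.4, 62.0, 61.1])
p1_sat, p2_sat = 59.0, 29.8   # pure-component vapour pressures at T

# Fit the model parameters to the measured total pressures.
res = least_squares(
    lambda a: total_pressure(x1, a[0], a[1], p1_sat, p2_sat) - P_meas,
    x0=[0.5, 0.5],
)
A12, A21 = res.x

# Complete the x-y-P-T sets: vapour mole fractions follow from the fit.
g1, _ = margules_gamma(x1, A12, A21)
P_calc = total_pressure(x1, A12, A21, p1_sat, p2_sat)
y1 = x1 * g1 * p1_sat / P_calc
print(A12, A21, y1)
```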
Abstract:
Principal components analysis (PCA) has been described for over 50 years; however, it is rarely applied to the analysis of epidemiological data. In this study PCA was critically appraised in its ability to reveal relationships between pulsed-field gel electrophoresis (PFGE) profiles of methicillin-resistant Staphylococcus aureus (MRSA), in comparison to the more commonly employed cluster analysis and representation by dendrograms. The PFGE type following SmaI chromosomal digest was determined for 44 multidrug-resistant hospital-acquired MRSA (MR-HA-MRSA) isolates, two multidrug-resistant community-acquired MRSA (MR-CA-MRSA) isolates, 50 hospital-acquired MRSA (HA-MRSA) isolates (from University Hospital Birmingham NHS Trust, UK) and 34 community-acquired MRSA (CA-MRSA) isolates (from general practitioners in Birmingham, UK). Strain relatedness was determined using Dice band-matching with UPGMA clustering and by PCA. The results indicated that PCA revealed relationships between MRSA strains that correlated more strongly with known epidemiology, most likely because, unlike cluster analysis, PCA is not constrained to generate a hierarchic classification. In addition, PCA provides the opportunity for further analysis to identify key polymorphic bands within complex genotypic profiles, which is not always possible with dendrograms. Here we provide a detailed description of a PCA method for the analysis of PFGE profiles to further complement the epidemiological study of infectious disease. © 2005 Elsevier B.V. All rights reserved.
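A minimal sketch of the two analyses being compared, assuming the PFGE profiles are coded as a binary band-presence matrix (isolates × band positions); the data below are random placeholders, not the study's profiles.

```python
# PCA vs. Dice/UPGMA clustering of binary band-presence profiles.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
profiles = rng.integers(0, 2, size=(44, 30)).astype(bool)  # 44 isolates, 30 bands

# Non-hierarchical view: project isolates onto the first two components.
pca = PCA(n_components=2).fit(profiles)
scores = pca.transform(profiles)

# Loadings with large magnitude flag the key polymorphic bands.
key_bands = np.argsort(np.abs(pca.components_[0]))[::-1][:5]

# Conventional alternative: Dice distance with UPGMA ('average') linkage;
# the linkage matrix is what a dendrogram plot would be drawn from.
tree = linkage(pdist(profiles, metric="dice"), method="average")
print(scores[:3], key_bands, tree.shape)
```

The contrast in the abstract corresponds to reading strain relatedness off the unconstrained 2-D score plot versus forcing it into the hierarchy encoded by the linkage matrix.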
Abstract:
This paper examined the joint predictive effects of trait emotional intelligence (trait-EI), Extraversion, Conscientiousness, and Neuroticism on 2 facets of general well-being and on job satisfaction. An employed community sample of 123 individuals from the Indian subcontinent participated in the study, completing measures of the five-factor model of personality, trait-EI, job satisfaction, and the general well-being facets worn-out and up-tight. Trait-EI was related to, but distinct from, the 3 personality variables. Trait-EI demonstrated the strongest correlation with job satisfaction, but predicted general well-being no better than Neuroticism. In regression analyses, trait-EI predicted between 6% and 9% additional variance in the well-being criteria beyond the 3 personality traits. It was concluded that trait-EI may be useful in examining dispositional influences on psychological well-being.
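The "additional variance" claim corresponds to a standard hierarchical-regression computation: R² with the personality traits alone versus R² with trait-EI added. The sketch below uses simulated placeholder data, not the study's sample.

```python
# Incremental variance (Delta R^2) from adding trait-EI to the trait block.
import numpy as np

rng = np.random.default_rng(2)
n = 123
traits = rng.standard_normal((n, 3))   # Extraversion, Conscientiousness, Neuroticism
trait_ei = 0.4 * traits[:, 2] + rng.standard_normal(n)   # correlated with traits
wellbeing = traits @ np.array([0.1, 0.1, -0.4]) + 0.3 * trait_ei \
    + rng.standard_normal(n)

def r_squared(X, y):
    """R^2 of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_step1 = r_squared(traits, wellbeing)
r2_step2 = r_squared(np.column_stack([traits, trait_ei]), wellbeing)
print(f"Delta R^2 for trait-EI: {r2_step2 - r2_step1:.3f}")
```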
Abstract:
A new general linear model (GLM) beamformer method is described for processing magnetoencephalography (MEG) data. A standard nonlinear beamformer is used to determine the time course of neuronal activation for each point in a predefined source space. A Hilbert transform gives the envelope of oscillatory activity at each location in any chosen frequency band (not necessary in the case of sustained (DC) fields), enabling the general linear model to be applied and a volumetric T statistic image to be determined. The new method is illustrated by a two-source simulation (sustained field and 20 Hz) and is shown to provide accurate localization. The method is also shown to localize accurately increasing and decreasing gamma activity to the temporal and frontal lobes, respectively, in the case of a scintillating scotoma. The new method brings the advantages of the general linear model to the analysis of MEG data and should prove useful for the localization of changing patterns of activity across all frequency ranges including DC (sustained fields). © 2004 Elsevier Inc. All rights reserved.
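A minimal sketch of the envelope-GLM step on simulated source time courses (the beamformer stage itself is omitted); the boxcar design and all numbers here are illustrative assumptions, not the paper's pipeline.

```python
# Hilbert envelope + per-location GLM T statistic, as described above.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
n_src, n_t = 100, 1000                     # source-space points, time samples
data = rng.standard_normal((n_src, n_t))   # beamformer outputs would go here

# Envelope of oscillatory activity at each location (skipped for DC fields).
envelope = np.abs(hilbert(data, axis=1))

# Simple on/off design: task active in the second half of the recording.
design = np.column_stack([np.ones(n_t), np.arange(n_t) >= n_t // 2])

# Ordinary least squares GLM fitted independently at every location.
beta, _, _, _ = np.linalg.lstsq(design, envelope.T, rcond=None)
resid = envelope.T - design @ beta
dof = n_t - design.shape[1]
sigma2 = (resid**2).sum(axis=0) / dof

# T statistic for the task regressor (contrast c = [0, 1]) -> volumetric map.
c = np.array([0.0, 1.0])
var_c = c @ np.linalg.inv(design.T @ design) @ c
t_map = (c @ beta) / np.sqrt(sigma2 * var_c)
print(t_map.shape)   # one T value per source-space point
```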
Abstract:
Inference and optimization of real-valued edge variables in sparse graphs are studied using the Bethe approximation and replica method of statistical physics. Equilibrium states of general energy functions involving a large set of real edge variables that interact at the network nodes are obtained in various cases. When applied to the representative problem of network resource allocation, efficient distributed algorithms are also devised. Scaling properties with respect to the network connectivity and the resource availability are found, and links to probabilistic Bayesian approximation methods are established. Different cost measures are considered and algorithmic solutions in the various cases are devised and examined numerically. Simulation results are in full agreement with the theory. © 2007 The American Physical Society.
Abstract:
A study of vapour-liquid equilibria is presented together with current developments. The theory of vapour-liquid equilibria is discussed, and both experimental and predictive methods for obtaining vapour-liquid equilibrium data are critically reviewed. The development of a new family of equilibrium stills to measure experimental VLE data from sub-atmospheric pressure to 35 bar is described. Existing experimental techniques are reviewed to highlight the need for these new apparatuses and their major attributes. Details are provided of how the apparatus may be further improved and how computer control may be implemented. To provide a rigorous test of the apparatus, the stills were commissioned using an acetic acid-water mixture at one atmosphere pressure. A Barker-type consistency-test computer program, which allows for association in both phases, was applied to the data generated and clearly shows that the stills produce data of very high quality. Two high-quality data sets for the acetone-chloroform mixture were generated at one atmosphere and at 64.3 °C. These data are used to investigate the ability of a novel technique, based on molecular parameters, to predict VLE data for highly polar mixtures. Eight vapour-liquid equilibrium data sets were produced for the cyclohexane-ethanol mixture: at one atmosphere, at 2, 4, 6, 8 and 11 bar, and at 90.9 °C and 132.8 °C. These data sets were tested for thermodynamic consistency using a Barker-type fitting package and shown to be of high quality. The data were used to investigate the temperature dependence of UNIQUAC parameters, and in addition to compare directly the performance of the predictive methods: original UNIFAC, a modified version of UNIFAC, and the novel technique based on molecular parameters developed from generalised London potential (GLP) theory.
Abstract:
The further development of the use of NMR relaxation times in chemical, biological and medical research has perhaps been curtailed by the length of time these measurements often take. The DESPOT (Driven Equilibrium Single Pulse Observation of T1) method has been developed, which reduces the time required to make a T1 measurement by a factor of up to 100. The technique has been studied extensively herein, and the thesis contains recommendations for its successful experimental application. Modified DESPOT-type equations are also presented for use when T2 relaxation is incomplete or where off-resonance effects are thought to be significant. A recently reported application of the DESPOT technique to MR imaging gave good initial results but suffered from the fact that the images were derived from spin systems that were not driven to equilibrium. An approach that allows equilibrium to be attained with only one non-acquisition sequence is presented herein and should prove invaluable in variable-contrast imaging. A DESPOT-type approach has also been successfully applied to the measurement of T1ρ: T1ρ values can be measured significantly faster with this approach than with the classical method. The new method also provides a value for T1 simultaneously, so the technique should prove valuable in intermediate-energy-barrier chemical exchange studies; it also raises the possibility of obtaining simultaneous T1 and T1ρ MR images. The DESPOT technique depends on rapid multipulsing at nutation angles normally less than 90°. Work in this area has highlighted the possible time saving for spectral acquisition over the classical (90°-5T1)n technique. A new method based on these principles has been developed which permits the rapid multipulsing of samples to give T1 and M0 ratio information in only slightly longer than would be required to determine the M0 ratio alone using the classical technique. In ¹H-decoupled ¹³C spectroscopy the method also gives nOe ratio information for the individual absorptions in the spectrum.
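For orientation, the standard DESPOT working equations for a steady-state sequence (repetition time T_R and nutation angle α; notation assumed here, as usually presented in the literature) show why only a handful of fast acquisitions suffice:

```latex
S(\alpha) = \frac{M_0\,(1 - E_1)\sin\alpha}{1 - E_1\cos\alpha},
\qquad E_1 = e^{-T_R/T_1},
\qquad
\frac{S(\alpha)}{\sin\alpha} = E_1\,\frac{S(\alpha)}{\tan\alpha} + M_0\,(1 - E_1).
```

Plotting S/sin α against S/tan α for a few nutation angles gives a straight line whose slope E_1 yields T_1 = −T_R/ln E_1 and whose intercept yields M_0, which is what makes the large time saving over inversion-recovery possible.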
Abstract:
This investigation examined the process of the longitudinal rolling of tubes through a set of three driven grooved rolls. Tubes were rolled with or without internal support, i.e. under mandrel-rolling or sinking conditions. Knowledge was required of the way in which the roll separating force and rolling torque vary for different rolling conditions, the objective of this work being to obtain a better understanding and optimization of the mechanics of the process. The design and instrumentation of a complete experimental three-roll mill for the rolling of lead tube, as an analogue material for hot steel, with measurement of the individual roll force and torque, is described. A novel type of roll load cell was incorporated, and its design and testing are discussed. Employing three roll sizes of 170 mm, 255 mm and 340 mm shroud diameter, precise tube specimens of various diameter-to-thickness ratios were rolled under sinking and mandrel-rolling conditions. To obtain an indication of the tube-roll contact areas, some of the specimens were partially rolled; for comparative purposes the remaining tubes were completely rolled in a single pass. The roll forces, torques and tube parameters, e.g. reduction of area and D/t ratio, were collated and compared for each of the three roll diameters considered, and the influence of friction, particularly in the mandrel-rolling process, is commented upon. Theoretical studies utilising the equilibrium and energy methods were applied to both the sinking and mandrel-rolling processes. In general, the energy approach gave better agreement with experiment, especially for mandrel rolling. The influence of the tube deformation zones on the two processes, and on the consequent modification of the tube-roll arc of contact length, was observed. A rudimentary attempt was made in the theoretical sinking analysis to allow for the deformation zone prior to roll contact, with some success. A general survey of the available tube-rolling literature for both the sinking and mandrel processes has been carried out.
Abstract:
The Alborz Mountain range separates the northern part of Iran from the southern part; it also isolates a narrow coastal strip to the south of the Caspian Sea from the Central Iran plateau. Until the 1950s, communication between south and north was via two roads and one rail link. In 1963 work was completed on a major access road via the Haraz Valley (the most physically hostile area in the region). From the beginning the road was plagued by accidents resulting from unstable slopes on either side of the valley. Heavy casualties persuaded the government to undertake major engineering works to eliminate "black spots" and make the road safe. However, despite substantial and prolonged expenditure the problems were not solved, and casualties increased steadily with the growth of traffic using the road. Another road was built to bypass the Haraz road and opened to traffic in 1983, but closure of the Haraz road remained impossible because of the growth of settlements along the route and the need for access to other installations such as the Lar Dam. The aim of this research was to explore the possibility of applying Landsat MSS imagery to locating black spots and instability problems along the road; Landsat data had not previously been applied to highway engineering problems in the study area. Aerial photographs are in general better than satellite images for detailed mapping, but Landsat images are superior for reconnaissance and adequate for mapping at the 1:250,000 scale, and their broad overview and lack of distortion make them ideal for structural interpretation. The results of Landsat digital image analysis showed that certain rock types and structural features can be delineated and mapped. The most unstable areas, comprising steep slopes free of vegetation cover, can be identified using image-processing techniques, and structural lineaments revealed by the image analysis led to improved delineation of unstable features. Damavand Quaternary volcanics were found to be the dominant rock type along a 40 km stretch of the road; these rocks are inherently unstable and partly responsible for the difficulties along the road. For more detailed geological and morphological interpretation, a sample of small subscenes was selected and analysed. A specially developed image-analysis package was designed at Aston for use on a non-specialized computing system, and using this package a new method for image classification was developed, allowing accurate delineation of the critical features of the study area.
Abstract:
The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools for improving the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. All the data in conventional DEA with input and/or output ratios assume the form of crisp numbers. However, the observed values of data in real-world problems are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in the DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios; and (4) we present a case study involving 20 banks with three interval ratios, where the traditional indicators are mostly financial ratios, to demonstrate the applicability and efficacy of the proposed models. © 2011 Elsevier Inc.
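For context, the crisp baseline that the paper generalizes can be written as the input-oriented CCR envelopment programme. The sketch below solves it with scipy on invented data (not the 20-bank case, and without the interval-ratio extension the paper proposes).

```python
# Input-oriented CCR DEA: for DMU k, minimise theta subject to a convex
# combination of all DMUs using no more than theta * inputs of k while
# producing at least the outputs of k.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20, 30, 40, 20], [300, 200, 100, 200]], float)  # inputs x DMUs
Y = np.array([[40, 50, 30, 60]], float)                        # outputs x DMUs
m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(k):
    """Efficiency score of DMU k from the envelopment LP."""
    c = np.r_[1.0, np.zeros(n)]                  # decision vars: theta, lambdas
    A_in = np.hstack([-X[:, [k]], X])            # sum_j lam_j x_ij <= theta x_ik
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # sum_j lam_j y_rj >= y_rk
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

print([round(ccr_efficiency(k), 3) for k in range(n)])  # 1.0 = efficient
```

In the paper's setting each entry of X and Y would instead be an interval ratio, which is exactly what the two proposed models are built to handle.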
Abstract:
Our research examines a key aspect of the extensive bureaucratic reform program that was applied to the Indonesian public sector following the Asian economic crisis. The organisation we focus on is the Indonesian Directorate of Tax. The reforms moved the case organisation towards more bureaucratic organisational arrangements; the most notable elements of the reforms related to organisational efficiency and to changes in administrative style and culture. An ethnographic approach was adopted, in which the researcher was immersed in the life of the selected case organisation over an extended period of time. This research extends a thin literature on management control and culture in the Indonesian context and fills a gap in the theoretical approaches to studying bureaucracy, which are dominated by Western conceptualisations. The paper provides a reminder to policy makers (including organisations such as the World Bank and the International Monetary Fund) of the consequences of neglecting cultural influences when conducting bureaucratic reform.