857 results for Real and nominal effective exchange rates
Abstract:
Nitrogen is an essential nutrient. For humans, animals and plants it is a constituent element of proteins and nucleic acids. Although the majority of the Earth's atmosphere consists of elemental nitrogen (N2, 78 %), only a few microorganisms can use it directly. To be available to higher plants and animals, elemental nitrogen must be converted to a reactive, oxidized form. Within the nitrogen cycle this conversion is carried out by free-living microorganisms, by symbiotic Rhizobium bacteria, or by lightning. Since the beginning of the 20th century, humans have been able to synthesize reactive nitrogen through the Haber-Bosch process. As a result, food security for the world's population improved noticeably. On the other hand, the increased nitrogen input results in acidification and eutrophication of ecosystems and in loss of biodiversity. Negative effects on human health arose, such as fine particulate matter and summer smog. Furthermore, reactive nitrogen plays a decisive role in atmospheric chemistry and in the global cycles of pollutants and nutrients.

Nitrogen monoxide (NO) and nitrogen dioxide (NO2) belong to the reactive trace gases and are grouped under the generic term NOx. They are important components of atmospheric oxidative processes and influence the lifetime of various less reactive greenhouse gases. NO and NO2 are generated, among other pathways, in combustion processes by oxidation of atmospheric nitrogen, as well as by biological processes in soil. In the atmosphere, NO is converted very quickly into NO2. NO2 is then oxidized to nitrate (NO3-) and to nitric acid (HNO3), which binds to aerosol particles. The bound nitrate is finally removed from the atmosphere by dry and wet deposition. Catalytic reactions of NOx are an important part of atmospheric chemistry, forming or decomposing tropospheric ozone (O3). In the atmosphere NO, NO2 and O3 are in photostationary equilibrium, which is why they are referred to as the NO-NO2-O3 triad. In regions with elevated NO concentrations, reactions with air pollutants can form NO2, shifting the equilibrium of ozone formation.

The essential nutrient nitrogen is taken up by plants mainly as dissolved NO3- entering the roots. Atmospheric nitrogen is oxidized to NO3- in the soil by bacteria, via nitrogen fixation or ammonium formation and nitrification. Additionally, atmospheric NO2 is taken up directly through the stomata. Inside the apoplast, NO2 disproportionates to nitrate and nitrite (NO2-), which can enter plant metabolic processes. The enzymes nitrate and nitrite reductase convert nitrate and nitrite to ammonium (NH4+). NO2 gas exchange is controlled by pressure gradients inside the leaves, the stomatal aperture and leaf resistances. Stomatal regulation is affected by climatic factors such as light intensity, temperature and water vapor pressure deficit.

This thesis aims to contribute to the understanding of the role of vegetation in the atmospheric NO2 cycle and to discuss the NO2 compensation point concentration (mcomp,NO2). To this end, NO2 exchange between the atmosphere and spruce (Picea abies) was measured at the leaf level with a dynamic plant chamber system under laboratory and field conditions. Measurements took place during the EGER project (June-July 2008). Additionally, NO2 data collected on oak (Quercus robur) during the ECHO project (July 2003) were analyzed. The measuring system allowed the simultaneous determination of NO, NO2, O3, CO2 and H2O exchange rates. Calculations of NO, NO2 and O3 fluxes were based on the generally small differences (∆mi) measured between the inlet and outlet of the chamber. Consequently, high accuracy and specificity of the analyzer are necessary. To meet these requirements, a highly specific NO/NO2 analyzer was used and the whole measurement system was optimized for enduring measurement precision.

Data analysis yielded a significant mcomp,NO2 only if statistical significance of ∆mi was detected. Consequently, the significance of ∆mi was used as a data quality criterion. Photochemical reactions of the NO-NO2-O3 triad in the volume of the dynamic plant chamber must be considered in the determination of NO, NO2 and O3 exchange rates; otherwise the deposition velocity (vdep,NO2) and mcomp,NO2 will be overestimated. No significant mcomp,NO2 could be determined for spruce under laboratory conditions, but under field conditions mcomp,NO2 was identified between 0.17 and 0.65 ppb and vdep,NO2 between 0.07 and 0.42 mm s-1. In the analysis of the oak field data, no NO2 compensation point concentration could be determined; vdep,NO2 ranged between 0.6 and 2.71 mm s-1. There is increasing indication that forests are mainly a sink for NO2 and that potential NO2 emissions are low. Only when high NO soil emissions are assumed can more NO2 be formed by reaction with O3 than the plants are able to take up. Under these circumstances forests can be a source of NO2.
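The chamber flux and compensation-point arithmetic described in this abstract can be sketched as follows; the function names and the synthetic numbers are illustrative, not taken from the thesis:

```python
def chamber_flux(flow_m3_s, leaf_area_m2, delta_c):
    """Exchange flux per unit leaf area from the small inlet/outlet
    concentration difference delta_c (deposition gives a negative flux)."""
    return flow_m3_s / leaf_area_m2 * delta_c

def compensation_point(ambient, flux):
    """Fit flux = a + b*ambient by least squares; the deposition velocity
    is -b and the compensation point is the ambient mixing ratio at which
    the net flux is zero (flux = 0 => m_comp = -a/b)."""
    n = len(ambient)
    mx, my = sum(ambient) / n, sum(flux) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(ambient, flux))
         / sum((x - mx) ** 2 for x in ambient))
    a = my - b * mx
    return -b, -a / b   # (v_dep, m_comp)
```

With synthetic data generated from flux = -0.2*(m - 0.4), the fit recovers v_dep = 0.2 and a compensation point of 0.4 ppb, illustrating why a biased ∆mi (e.g. from in-chamber photochemistry) directly biases both quantities.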
Abstract:
Nitrous acid (HONO) is one of the reactive nitrogen compounds of the atmosphere and pedosphere. The exact formation pathways of HONO, as well as the mutual exchange of HONO between atmosphere and pedosphere, have not yet been fully elucidated. HONO photolysis produces the hydroxyl radical (OH) and nitric oxide (NO), which reflects the importance of HONO for atmospheric photochemistry.

To investigate the formation of HONO in soil and its subsequent exchange with the atmosphere, measurements of soil samples were carried out with dynamic chambers. Emission fluxes of water, NO and HONO measured in the laboratory show that HONO emission occurs to a comparable extent and in the same soil moisture range as that of NO (from 6.5 to 56.0 % WHC). The magnitude of the HONO emission fluxes at neutral to basic pH values and the activation energy of the HONO emission fluxes suggest that microbial nitrification is the main source of the HONO emission. Inhibition experiments with a soil sample and the measurement of a pure culture of Nitrosomonas europaea supported this theory. As a conclusion, the conceptual model of soil emission of different nitrogen compounds as a function of the soil water content was extended to include HONO.

In a further experiment, air with an elevated HONO mixing ratio was used to flush the dynamic chamber. The measurement of an excellently characterized soil sample showed bidirectional fluxes of HONO. Thus, soils can serve not only as a HONO source but also, depending on conditions, as an effective sink.

Furthermore, it could be shown that the ratio of HONO to NO emissions correlates with soil pH. The reason could be the increased reactivity of HONO at low pH and the longer residence time of HONO caused by reduced gas diffusion in the soil pore space, since a low pH coincides with elevated soil moisture at the emission maximum. It was shown that the effective diffusion of gases in the soil pore space and the effective diffusion of ions in the soil solution limit HONO production and the exchange of HONO with the atmosphere.

Complementing the laboratory measurements, HONO was measured during the HUMPPA-COPEC 2010 campaign in the boreal coniferous forest, simultaneously at a height of 1 m above the ground and 2 to 3 m above the canopy. The budget calculations for HONO show that during the day all known sources and sinks of HONO are negligible (< 20 %) compared with the dominant HONO photolysis rate. Neither soil emissions of HONO nor the photolysis of nitric acid adsorbed on surfaces can explain the missing source. The light-induced reduction of nitrogen dioxide (NO2) on surfaces could not be ruled out. However, the missing source was found to correlate more strongly with the HONO photolysis rate than with the corresponding photolysis frequency, which is proportional to the photolysis frequency of NO2. It can therefore be concluded that either the photolysis rate of HONO is overestimated, or that a still unknown HONO source exists that correlates very strongly with the photolysis rate.
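The daytime budget comparison can be made concrete as a back-of-the-envelope calculation; every number below is an illustrative placeholder, not a HUMPPA-COPEC measurement:

```python
def missing_hono_source(hono_ppb, j_hono, known_sources_ppb_s):
    """Photolysis sink j*[HONO] minus the sum of known sources; a positive
    residual is the unexplained ('missing') daytime source, in ppb s^-1."""
    sink = j_hono * hono_ppb                      # photolysis loss rate
    known = sum(known_sources_ppb_s.values())
    return sink, sink - known

# illustrative midday values (assumed for the sketch, not measured)
sink, missing = missing_hono_source(
    hono_ppb=0.06,
    j_hono=1.5e-3,                                # photolysis frequency, s^-1
    known_sources_ppb_s={"soil emission": 4e-6,
                         "OH + NO": 8e-6,
                         "HNO3 photolysis": 3e-6})
```

With these placeholder values the known sources cover only about 17 % of the photolysis sink, i.e. below the < 20 % threshold quoted in the abstract, leaving most of the sink unexplained.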
Abstract:
The use of water suppression for in vivo proton MR spectroscopy diminishes the signal intensities from resonances that undergo magnetization exchange with water, particularly those downfield of water. To investigate these exchangeable resonances, an inversion transfer experiment was performed using the metabolite cycling technique for non-water-suppressed MR spectroscopy from a large brain voxel in 11 healthy volunteers at 3.0 T. The exchange rates of the most prominent peaks downfield of water were found to range from 0.5 to 8.9 s−1, while the T1 relaxation times in the absence of exchange were found to range from 175 to 525 ms. These findings may help toward the assignment of the downfield resonances and a better understanding of the sources of contrast in chemical exchange saturation transfer imaging.
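The exchange rates described here can be illustrated with a two-pool Bloch-McConnell model of longitudinal magnetization; the symmetric equal-pool exchange and explicit Euler stepping below are simplifying assumptions for the sketch, not the study's fitting procedure:

```python
def inversion_transfer(t, T1a, T1b, k, dt=1e-4):
    """Longitudinal magnetization of an inverted downfield pool (a)
    exchanging symmetrically (rate k) with water (b), both normalized to
    an equilibrium magnetization of 1, integrated by explicit Euler."""
    Ma, Mb = -1.0, 1.0            # pool a inverted, water at equilibrium
    for _ in range(int(t / dt)):
        dMa = (1.0 - Ma) / T1a - k * (Ma - Mb)
        dMb = (1.0 - Mb) / T1b - k * (Mb - Ma)
        Ma, Mb = Ma + dMa * dt, Mb + dMb * dt
    return Ma
```

Exchange with the unperturbed water pool speeds the apparent recovery of the inverted resonance, which is exactly the effect the inversion transfer experiment exploits to separate k from T1.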
Abstract:
Atmospheric ammonia (NH3) exchange during a single growing season was measured over two grass/clover fields managed by cutting and treated with different rates of mineral nitrogen (N) fertilizer. The aim was to quantify the total NH3 exchange of the two systems in relation to their N budget; the latter was split into N derived from symbiotic fixation, from fertilization, and from the soil. The experimental site was located in an intensively managed agricultural area on the Swiss plateau. Two adjacent fields with mixtures of perennial ryegrass (Lolium perenne L.), cocksfoot (Dactylis glomerata L.), white clover (Trifolium repens L.) and red clover (Trifolium pratense L.) were used. These were treated with either 80 or 160 kg N ha−1 applied as NH4NO3 fertilizer in equal portions after each of four cuts. Continuous NH3 flux measurements were carried out by micrometeorological techniques. To determine the contribution of each species to the overall NH3 canopy compensation point, stomatal NH3 compensation points of the individual plant species were determined on the basis of NH4+ + NH3 (NHx) concentrations and pH in the apoplast. Symbiotic N2 fixation was measured by the 15N dilution method.
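The stomatal compensation point follows from the apoplastic NH4+/H+ ratio via Henry's law and the NH4+ dissociation equilibrium; a minimal sketch with round 25 °C literature values for the constants (assumed here, not the parameterization used in the study):

```python
def nh3_compensation_ppb(nh4_molar, ph, K_a=5.7e-10, K_H=60.0):
    """Gas-phase NH3 in equilibrium with apoplastic NH4+:
    p_NH3 = [NH4+] * Ka / ([H+] * KH), with the NH4+ acid dissociation
    constant Ka in M and the NH3 Henry constant KH in M atm^-1 (both
    rough ~25 C values). Returned in ppb at 1 atm total pressure."""
    h_plus = 10.0 ** (-ph)
    p_atm = nh4_molar * K_a / (h_plus * K_H)
    return p_atm * 1e9
```

For example, 1 mM apoplastic NH4+ at pH 6 gives a compensation point of about 9.5 ppb with these constants; raising apoplastic pH raises the compensation point, which is why both NHx concentration and pH had to be measured.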
Abstract:
The effects of exchange rate risk have interested researchers since the collapse of fixed exchange rates. Little consensus exists, however, regarding its effect on exports. Previous studies implicitly assume symmetry. This paper tests the hypothesis of asymmetric effects of exchange rate risk with a dynamic conditional correlation bivariate GARCH(1,1)-M model. Asymmetry means that exchange rate risk (volatility) affects exports differently during appreciations and depreciations of the exchange rate. The data comprise bilateral exports from eight Asian countries to the US. The empirical results show that real exchange rate risk significantly affects exports in all countries, with effects that may be negative or positive and that differ between periods of depreciation and appreciation. For five of the eight countries, the effects of exchange risk are asymmetric. Thus, policy makers can consider the stability of the exchange rate, in addition to its depreciation, as a method of stimulating export growth.
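The GARCH-in-mean mechanism the paper estimates can be sketched as a data-generating process; this is a univariate simplification with illustrative parameters (the paper's model is a bivariate DCC GARCH(1,1)-M with sign-dependent coefficients):

```python
import math
import random

def simulate_garch_m(n, mu=0.0, lam=0.1, omega=0.05, alpha=0.1,
                     beta=0.85, seed=42):
    """y_t = mu + lam*h_t + eps_t with eps_t = sqrt(h_t)*z_t and
    h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1}: the conditional
    variance h_t (exchange rate risk) enters the mean equation via lam."""
    rng = random.Random(seed)
    h = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    eps = 0.0
    ys, hs = [], []
    for _ in range(n):
        h = omega + alpha * eps ** 2 + beta * h
        eps = math.sqrt(h) * rng.gauss(0.0, 1.0)
        ys.append(mu + lam * h + eps)
        hs.append(h)
    return ys, hs
```

In the asymmetric specification tested in the paper, appreciation and depreciation periods receive separate risk-in-mean coefficients; here a single λ is used for brevity.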
Abstract:
Understanding the effects of off-balance-sheet transactions on interest and exchange rate exposures has become more important for emerging market countries that are experiencing remarkable growth in derivatives markets. Using firm-level data, we report a significant fall in exposure over the past 10 years and relate this to higher derivatives market participation. Our methodology is composed of a three-stage approach: first, we measure foreign exchange exposures using the Adler-Dumas (1984) model; next, we follow an indirect approach to infer derivatives market participation at the firm level; finally, we study the relationship between exchange rate exposure and derivatives market participation. Our results show that foreign exchange exposure is negatively related to derivatives market participation, and support the hedging explanation of the exchange rate exposure puzzle. This decline is especially salient in the financial sector, for bigger firms, and over longer time periods. Results are robust to using different exchange rates, a GARCH-SVAR approach to measure exchange rate exposure, and different return horizons.
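The first stage, measuring exposure in the spirit of Adler and Dumas (1984), is a regression of firm returns on exchange rate changes; a minimal single-factor sketch (the full specification typically also conditions on a market return factor):

```python
def fx_exposure(firm_returns, fx_returns):
    """OLS slope of firm stock returns on exchange rate returns: the
    exposure coefficient of an Adler-Dumas style regression."""
    n = len(firm_returns)
    mx = sum(fx_returns) / n
    my = sum(firm_returns) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(fx_returns, firm_returns))
    var = sum((x - mx) ** 2 for x in fx_returns)
    return cov / var
```

A firm that hedges in the derivatives market should show an exposure coefficient closer to zero, which is the cross-sectional pattern the third stage of the paper tests.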
Abstract:
Supermarket nutrient movement, a community food consumption measure, aggregated 1,023 high-fat foods, representing 100% of visible fats and approximately 44% of hidden fats in the food supply (FAO, 1980). Fatty acid and cholesterol content of foods shipped from the warehouse to 47 supermarkets located in the Houston area were calculated over a 6-month period. These stores were located in census tracts with over 50% of a given ethnicity: Hispanic, black non-Hispanic, or white non-Hispanic. Categorizing the supermarket census tracts by predominant ethnicity, significant differences were found by ANOVA in the proportion of specific fatty acids and cholesterol content of the foods examined. Using ecological regression, ethnicity, income, and median age predicted supermarket lipid movements while residential stability did not. No associations were found between lipid movements and cardiovascular disease mortality, making further validation necessary for epidemiological application of this method. However, it has been shown to be a non-reactive and cost-effective method appropriate for tracking target foods in population groups, and for assessing the impact of mass media nutrition education, legislation, and fortification on community food and nutrient purchase patterns.
Abstract:
Background. The United Nations' Millennium Development Goal (MDG) 4 aims for a two-thirds reduction in death rates for children under the age of five by 2015. The greatest risk of death is in the first week of life, yet most of these deaths can be prevented by such simple interventions as improved hygiene, exclusive breastfeeding, and thermal care. Deaths in Nigeria occurring in the first month of life make up 28% of all deaths under five years, a statistic that has remained unchanged despite various child health policies. This paper will address the challenges of reducing the neonatal mortality rate in Nigeria by examining the literature regarding efficacy of home-based newborn care interventions and policies that have been implemented successfully in India.

Methods. I compared similarities and differences between India and Nigeria using qualitative descriptions and available quantitative data of various health indicators. The analysis included identifying policy-related factors and community approaches contributing to India's newborn survival rates. Databases and reference lists of articles were searched for randomized controlled trials of community health worker interventions shown to reduce neonatal mortality rates.

Results. While it appears that Nigeria spends more money than India on health per capita ($136 vs. $132, respectively) and as percent GDP (5.8% vs. 4.2%, respectively), it still lags behind India in its neonatal, infant, and under-five mortality rates (40 vs. 32 deaths/1000 live births, 88 vs. 48 deaths/1000 live births, 143 vs. 63 deaths/1000 live births, respectively). Both countries have comparably low numbers of healthcare providers. Unlike their counterparts in Nigeria, Indian community health workers receive training on how to deliver postnatal care in the home setting and are monetarily compensated. Gender-related power differences still play a role in the societal structure of both countries. A search of randomized controlled trials of home-based newborn care strategies yielded three relevant articles. Community health workers trained to educate mothers and provide a preventive package of interventions involving clean cord care, thermal care, breastfeeding promotion, and danger sign recognition during multiple postnatal visits in rural India, Bangladesh, and Pakistan reduced neonatal mortality rates by 54%, 34%, and 15–20%, respectively.

Conclusion. Access to advanced technology is not necessary to reduce neonatal mortality rates in resource-limited countries. To address the urgency of neonatal mortality, countries with weak health systems need to start at the community level and invest in cost-effective, evidence-based newborn care interventions that utilize available human resources. While more randomized controlled studies are urgently needed, the current available evidence of models of postnatal care provision demonstrates that home-based care and health education provided by community health workers can reduce neonatal mortality rates in the immediate future.
Abstract:
NPV is a static measure of project value which does not discriminate between levels of internal and external risk in project valuation. Given the characteristics of current investment projects, a much more complex model is needed: one that includes the value of flexibility and the different risk levels associated with variables subject to uncertainty (price, costs, exchange rates, grade and tonnage of the deposits, cut-off grade, among many others). Few of these variables present any correlation or can be treated uniformly. In this context, Real Option Valuation (ROV) arose more than a decade ago as a mainly theoretical model with the potential for simultaneous calculation of the risk associated with such variables. This paper reviews the literature regarding the application of Real Options Valuation in mining, noting the prior focus on external risks, and presents a case study where ROV is applied to quantify the risk associated with mine planning.
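The gap between static NPV and ROV can be illustrated with a one-period binomial option to defer investment; the numbers and lattice parameters below are illustrative, not taken from the paper's case study:

```python
def static_npv(value_now, cost):
    """Static rule: invest now and capture value_now - cost."""
    return value_now - cost

def defer_option(value_now, cost, u, d, r):
    """One-period binomial option to defer: invest now, or wait one
    period and invest only if the project value moves up (factor u)
    rather than down (factor d), discounting at the risk-free rate r."""
    p = ((1.0 + r) - d) / (u - d)            # risk-neutral probability
    pay_up = max(value_now * u - cost, 0.0)  # exercise only if worthwhile
    pay_down = max(value_now * d - cost, 0.0)
    wait = (p * pay_up + (1.0 - p) * pay_down) / (1.0 + r)
    return max(static_npv(value_now, cost), wait)
```

With a project value of 100, cost 105, u = 1.3, d = 0.8 and r = 5 %, static NPV is -5 (reject), but the deferral option is worth about 11.9: the flexibility to wait, which NPV ignores, turns a rejected project into a valuable one.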
Abstract:
In recent decades, full electric and hybrid electric vehicles have emerged as an alternative to conventional cars due to a range of factors, including environmental and economic aspects. These vehicles are the result of considerable efforts to seek ways of reducing the use of fossil fuel for vehicle propulsion. Sophisticated technologies such as hybrid and electric powertrains require careful study and optimization. Mathematical models play a key role at this point. Currently, many advanced mathematical analysis tools, as well as computer applications have been built for vehicle simulation purposes. Given the great interest of hybrid and electric powertrains, along with the increasing importance of reliable computer-based models, the author decided to integrate both aspects in the research purpose of this work. Furthermore, this is one of the first final degree projects held at the ETSII (Higher Technical School of Industrial Engineers) that covers the study of hybrid and electric propulsion systems. The present project is based on MBS3D 2.0, a specialized software for the dynamic simulation of multibody systems developed at the UPM Institute of Automobile Research (INSIA). Automobiles are a clear example of complex multibody systems, which are present in nearly every field of engineering. The work presented here benefits from the availability of MBS3D software. This program has proven to be a very efficient tool, with a highly developed underlying mathematical formulation. On this basis, the focus of this project is the extension of MBS3D features in order to be able to perform dynamic simulations of hybrid and electric vehicle models. This requires the joint simulation of the mechanical model of the vehicle, together with the model of the hybrid or electric powertrain. These sub-models belong to completely different physical domains. 
In fact the powertrain consists of energy storage systems, electrical machines and power electronics, connected to purely mechanical components (wheels, suspension, transmission, clutch…). The challenge today is to create a global vehicle model that is valid for computer simulation. Therefore, the main goal of this project is to apply co-simulation methodologies to a comprehensive model of an electric vehicle, where sub-models from different areas of engineering are coupled. The created electric vehicle (EV) model consists of a separately excited DC electric motor, a Li-ion battery pack, a DC/DC chopper converter and a multibody vehicle model. Co-simulation techniques allow car designers to simulate complex vehicle architectures and behaviors, which are usually difficult to implement in a real environment due to safety and/or economic reasons. In addition, multi-domain computational models help to detect the effects of different driving patterns and parameters and improve the models in a fast and effective way. Automotive designers can greatly benefit from a multidisciplinary approach of new hybrid and electric vehicles. In this case, the global electric vehicle model includes an electrical subsystem and a mechanical subsystem. The electrical subsystem consists of three basic components: electric motor, battery pack and power converter. A modular representation is used for building the dynamic model of the vehicle drivetrain. This means that every component of the drivetrain (submodule) is modeled separately and has its own general dynamic model, with clearly defined inputs and outputs. Then, all the particular submodules are assembled according to the drivetrain configuration and, in this way, the power flow across the components is completely determined. Dynamic models of electrical components are often based on equivalent circuits, where Kirchhoff’s voltage and current laws are applied to draw the algebraic and differential equations. 
Here, a Randles circuit is used for dynamic modeling of the battery, and the electric motor is modeled through the analysis of the equivalent circuit of a separately excited DC motor, where the power converter is included. The mechanical subsystem is defined by MBS3D equations. These equations consider the position, velocity and acceleration of all the bodies comprising the vehicle multibody system. MBS3D 2.0 is entirely written in MATLAB and the structure of the program has been thoroughly studied and understood by the author. MBS3D software is adapted according to the requirements of the applied co-simulation method. Some of the core functions are modified, such as integrator and graphics, and several auxiliary functions are added in order to compute the mathematical model of the electrical components. By coupling and co-simulating both subsystems, it is possible to evaluate the dynamic interaction among all the components of the drivetrain. The 'tight-coupling' method is used to co-simulate the sub-models. This approach integrates all subsystems simultaneously and the results of the integration are exchanged by function-call. This means that the integration is done jointly for the mechanical and the electrical subsystem, under a single integrator, and then the speed of integration is determined by the slower subsystem. Simulations are then used to show the performance of the developed EV model. However, this project focuses more on the validation of the computational and mathematical tool for electric and hybrid vehicle simulation. For this purpose, a detailed study and comparison of different integrators within the MATLAB environment is done. Consequently, the main efforts are directed towards the implementation of co-simulation techniques in MBS3D software. In this regard, it is not intended to create an extremely precise EV model in terms of real vehicle performance, although an acceptable level of accuracy is achieved. The gap between the EV model and the real system is filled, in a way, by introducing the gas and brake pedal inputs, which reflect actual driver behavior. This input is included directly in the differential equations of the model, and determines the amount of current provided to the electric motor. For a separately excited DC motor, the rotor current is proportional to the traction torque delivered to the car wheels. Therefore, as occurs in real vehicle models, the propulsion torque in the mathematical model is controlled through acceleration and brake pedal commands. The designed transmission system also includes a reduction gear that adapts the torque coming from the motor drive and transfers it. The main contribution of this project is, therefore, the implementation of a new calculation path for the wheel torques, based on performance characteristics and outputs of the electric powertrain model. Originally, the wheel traction and braking torques were input to MBS3D through a vector directly computed by the user in a MATLAB script. Now, they are calculated as a function of the motor current which, in turn, depends on the current provided by the battery pack across the DC/DC chopper converter. The motor and battery currents and voltages are the solutions of the electrical ODE (Ordinary Differential Equation) system coupled to the multibody system. Simultaneously, the outputs of the MBS3D model are the position, velocity and acceleration of the vehicle at all times. The motor shaft speed is computed from the output vehicle speed considering the wheel radius, the gear reduction ratio and the transmission efficiency. This motor shaft speed, available from the MBS3D model, is then introduced in the differential equations corresponding to the electrical subsystem. In this way, MBS3D and the electrical powertrain model are interconnected and both subsystems exchange values, as expected with the tight-coupling approach. When programming mathematical models of complex systems, code optimization is a key step in the process. A way to improve the overall performance of the integration, making use of C/C++ as an alternative programming language, is described and implemented. Although this entails a higher computational burden, it leads to important advantages regarding co-simulation speed and stability. In order to do this, it is necessary to integrate MATLAB with another integrated development environment (IDE), where C/C++ code can be generated and executed. In this project, C/C++ files are programmed in Microsoft Visual Studio and the interface between both IDEs is created by building C/C++ MEX file functions. These programs contain functions or subroutines that can be dynamically linked and executed from MATLAB. This process achieves reductions in simulation time of up to two orders of magnitude. The tests performed with different integrators also reveal the stiff character of the differential equations corresponding to the electrical subsystem, and allow the improvement of the co-simulation process. When varying the parameters of the integration and/or the initial conditions of the problem, the solutions of the system of equations show better dynamic response and stability, depending on the integrator used. Several integrators, with variable and non-variable step-size, and for stiff and non-stiff problems, are applied to the coupled ODE system. Then, the results are analyzed, compared and discussed. From all the above, the project can be divided into four main parts: 1. Creation of the equation-based electric vehicle model; 2. Programming, simulation and adjustment of the electric vehicle model; 3. Application of co-simulation methodologies to MBS3D and the electric powertrain subsystem; and 4.
Code optimization and study of different integrators. Additionally, in order to deeply understand the context of the project, the first chapters include an introduction to basic vehicle dynamics, current classification of hybrid and electric vehicles and an explanation of the involved technologies such as brake energy regeneration, electric and non-electric propulsion systems for EVs and HEVs (hybrid electric vehicles) and their control strategies. Later, the problem of dynamic modeling of hybrid and electric vehicles is discussed. The integrated development environment and the simulation tool are also briefly described. The core chapters include an explanation of the major co-simulation methodologies and how they have been programmed and applied to the electric powertrain model together with the multibody system dynamic model. Finally, the last chapters summarize the main results and conclusions of the project and propose further research topics. In conclusion, co-simulation methodologies are applicable within the integrated development environments MATLAB and Visual Studio, and the simulation tool MBS3D 2.0, where equation-based models of multidisciplinary subsystems, consisting of mechanical and electrical components, are coupled and integrated in a very efficient way.
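The tight-coupling idea (one integrator stepping both physical domains together) can be sketched with a DC motor electrical equation and a one-degree-of-freedom longitudinal vehicle model; all parameters below are illustrative placeholders, and the real MBS3D multibody model is of course far richer:

```python
def simulate_ev(t_end=10.0, dt=5e-4, V_in=200.0):
    """Jointly integrate armature current i and vehicle speed v:
    L di/dt = V - R*i - k*w   (electrical subsystem, back-EMF k*w)
    m dv/dt = k*i*G/r - F_res (mechanical subsystem)
    with motor shaft speed w = v*G/r. Tight coupling: one Euler loop,
    one shared step size, set small enough for the stiff electrical ODE."""
    # illustrative parameters (not from the thesis)
    R, L, k = 0.5, 1e-3, 0.8       # armature resistance, inductance, motor constant
    m, r, G = 1200.0, 0.3, 7.0     # vehicle mass, wheel radius, reduction ratio
    i, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        w = v * G / r                                        # motor shaft speed
        f_res = 0.5 * 1.2 * 0.7 * v ** 2 + 0.01 * m * 9.81   # drag + rolling
        di = (V_in - R * i - k * w) / L
        dv = (k * i * G / r - f_res) / m
        i, v = i + di * dt, v + dv * dt
    return i, v
```

The electrical time constant (L/R = 2 ms) is orders of magnitude shorter than the mechanical one, which is exactly the stiffness that, as the abstract notes, constrains the shared step size in a tight-coupling scheme: the slower subsystem must be stepped at the rate the faster one requires.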
Abstract:
NMR investigations have been carried out of complexes between bovine chymotrypsin Aα and a series of four peptidyl trifluoromethyl ketones, listed here in order of increasing affinity for chymotrypsin: N-Acetyl-l-Phe-CF3, N-Acetyl-Gly-l-Phe-CF3, N-Acetyl-l-Val-l-Phe-CF3, and N-Acetyl-l-Leu-l-Phe-CF3. The D/H fractionation factors (φ) for the hydrogen in the H-bond between His 57 and Asp 102 (His 57-Hδ1) in these four complexes at 5°C were in the range φ = 0.32–0.43, expected for a low-barrier hydrogen bond. For this series of complexes, measurements also were made of the chemical shifts of His 57-Hɛ1 (δ 8.97–9.18, referenced to 2,2-dimethylsilapentane-5-sulfonic acid), the exchange rate of the His 57-Hδ1 proton with bulk water protons (284–12.4 s−1), and the activation enthalpies for this hydrogen exchange (14.7–19.4 kcal⋅mol−1). It was found that the previously noted correlations between the inhibition constants (Ki 170–1.2 μM) and the chemical shifts of His 57-Hδ1 (δ 18.61–18.95, referenced to 2,2-dimethylsilapentane-5-sulfonic acid) for this series of peptidyl trifluoromethyl ketones with chymotrypsin [Lin, J., Cassidy, C. S. & Frey, P. A. (1998) Biochemistry 37, 11940–11948] could be extended to include the fractionation factors, hydrogen exchange rates, and hydrogen exchange activation enthalpies. The results support the proposal of low-barrier hydrogen bond-facilitated general base catalysis in the addition of Ser 195 to the peptidyl carbonyl group of substrates in the mechanism of chymotrypsin-catalyzed peptide hydrolysis. Trends in the enthalpies for hydrogen exchange and the fractionation factors are consistent with a strong, double-minimum or single-well potential hydrogen bond in the strongest complexes. The lifetimes of His 57-Hδ1, which is solvent shielded in these complexes, track the strength of the hydrogen bond.
Because these lifetimes are orders of magnitude shorter than those of the complexes themselves, the enzyme must have a pathway for hydrogen exchange at this site that is independent of dissociation of the complexes.
Abstract:
Negli ultimi anni i modelli VAR sono diventati il principale strumento econometrico per verificare se può esistere una relazione tra le variabili e per valutare gli effetti delle politiche economiche. Questa tesi studia tre diversi approcci di identificazione a partire dai modelli VAR in forma ridotta (tra cui periodo di campionamento, set di variabili endogene, termini deterministici). Usiamo nel caso di modelli VAR il test di Causalità di Granger per verificare la capacità di una variabile di prevedere un altra, nel caso di cointegrazione usiamo modelli VECM per stimare congiuntamente i coefficienti di lungo periodo ed i coefficienti di breve periodo e nel caso di piccoli set di dati e problemi di overfitting usiamo modelli VAR bayesiani con funzioni di risposta di impulso e decomposizione della varianza, per analizzare l'effetto degli shock sulle variabili macroeconomiche. A tale scopo, gli studi empirici sono effettuati utilizzando serie storiche di dati specifici e formulando diverse ipotesi. Sono stati utilizzati tre modelli VAR: in primis per studiare le decisioni di politica monetaria e discriminare tra le varie teorie post-keynesiane sulla politica monetaria ed in particolare sulla cosiddetta "regola di solvibilità" (Brancaccio e Fontana 2013, 2015) e regola del GDP nominale in Area Euro (paper 1); secondo per estendere l'evidenza dell'ipotesi di endogeneità della moneta valutando gli effetti della cartolarizzazione delle banche sul meccanismo di trasmissione della politica monetaria negli Stati Uniti (paper 2); terzo per valutare gli effetti dell'invecchiamento sulla spesa sanitaria in Italia in termini di implicazioni di politiche economiche (paper 3). La tesi è introdotta dal capitolo 1 in cui si delinea il contesto, la motivazione e lo scopo di questa ricerca, mentre la struttura e la sintesi, così come i principali risultati, sono descritti nei rimanenti capitoli. 
Chapter 2 examines, using a first-difference VAR model with quarterly Euro-area data, whether monetary policy decisions can be interpreted in terms of a "monetary policy rule", with specific reference to the so-called "nominal GDP targeting rule" (McCallum 1988; Hall and Mankiw 1994; Woodford 2012). The results show a causal relationship running from the gap between nominal GDP and target GDP growth rates to changes in the three-month market interest rate. The same analysis does not appear to confirm a significant reverse causal relationship running from changes in the market interest rate to the gap between nominal GDP and target GDP growth rates. Similar results were obtained when the market interest rate was replaced with the ECB refinancing rate. This confirmation of only one of the two directions of causality does not support an interpretation of monetary policy based on the nominal GDP targeting rule, and more generally raises doubts about the applicability of the Taylor rule and all conventional monetary policy rules to the case in question. The results instead appear more in line with other possible approaches, such as those based on certain post-Keynesian and Marxist analyses of monetary theory, and more specifically the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015). These lines of research dispute the simplistic thesis that the scope of monetary policy is the stabilisation of inflation, real GDP or nominal income around a "natural" equilibrium level. Rather, they suggest that central banks actually pursue a more complex purpose: the regulation of the financial system, with particular reference to the relationships between creditors and debtors and the solvency of economic units.
Chapter 3 analyses the supply of loans, considering the endogeneity of money arising from banks' securitisation activity over the period 1999-2012. Although much of the literature investigates the endogeneity of the money supply, this approach has rarely been adopted to investigate short- and long-run money endogeneity in a study of the United States during its two major crises: the bursting of the dot-com bubble (1998-1999) and the sub-prime mortgage crisis (2008-2009). In particular, we consider the effects of financial innovation on the lending channel, using a loan series adjusted for securitisation in order to verify whether the US banking system is encouraged to seek cheaper sources of funding, such as securitisation, under restrictive monetary policy (Altunbas et al., 2009). The analysis is based on the monetary aggregates M1 and M2. Using VECM models, we examine a long-run relationship between the variables in levels and evaluate the effects of the money supply by analysing how much monetary policy affects short-run deviations from the long-run relationship. The results show that securitisation influences the impact of loans on M1 and M2. This implies that the money supply is endogenous, confirming the structuralist approach and showing that economic agents are motivated to increase securitisation as a pre-emptive hedge against monetary policy shocks. Chapter 4 investigates the relationship between per capita health expenditure, per capita GDP, the ageing index and life expectancy in Italy over the period 1990-2013, using Bayesian VAR models and annual data drawn from the OECD and Eurostat databases.
Impulse response functions and variance decomposition highlight a positive relationship: from per capita GDP to per capita health expenditure, from life expectancy to health expenditure, and from the ageing index to per capita health expenditure. The impact of ageing on health expenditure is more significant than that of the other variables. Overall, our results suggest that disabilities closely linked to ageing may be the main driver of health expenditure in the short to medium term. Good healthcare management helps to improve patient well-being without increasing total health expenditure. However, policies that improve the health status of elderly people may be necessary to lower per capita demand for health and social services.
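The Granger-causality logic behind Chapter 2 — testing whether lagged values of one series improve the prediction of another beyond the series' own lags — can be sketched as a restricted-vs-unrestricted OLS comparison with an F-statistic. This is a minimal illustration on synthetic data, not the thesis's actual estimation; the lag order, series, and coefficients here are assumptions for the toy example.

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F-statistic for H0: lags of x add no predictive power for y
    beyond y's own lags (restricted vs. unrestricted OLS)."""
    n = len(y)
    Y = y[lags:]
    ylags = np.column_stack([y[lags - j:n - j] for j in range(1, lags + 1)])
    xlags = np.column_stack([x[lags - j:n - j] for j in range(1, lags + 1)])
    const = np.ones((n - lags, 1))
    X_r = np.hstack([const, ylags])          # restricted: own lags only
    X_u = np.hstack([const, ylags, xlags])   # unrestricted: adds lags of x
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(X_r), rss(X_u)
    df_u = len(Y) - X_u.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df_u)

# Synthetic data in which x Granger-causes y but not vice versa
rng = np.random.default_rng(0)
T = 500
x, y = np.zeros(T), np.zeros(T)
ex, ey = rng.standard_normal(T), rng.standard_normal(T)
for t in range(1, T):
    x[t] = 0.3 * x[t - 1] + ex[t]
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + ey[t]

f_xy = granger_f(y, x)  # tests x -> y: F far above the critical value
f_yx = granger_f(x, y)  # tests y -> x: F near 1, H0 not rejected
```

The one-directional result mirrors the chapter's finding: a large F in one direction and a small one in the reverse direction supports causality running only one way.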
Resumo:
This essay compares the preferences of France, Italy, and Britain on the creation of the European Monetary System in 1978-1979, especially the Exchange Rate Mechanism, which stabilised nominal exchange rates. My claim is that the different conclusions reached by the governments (France and Italy in, Britain out) cannot be explained by economic circumstances or by interests, and I elaborate an intervening institutional variable which helps explain preferences. Deducing from spatial theory that where decision-makers 'sit' on the left-right spectrum matters to their position on the EMS, I argue that domestic constitutional power-sharing mechanisms privilege certain actors over others in a predictable and consistent way. Where centrists were in power, the government's decision was to join. Where left or right extremists were privileged, the government's decision was negative. The article measures the centrism of the governments in place at the time, and also reviews the positions taken by the national political parties in and out of government. It is intended to contribute to the growing comparativist literature on the European Union, and to the burgeoning literature on EU-member-state relations.
Resumo:
"August 1995."
Resumo:
Foreign exchange trading has emerged recently as a significant activity in many countries. As with most forms of trading, the activity is influenced by many random parameters so that the creation of a system that effectively emulates the trading process will be very helpful. A major issue for traders in the deregulated Foreign Exchange Market is when to sell and when to buy a particular currency in order to maximize profit. This paper presents novel trading strategies based on the machine learning methods of genetic algorithms and reinforcement learning.