Abstract:
This work develops methods to account for shoot structure in models of coniferous canopy radiative transfer. Shoot structure, as it varies along the light gradient inside the canopy, affects the efficiency of light interception per unit needle area, foliage biomass, or foliage nitrogen. The clumping of needles within the shoot volume also causes a notable amount of multiple scattering of light within coniferous shoots. The effect of shoot structure on light interception is treated in the context of canopy-level photosynthesis and resource use models, and the phenomenon of within-shoot multiple scattering in the context of physical canopy reflectance models for remote sensing purposes.

Light interception. A method for estimating the amount of PAR (Photosynthetically Active Radiation) intercepted by a conifer shoot is presented. The method combines modelling of the directional distribution of radiation above the canopy, fish-eye photographs taken at shoot locations to measure canopy gap fraction, and geometrical measurements of shoot orientation and structure. Data on light availability, shoot and needle structure, and nitrogen content have been collected from canopies of Pacific silver fir (Abies amabilis (Dougl.) Forbes) and Norway spruce (Picea abies (L.) Karst.). Shoot structure acclimated to the light gradient inside the canopy such that more shaded shoots had better light interception efficiency. Light interception efficiency of shoots varied about two-fold per needle area, about four-fold per needle dry mass, and about five-fold per nitrogen content. Comparison of fertilized and control stands of Norway spruce indicated that light interception efficiency is not greatly affected by fertilization.

Light scattering. The structure of coniferous shoots gives rise to multiple scattering of light between the needles of the shoot. Using geometric models of shoots, multiple scattering was studied by photon-tracing simulations. Based on the simulation results, the dependence of the scattering coefficient of a shoot on the scattering coefficient of its needles is shown to follow a simple one-parameter model. The single parameter, termed the recollision probability, describes the degree of clumping of the needles in the shoot; it is wavelength independent and can be connected to previously used clumping indices. By using the recollision probability to correct for within-shoot multiple scattering, canopy radiative transfer models that have used leaves as basic elements can use shoots as basic elements instead, and thus be applied to coniferous forests. Preliminary testing of this approach appears to explain, at least partially, why coniferous forests appear darker than broadleaved forests in satellite data.
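The one-parameter model can be made concrete with a short sketch. Assuming the standard recollision-probability relation (a photon scattered inside the shoot hits another needle again with probability p), the shoot scattering coefficient follows by summing the geometric series of scattering orders; the Monte Carlo check below is a toy photon-recycling simulation, not the thesis's geometric shoot models, and the numbers are illustrative.

```python
import random

def shoot_scattering_coefficient(omega_needle: float, p: float) -> float:
    """One-parameter recollision-probability model.

    Summing the scattering orders (1-p)*w + (1-p)*w*(p*w) + ...
    gives w_shoot = (1-p)*w / (1 - p*w).
    """
    return (1.0 - p) * omega_needle / (1.0 - p * omega_needle)

def monte_carlo_shoot_albedo(omega_needle, p, n_photons=200_000, seed=1):
    """Toy photon-recycling check: each interaction absorbs the photon
    with probability 1-w; a scattered photon re-hits a needle with
    probability p, otherwise it escapes the shoot."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        while True:
            if rng.random() > omega_needle:   # absorbed by the needle
                break
            if rng.random() > p:              # scattered and escapes the shoot
                escaped += 1
                break
            # otherwise: recollides with another needle, scatter again
    return escaped / n_photons

w, p = 0.5, 0.6   # hypothetical needle albedo and recollision probability
print(shoot_scattering_coefficient(w, p))   # analytic: 0.2857...
print(monte_carlo_shoot_albedo(w, p))       # stochastic estimate, ~0.286
```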
Abstract:
In cardiac myocytes (heart muscle cells), coupling of the electric signal known as the action potential to contraction of the heart depends crucially on calcium-induced calcium release (CICR) in a microdomain known as the dyad. During CICR, the peak number of free calcium ions (Ca) present in the dyad is small, typically estimated to be in the range of 1-100. Since free Ca ions mediate CICR, noise in Ca signaling due to the small number of free calcium ions influences excitation-contraction (EC) coupling gain. Noise in Ca signaling is only one noise type influencing cardiac myocytes; for example, the ion channels playing a central role in action potential propagation are stochastic machines, each of which gates more or less randomly, producing gating noise in membrane currents. How various noise sources influence the macroscopic properties of a myocyte, and how noise is attenuated or exploited, are largely open questions. In this thesis, the impact of noise on CICR, EC coupling and, more generally, the macroscopic properties of a cardiac myocyte is investigated at multiple levels of detail using mathematical models. Complementing the investigation of the impact of noise on CICR, computationally efficient yet spatially detailed models of CICR are developed. The results of this thesis show that (1) gating noise due to the high-activity mode of L-type calcium channels, which play a major role in CICR, may induce early after-depolarizations associated with polymorphic tachycardia, a frequent precursor to sudden cardiac death in heart failure patients; (2) an increased level of voltage noise typically increases action potential duration and skews the distribution of action potential durations toward long durations in cardiac myocytes; and (3) while a small number of Ca ions mediate CICR, excitation-contraction coupling is robust against this noise source, partly due to the shape of the ryanodine receptor protein structures present in the cardiac dyad.
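As a toy illustration of the gating noise mentioned above (not one of the thesis models), the sketch below simulates a single two-state ion channel with hypothetical opening and closing rates using Gillespie's stochastic simulation algorithm; averaging over many independent channels shows how the fluctuations shrink as the channel count grows.

```python
import math, random

def gillespie_two_state(k_open=2.0, k_close=8.0, t_end=10.0, seed=0):
    """Exact stochastic trajectory of one channel switching between
    closed (0) and open (1). Rates are hypothetical, in 1/ms."""
    rng = random.Random(seed)
    t, state = 0.0, 0
    times, states = [0.0], [0]
    while t < t_end:
        rate = k_open if state == 0 else k_close
        t += -math.log(rng.random()) / rate   # exponential waiting time
        state = 1 - state                     # flip closed <-> open
        times.append(min(t, t_end))
        states.append(state)
    return times, states

def open_fraction(n_channels, **kw):
    """Fraction of time open, averaged over n independent channels;
    fluctuations around k_open/(k_open+k_close) decrease as n grows."""
    total = 0.0
    for i in range(n_channels):
        times, states = gillespie_two_state(seed=i, **kw)
        open_time = sum((t2 - t1) * s
                        for t1, t2, s in zip(times, times[1:], states))
        total += open_time / times[-1]
    return total / n_channels

print(open_fraction(1))     # noisy single-channel estimate
print(open_fraction(500))   # close to 2/(2+8) = 0.2
```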
Abstract:
Minimum Description Length (MDL) is an information-theoretic principle that can be used for model selection and other statistical inference tasks. There are various ways to use the principle in practice. One theoretically valid way is to use the normalized maximum likelihood (NML) criterion. Due to computational difficulties, this approach has not been used very often. This thesis presents efficient floating-point algorithms that make it possible to compute the NML for multinomial, Naive Bayes, and Bayesian forest models. None of the presented algorithms rely on asymptotic analysis, and for the first two model classes we also discuss how to compute exact rational-number solutions.
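For concreteness, here is a minimal sketch of the multinomial case, assuming the published linear-time recurrence for the multinomial normalizing sum, C(K, n) = C(K-1, n) + n/(K-2) · C(K-2, n) (Kontkanen and Myllymäki); the function names are illustrative, not the thesis's implementation.

```python
import math

def multinomial_regret(K: int, n: int) -> float:
    """Normalizing sum C(K, n) of the NML distribution for a
    multinomial with K outcomes and sample size n >= 1, via the
    linear-time recurrence C(K,n) = C(K-1,n) + n/(K-2) * C(K-2,n)."""
    c_prev, c = 1.0, 0.0                      # C(1, n) = 1
    for h in range(n + 1):                    # C(2, n): binomial sum
        c += math.comb(n, h) * (h / n) ** h * ((n - h) / n) ** (n - h)
    for k in range(3, K + 1):
        c_prev, c = c, c + n / (k - 2) * c_prev
    return c if K >= 2 else c_prev

def stochastic_complexity(counts) -> float:
    """-log NML = -log P(data | ML parameters) + log C(K, n), in nats."""
    n = sum(counts)
    log_ml = sum(h * math.log(h / n) for h in counts if h > 0)
    return -log_ml + math.log(multinomial_regret(len(counts), n))

print(stochastic_complexity([10, 5, 1]))   # toy data with 3 outcomes
```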
Abstract:
This doctoral thesis addresses the macroeconomic effects of real shocks in open economies under flexible exchange rate regimes. The first study of this thesis analyses the welfare effects of fiscal policy in a small open economy, where private and government consumption are substitutes in terms of private utility. The main findings are as follows: fiscal policy raises output, bringing it closer to its efficient level, but is not welfare-improving even though government spending directly affects private utility. The main reason for this is that the introduction of useful government spending implies a larger crowding-out effect on private consumption than in the 'pure waste' case. Utility decreases since one unit of government consumption yields less utility than one unit of private consumption. The second study of this thesis analyses how the macroeconomic effects of fiscal policy in a small open economy depend on optimal intertemporal behaviour. The key result is that the effects of fiscal policy depend on the size of the elasticity of substitution between traded and nontraded goods. In particular, the sign of the current account response to fiscal policy depends on the interplay between the intertemporal elasticity of aggregate consumption and the elasticity of substitution between traded and nontraded goods. The third study analyses the consequences of productive government spending for the international transmission of fiscal policy. A standard result in the New Open Economy Macroeconomics literature is that a fiscal shock depreciates the exchange rate. I demonstrate that the response of the exchange rate depends on the productivity of government spending: if productivity is sufficiently high, a fiscal shock appreciates the exchange rate. It is also shown that the introduction of productive government spending increases both domestic and foreign welfare, when compared with the case where government spending is wasted. The fourth study analyses how the international transmission of technology shocks depends on the specification of nominal rigidities. A growing body of empirical evidence suggests that a positive technology shock leads to a temporary decline in employment. In this study, I demonstrate that the open economy dimension can enhance the ability of sticky-price models to account for this evidence. The reasoning is as follows: an improvement in technology appreciates the nominal exchange rate, and under producer-currency pricing the appreciation shifts global demand away from domestic goods toward foreign goods, causing a temporary decline in domestic employment.
Abstract:
This licentiate's thesis analyzes the macroeconomic effects of fiscal policy in a small open economy under a flexible exchange rate regime, assuming that the government spends exclusively on domestically produced goods. The motivation for this research comes from the observation that the new open economy macroeconomics (NOEM) literature has focused almost exclusively on two-country global models, while analyses of the effects of fiscal policy on small economies have been almost completely ignored. This thesis aims to fill this gap in the NOEM literature and illustrates how the macroeconomic effects of fiscal policy in a small open economy depend on the specification of preferences. The research method is to present two theoretical models that are extensions of the model contained in the Appendix to Obstfeld and Rogoff (1995). The first model analyzes the macroeconomic effects of fiscal policy by modelling private and government consumption as substitutes in private utility. The model offers intuitive predictions on how the effects of fiscal policy depend on the marginal rate of substitution between private and government consumption. The findings illustrate that the higher the substitutability between private and government consumption, (i) the bigger the crowding-out effect on private consumption and (ii) the smaller the positive effect on output. The welfare analysis shows that the higher the marginal rate of substitution between private and government consumption, the less fiscal policy decreases welfare. The second model of this thesis studies how the macroeconomic effects of fiscal policy depend on the elasticity of substitution between traded and nontraded goods. This model reveals that this elasticity is a key variable in explaining the exchange rate, current account, and output responses to a permanent rise in government spending. Finally, the model demonstrates that temporary changes in government spending are an effective stabilization tool when used wisely and in a timely manner in response to undesired fluctuations in output. Undesired fluctuations in output can be perfectly offset by an opposite change in government spending without causing any side-effects.
Abstract:
Phytoplankton ecology and productivity are among the main branches of contemporary oceanographic research, and research groups in this field have increasingly adopted bio-optical applications. My main research objective was to critically investigate the advantages and deficiencies of fast repetition rate (FRR) fluorometry for studies of phytoplankton productivity and of the responses of phytoplankton to varying environmental stress. Second, I aimed to clarify the applicability of the FRR system to the optical environment of the Baltic Sea. The FRR system offers a highly dynamic tool for studies of phytoplankton photophysiology and productivity, both in the field and in controlled environments. FRR metrics provide high-frequency in situ determinations of the light-acclimative and photosynthetic parameters of intact phytoplankton communities, and the measurement protocol is relatively easy to use, with no phases requiring analytical determinations. The most notable application of the FRR system lies in its potential for primary productivity (PP) estimation. However, realising this scheme is not straightforward. FRR-PP estimates, based on the photosynthetic electron flow (PEF) rate, are linearly related to photosynthetic gas exchange (14C fixation) PP only in environments where photosynthesis is light-limited. Where light limitation is absent, as is usually the case in the near-surface layers of the water column, the two PP approaches deviate. The prompt response of the PEF rate to short-term variability in the natural light field makes field comparisons between PEF-PP and 14C-PP difficult to interpret, because this variability is averaged out in the 14C incubations. Furthermore, FRR-based PP models are tuned to closely follow the vertical pattern of underwater irradiance. Owing to the photoacclimational plasticity of phytoplankton, this easily leads to overestimates of water-column PP if precautionary measures are not taken. Natural phytoplankton is subject to broad-waveband light, whereas active non-spectral bio-optical instruments, like the FRR fluorometer, emit light in a relatively narrow waveband, which by its nature does not represent the in situ light field. Thus, the spectrally-dependent parameters provided by the FRR system need to be spectrally scaled to the natural light field of the Baltic Sea. In general, the requirement of spectral scaling in water bodies under terrestrial impact concerns all light-adaptive parameters provided by any active non-spectral bio-optical technique. The FRR system can be applied to studies of all phytoplankton that harvest light efficiently in the waveband matching the bluish FRR excitation. Although these taxa cover the large bulk of all phytoplankton taxa, one exception with pronounced ecological significance is found in the Baltic Sea: the FRR system cannot be used to monitor the photophysiology of cyanobacterial taxa harvesting light in the yellow-red waveband, which include the ecologically significant bloom-forming cyanobacteria of the Baltic Sea.
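The spectral scaling requirement can be illustrated with a sketch. One simple form of spectral scaling weights a wavelength-dependent parameter (here, a hypothetical absorption spectrum) by the actual light spectrum, so that a value measured under the narrow-band FRR excitation is rescaled to the broad-band in situ field; all spectra and numbers below are made up for illustration and are not from the thesis.

```python
import numpy as np

# Wavelength grid (nm) and illustrative Gaussian-shaped spectra
wl = np.arange(400, 701, 5.0)
a_phyto = np.exp(-0.5 * ((wl - 440) / 60.0) ** 2)    # phytoplankton absorption (relative)
E_insitu = np.exp(-0.5 * ((wl - 560) / 80.0) ** 2)   # green-shifted underwater light
E_frr = np.exp(-0.5 * ((wl - 470) / 15.0) ** 2)      # narrow bluish FRR excitation band

def weighted_mean_absorption(E):
    """Absorption averaged over the irradiance spectrum E(lambda)."""
    return np.sum(a_phyto * E) / np.sum(E)

# Factor rescaling a parameter measured in the FRR band to the in situ field
scale = weighted_mean_absorption(E_insitu) / weighted_mean_absorption(E_frr)
sigma_frr = 1.0            # hypothetical FRR-band cross-section (arbitrary units)
print(sigma_frr * scale)   # spectrally scaled estimate for the natural light field
```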
Abstract:
Exposure to water-damaged buildings and the associated health problems have evoked concern and created confusion during the past 20 years. Individuals exposed to moisture-problem buildings report adverse health effects such as non-specific respiratory symptoms. Microbes, especially fungi, growing on damp material have been considered potential sources of the health problems encountered in these buildings. Fungi and their airborne spores contain allergens and secondary metabolites which may trigger allergic as well as inflammatory types of responses in the eyes and airways. Although epidemiological studies have revealed an association between damp buildings and health problems, no direct cause-and-effect relationship has been established. Further knowledge is needed about the epidemiology and the mechanisms leading to the symptoms associated with exposure to fungi. Two different approaches have been used in this thesis to investigate the diverse health effects associated with exposure to moulds. In the first part, sensitization to moulds was evaluated and potential cross-reactivity studied in patients attending a hospital for suspected allergy. In the second part, one typical mould known to occur in water-damaged buildings and to produce toxic secondary metabolites was used to study airway responses in an experimental model, with exposure studies performed on both naive and allergen-sensitized mice. The first part of the study showed that mould allergy is rare and highly dependent on the atopic status of the examined individual. The prevalence of sensitization was 2.7% to Cladosporium herbarum and 2.8% to Alternaria alternata in patients, the majority of whom were atopic subjects. Some of the patients sensitized to mould suffered from atopic eczema. Frequently these patients were observed to possess specific serum IgE antibodies to a yeast present in the normal skin flora, Pityrosporum ovale, and in some of them the IgE binding was found to be partly due to binding to shared glycoproteins in the mould and yeast allergen extracts. The second part of the study revealed that exposure to Stachybotrys chartarum spores induced an airway inflammation in the lungs of mice. The inflammation was characterized by an influx of inflammatory cells, mainly neutrophils and lymphocytes, into the lungs, but with almost no differences in airway responses seen between the satratoxin-producing and non-satratoxin-producing strains. On the other hand, when mice were exposed to S. chartarum and sensitized/challenged with ovalbumin, the extent of the inflammation was markedly enhanced: a synergistic increase in the numbers of inflammatory cells was seen in bronchoalveolar lavage (BAL) fluid, and severe inflammation was observed in the histological lung sections. In conclusion, the results of this thesis imply that exposure to moulds in water-damaged buildings may trigger health effects in susceptible individuals. The symptoms can rarely be explained by IgE-mediated allergy to moulds; other, non-allergic mechanisms seem to be involved. Stachybotrys chartarum is one of the moulds potentially responsible for these health problems. In this thesis, new reaction models for the airway inflammation induced by S. chartarum were established using experimental approaches. The immunological status played an important role in the airway inflammation, enhancing the effects of mould exposure. The results imply that sensitized individuals may be more susceptible to mould exposure than non-sensitized individuals.
Abstract:
By detecting leading protons produced in the central exclusive diffractive process, p+p → p+X+p, one can measure the missing mass and scan for possible new particle states such as the Higgs boson. This process augments, in a model-independent way, the standard methods for new particle searches at the Large Hadron Collider (LHC) and will allow detailed analyses of the produced central system, such as the spin-parity properties of the Higgs boson. The exclusive central diffractive process makes possible precision studies of gluons at the LHC and complements the physics scenarios foreseen at the next e+e− linear collider. This thesis first presents the conclusions of the first systematic analysis of the expected precision of the leading proton momentum measurement and the accuracy of the reconstructed missing mass. In this initial analysis, the scattered protons are tracked along the LHC beam line, and the uncertainties expected in beam transport and in the detection of the scattered leading protons are accounted for. The main focus of the thesis is on developing the radiation-hard precision detector technology necessary for coping with the extremely demanding experimental environment of the LHC. This will be achieved by using a 3D silicon detector design, which in addition to radiation hardness up to 5×10^15 neutrons/cm^2 offers properties such as a high signal-to-noise ratio, fast signal response to radiation, and sensitivity close to the very edge of the detector. This work reports on the development of a novel semi-3D detector design that simplifies the 3D fabrication process but preserves the properties of the 3D detector design required at the LHC and in other imaging applications.
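The missing-mass measurement rests on a standard kinematic relation: if the two outgoing protons lose momentum fractions ξ1 and ξ2, the mass of the centrally produced system X follows at leading order directly from the proton measurements. This is a textbook relation, stated here for orientation rather than taken from the thesis:

```latex
% Missing mass of the central system from the two leading protons,
% with \xi_i the fractional momentum losses and \sqrt{s} the collision energy:
\[
  M_X^2 \simeq \xi_1\,\xi_2\,s ,
  \qquad
  \xi_i = 1 - \frac{|p'_{z,i}|}{p_{\text{beam}}} .
\]
% For example, at \sqrt{s} = 14~\text{TeV}, two protons each losing
% \xi = 10^{-2} of their momentum correspond to M_X \approx 140~\text{GeV}.
```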
Abstract:
A search for a narrow diphoton mass resonance is presented based on data from 3.0 fb$^{-1}$ of integrated luminosity from $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV collected by the CDF experiment. No evidence of a resonance in the diphoton mass spectrum is observed, and upper limits are set on the cross section times branching fraction of the resonant state as a function of Higgs boson mass. The resulting limits exclude Higgs bosons with masses below 106 GeV at a 95% Bayesian credibility level (C.L.) for one fermiophobic benchmark model.
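The quoted limits come from CDF's full likelihood machinery; as a much simpler illustration of the underlying idea, the sketch below computes a Bayesian 95% credibility upper limit on a signal rate for a single-bin Poisson counting experiment with known background and a flat prior on the signal (all numbers hypothetical, and nothing here reproduces the actual analysis).

```python
from scipy.optimize import brentq
from scipy.special import gammaincc

def bayes_upper_limit(n_obs: int, b: float, cl: float = 0.95) -> float:
    """Credibility-level cl upper limit on signal s for Poisson(n | s + b)
    with a flat prior on s >= 0. The posterior CDF has the closed form
    1 - Q(n+1, b+s)/Q(n+1, b), with Q the regularized upper incomplete
    gamma function."""
    def posterior_cdf(s):
        return 1.0 - gammaincc(n_obs + 1, b + s) / gammaincc(n_obs + 1, b)
    # Root-find where the posterior probability below s reaches cl
    return brentq(lambda s: posterior_cdf(s) - cl, 0.0, 50.0 + 20.0 * n_obs)

# Hypothetical single bin: 5 events observed over an expected background of 3.2
print(bayes_upper_limit(5, 3.2))   # ~7 signal events at 95% credibility
```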
Abstract:
We report on a search for the standard model Higgs boson produced in association with a $W$ or $Z$ boson in $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV recorded by the CDF II experiment at the Tevatron in a data sample corresponding to an integrated luminosity of 2.1 fb$^{-1}$. We consider events which have no identified charged leptons, an imbalance in transverse momentum, and two or three jets where at least one jet is consistent with originating from the decay of a $b$ hadron. We find good agreement between data and predictions. We place 95% confidence level upper limits on the production cross section for several Higgs boson masses ranging from 110 GeV/$c^2$ to 150 GeV/$c^2$. For a mass of 115 GeV/$c^2$ the observed (expected) limit is 6.9 (5.6) times the standard model prediction.
Abstract:
The question at issue in this dissertation is the epistemic role played by ecological generalizations and models. I investigate and analyze such properties of generalizations as lawlikeness, invariance, and stability, and I ask which of these properties are relevant in the context of scientific explanations. I claim that there are generalizable and reliable causal explanations in ecology, grounded in generalizations that are invariant and stable. An invariant generalization continues to hold under a special kind of change, called an intervention, that alters the values of its variables. Whether a generalization remains invariant under such interventions is the criterion that determines whether it is explanatory, and a generalization can be invariant and explanatory regardless of its lawlike status. Stability concerns the generality of a generalization across possible background conditions: the more stable a generalization, the less its truth depends on background conditions. Although it is invariance rather than stability that furnishes us with explanatory generalizations, stability has an important function in this context: it underwrites the extrapolability and reliability of scientific explanations. I also discuss non-empirical investigations of models, which I call robustness and sensitivity analyses. I call sensitivity analyses those investigations in which a single model is studied with regard to its stability conditions by varying the values of its parameters. As a general definition of robustness analysis I propose the investigation of variations in the modeling assumptions of different models of the same phenomenon, focusing on whether they produce similar or convergent results. Robustness and sensitivity analyses are powerful tools for studying the conditions and assumptions under which models break down, and especially for pointing out why they do so: they show which conditions or assumptions the results of models depend on. Key words: ecology, generalizations, invariance, lawlikeness, philosophy of science, robustness, explanation, models, stability
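The distinction can be illustrated procedurally. The sketch below performs a toy sensitivity analysis on a single model, discrete logistic growth, chosen here purely as an example: it sweeps one parameter and records where a qualitative result (a stable equilibrium) breaks down. A robustness analysis would instead compare structurally different models of the same phenomenon.

```python
# Toy sensitivity analysis: sweep the growth rate r of the discrete
# logistic map x' = x + r*x*(1 - x) and test whether trajectories still
# settle to the equilibrium x* = 1 (stability is lost at r = 2).
def settles_to_equilibrium(r, x0=0.1, burn_in=2000, tol=1e-6):
    x = x0
    for _ in range(burn_in):
        x = x + r * x * (1.0 - x)
    return abs(x - 1.0) < tol

for r in [0.5, 1.0, 1.5, 1.9, 2.1, 2.5]:
    print(f"r = {r}: equilibrium stable -> {settles_to_equilibrium(r)}")
# The qualitative result (convergence to x* = 1) holds for r < 2 and fails
# beyond it, delimiting the parameter region the conclusion depends on.
```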
Abstract:
There is a lack of integrative conceptual models that would help to better understand the underlying reasons for the alleged problems of MBA education. To address this challenge, we draw on the work of Pierre Bourdieu to examine MBA education as an activity with its own ‘economy of exchange’ and ‘rules of the game.’ We argue that applying Bourdieu’s theoretical ideas elucidates three key issues in the debate around MBA education: the outcomes of MBA programs, the inculcation of potentially problematic values and practices through the programs, and the potential of self-regulation, such as accreditation and ranking, to impede the development of MBA education. First, Bourdieu’s notions of capital – intellectual, social and symbolic – shed light on the ‘economy of exchange’ in MBA education. Critics of MBA programs have pointed out that the value of MBA degrees lies not only in ‘learning.’ Bourdieu’s framework allows further analysis of this issue by distinguishing between intellectual (learning), social (social networks), and symbolic capital (credentials and prestige). Second, the concept of ‘habitus’ suggests how values and practices are inculcated through MBA education. This process is often a ‘voluntary’ one in which problematic or ethically questionable ideas may come to be regarded as natural. Third, Bourdieu’s reflections on the ‘doxa’ and its reproduction and legitimation illuminate the role of accreditation and ranking in MBA education. An analysis of such self-regulation explains in part how the system may end up impeding change.
Abstract:
Ecology and evolutionary biology is the study of life on this planet. One of the many methods applied to answering the great diversity of questions regarding the lives and characteristics of individual organisms is the use of mathematical models. Such models are used in a wide variety of ways. Some help us to reason, functioning as aids to, or substitutes for, our own fallible logic, thus making argumentation and thinking clearer. Models that help our reasoning can lead to conceptual clarification; by expressing ideas in algebraic terms, the relationships between different concepts become clearer. Other mathematical models are used to better understand yet more complicated models, or to develop mathematical tools for their analysis. Though they help us to reason and serve as tools in the craftsmanship of science, many models do not tell us much about the real biological phenomena we are, at least initially, interested in. The main reason for this is that any mathematical model is a simplification of the real world, reducing the complexity and variety of interactions and idiosyncrasies of individual organisms. What such models can tell us, however, both is and has been very valuable throughout the history of ecology and evolution. Minimally, a model simplifying the complex world can tell us that the patterns produced in the model could, in principle, also be produced in the real world. We can never know how different a simplified mathematical representation is from the real world, but the similarity models strive for gives us confidence that their results could apply. This thesis deals with a variety of models used for different purposes. One model deals with how one can measure and analyse invasions, that is, the expanding phase of invasive species. Earlier analyses claim to have shown that such invasions can be a regulated phenomenon, with higher invasion speeds at a given point in time leading to a subsequent reduction in speed. Two simple mathematical models show that analyses based on this particular measure of invasion speed need not be evidence of regulation. In the context of dispersal evolution, two models acting as proofs of principle are presented. Parent-offspring conflict emerges when the evolutionary optima of adaptive behaviour differ between parents and offspring. We show that the evolution of dispersal distances can entail such a conflict, and that under parental control of dispersal (as, for example, in higher plants) wider dispersal kernels are optimal. We also show that dispersal homeostasis can be optimal: in a setting where dispersal decisions (to leave or stay in a natal patch) are made, strategies that divide their seeds or eggs into fixed fractions that disperse or not, as opposed to randomizing the decision for each seed, can prevail. Finally, we present a model of the evolution of bet-hedging strategies: evolutionary adaptations that occur despite their fitness being, on average, lower than that of a competing strategy. Such strategies can win in the long run because their reduced variance in fitness compensates for the reduction in mean fitness; fitness is multiplicative across generations and therefore sensitive to variability. This model is used for conceptual clarification, by developing a population-genetic model with uncertain fitness and expressing genotypic variance in fitness as a product of individual-level variance and correlations between individuals of a genotype. We arrive at expressions that intuitively reflect two of the main categorizations of bet-hedging strategies, conservative vs. diversifying and within- vs. between-generation bet hedging; in addition, the model shows that these divisions are in fact false dichotomies.
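The multiplicative-fitness logic behind bet hedging is easy to demonstrate numerically. The sketch below, a generic illustration with made-up fitness values rather than the thesis's population-genetic model, compares a specialist whose fitness fluctuates strongly between generations with a bet-hedger that has a lower arithmetic-mean fitness but much smaller variance; the bet-hedger wins because long-run growth tracks the geometric mean.

```python
import math, random

def long_run_growth_rate(fitness_draw, generations=100_000, seed=42):
    """Average log growth per generation under multiplicative fitness;
    this estimates the log of the geometric-mean fitness."""
    rng = random.Random(seed)
    return sum(math.log(fitness_draw(rng)) for _ in range(generations)) / generations

# Specialist: boom-or-bust, arithmetic mean 1.05 but huge variance.
specialist = lambda rng: 2.0 if rng.random() < 0.5 else 0.1
# Bet-hedger: arithmetic mean only 0.9, but low variance.
bet_hedger = lambda rng: 1.0 if rng.random() < 0.5 else 0.8

print(long_run_growth_rate(specialist))  # ~log(sqrt(2.0*0.1)) = -0.80
print(long_run_growth_rate(bet_hedger))  # ~log(sqrt(1.0*0.8)) = -0.11
# Despite its lower mean fitness, the bet-hedger's long-run growth rate
# is higher, so over many generations it outcompetes the specialist.
```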
Abstract:
In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, vector autoregressions for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or summarise the information contained in the series in an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and on its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset consists of disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the previously extracted factors. The predictions are evaluated using the relative mean squared forecast error, with a univariate autoregressive model as the benchmark. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than those obtained in the first case. This work opens up multiple research avenues: the results obtained here can be replicated for longer datasets, the non-aggregated data can be represented in an even more disaggregated form (firm level), and the use of micro data, one of the major contributions of this thesis, can be useful for the imputation of missing values and the creation of flash estimates of macroeconomic indicators (nowcasting).
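The two-step procedure can be sketched compactly. Below, principal components of a standardized panel serve as estimated factors (step one), and a simple OLS forecasting equation with the factors and a lag of the target produces an h-step-ahead forecast compared against an AR benchmark (step two). The panel is simulated, and the setup is a generic diffusion-index forecast in the spirit of the approach described above, not the thesis's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, n_factors, h = 200, 60, 2, 1       # sample size, panel width, factors, horizon

# Simulated panel X driven by two common factors, plus a target series y
F = rng.normal(size=(T, n_factors))
X = F @ rng.normal(size=(n_factors, N)) + 0.5 * rng.normal(size=(T, N))
y = X[:, :10].mean(axis=1) + 0.3 * rng.normal(size=T)

# Step 1: estimate factors as principal components of the standardized panel
Z = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
F_hat = Z @ Vt[:n_factors].T             # estimated factor scores

# Step 2: OLS forecasting equation y_{t+h} = const + beta'F_t + gamma*y_t
regressors = np.column_stack([np.ones(T - h), F_hat[:-h], y[:-h]])
coef, *_ = np.linalg.lstsq(regressors, y[h:], rcond=None)
factor_forecast = np.array([1.0, *F_hat[-1], y[-1]]) @ coef

# Benchmark: univariate autoregression y_{t+h} = const + rho*y_t
ar, *_ = np.linalg.lstsq(np.column_stack([np.ones(T - h), y[:-h]]), y[h:], rcond=None)
ar_forecast = np.array([1.0, y[-1]]) @ ar
print(factor_forecast, ar_forecast)      # factor forecast vs AR benchmark
```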