23 results for Upwind-biased
in Helda - Digital Repository of University of Helsinki
Abstract:
The present research focused on motivational and personality traits measuring individual differences in the experience of negative affect, in reactivity to negative events, and in the tendency to avoid threats. In this thesis, such traits (i.e., neuroticism and dispositional avoidance motivation) are jointly referred to as trait avoidance motivation. The seven studies presented here examined the moderators of such traits in predicting risk judgments, negatively biased processing, and adjustment. Given that trait avoidance motivation encompasses reactivity to negative events and the tendency to avoid threats, it is surprising that this trait does not seem to be related to risk judgments and that it seems to be inconsistently related to negatively biased information processing. Previous work thus suggests that some variable(s) moderate these relations. Furthermore, recent research has suggested that despite the close connection between trait avoidance motivation and (mal)adjustment, measures of cognitive performance may moderate this connection. However, it is unclear whether this moderation is due to different response processes between individuals with different cognitive tendencies or abilities, or to a genuine buffering effect of high cognitive ability against the negative consequences of high trait avoidance motivation. Studies 1-3 showed that there is a modest direct relation between trait avoidance motivation and risk judgments, but Studies 2-3 demonstrated that state motivation moderates this relation. In particular, individuals in an avoidance state made high risk judgments regardless of their level of trait avoidance motivation. This result explained the disparity between the theoretical conceptualization of avoidance motivation and the results of previous studies suggesting that the relation between trait avoidance motivation and risk judgments is weak or nonexistent. Studies 5-6 examined threat identification tendency as a moderator of the relationship between trait avoidance motivation and negatively biased processing. However, no evidence for such moderation was found. Furthermore, in line with previous work, the results of Studies 5-6 suggested that trait avoidance motivation is inconsistently related to negatively biased processing, implying that theories concerning traits and information processing may need refining. Study 7 examined cognitive ability as a moderator of the relation between trait avoidance motivation and adjustment, and demonstrated that cognitive ability moderates the relation between trait avoidance motivation and indicators of both self-reported and objectively measured adjustment. Thus, the results of Study 7 supported the buffer explanation for the moderating influence of cognitive performance. To summarize, the results showed that it is possible to find factors that consistently moderate the relations between traits and important outcomes (e.g., adjustment). Identifying such factors and studying their interplay with traits is one of the most important goals of current personality research. The present thesis contributed to this line of work in relation to trait avoidance motivation.
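The moderation analyses at the heart of these studies are conventionally run as regressions with an interaction term. Below is a minimal sketch (synthetic data and illustrative variable names, not the thesis's actual analysis) of how a trait-by-state moderation of risk judgments can be tested:

```python
# A minimal sketch of a moderation test via an OLS interaction term.
# All variable names and the simulated effect pattern are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
trait = rng.normal(size=n)                 # trait avoidance motivation
state = rng.integers(0, 2, size=n)         # 0 = neutral, 1 = avoidance state
# Simulated pattern: in an avoidance state, risk judgments are high
# regardless of trait level (the state cancels the trait slope).
risk = 0.5 * trait * (1 - state) + 1.0 * state + rng.normal(scale=0.5, size=n)

df = pd.DataFrame({"risk": risk, "trait": trait, "state": state})
model = smf.ols("risk ~ trait * state", data=df).fit()
print(model.summary())  # a significant trait:state term indicates moderation
```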
Abstract:
Eutrophication of the Baltic Sea is a serious problem. This thesis estimates the benefit to Finns from reduced eutrophication in the Gulf of Finland, the most eutrophied part of the Baltic Sea, by applying the choice experiment method, which belongs to the family of stated preference methods. Because stated preference methods have been subject to criticism, e.g., due to their hypothetical survey context, this thesis contributes to the discussion by studying two anomalies that may lead to biased welfare estimates: respondent uncertainty and preference discontinuity. The former refers to the difficulty of stating one's preferences for an environmental good in a hypothetical context. The latter implies a departure from the continuity assumption of conventional consumer theory, which forms the basis for the method and the analysis. In the three essays of the thesis, discrete choice data are analyzed with multinomial logit and mixed logit models. On average, Finns are willing to contribute to the water quality improvement. The probability of willingness increases with residential or recreational contact with the gulf, higher than average income, younger than average age, and the absence of dependent children in the household. On average, the most important characteristic of water quality for Finns is water clarity, followed by the desire for fewer occurrences of blue-green algae. For future nutrient reduction scenarios, the annual mean household willingness-to-pay estimates range from 271 to 448 euros, and the aggregate welfare estimates for Finns range from 28 billion to 54 billion euros, depending on the model and the intensity of the reduction. Of the respondents (N=726), 72.1% state in a follow-up question that they are either "Certain" or "Quite certain" about their answer when choosing the preferred alternative in the experiment. Based on the analysis of other follow-up questions and another sample (N=307), 10.4% of the respondents are identified as potentially having discontinuous preferences. In relation to both anomalies, respondent- and questionnaire-specific variables are found among the underlying causes, and a departure from standard analysis may improve the model fit and the efficiency of the estimates, depending on the chosen modeling approach. The introduction of uncertainty about the future state of the Gulf increases the acceptance of the valuation scenario, which may indicate increased credibility of the proposed scenario. In conclusion, modeling preference heterogeneity is an essential part of the analysis of discrete choice data. The results regarding uncertainty in stating one's preferences and non-standard choice behavior are promising: accounting for these anomalies in the analysis may improve the precision of the estimates of the benefit from reduced eutrophication in the Gulf of Finland.
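For readers unfamiliar with choice experiments, marginal willingness to pay in a multinomial logit model is the negative ratio of an attribute coefficient to the cost coefficient. A minimal sketch with purely illustrative coefficients (not the thesis's estimates):

```python
# Marginal WTP from multinomial logit coefficients under a linear utility
# specification. All numbers below are illustrative placeholders.
beta = {
    "water_clarity": 0.90,   # utility per one-step clarity improvement
    "less_algae": 0.60,      # utility per one-step reduction in algae occurrence
    "cost": -0.002,          # utility per euro of annual household payment
}

def marginal_wtp(attr: str, coeffs: dict) -> float:
    """WTP (euros/year) = -beta_attr / beta_cost."""
    return -coeffs[attr] / coeffs["cost"]

for attr in ("water_clarity", "less_algae"):
    print(f"{attr}: {marginal_wtp(attr, beta):.0f} euros per household per year")
```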
Abstract:
This study evaluates how the advection of precipitation, or wind drift, between the radar volume and the ground affects radar measurements of precipitation. Normally precipitation is assumed to fall vertically to the ground from the contributing volume, so that the radar measurement represents the geographical location immediately below. In this study, radar measurements are corrected using hydrometeor trajectories calculated from measured and forecasted winds, and the effect of the trajectory correction on the radar measurements is evaluated. Wind drift statistics for Finland are compiled using sounding data from two weather stations spanning two years. For each sounding, the hydrometeor phase at ground level is estimated and the drift distance calculated using different originating level heights. In this way the drift statistics are constructed as a function of range from the radar and elevation angle. On average, wind drift of 1 km was exceeded at approximately 60 km distance, while drift of 10 km was exceeded at 100 km distance. Trajectories were calculated using model winds in order to produce a trajectory-corrected ground field from radar PPI images. It was found that on the upwind side of the radar the effective measuring area was reduced, as some trajectories exited the radar volume scan. On the downwind side, areas near the edge of the radar measuring area experienced improved precipitation detection. The effect of the trajectory correction is most prominent in instantaneous measurements and diminishes when accumulating over longer time periods. Furthermore, measurements of intense and small-scale precipitation patterns benefit most from wind drift correction. The contribution of wind drift to the uncertainty of the estimated Ze(S) relationship was studied by simulating the effect of different error sources on the uncertainty in the relationship coefficients a and b. The overall uncertainty was assumed to consist of systematic errors of both the radar and the gauge, as well as errors caused by turbulence at the gauge orifice and by wind drift of precipitation. The focus of the analysis is the error associated with wind drift, which was determined by describing the spatial structure of the reflectivity field using spatial autocovariance (or a variogram). This spatial structure was then used with the calculated drift distances to estimate the variance in the radar measurement produced by precipitation drift, relative to the other error sources. It was found that the error from wind drift was of similar magnitude to the error from turbulence at the gauge orifice at all ranges from the radar, with the systematic errors of the instruments being a minor issue. The correction method presented in the study could be used in radar nowcasting products to improve the estimation of visibility and local precipitation intensities. The method, however, only considers pure snow, and for operational purposes some improvements are desirable, such as melting layer detection, VPR correction and taking the solid-state hydrometeor type into account, which would improve the estimation of the vertical velocities of the hydrometeors.
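The trajectory correction rests on a simple idea: a hydrometeor's horizontal drift is the horizontal wind integrated over its fall time through each atmospheric layer. A minimal sketch with an assumed wind profile and a constant fall speed typical of snow:

```python
# Layer-by-layer drift integration. The wind profile, layer heights and
# fall speed below are assumed, illustrative values.
import numpy as np

layer_top = np.array([3000.0, 2000.0, 1000.0, 0.0])  # fall from 3 km to ground (m)
u_wind = np.array([5.0, 4.0, 3.0])                   # mean horizontal wind per layer (m/s)
fall_speed = 1.0                                     # m/s, typical for snow

drift = 0.0
for i in range(len(u_wind)):
    thickness = layer_top[i] - layer_top[i + 1]      # layer depth (m)
    time_in_layer = thickness / fall_speed           # residence time (s)
    drift += u_wind[i] * time_in_layer               # accumulated drift (m)

print(f"Total horizontal drift: {drift / 1000:.1f} km")  # 12 km for this profile
```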
Abstract:
In this paper both documentary and natural proxy data have been used to improve the accuracy of palaeoclimatic knowledge in Finland since the 18th century. Early meteorological observations from Turku (1748-1800) were analyzed first as a potential source of climate variability. The reliability of the calculated mean temperatures was evaluated by comparing them with contemporary temperature records from Stockholm, St. Petersburg and Uppsala. The resulting monthly, seasonal and yearly mean temperatures from 1748 to 1800 were compared with the present-day mean values (1961-1990): the comparison suggests that the winters of the period 1749-1800 were 0.8 ºC colder than today, while the summers were 0.4 ºC warmer. Over the same period, springs were 0.9 ºC and autumns 0.1 ºC colder than today. Despite their uncertainties when compared with modern meteorological data, early temperature measurements offer direct and daily information about the weather for all months of the year, in contrast with other proxies. Secondly, early meteorological observations from Tornio (1737-1749) and Ylitornio (1792-1838) were used to study the temporal behaviour of the climate-tree growth relationship during the past three centuries in northern Finland. Analyses showed that the correlations between ring widths and mid-summer (July) temperatures did not vary significantly as a function of time. Early summer (June) and late summer (August) mean temperatures were secondary to mid-summer temperatures in controlling the radial growth. According to the dataset used, there was no clear signature of temporally reduced sensitivity of Scots pine ring widths to mid-summer temperatures over the periods of early and modern meteorological observations. Thirdly, plant phenological data together with tree-rings from south-west Finland since 1750 were examined as a palaeoclimate indicator. The information from the fragmentary, partly overlapping, partly nonsystematically biased plant phenological records of 14 different phenomena was combined into one continuous time series of phenological indices. The indices were found to be reliable indicators of the February to June temperature variations. In contrast, there was no correlation between the phenological indices and the precipitation data. Moreover, the correlations between the studied tree-rings and spring temperatures varied as a function of time and, hence, their use in palaeoclimate reconstruction is questionable. The use of present tree-ring datasets for palaeoclimate purposes may become possible after the application of more sophisticated calibration methods. Climate variability since the 18th century is perhaps best seen in the fourth paper's multiproxy spring temperature reconstruction for south-west Finland. With the help of transfer functions, an attempt has been made to utilize both documentary and natural proxies. The reconstruction was verified with statistics showing a high degree of agreement between the reconstructed and observed temperatures. According to the proxies and modern meteorological observations from Turku, springs have become warmer, showing a warming trend since around the 1850s. Over the period from 1750 to around 1850, springs featured larger multidecadal low-frequency variability, as well as a smaller range of annual temperature variations. The coldest springtimes occurred around the 1840s and 1850s and in the first decade of the 19th century. Particularly warm periods occurred in the 1760s, 1790s, 1820s, 1930s, 1970s and from 1987 onwards, although cold springs still occurred in this period, such as those of 1994 and 1996. On the basis of the available material, the long-term temperature changes have been related to changes in the atmospheric circulation, such as the North Atlantic Oscillation (February-June).
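The transfer-function approach of the fourth paper amounts to calibrating a statistical relation between proxies and temperature and verifying it on withheld data. A minimal sketch with synthetic data, using a linear calibration and the reduction-of-error (RE) verification statistic common in palaeoclimatology (RE > 0 indicates skill over the calibration-period mean):

```python
# Linear transfer-function calibration and RE verification on synthetic data.
# The proxy, temperatures and split points are all illustrative.
import numpy as np

rng = np.random.default_rng(1)
years = 150
proxy = rng.normal(size=years)                        # e.g., a phenological index
temp = 0.8 * proxy + rng.normal(scale=0.5, size=years)

calib, verif = slice(0, 100), slice(100, years)       # calibration / verification split
b, a = np.polyfit(proxy[calib], temp[calib], 1)       # slope and intercept
pred = a + b * proxy[verif]                           # reconstructed temperatures

re = 1 - np.sum((temp[verif] - pred) ** 2) / np.sum(
    (temp[verif] - temp[calib].mean()) ** 2)
print(f"RE = {re:.2f}")                               # positive -> reconstruction validated
```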
Abstract:
This dissertation consists of an introductory section and three theoretical essays analyzing the interaction of corporate governance and restructuring. The essays adopt an incomplete contracts approach and analyze the role of different institutional designs in facilitating the alignment of the objectives of shareholders and management (or employees) over the magnitude of corporate restructuring. The first essay analyzes how a firm's choice of production technology affects the employees' human capital investment. In the essay, the owners of the firm can choose between a specific and a general technology, both of which require a costly human capital investment by the employees. The specific technology is initially superior in using the human capital of employees but, in contrast to the general technology, it is not compatible with future innovations. As a result, anticipated changes in the specific technology diminish the ex ante incentives of the employees to invest in human capital unless the shareholders grant the employees specific governance mechanisms (a right of veto, severance pay) to protect their investments. The results of the first essay indicate that the level of protection that the shareholders are willing to offer falls short of the socially desirable one. Furthermore, when restructuring opportunities become more abundant, it becomes more attractive both socially and from the viewpoint of the shareholders to adopt the general technology initially. The second essay analyzes how the allocation of authority within the firm interacts with the owners' choice of business strategy when the ability of the owners to monitor the project proposals of the management is biased in favor of the status quo strategy. The essay shows that a bias in the monitoring ability affects not only the allocation of authority within the firm but also the choice of business strategy. In particular, when delegation has positive managerial incentive effects, delegation turns out to be more attractive under the new business strategy because the improved managerial incentives are a way for the owners to compensate for their own reduced information-gathering ability. This effect, however, simultaneously makes the owners hesitant to switch strategy, since doing so would involve a more frequent loss of control over the project choice. Consequently, the owners' lack of knowledge of the new business strategy may lead to a suboptimal choice of strategy. The third essay analyzes the implications of the CEO succession process for the ideal board structure. In this essay, the presence of the departing CEO on the board improves the ability of the board to find a matching successor and to counsel him. However, the ex-CEO's presence may simultaneously weaken the ability of the board to restructure, since the predecessor may use the opportunity to distort the successor's project choice. The results of the essay suggest that the extent of restructuring gains, the firm's ability to hire good outside directors and the importance of the board's advisory role affect when and for how long the shareholders may want to nominate the predecessor to the board.
Abstract:
The dissertation consists of an introductory chapter and three essays that apply search-matching theory to study the interaction of labor market frictions, technological change and macroeconomic fluctuations. The first essay studies the impact of capital-embodied growth on equilibrium unemployment by extending a vintage capital/search model to incorporate vintage human capital. In addition to the capital obsolescence (or creative destruction) effect that tends to raise unemployment, vintage human capital introduces a skill obsolescence effect of faster growth that has the opposite sign. Faster skill obsolescence reduces the value of unemployment and hence wages, which leads to more job creation and less job destruction, unambiguously reducing unemployment. The second essay studies the effect of skill-biased technological change on skill mismatch and the allocation of workers and firms in the labor market. By allowing workers to invest in education, we extend a matching model with two-sided heterogeneity to incorporate an endogenous distribution of high- and low-skill workers. We consider various possibilities for the cost of acquiring skills and show that while unemployment increases in most scenarios, the effect on the distribution of vacancy and worker types varies according to the structure of skill costs. When the model is extended to incorporate endogenous labor market participation, we show that the unemployment rate becomes less informative about the state of the labor market as the participation margin absorbs employment effects. The third essay studies the effects of labor taxes on equilibrium labor market outcomes and macroeconomic dynamics in a New Keynesian model with matching frictions. Three policy instruments are considered: a marginal tax and a tax subsidy to produce tax progression schemes, and a replacement ratio to account for variability in outside options. In equilibrium, the marginal tax rate and the replacement ratio dampen economic activity, whereas tax subsidies boost the economy. The marginal tax rate and the replacement ratio amplify shock responses, whereas employment subsidies weaken them. The tax instruments affect the degree to which the wage absorbs shocks. We show that increasing tax progression when taxation is initially progressive is harmful for steady-state employment and output, and amplifies the sensitivity of macroeconomic variables to shocks. When taxation is initially proportional, increasing progression is beneficial for output and employment and dampens shock responses.
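The common building block of these essays is the search-matching framework, in which steady-state unemployment balances job separation against job finding. A minimal sketch with textbook parameter values (not the essays' calibration), using a Cobb-Douglas matching function:

```python
# Steady-state unemployment in a canonical matching model. With a
# Cobb-Douglas matching function m = A * u**alpha * v**(1-alpha), the
# job-finding rate is f(theta) = A * theta**(1-alpha), where theta = v/u
# is labor market tightness, and steady state solves u = s / (s + f).
# Parameter values are illustrative.
A, alpha, s = 0.6, 0.5, 0.03   # matching efficiency, elasticity, separation rate

def steady_state_unemployment(theta: float) -> float:
    f = A * theta ** (1 - alpha)       # job-finding rate
    return s / (s + f)                 # Beveridge-curve steady state

for theta in (0.5, 1.0, 1.5):
    print(f"theta = {theta:.1f}: u = {steady_state_unemployment(theta):.3f}")
```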
Abstract:
Predicting evolutionary outcomes and reconstructing past evolutionary transitions are among the main goals of evolutionary biology. Ultimately, understanding the mechanisms of evolutionary change will also provide answers to the timely question of whether and how organisms will adapt to changing environmental conditions. In this thesis, I have investigated the relative roles of natural selection, random genetic drift and genetic correlations in the evolution of complex traits at different levels of organisation, from populations to individuals. I have shown that natural selection has been the driving force behind the body shape divergence of marine and freshwater threespine stickleback (Gasterosteus aculeatus) populations, while genetic drift may have played a significant role in the finer-scale divergence among isolated freshwater populations. These results are consistent with the patterns that have emerged in published studies comparing the relative importance of natural selection and genetic drift as explanations for population divergence in different traits and taxa. I have also shown that body shape and armour divergence among threespine stickleback populations is likely to be biased by the patterns of genetic variation and covariation. Body shape and armour variation along the most likely direction of evolution, the direction of maximum genetic variance, reflects the general patterns of variation observed in wild populations across the distribution range of the threespine stickleback. Conversely, it appears that genetic correlations between the sexes have not imposed significant constraints on the evolution of sexual dimorphism in threespine stickleback body shape and armour. I have demonstrated that the patterns of evolution seen in the wild can be experimentally recreated to tease out the effects of different selection agents in detail. In addition, I have shown how important it is to take into account the correlative nature of traits when making interpretations about the effects of natural selection on individual traits. Overall, this thesis demonstrates how considering the relative roles of different mechanisms of evolutionary change at different levels of organisation can aid the emergence of a comprehensive picture of how adaptive divergence in wild populations occurs.
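The "direction of maximum genetic variance" invoked here is the leading eigenvector of the additive genetic variance-covariance matrix G, often called g_max in quantitative genetics. A minimal sketch with a hypothetical three-trait G matrix:

```python
# Finding g_max, the direction of maximum genetic variance, as the leading
# eigenvector of G. The matrix entries below are illustrative, not estimates.
import numpy as np

G = np.array([[1.0, 0.6, 0.3],    # hypothetical genetic (co)variances
              [0.6, 0.8, 0.4],    # for three body shape / armour traits
              [0.3, 0.4, 0.5]])

eigvals, eigvecs = np.linalg.eigh(G)   # eigh: for symmetric matrices, ascending order
g_max = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue
print("g_max direction:", np.round(g_max, 3))
print("share of genetic variance along g_max:",
      round(eigvals[-1] / eigvals.sum(), 2))
```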
Abstract:
One major reason for the global decline of biodiversity is habitat loss and fragmentation. Conservation areas can be designed to reduce biodiversity loss, but as resources are limited, conservation efforts need to be prioritized in order to achieve the best possible outcomes. The field of systematic conservation planning developed as a response to opportunistic approaches to conservation that often resulted in a biased representation of biological diversity. The last two decades have seen the development of increasingly sophisticated methods that account for information about biodiversity conservation goals (benefits), economic considerations (costs) and socio-political constraints. In this thesis I focus on two general topics related to systematic conservation planning. First, I address two aspects of the question of how biodiversity features should be valued. (i) I investigate the extremely important but often neglected issue of the differential prioritization of species for conservation. Species prioritization can be based on various criteria, and is always goal-dependent, but it can also be implemented in a scientifically more rigorous way than is the usual practice. (ii) I introduce a novel framework for conservation prioritization, which is based on continuous benefit functions that convert increasing levels of biodiversity feature representation into increasing conservation value, using the principle that more is better. Traditional target-based systematic conservation planning is a special case of this approach, in which a step function is used as the benefit function. We have further expanded the benefit function framework for area prioritization to address issues such as protected area size and habitat vulnerability. In the second part of the thesis I address the application of community-level modelling strategies to conservation prioritization. One of the most serious issues in systematic conservation planning currently is not a deficiency of methodology for selection and design, but simply the lack of data. Community-level modelling offers a surrogate strategy that makes conservation planning more feasible in data-poor regions. We have reviewed the available community-level approaches to conservation planning. These range from simplistic classification techniques to sophisticated modelling and selection strategies. We have also developed a general and novel community-level approach to conservation prioritization that significantly improves on previously available methods. This thesis introduces further degrees of realism into conservation planning methodology. The benefit-function-based conservation prioritization framework largely circumvents the problematic phase of target setting and, by allowing trade-offs between species representations, provides a more flexible and hopefully more attractive approach for conservation practitioners. The community-level approach seems highly promising and should prove valuable for conservation planning, especially in data-poor regions. Future work should focus on integrating prioritization methods to deal with the multiple aspects that in combination influence the prioritization process, and on further testing and refining the community-level strategies using real, large datasets.
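The contrast between the benefit-function framework and traditional target-based planning can be made concrete in a few lines. A minimal sketch with hypothetical functional forms (a concave power function versus a step function at a target):

```python
# Two benefit functions mapping a feature's representation level to
# conservation value. The functional forms and parameters are illustrative.
import numpy as np

def concave_benefit(rep: np.ndarray, z: float = 0.25) -> np.ndarray:
    """Continuous benefit: more is always better, with diminishing returns."""
    return rep ** z

def target_step_benefit(rep: np.ndarray, target: float = 0.3) -> np.ndarray:
    """Target-based planning as a special case: full value at the target, none below."""
    return (rep >= target).astype(float)

rep = np.linspace(0, 1, 5)          # fraction of a feature's range protected
print("representation:", rep)
print("continuous:    ", np.round(concave_benefit(rep), 2))
print("step (target): ", target_step_benefit(rep))
```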
Abstract:
One of the main aims of evolutionary biology is to explain why organisms vary phenotypically as they do. Proximately, this variation arises from genetic differences and from environmental influences, the latter of which is referred to as phenotypic plasticity. Phenotypic plasticity is thus a central concept in evolutionary biology, and understanding its relative importance in causing phenotypic variation and differentiation is important, for instance in anticipating the consequences of human-induced environmental changes. The aim of this thesis was to study geographic variation and local adaptation, as well as sex ratios and environmental sex reversal, in the common frog (Rana temporaria). These themes cover three different aspects of phenotypic plasticity, which emerges as the central concept of the thesis. The first two chapters address geographic variation and local adaptation in two potentially thermally adaptive traits, namely the degree of melanism and the relative leg length. The results show that although there is an increasing latitudinal trend in the degree of melanism in wild populations across the Scandinavian Peninsula, this cline has no direct genetic basis and is thus environmentally induced. The second chapter demonstrates that although there is no linear, latitudinally ordered phenotypic trend in relative leg length of the kind that would be expected under Allen's rule, an ecogeographical rule linking extremity length to climatic conditions, there seems to be such a trend at the genetic level, hidden under environmental effects. The first two chapters thus view phenotypic plasticity through its ecological role and evolution, and demonstrate that it can both give rise to phenotypic variation and hide evolutionary patterns in studies that focus solely on phenotypes. The last three chapters relate to phenotypic plasticity through its ecological and evolutionary role in sex determination, and the consequent effects on population sex ratio, genetic recombination and the evolution of sex chromosomes. The results show that while sex ratios are strongly female-biased and there is evidence of environmental sex reversals, these reversals are unlikely to have caused the sex ratio skew, at least directly. The results demonstrate that environmental sex reversal can have an effect on the evolution of sex chromosomes, as the recombination patterns between them seem to be controlled by phenotypic, rather than genetic, sex. This potentially allows Y chromosomes to recombine, lending support to the recent hypothesis that sex reversal may play an important role in the rejuvenation of Y chromosomes.
Abstract:
Habitat fragmentation is currently affecting many species throughout the world. As a consequence, an increasing number of species are structured as metapopulations, i.e. as local populations connected by dispersal. While excellent studies of metapopulations have accumulated over the past 20 years, the focus has recently shifted from single species to studies of multiple species. This has created the concept of metacommunities, where local communities are connected by the dispersal of one or several of their member species. To understand this higher level of organisation, we need to address not only the properties of single species, but also establish the importance of interspecific interactions. However, studies of metacommunities are so far heavily biased towards laboratory-based systems, and empirical data from natural systems are urgently needed. My thesis focuses on a metacommunity of insect herbivores on the pedunculate oak (Quercus robur), a tree species known for its high diversity of host-specific insects. Taking advantage of the amenability of this system to both observational and experimental studies, I quantify and compare the importance of local and regional factors in structuring herbivore communities. Most importantly, I contrast the impact of direct and indirect competition, host plant genotype and local adaptation (i.e. local factors) with that of regional processes (as reflected by the spatial context of the local community). As a key approach, I use general theory to generate testable hypotheses, controlled experiments to establish causal relations, and observational data to validate the role played by the pinpointed processes in nature. As the central outcome of my thesis, I am able to relegate local forces to a secondary role in structuring oak-based insect communities. While controlled experiments show that direct competition does occur among both conspecifics and heterospecifics, that indirect interactions can be mediated by both the host plant and the parasitoids, and that host plant genotype may affect local adaptation, the size of these effects is much smaller than that of the spatial context. Hence, I conclude that dispersal between habitat patches plays a prime role in structuring the insect community, and that the distribution and abundance of the target species can only be understood in a spatial framework. By extension, I suggest that the majority of herbivore communities are dependent on the spatial structure of their landscape, and I urge fellow ecologists working on other herbivore systems to either support or refute my generalization.
Abstract:
The geomagnetic field is one of the most fundamental geophysical properties of the Earth and has significantly contributed to our understanding of the internal structure of the Earth and its evolution. Paleomagnetic and paleointensity data have been crucial in shaping concepts like continental drift and magnetic reversals, as well as in estimating the time when the Earth's core and the associated geodynamo processes began. The work of this dissertation is based on reliable Proterozoic and Holocene geomagnetic field intensity data obtained from rocks and archeological artifacts. New archeomagnetic field intensity results are presented for Finland, Estonia, Bulgaria, Italy and Switzerland. The data were obtained using sophisticated laboratory setups as well as various reliability checks and corrections. Inter-laboratory comparisons between three laboratories (Helsinki, Sofia and Liverpool) were performed in order to check the reliability of different paleointensity methods. The new intensity results fill considerable gaps in the master curves for each region investigated. In order to interpret the paleointensity data of the Holocene period, a novel and user-friendly database (GEOMAGIA50) was constructed. This provided a new tool to independently test the reliability of the various techniques and materials used in paleointensity determinations. The results show that archeological artifacts, if well fired, are the most suitable materials. Lavas also yield reliable paleointensity results, although they appear more scattered. This study also shows that reliable estimates are obtained using the Thellier methodology (and its modifications) with reliability checks. Global paleointensity curves during the Paleozoic and Proterozoic have several time gaps with few or no intensity data. To define the global intensity behavior of the Earth's magnetic field during these times, new rock types (meteorite impact rocks) were investigated. Two case histories are presented. The Ilyinets (Ukraine) impact melt rocks yielded a reliable paleointensity value at 440 Ma (Silurian), whereas the results from the Jänisjärvi impact melts (Russian Karelia, ca. 700 Ma) might be biased towards high intensity values because of non-ideal magnetic mineralogy. The features of the geomagnetic field at 1.1 Ga are not well defined due to problems related to the reversal asymmetries observed in Keweenawan data of the Lake Superior region. In this work, new paleomagnetic, paleosecular variation and paleointensity results are reported from coeval diabases from Central Arizona and help in understanding the asymmetry. The results confirm the earlier preliminary observations that the asymmetry is larger in Arizona than in the Lake Superior area. Two of the mechanisms proposed to explain the asymmetry remain plausible: plate motion and a non-dipole influence.
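The Thellier-type methodology referred to throughout rests on comparing the natural remanent magnetization (NRM) lost with the partial thermoremanent magnetization (pTRM) gained in a known laboratory field; on an ideal Arai diagram, the ancient field strength is the negative of the slope times the laboratory field. A minimal sketch with synthetic, ideal data:

```python
# Paleointensity from an idealized Arai diagram: B_ancient = -slope * B_lab.
# The field values and the perfectly linear behaviour are illustrative.
import numpy as np

B_lab = 50.0                                  # laboratory field, microtesla
B_anc_true = 35.0                             # "unknown" ancient field to recover

ptrm_gained = np.linspace(0, 1, 8)            # normalized pTRM acquired per step
nrm_remaining = (B_anc_true / B_lab) * (1 - ptrm_gained)  # ideal NRM demagnetization

slope, _ = np.polyfit(ptrm_gained, nrm_remaining, 1)
print(f"Estimated ancient field: {-slope * B_lab:.1f} microtesla")
```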
Abstract:
The cosmological observations of light from type Ia supernovae, the cosmic microwave background and the galaxy distribution seem to indicate that the expansion of the universe has accelerated during the latter half of its age. Within standard cosmology, this is ascribed to dark energy, a uniform fluid with large negative pressure that gives rise to repulsive gravity but also entails serious theoretical problems. Understanding the physical origin of the perceived accelerated expansion has been described as one of the greatest challenges in theoretical physics today. In this thesis, we discuss the possibility that, instead of dark energy, the acceleration is caused by an effect of nonlinear structure formation on light, ignored in standard cosmology. A physical interpretation of the effect goes as follows: due to the clustering of the initially smooth matter with time into filaments of opaque galaxies, the regions through which the detectable light travels become emptier and emptier relative to the average. Since a developing void expands the faster the lower its matter density becomes, the expansion can then accelerate along our line of sight without local acceleration, potentially obviating the need for the mysterious dark energy. In addition to offering a natural physical interpretation of the acceleration, we have further shown that an inhomogeneous model is able to match the main cosmological observations without dark energy, resulting in a concordant picture of the universe with 90% dark matter, 10% baryonic matter and 15 billion years as the age of the universe. The model also provides an elegant solution to the coincidence problem: if induced by the voids, the onset of the perceived acceleration naturally coincides with the formation of the voids. Additional future tests include quantitative predictions for angular deviations and a theoretical derivation of the model to reduce the required phenomenology. A spin-off of the research is a physical classification of the cosmic inhomogeneities according to how they could induce accelerated expansion along our line of sight. We have identified three physically distinct mechanisms: global acceleration due to spatial variations in the expansion rate, a faster local expansion rate due to a large local void, and biased light propagation through voids that expand faster than the average. A general conclusion is that the physical properties crucial to account for the perceived acceleration are the growth of the inhomogeneities and the inhomogeneities in the expansion rate. The existence of these properties in the real universe is supported by both observational data and theoretical calculations. However, better data and more sophisticated theoretical models are required to vindicate or disprove the conjecture that the inhomogeneities are responsible for the acceleration.
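The first mechanism can be illustrated with a well-known two-region toy model from this literature (a sketch in the spirit of the argument, not the thesis's full calculation): an empty void region expanding as a proportional to t, joined to an overdense dust region that turns around and collapses. As the void comes to dominate the volume, the volume-averaged scale factor can accelerate even though neither region accelerates locally. A minimal numerical sketch:

```python
# Two-region toy model of average acceleration without local acceleration.
# Region 1: empty void, a_void ~ t. Region 2: closed dust region given
# parametrically by a development angle phi (expands, turns around, collapses).
# Units and region weights are illustrative.
import numpy as np

phi = np.linspace(0.5, 1.9 * np.pi, 4000)    # development angle of overdense region
t = phi - np.sin(phi)                        # time (closed dust solution)
a_void = t                                   # empty region
a_wall = 1.0 - np.cos(phi)                   # overdense region

a_D = (a_void**3 + a_wall**3) ** (1.0 / 3.0)  # volume-averaged scale factor

# Numerical second derivative of a_D with respect to (non-uniform) time.
da = np.gradient(a_D, t)
dda = np.gradient(da, t)

accel = dda > 0
print("average expansion accelerates:", accel.any())
print("acceleration sets in around t =", round(t[accel.argmax()], 2))
```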
Abstract:
Silicon particle detectors are used in several applications and will clearly require better hardness against particle radiation in future large-scale experiments than can be provided today. To achieve this goal, more irradiation studies with defect-generating bombarding particles are needed. Protons can be considered important bombarding species: although neutrons and electrons are perhaps the most widely used particles in such irradiation studies, protons provide unique possibilities, as their defect production rates are clearly higher than those of neutrons and electrons, and their damage creation in silicon is most similar to that of pions. This thesis explores the development and testing of an irradiation facility that provides cooling of the detector and on-line electrical characterisation, such as current-voltage (IV) and capacitance-voltage (CV) measurements. This irradiation facility, which employs a 5-MV tandem accelerator, appears to function well, but some disadvantageous limitations are related to the MeV-proton irradiation of silicon particle detectors. Typically, detectors are in non-operational mode during irradiation (i.e., without an applied bias voltage). However, in real experiments the detectors are biased; the ionising protons generate electron-hole pairs, and a rise in the proton flux may cause the detector to break down. This limits the proton flux for the irradiation of biased detectors. In this work, it is shown that, if detectors are irradiated and kept operational, the electric field decreases the introduction rate of negative space charges and current-related damage. The effects of various particles with different energies are scaled to each other by the non-ionising energy loss (NIEL) hypothesis. The type of defects induced by irradiation depends on the energy used, and this thesis also discusses the minimum proton energy at which the NIEL scaling is valid.
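Under the NIEL hypothesis, damage from different particles and energies is compared via a 1 MeV neutron equivalent fluence, obtained by scaling with a hardness factor kappa = D(particle) / D(1 MeV n), where D is the displacement damage cross-section (conventionally normalized to 95 MeV·mb for 1 MeV neutrons). A minimal sketch, with the proton D value as an explicit placeholder rather than tabulated data:

```python
# NIEL scaling to a 1 MeV neutron equivalent fluence.
D_NEUTRON_1MEV = 95.0  # reference displacement damage cross-section, MeV*mb

def equivalent_fluence(fluence: float, damage_cross_section: float) -> float:
    """1 MeV neutron equivalent fluence under the NIEL hypothesis."""
    kappa = damage_cross_section / D_NEUTRON_1MEV  # hardness factor
    return kappa * fluence

# Hypothetical proton irradiation; D_proton is a placeholder, not tabulated data.
phi_proton = 1e14              # protons / cm^2
D_proton = 150.0               # assumed damage cross-section, MeV*mb
print(f"Phi_eq = {equivalent_fluence(phi_proton, D_proton):.2e} n_eq / cm^2")
```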
Abstract:
Detecting Earnings Management Using Neural Networks. Trying to balance between relevant and reliable accounting data, generally accepted accounting principles (GAAP) allow, to some extent, the company management to use their judgment and to make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods have been suggested for detecting accrual-based earnings management. A majority of these methods are based on linear regression. The problem with using linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed. However, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression, which can handle non-linear relationships, is neural networks. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables. The discretionary accruals in the highest and lowest quartiles for these six variables are then compared. Third, a data set containing simulated earnings management is used. Both expense and revenue manipulation, ranging between -5% and 5% of lagged total assets, is simulated. Furthermore, two neural network-based models and two linear regression-based models are applied to a data set containing financial statement data from 110 failed companies. Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
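The Jones (1991) model underlying all seven detection models regresses total accruals, scaled by lagged total assets, on the inverse of lagged assets, the change in revenues, and gross property, plant and equipment; the residual is the estimate of discretionary accruals. A minimal sketch with synthetic data:

```python
# The Jones (1991) discretionary accruals model on synthetic firm data.
# The simulated coefficients and distributions are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 200
lag_assets = rng.uniform(50, 500, n)     # lagged total assets
d_rev = rng.normal(20, 10, n)            # change in revenues
ppe = rng.uniform(30, 300, n)            # gross property, plant and equipment
total_accruals = 0.05 * d_rev - 0.03 * ppe + rng.normal(0, 5, n)

# All variables scaled by lagged total assets, per Jones (1991); no intercept.
y = total_accruals / lag_assets
X = np.column_stack([1.0 / lag_assets, d_rev / lag_assets, ppe / lag_assets])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
discretionary = y - X @ coef             # residuals = discretionary accruals
print("estimated coefficients:", np.round(coef, 3))
```

The neural-network variants described above replace this linear map with a feed-forward network fitted to the same scaled inputs, which is how they accommodate the non-linear accrual relationships.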
Abstract:
In this thesis we deal with the concept of risk. The objective is to bring together, and draw conclusions from, normative information regarding quantitative portfolio management and risk assessment. The first essay concentrates on return dependency. We propose an algorithm for classifying markets into rising and falling. Given the algorithm, we derive a statistic, the Trend Switch Probability, for the detection of long-term return dependency in the first moment. The empirical results suggest that the Trend Switch Probability is robust over various volatility specifications. The serial dependency in bear and bull markets behaves differently, however. It is strongly positive in rising markets, whereas in bear markets it is closer to a random walk. Realized volatility, a technique for estimating volatility from high-frequency data, is investigated in essays two and three. In the second essay we find, when measuring realized variance on a set of German stocks, that the second-moment dependency structure is highly unstable and changes randomly. The results also suggest that volatility is non-stationary from time to time. In the third essay we examine the impact of market microstructure on the error between the estimated realized volatility and the volatility of the underlying process. With simulation-based techniques we show that autocorrelation in returns leads to biased variance estimates and that a lower sampling frequency and non-constant volatility increase the error variation between the estimated variance and the variance of the underlying process. From these essays we can conclude that volatility is not easily estimated, even from high-frequency data. It is neither very well behaved in terms of stability nor in terms of dependency over time. Based on these observations, we would recommend the use of simple, transparent methods that are likely to be more robust over differing volatility regimes than models with a complex parameter universe. In analyzing long-term return dependency in the first moment, we find that the Trend Switch Probability is a robust estimator. This is an interesting area for further research, with important implications for active asset allocation.
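Realized variance is simply the sum of squared intraday returns, and the microstructure effect examined in the third essay can be reproduced in a few lines: adding noise to simulated prices biases the estimator upwards at high sampling frequencies. A minimal sketch with simulated prices (all parameters illustrative):

```python
# Realized variance RV = sum of squared intraday returns, computed from a
# simulated log-price path with additive microstructure noise.
import numpy as np

rng = np.random.default_rng(3)
n = 23400                                      # one trading day of 1-second steps
true_var = 0.0001                              # daily integrated variance (assumed)
log_price = np.cumsum(rng.normal(0, np.sqrt(true_var / n), n))
noisy = log_price + rng.normal(0, 0.0005, n)   # microstructure noise (assumed scale)

for step in (1, 60, 300):                      # 1 s, 1 min, 5 min sampling
    r = np.diff(noisy[::step])                 # returns at the chosen frequency
    print(f"sampling every {step:>3} ticks: RV = {np.sum(r**2):.6f}")
```

Sparser sampling reduces the noise-induced bias, which is one reason 5-minute realized variance became a common practical compromise.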