16 results for Model-based bootstrap

in Helda - Digital Repository of University of Helsinki


Relevance:

100.00%

Publisher:

Abstract:

The aim of this dissertation is to provide conceptual tools for the social scientist for clarifying, evaluating and comparing explanations of social phenomena based on formal mathematical models. The focus is on relatively simple theoretical models and simulations, not statistical models. These studies apply a theory of explanation according to which explanation is about tracing objective relations of dependence, knowledge of which enables answers to contrastive why- and how-questions. This theory is developed further by delineating criteria for evaluating competing explanations and by applying the theory to social scientific modelling practices and to the key concepts of equilibrium and mechanism. The dissertation comprises an introductory essay and six published original research articles. The main theses about model-based explanations in the social sciences argued for in the articles are the following. 1) The concept of explanatory power, often used to argue for the superiority of one explanation over another, encompasses five dimensions which are partially independent and involve some systematic trade-offs. 2) Not all equilibrium explanations causally explain the attainment of the final equilibrium state from the multiple possible initial states. Instead, they often constitutively explain the macro properties of the system in terms of the micro properties of the parts (together with their organization). 3) There is an important ambiguity in the concept of mechanism used in many model-based explanations, and this difference corresponds to a difference between two alternative research heuristics. 4) Whether unrealistic assumptions in a model (such as a rational choice model) are detrimental to an explanation provided by the model depends on whether the representation of the explanatory dependency in the model is itself dependent on the particular unrealistic assumptions. Thus evaluating whether a literally false assumption in a model is problematic requires specifying exactly what is supposed to be explained and by what. 5) The question of whether an explanatory relationship depends on particular false assumptions can be explored with the process of derivational robustness analysis, and the importance of robustness analysis accounts for some of the puzzling features of the tradition of model-building in economics. 6) The fact that economists have been relatively reluctant to use true agent-based simulations to formulate explanations can partially be explained by the specific ideal of scientific understanding implicit in the practice of orthodox economics.

Relevance:

100.00%

Publisher:

Abstract:

There exist various proposals for building a functional and fault-tolerant large-scale quantum computer. Topological quantum computation is a more exotic proposal, which makes use of the properties of quasiparticles manifest only in certain two-dimensional systems. These so-called anyons exhibit topological degrees of freedom which, in principle, can be used to execute quantum computation with intrinsic fault-tolerance. This feature is the main incentive to study topological quantum computation. The objective of this thesis is to provide an accessible introduction to the theory. The thesis considers the theory of anyons arising in two-dimensional quantum mechanical systems which are described by gauge theories based on so-called quantum double symmetries. The quasiparticles are shown to exhibit interactions and carry quantum numbers which are both of topological nature. In particular, it is found that the addition of the quantum numbers is not unique, but that the fusion of the quasiparticles is described by a non-trivial fusion algebra. It is discussed how this property can be used to encode quantum information in a manner which is intrinsically protected from decoherence, and how one could, in principle, perform quantum computation by braiding the quasiparticles. As an example of the presented general discussion, the particle spectrum and the fusion algebra of an anyon model based on the gauge group S_3 are explicitly derived. The fusion algebra is found to branch into multiple proper subalgebras, and the simplest one of them is chosen as a model for an illustrative demonstration. The different steps of a topological quantum computation are outlined and the computational power of the model is assessed. It turns out that the chosen model is not universal for quantum computation. However, because the objective was a demonstration of the theory with explicit calculations, none of the other, more complicated fusion subalgebras were considered. Studying their applicability for quantum computation could be a topic of further research.
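
To make the notion of a non-trivial fusion algebra concrete, the sketch below implements the simplest standard example, the Fibonacci anyon model with particles 1 and t and the rule t x t = 1 + t. This is an illustrative stand-in, not the quantum double spectrum of S_3 derived in the thesis.

```python
# A minimal sketch of a fusion algebra, using the Fibonacci anyon model
# (particles 1 and t, with t x t = 1 + t) as an illustrative stand-in for
# the richer quantum double algebra of S_3 derived in the thesis.

# Fusion rules: (a, b) -> list of possible fusion outcomes.
FUSION = {
    ("1", "1"): ["1"],
    ("1", "t"): ["t"],
    ("t", "1"): ["t"],
    ("t", "t"): ["1", "t"],  # non-unique outcome: the hallmark of non-abelian anyons
}

def fuse_many(particles):
    """Fuse a sequence of anyons left to right, collecting every outcome channel."""
    channels = [particles[0]]
    for p in particles[1:]:
        channels = [c for ch in channels for c in FUSION[(ch, p)]]
    return channels

# Three t anyons fuse to '1' via one channel and to 't' via two; such
# degenerate fusion channels are the degrees of freedom that encode qubits.
outcomes = fuse_many(["t", "t", "t"])
print(outcomes)                          # ['t', '1', 't']
print(outcomes.count("t"), "channels fuse to t")
```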

Relevance:

80.00%

Publisher:

Abstract:

Anu Konttinen: Conducting Gestures - Institutional and Educational Construction of Conductorship in Finland, 1973-1993. This doctoral thesis concentrates on those Finnish conductors who have participated in Professor Jorma Panula's conducting class at the Sibelius Academy during the years 1973-1993. The starting point was conducting as a myth, and the goal has been to find its practical opposite: the practical core of the profession. What has been studied is whether one can theorise and analyse this core, and how. The theoretical goal has been to find out what kind of social construction conductorship is as a historical, sociological and practical phenomenon. In practical terms, this means taking the historical and social concept of the great conductor apart to look for the practical core: gestural communication. The most important theoretical tool is the concept of gesture. The idea has been to sketch a theoretical model based on gestural communication between a conductor and an orchestra, and to give one example of the many possible ways of studying the gestures of a conductor.

Relevance:

80.00%

Publisher:

Abstract:

Breast cancer is the most common cancer in women in Western countries. Approximately two-thirds of breast cancer tumours are hormone dependent, requiring estrogens to grow. Estrogens are formed in the human body via a multistep route starting from cholesterol. The final steps in the biosynthesis involve the CYP450 aromatase enzyme, converting the male hormones, androgens (preferred substrate androstenedione, ASD), into estrogens (estrone, E1), and the 17beta-HSD1 enzyme, converting the biologically less active E1 into the active hormone 17beta-hydroxyestradiol (E2). E2 binds to the nuclear estrogen receptors, causing a cascade of biochemical reactions leading to cell proliferation in normal tissue and to tumour growth in cancer tissue. Aromatase and 17beta-HSD1 are expressed in or near the breast tumour, locally providing the tissue with estrogens. One approach to treating hormone dependent breast tumours is to block the local estrogen production by inhibiting these two enzymes. Aromatase inhibitors are already on the market for treating breast cancer, despite the lack of an experimentally solved structure. The structure of 17beta-HSD1, on the other hand, has been solved, but no commercial drugs have emerged from the drug discovery projects reported in the literature. Computer-assisted molecular modelling is an invaluable tool in modern drug design projects. Modelling techniques can be used to generate a model of the target protein and to design novel inhibitors for it even if the target protein structure is unknown. Molecular modelling has applications in predicting the activities of theoretical inhibitors and in finding possible active inhibitors from a compound database. Inhibitor binding can also be studied at the atomic level with molecular modelling. To clarify the interactions between the aromatase enzyme and its substrate and inhibitors, we generated a homology model based on a mammalian CYP450 enzyme, rabbit progesterone 21-hydroxylase CYP2C5. The model was carefully validated using molecular dynamics simulations (MDS) with and without the natural substrate ASD. The binding orientation of the inhibitors was based on the hypothesis that the inhibitors coordinate to the heme iron, and was studied using MDS. The inhibitors were dietary phytoestrogens, which have been shown to reduce the risk for breast cancer. To further validate the model, the interactions of a commercial breast cancer drug were studied with MDS and ligand–protein docking. In the case of 17beta-HSD1, a 3D QSAR model was generated on the basis of MDS of an enzyme complex with an active inhibitor and ligand–protein docking, employing a compound library synthesised in our laboratory. Furthermore, four pharmacophore hypotheses with and without a bound substrate or an inhibitor were developed and used in screening a commercial database of drug-like compounds. The homology model of aromatase showed stable behaviour in MDS and was capable of explaining most of the results from mutagenesis studies. We were able to identify the active site residues contributing to inhibitor binding and to explain differences in coordination geometry corresponding to the inhibitory activity. Interactions between the inhibitors and aromatase were in agreement with the mutagenesis studies reported for aromatase. Simulations of 17beta-HSD1 with inhibitors revealed an inhibitor binding mode with hydrogen bond interactions not previously reported, and a hydrophobic pocket capable of accommodating a bulky side chain. Pharmacophore hypothesis generation, followed by virtual screening, identified several compounds that can be used in lead compound generation. The visualisation of the interaction fields from the QSAR model and the pharmacophores provided us with novel ideas for inhibitor development in our drug discovery project.
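
To illustrate the kind of geometric test a pharmacophore hypothesis encodes, here is a minimal, self-contained sketch of distance-based pharmacophore matching. The feature types, coordinates, and tolerance below are hypothetical and not taken from the thesis; real screening would use dedicated modelling software.

```python
# A toy sketch of distance-based pharmacophore screening in plain Python.
# Feature types, coordinates, and tolerances are hypothetical illustrations,
# not the hypotheses developed in the thesis.

import math

# A pharmacophore hypothesis: required feature pairs with target distances (angstroms).
PHARMACOPHORE = [
    ("h_bond_acceptor", "hydrophobic", 5.2),
    ("h_bond_acceptor", "aromatic", 3.8),
]
TOLERANCE = 0.5  # allowed deviation in angstroms

def matches(molecule_features, hypothesis, tol=TOLERANCE):
    """Check whether a molecule's 3D features satisfy every required pair distance."""
    for f1, f2, target in hypothesis:
        pts1 = [xyz for name, xyz in molecule_features if name == f1]
        pts2 = [xyz for name, xyz in molecule_features if name == f2]
        if not any(abs(math.dist(p, q) - target) <= tol
                   for p in pts1 for q in pts2):
            return False
    return True

# A hypothetical compound: list of (feature type, 3D coordinates).
compound = [
    ("h_bond_acceptor", (0.0, 0.0, 0.0)),
    ("hydrophobic",     (5.0, 1.0, 0.5)),
    ("aromatic",        (3.7, 0.8, 0.0)),
]
print(matches(compound, PHARMACOPHORE))   # True: all pair distances within tolerance
```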

Relevance:

80.00%

Publisher:

Abstract:

Elucidating the mechanisms responsible for the patterns of species abundance, diversity, and distribution within and across ecological systems is a fundamental research focus in ecology. Species abundance patterns are shaped in a convoluted way by the interplay between inter- and intra-specific interactions, environmental forcing, demographic stochasticity, and dispersal. Comprehensive models and suitable inferential and computational tools for teasing out these different factors are quite limited, even though such tools are critically needed to guide the implementation of management and conservation strategies, the efficacy of which rests on a realistic evaluation of the underlying mechanisms. This is even more so in the prevailing context of concern over the progress of climate change and its potential impacts on ecosystems. This thesis used the flexible hierarchical Bayesian modelling framework, in combination with the computer-intensive methods known as Markov chain Monte Carlo, to develop methodologies for identifying and evaluating the factors that control the structure and dynamics of ecological communities. These methodologies were used to analyze data from a range of taxa: macro-moths (Lepidoptera), fish, crustaceans, birds, and rodents. Environmental stochasticity emerged as the most important driver of community dynamics, followed by density-dependent regulation; the influence of inter-specific interactions on community-level variances was broadly minor. This thesis contributes to the understanding of the mechanisms underlying the structure and dynamics of ecological communities by showing directly that environmental fluctuations, rather than inter-specific competition, dominate the dynamics of several systems. This finding emphasizes the need to better understand how species are affected by the environment and to acknowledge species differences in their responses to environmental heterogeneity, if we are to effectively model and predict their dynamics (e.g. for management and conservation purposes). The thesis also proposes a model-based approach to integrating the niche and neutral perspectives on community structure and dynamics, making it possible to evaluate the relative importance of each category of factors in light of field data.
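
As a flavour of the machinery involved, the sketch below runs a random-walk Metropolis sampler, the simplest Markov chain Monte Carlo method, on a toy hierarchical model of species log-abundances. The model structure and data are invented; the community models in the thesis are far richer.

```python
# A random-walk Metropolis sampler for a toy hierarchical model of species
# log-abundances: species means mu_i drawn around a community mean, with
# Gaussian observation noise. Only the MCMC machinery is sketched here.

import numpy as np

rng = np.random.default_rng(1)

# Simulated data: 5 species, 20 years of log-abundance observations.
true_mu = rng.normal(2.0, 0.5, size=5)
y = true_mu[:, None] + rng.normal(0.0, 0.3, size=(5, 20))

def log_posterior(mu, community_mean=2.0, tau=1.0, sigma=0.3):
    """Hierarchical prior mu_i ~ N(community_mean, tau^2) plus Gaussian likelihood."""
    log_prior = -0.5 * np.sum((mu - community_mean) ** 2) / tau**2
    log_lik = -0.5 * np.sum((y - mu[:, None]) ** 2) / sigma**2
    return log_prior + log_lik

mu, logp = np.zeros(5), log_posterior(np.zeros(5))
samples = []
for step in range(20000):
    proposal = mu + rng.normal(0.0, 0.05, size=5)   # random-walk proposal
    logp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_prop - logp:    # Metropolis acceptance rule
        mu, logp = proposal, logp_prop
    if step >= 5000:                                # discard burn-in
        samples.append(mu.copy())

print("posterior means:", np.mean(samples, axis=0).round(2))
print("true means:     ", true_mu.round(2))
```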

Relevance:

80.00%

Publisher:

Abstract:

Ongoing habitat loss and fragmentation threaten much of the biodiversity that we know today. As such, conservation efforts are required if we want to protect biodiversity. Conservation budgets are typically tight, making the cost-effective selection of protected areas difficult. Therefore, reserve design methods have been developed to identify sets of sites that together represent the species of conservation interest in a cost-effective manner. To be able to select reserve networks, data on species distributions are needed. Such data are often incomplete, but species habitat distribution models (SHDMs) can be used to link the occurrence of a species at the surveyed sites to the environmental conditions at these locations (e.g. climatic, vegetation and soil conditions). The probability of the species occurring at unvisited locations is then predicted by the model, based on the environmental conditions of those sites. The spatial configuration of reserve networks is important, because habitat loss around reserves can influence the persistence of species inside the network. Since species differ in their requirements for network configuration, the spatial cohesion of networks needs to be species-specific. A way to account for species-specific requirements is to use spatial variables in SHDMs. Spatial SHDMs allow the evaluation of the effect of reserve network configuration on the probability of occurrence of the species inside the network. Even though reserves are important for conservation, they are not the only option available to conservation planners. To enhance or maintain habitat quality, restoration or maintenance measures are sometimes required. As a result, the number of conservation options per site increases. Currently available reserve selection tools do not, however, offer the ability to handle multiple alternative options per site. This thesis extends the existing methodology for reserve design by offering methods to identify cost-effective conservation planning solutions when multiple alternative conservation options are available per site. Although restoration and maintenance measures are beneficial to certain species, they can be harmful to other species with different requirements. This introduces trade-offs between species when identifying which conservation action is best applied to which site. The thesis describes how the strength of such trade-offs can be identified, which is useful for assessing the consequences of conservation decisions regarding species priorities and budget. Furthermore, the results of the thesis indicate that spatial SHDMs can be successfully used to account for species-specific requirements for spatial cohesion, in the reserve selection (single-option) context as well as in the multi-option context. Accounting for the spatial requirements of multiple species while allowing for several conservation options is however complicated, due to trade-offs in species requirements. It is also shown that spatial SHDMs can be successfully used for gaining information on the factors that drive a species' spatial distribution. Such information is valuable to conservation planning, as better knowledge of species requirements facilitates the design of networks for species persistence. The methods and results described in this thesis aim to improve species' probabilities of persistence by taking better account of species' habitat and spatial requirements. Many real-world conservation planning problems are characterised by a variety of conservation options related to the protection, restoration and maintenance of habitat. Planning tools therefore need to be able to incorporate multiple conservation options per site, in order to continue the search for cost-effective conservation planning solutions. Simultaneously, the spatial requirements of species need to be considered. The methods described in this thesis offer a starting point for combining these two relevant aspects of conservation planning.
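
A minimal sketch of the multi-option setting, assuming invented sites, costs, and benefits: a greedy heuristic assigns at most one conservation action per site by benefit-per-cost ranking. The thesis methods are more sophisticated, but the sketch shows how mutually exclusive options per site enter the selection problem.

```python
# A sketch of greedy site-and-action selection when each site offers multiple,
# mutually exclusive conservation options (protect / restore / maintain).
# Site names, costs, and species benefits are invented for illustration.

# benefit[site][action] = expected occurrence gain summed over species.
benefit = {
    "A": {"protect": 3.0, "restore": 5.0},
    "B": {"protect": 2.0, "maintain": 2.5},
    "C": {"protect": 4.0, "restore": 4.5},
}
cost = {
    "A": {"protect": 2.0, "restore": 4.0},
    "B": {"protect": 1.0, "maintain": 1.5},
    "C": {"protect": 2.5, "restore": 5.0},
}

def greedy_plan(budget):
    """Pick at most one action per site, ranking by benefit per unit cost."""
    plan, remaining = {}, budget
    candidates = [(site, act) for site in benefit for act in benefit[site]]
    while True:
        # Best affordable (site, action) among sites not yet assigned an action.
        options = [(benefit[s][a] / cost[s][a], s, a)
                   for s, a in candidates
                   if s not in plan and cost[s][a] <= remaining]
        if not options:
            return plan
        _, s, a = max(options)
        plan[s] = a
        remaining -= cost[s][a]

print(greedy_plan(budget=5.0))   # {'B': 'protect', 'C': 'protect'}
```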

Relevance:

80.00%

Publisher:

Abstract:

The era of large sequencing projects has enabled unprecedented possibilities for investigating more complex aspects of living organisms. Among the high-throughput technologies based on genomic sequences, DNA microarrays are widely used for many purposes, including the measurement of the relative quantity of messenger RNAs. However, the reliability of microarrays has been strongly doubted, as robust analysis of the complex microarray output data was developed only after the technology had already spread through the community. One objective of this study was to increase the performance of microarrays, measured by the successful validation of the results by independent techniques. To this end, emphasis was given to the possibility of selecting candidate genes with remarkable biological significance within specific experimental designs. Along with literature evidence, the re-annotation of the probes and model-based normalization algorithms were found to be beneficial when analyzing Affymetrix GeneChip data. Typically, the analysis of microarrays aims at selecting genes whose expression is significantly different in different conditions, followed by grouping them into functional categories, enabling a biological interpretation of the results. Another approach investigates the global differences in the expression of functionally related groups of genes. Here, this technique has been effective in discovering patterns related to temporal changes during the infection of human cells. Another aspect explored in this thesis is the possibility of combining independent gene expression data to create a catalog of genes that are selectively expressed in healthy human tissues. Not all the genes present in human cells are active; some, involved in basic activities (named housekeeping genes), are expressed ubiquitously. Other genes (named tissue-selective genes) provide more specific functions, and they are expressed preferably in certain cell types or tissues. Defining the tissue-selective genes is also important, as these genes can cause disease with a phenotype in the tissues where they are expressed. The hypothesis that gene expression could be used as a measure of the relatedness of tissues was also confirmed. Microarray experiments provide long lists of candidate genes that are often difficult to interpret and prioritize. Extending the power of microarray results is possible by inferring the relationships of genes under certain conditions. Gene transcription is constantly regulated by the coordinated binding of proteins, named transcription factors, to specific portions of its promoter sequence. In this study, the analysis of promoters from groups of candidate genes was utilized for predicting gene networks and highlighting modules of transcription factors playing a central role in the regulation of their transcription. Specific modules were found to regulate the expression of genes selectively expressed in the hippocampus, an area of the brain having a central role in Major Depressive Disorder. Similarly, gene networks derived from microarray results have elucidated aspects of the development of the mesencephalon, another region of the brain, involved in Parkinson's disease.
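
As a simple illustration of why normalization matters, the sketch below implements quantile normalization, a standard model-free method that forces all arrays to share the same intensity distribution. The model-based algorithms evaluated in the thesis for Affymetrix data differ in detail; this only demonstrates the general idea, on invented numbers.

```python
# A sketch of quantile normalization across arrays: after normalization, every
# array shares the same empirical intensity distribution, making arrays
# comparable. (The thesis evaluates model-based normalization algorithms for
# Affymetrix GeneChips; this simpler method only illustrates the principle.)

import numpy as np

def quantile_normalize(x):
    """x: genes x arrays matrix of raw intensities; returns normalized matrix."""
    order = np.argsort(x, axis=0)                    # per-array sorting indices
    ranks = np.argsort(order, axis=0)                # rank of each gene within its array
    mean_by_rank = np.sort(x, axis=0).mean(axis=1)   # mean across arrays at each rank
    return mean_by_rank[ranks]                       # substitute rank means back

# Invented intensities: 4 genes measured on 3 arrays.
raw = np.array([[5.0, 4.0, 3.0],
                [2.0, 1.0, 4.0],
                [3.0, 4.5, 6.0],
                [4.0, 2.0, 5.0]])
print(quantile_normalize(raw))
```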

Relevance:

80.00%

Publisher:

Abstract:

The increase in global temperature has been attributed to increased atmospheric concentrations of greenhouse gases (GHG), mainly that of CO2. The threat of severe and complex socio-economic and ecological implications of climate change has initiated an international process that aims to reduce emissions, to increase C sinks, and to protect existing C reservoirs. The famous Kyoto protocol is an offspring of this process. The Kyoto protocol and its accords state that signatory countries need to monitor their forest C pools and to follow the guidelines set by the IPCC in the preparation, reporting and quality assessment of the C pool change estimates. The aims of this thesis were i) to estimate the changes in the carbon stocks of vegetation and soil in Finnish forests from 1922 to 2004, ii) to evaluate the applied methodology by using empirical data, iii) to assess the reliability of the estimates by means of uncertainty analysis, iv) to assess the effect of forest C sinks on the reliability of the entire national GHG inventory, and finally, v) to present an application of model-based stratification to a large-scale sampling design of soil C stock changes. The applied methodology builds on measured forest inventory data (or modelled stand data), and uses statistical modelling to predict biomasses and litter production, as well as a dynamic soil C model to predict the decomposition of litter. The mean vegetation C sink of Finnish forests from 1922 to 2004 was 3.3 Tg C a-1, and the mean soil sink was 0.7 Tg C a-1. Soil is slowly accumulating C as a consequence of the increased growing stock and of soil C stocks that are unsaturated in relation to the current detritus input to soil, which is higher than at the beginning of the period. Annual estimates of vegetation and soil C stock changes fluctuated considerably during the period and were frequently of opposite sign (e.g. vegetation was a sink while soil was a source). The inclusion of vegetation sinks in the national GHG inventory of 2003 increased its uncertainty from between -4% and +9% to ±19% (95% CI), and the further inclusion of upland mineral soils increased it to ±24%. The uncertainties of annual sinks can be reduced most efficiently by concentrating on the quality of the model input data. Despite the decreased precision of the national GHG inventory, the inclusion of uncertain sinks improves its accuracy due to the larger sectoral coverage of the inventory. If the national soil sink estimates were prepared by repeated soil sampling of model-stratified sample plots, the uncertainties would be accounted for in the stratum formation and sample allocation. Otherwise, the gains in sampling efficiency from stratification remain smaller. The highly variable and frequently opposite annual changes in ecosystem C pools imply the importance of full ecosystem C accounting. If forest C sink estimates are to be used in practice, average sink estimates seem a more reasonable basis than annual estimates, because annual forest sinks vary considerably, annual estimates are uncertain, and these properties have severe consequences for the reliability of the total national GHG balance. The estimation of average sinks should still be based on annual or even more frequent data, due to the non-linear decomposition process that is influenced by the annual climate. The methodology used in this study to predict forest C sinks can be transferred to other countries with some modifications. The ultimate verification of sink estimates should be based on comparison with empirical data, in which case the model-based stratification presented in this study can serve to improve the efficiency of the sampling design.
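
The uncertainty figures above come from propagating component uncertainties to the inventory total. A minimal Monte Carlo sketch of that idea is given below; the component values and error distributions are invented, not the Finnish inventory figures.

```python
# A sketch of Monte Carlo uncertainty propagation for a national GHG balance:
# sample each inventory component from its assumed error distribution and read
# the combined 95% interval off the simulated totals. All numbers are invented.

import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# (mean, standard deviation) in Tg C per year; negative values are sinks.
emissions = rng.normal(20.0, 1.0, N)    # fossil emissions, relatively certain
veg_sink  = rng.normal(-3.3, 1.5, N)    # vegetation sink, uncertain
soil_sink = rng.normal(-0.7, 1.2, N)    # mineral soil sink, very uncertain

total = emissions + veg_sink + soil_sink
lo, hi = np.percentile(total, [2.5, 97.5])
mean = total.mean()
print(f"net balance: {mean:.1f} Tg C a-1, 95% CI [{lo:.1f}, {hi:.1f}]")
print(f"relative uncertainty: +/- {100 * (hi - lo) / 2 / abs(mean):.0f}%")
```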

Relevance:

80.00%

Publisher:

Abstract:

Several excited states of D_s and B_s mesons have been discovered in the last six years: BaBar, CLEO and Belle discovered the very narrow states D*_s0(2317)± and D_s1(2460)± in 2003, and the CDF and D0 Collaborations reported the observation of two narrow B_s resonances, B_s1(5830)0 and B*_s2(5840)0, in 2007. To keep up with experiment, meson excited states should be studied from the theoretical side as well. The theory that describes the interaction between quarks and gluons is quantum chromodynamics (QCD). In this thesis the properties of the meson states are studied using the discretized version of the theory, lattice QCD. This allows us to perform QCD calculations from first principles, and to "measure" not just energies but also the radial distributions of the states on the lattice. This gives valuable theoretical information on the excited states, as we can extract the energy spectrum of a static-light meson up to D-wave states (states with orbital angular momentum L=2). We are thus able to predict where some of the excited meson states should lie. We also pay special attention to the order of the states, to detect possible inverted spin multiplets in the meson spectrum, as predicted by H. Schnitzer in 1978. This inversion is connected to the confining potential of the strong interaction. The lattice simulations can also help us understand the strong interaction better, as the lattice data can be treated as "experimental" data and used in testing potential models. In this thesis an attempt is made to explain the energies and radial distributions in terms of a potential model based on a one-body Dirac equation. The aim is to get more information about the nature of the confining potential, as well as to test how well the one-gluon exchange potential explains the short-range part of the interaction.
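
As an illustration of a potential-model calculation, the sketch below solves for the lowest levels of a Cornell (Coulomb plus linear confining) potential on a radial grid. For simplicity it uses the nonrelativistic Schrodinger equation rather than the one-body Dirac equation fitted in the thesis, and the parameter values are illustrative only.

```python
# A potential-model sketch: bound states in a Cornell potential
# V(r) = -a/r + b*r, solved with the nonrelativistic radial Schrodinger
# equation on a grid. The thesis fits a one-body Dirac equation instead;
# this simplification and the parameter values are illustrative only.

import numpy as np

a, b, mu = 0.4, 1.0, 0.5            # Coulomb strength, string tension, reduced mass
N, r_max = 1000, 15.0
r = np.linspace(r_max / N, r_max, N)
h = r[1] - r[0]

V = -a / r + b * r                  # one-gluon-exchange-like Coulomb + linear confinement

# Finite-difference Hamiltonian for the reduced radial function u(r), L = 0:
# -u''/(2*mu) + V(r)*u = E*u, with u(0) = u(r_max) = 0.
diag = 1.0 / (mu * h**2) + V
off = np.full(N - 1, -1.0 / (2.0 * mu * h**2))
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]       # three lowest S-wave levels
print("E =", E.round(3))
```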

Relevance:

80.00%

Publisher:

Abstract:

Nowadays any analysis of the Russian economy is incomplete without taking into account the phenomenon of oligarchy. Russian oligarchs appeared after the fall of the Soviet Union; they are wealthy businessmen who control a huge part of the natural resources enterprises and have considerable political influence. Oligarchs' shares in some natural resources industries reach 70-80%. Their role in the Russian economy is undoubtedly large, yet very little economic analysis of it has been done. The aim of this work is to examine Russian oligarchy on the micro and macro levels, its role in Russia's transition, and the possible positive and negative outcomes of this phenomenon. For this purpose the work presents two theoretical models. The first part of this thesis examines the role of oligarchs on the micro level, concentrating on the question of whether the oligarchs can be more productive owners than other types of owners. To answer the question, this part presents a model based on the article "Are oligarchs productive? Theory and evidence" by Y. Gorodnichenko and Y. Grygorenko. It is followed by an empirical test based on the works of S. Guriev and A. Rachinsky. The model predicts that oligarchs invest more in the productivity of their enterprises and have higher returns on capital, and are therefore more productive owners. According to the empirical test, oligarchs were found to outperform other types of owners; however, it remains undetermined whether the productivity gains offset the losses in tax revenue. The second part of the work concentrates on the role of oligarchy on the macro level. More precisely, it examines the assumption that the depression after the 1998 crisis in Russia was caused by the oligarchs' behavior. This part presents a theoretical model based on the article "A macroeconomic model of Russian transition: The role of oligarchic property rights" by S. Braguinsky and R. Myerson, where a special type of property rights is introduced. After the 1998 crisis, oligarchs started to invest all their resources abroad to protect themselves from political risks, which resulted in the long depression phase. The macroeconomic model shows that better protection of property rights (smaller political risk) and/or higher outside investment could reduce the depression. Taking this result into account, government policy can change the oligarchs' behavior to be more beneficial for the Russian economy and make the transition faster.

Relevance:

80.00%

Publisher:

Abstract:

Nanomaterials with a hexagonally ordered atomic structure, e.g., graphene, carbon and boron nitride nanotubes, and white graphene (a monolayer of hexagonal boron nitride) possess many impressive properties. For example, the mechanical stiffness and strength of these materials are unprecedented. Also, the extraordinary electronic properties of graphene and carbon nanotubes suggest that these materials may serve as building blocks of next generation electronics. However, the properties of pristine materials are not always what is needed in applications, and careful manipulation of their atomic structure, e.g., via particle irradiation, can be used to tailor the properties. On the other hand, inadvertently introduced defects can degrade the useful properties of these materials in radiation-hostile environments, such as outer space. In this thesis, defect production via energetic particle bombardment in the aforementioned materials is investigated. The effects of ion irradiation on multi-walled carbon and boron nitride nanotubes are studied experimentally by first conducting controlled irradiation treatments of the samples using an ion accelerator and subsequently characterizing the induced changes by transmission electron microscopy and Raman spectroscopy. The usefulness of the characterization methods is critically evaluated and a damage grading scale is proposed, based on transmission electron microscopy images. Theoretical predictions are made on defect production in graphene and white graphene under particle bombardment. A stochastic model based on first-principles molecular dynamics simulations is used together with electron irradiation experiments for understanding the formation of peculiar triangular defect structures in white graphene. An extensive set of classical molecular dynamics simulations is conducted in order to study defect production under ion irradiation in graphene and white graphene. In the experimental studies the response of carbon and boron nitride multi-walled nanotubes to irradiation with a wide range of ion types, energies and fluences is explored. The stabilities of these structures under ion irradiation are investigated, as well as the issue of how the mechanism of energy transfer affects the irradiation-induced damage. An irradiation fluence of 5.5x10^15 ions/cm^2 with 40 keV Ar+ ions is established to be sufficient to amorphize a multi-walled nanotube. In the case of 350 keV He+ ion irradiation, where most of the energy transfer happens through inelastic collisions between the ion and the target electrons, an irradiation fluence of 1.4x10^17 ions/cm^2 heavily damages carbon nanotubes, whereas a larger irradiation fluence of 1.2x10^18 ions/cm^2 leaves a boron nitride nanotube in much better condition, indicating that carbon nanotubes might be more susceptible to damage via electronic excitations than their boron nitride counterparts. An elevated temperature was discovered to considerably reduce the accumulated damage created by energetic ions in both carbon and boron nitride nanotubes, attributed to enhanced defect mobility and efficient recombination at high temperatures. Additionally, cobalt nanorods encapsulated inside multi-walled carbon nanotubes were observed to transform into spherical nanoparticles after ion irradiation at an elevated temperature, which can be explained by the inverse Ostwald ripening effect. The simulation studies on ion irradiation of the hexagonal monolayers yielded quantitative estimates of the types and abundances of defects produced within a large range of irradiation parameters. He, Ne, Ar, Kr, Xe, and Ga ions were considered in the simulations, with kinetic energies ranging from 35 eV to 10 MeV, and the role of the angle of incidence of the ions was studied in detail. A stochastic model was developed for utilizing the large amount of data produced by the molecular dynamics simulations. It was discovered that a high degree of selectivity over the types and abundances of defects can be achieved by carefully selecting the irradiation parameters, which can be of great use when precise patterning of graphene or white graphene using focused ion beams is planned.
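
The stochastic-model idea can be sketched in a few lines: per-impact defect probabilities, in practice tabulated from molecular dynamics for each ion, energy, and incidence angle, are sampled to predict defect populations at a given fluence. The probabilities and numbers below are invented for illustration.

```python
# A sketch of the stochastic-model idea: sample per-impact defect probabilities
# (in practice tabulated from MD simulations) to predict the defect population
# after a given ion fluence. All probabilities here are invented.

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical probabilities that a single ion impact creates each defect type.
P_DEFECT = {"single vacancy": 0.25, "double vacancy": 0.10, "amorphous region": 0.02}

def irradiate(n_impacts):
    """Sample defect counts after n_impacts independent ion impacts."""
    return {d: rng.binomial(n_impacts, p) for d, p in P_DEFECT.items()}

# Fluence (ions per nm^2) times irradiated area gives the number of impacts.
area_nm2 = 100.0
fluence_per_nm2 = 5.0
counts = irradiate(int(area_nm2 * fluence_per_nm2))
print(counts)
```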

Relevance:

80.00%

Publisher:

Abstract:

In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, vector autoregressions for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or gather the information contained in the series to create an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed by disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists in formulating forecasting equations including the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than the ones obtained in the first case. This work leads to multiple research developments. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of the micro data, one of the major contributions of this thesis, can be useful for the imputation of missing values and for the creation of flash estimates of macroeconomic indicators (nowcasting).
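
A compact sketch of the two-step procedure, on simulated data: principal components of the standardized panel serve as estimated factors, and an OLS forecasting equation projects the target ahead. The data and dimensions below are invented; the thesis applies this setup to the three Finnish datasets against an autoregressive benchmark.

```python
# A two-step dynamic factor forecast on simulated data: (1) extract common
# factors from a standardized panel by principal components, (2) regress the
# h-step-ahead target on the factors. Dimensions and data are invented.

import numpy as np

rng = np.random.default_rng(0)
T, n, k, h = 200, 50, 3, 1                  # periods, series, factors, horizon

# Simulated panel driven by k common factors plus idiosyncratic noise.
F = rng.normal(size=(T, k))
X = F @ rng.normal(size=(k, n)) + 0.5 * rng.normal(size=(T, n))
y = F @ np.array([1.0, -0.5, 0.3]) + 0.3 * rng.normal(size=T)   # target, e.g. GDP growth

# Step 1: principal-component factors from the standardized panel.
Z = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
F_hat = Z @ Vt[:k].T

# Step 2: OLS of y_{t+h} on a constant and the estimated factors at time t.
A = np.column_stack([np.ones(T - h), F_hat[:-h]])
beta, *_ = np.linalg.lstsq(A, y[h:], rcond=None)

forecast = np.array([1.0, *F_hat[-1]]) @ beta   # h-step-ahead forecast from the last period
print("forecast:", forecast.round(3))
```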

Relevance:

80.00%

Publisher:

Abstract:

Paraserianthes falcataria is a very fast-growing, light-wood tree species that has recently gained wide interest in Indonesia for industrial wood processing. At the moment, the P. falcataria plantations managed by smallholders lack predefined management programmes for commercial wood production. The general objective of this study was to model the growth and yield of Paraserianthes falcataria stands managed by smallholders in Ciamis, West Java, Indonesia, and to develop management scenarios for different production objectives. In total, 106 circular sample plots with over 2300 P. falcataria trees were assessed in a smallholder plantation inventory. In addition, information on market prices of P. falcataria wood was collected through rapid appraisals among industries. A tree growth model based on the Chapman-Richards function was developed for three different site qualities, and stand management scenarios were developed under three management objectives: (1) low initial stand density with low-intensity stand management, (2) high initial stand density with a medium intensity of intervention, and (3) high initial stand density with strong silvicultural interventions, repeated more than once. In general, the 9 recommended scenarios have rotation ages varying from 4 to 12 years, planting densities from 4x4 m (625 trees ha-1) to 3x2 m (1666 trees ha-1), and thinnings at intensities removing 30 to 60% of the standing trees. The highest annual income would be generated on high-quality sites with a scenario with an initial planting density of 3x2 m (1666 trees ha-1), one thinning removing 55% of the standing trees at the age of 2 years, and a clear cut at the age of 4 years.
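
For illustration, the sketch below fits a Chapman-Richards curve, y(t) = A(1 - e^(-kt))^p, to invented stand-height data; the thesis fits such growth models per site quality from the inventory plots, and the observations and starting values here are hypothetical.

```python
# A sketch of fitting a Chapman-Richards growth curve,
# y(t) = A * (1 - exp(-k*t))**p, to stand-level data. The observations below
# are invented, not taken from the 106 inventory plots.

import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(t, A, k, p):
    """Asymptote A, rate parameter k, shape parameter p."""
    return A * (1.0 - np.exp(-k * t)) ** p

# Hypothetical mean stand height (m) at ages 1-8 years on a good site.
age = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
height = np.array([4.8, 9.9, 13.7, 17.5, 19.8, 22.3, 23.8, 25.3])

params, _ = curve_fit(chapman_richards, age, height, p0=(30.0, 0.2, 1.5))
A, k, p = params
print(f"A = {A:.1f} m, k = {k:.2f}, p = {p:.2f}")
print("predicted height at age 10:", chapman_richards(10.0, *params).round(1), "m")
```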