14 results for errors-in-variables model
in Helda - Digital Repository of University of Helsinki
Abstract:
This thesis studies empirically whether measurement errors in aggregate production statistics affect sentiment and future output. Initial announcements of aggregate production are subject to measurement error because much of the data required to compile the statistics is produced with a lag. This measurement error can be gauged as the difference between the latest revised statistic and its initial announcement. Assuming aggregate production statistics help forecast future aggregate production, these measurement errors are expected to affect macroeconomic forecasts; assuming agents’ macroeconomic forecasts affect their production choices, the errors should affect future output through sentiment. The thesis is primarily empirical, so its theoretical basis, strategic complementarity, is discussed only briefly. Under strategic complementarity, higher aggregate production increases each agent’s incentive to produce, so a statistical announcement suggesting that aggregate production is high increases each agent’s incentive to produce and thereby raises aggregate production. The existence of strategic complementarity thus provides the theoretical basis for output fluctuations caused by measurement mistakes in aggregate production statistics. Previous empirical studies suggest that measurement errors in gross national product affect future aggregate production in the United States, and it has also been demonstrated that measurement errors in the Index of Leading Indicators affect forecasts by professional economists as well as future industrial production in the United States. This thesis verifies the applicability of these findings to other countries and studies the link between measurement errors in gross domestic product and sentiment and future output. Professional forecasts and consumer sentiment in the United States and Finland, as well as producer sentiment in Finland, are used as measures of sentiment. The statistical analysis finds that measurement errors in gross domestic product affect forecasts and producer sentiment, while the effect on consumer sentiment is ambiguous. The relationship between measurement errors and future output is explored using data from Finland, the United States, the United Kingdom, New Zealand and Sweden. Measurement errors are found to have affected aggregate production or investment in Finland, the United States, the United Kingdom and Sweden: overly optimistic statistical announcements are associated with higher output and vice versa.
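To make the construction concrete, here is a minimal sketch in Python (not the thesis's actual estimation): the measurement error is computed as the latest revised statistic minus the initial announcement, and future output growth is regressed on it. The series names `initial`, `revised` and `output_growth` are hypothetical.

```python
# A minimal sketch, assuming hypothetical pandas Series `initial` and
# `revised` (first-announced and latest-revised GDP, same quarterly index)
# and `output_growth` (realised output growth).
import pandas as pd
import statsmodels.api as sm

def revision_error(initial: pd.Series, revised: pd.Series) -> pd.Series:
    # Measurement error gauged as the latest revised statistic minus
    # its initial announcement, as described in the abstract.
    return revised - initial

def estimate_effect(error: pd.Series, output_growth: pd.Series):
    # Regress next-period output growth on the current revision error.
    # A negative coefficient would mean optimistic announcements
    # (initial above revised, i.e. a negative error) precede higher output.
    y = output_growth.shift(-1).dropna()
    X = sm.add_constant(error.loc[y.index])
    return sm.OLS(y, X).fit()
```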
Abstract:
The aim of this study was to evaluate and test methods that could improve local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible according to the residuals of the general model; in the fourth study, the localization was based on the local neighbourhood. By spatial autocorrelation (SA), points closer together in space are more likely to be similar than those farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the resulting sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours; nearness is measured as Euclidean distance. With all methods the root mean square errors (RMSEs) were lower, but with the methods that segmented the study area, the spread of the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), thus offering the best potential for further studies. CART could even be combined with kriging or with non-parametric methods such as most similar neighbours (MSN).
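As an illustration of the LISA step, a local Moran's I on the residuals of the global model might be computed as in the sketch below (Python; the k-nearest-neighbour weighting is an assumed simplification, not necessarily the weighting used in the studies).

```python
# A sketch of one LISA variant (local Moran's I) on residuals of a global
# model; `coords` is an (n, 2) array of point locations and `resid` a
# length-n residual array. The kNN weights are an illustrative choice.
import numpy as np
from scipy.spatial import cKDTree

def local_morans_i(coords: np.ndarray, resid: np.ndarray, k: int = 8) -> np.ndarray:
    z = (resid - resid.mean()) / resid.std()   # standardized residuals
    tree = cKDTree(coords)
    _, idx = tree.query(coords, k=k + 1)       # neighbour 0 is the point itself
    lag = z[idx[:, 1:]].mean(axis=1)           # row-standardized spatial lag
    return z * lag   # large positive values: point resembles its neighbours
```

Observations with high positive values cluster with similar residuals, which is the property exploited when segmenting the area into homogeneous sub-areas.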
Abstract:
Heart failure is a common and highly challenging medical disorder, and the progressive growth of the elderly population is expected to be reflected in its incidence. Recent progress in cell transplantation therapy has provided a conceptual alternative for the treatment of heart failure. Despite improved medical treatment and operative possibilities, end-stage coronary artery disease presents a great medical challenge, and it has been estimated that therapeutic angiogenesis will be the next major advance in the treatment of ischaemic heart disease. Gene transfer to augment neovascularization could be beneficial for such patients. We employed a porcine model to evaluate the angiogenic effect of vascular endothelial growth factor (VEGF)-C gene transfer. Ameroid-generated myocardial ischemia was produced, and adenoviral (ad) VEGF-C or β-galactosidase (LacZ) gene therapy was given intramyocardially during progressive coronary stenosis. Angiography, positron emission tomography (PET), single photon emission computed tomography (SPECT) and histology evidenced beneficial effects of adVEGF-C gene transfer compared with adLacZ: the myocardial deterioration during progressive coronary stenosis seen in the control group was restrained in the treatment group. We observed an uneven occlusion rate of the coronary vessels with the Ameroid constrictor and developed a simple methodological improvement of the Ameroid model by ligating the Ameroid-stenosed coronary vessel. The improvement was evident in a more reliable occlusion rate of the vessel concerned and in the formation of a fairly constant myocardial infarction. We assessed the spontaneous healing of the left ventricle (LV) in this new model by SPECT, PET, MRI and angiography. Significant spontaneous improvement of myocardial perfusion and function was seen, as well as a reduction in scar volume. Histologically, more microvessels were seen in the border area of the lesion, and double staining of myocytes in mitosis indicated more cardiomyocyte regeneration in the area remote from the lesion. The potential of autologous myoblast transplantation after ischaemia and infarction of the porcine heart was also evaluated. After ligation of the stenosed coronary artery, autologous myoblast transplantation or control medium was injected directly into the myocardium at the lesion area. Assessed by MRI, improvement of diastolic function was seen in the myoblast-transplanted animals but not in the control animals; systolic function remained unchanged in both groups.
Cox-2, tenascin, CRP, and ingraft chimerism in a model of post-transplant obliterative bronchiolitis
Abstract:
Chronic rejection in the form of obliterative bronchiolitis (OB) is the major cause of death five years after lung transplantation, and its exact mechanism remains unclear. This study focused on the role of cyclo-oxygenase (COX)-2, tenascin and C-reactive protein (CRP) expression, and on the occurrence of ingraft chimerism (cells from two genetically distinct individuals in the same individual) in post-transplant OB development. In our porcine model, OB developed invariably in allografts, while autografts stayed patent; the histological changes were similar to those seen in human OB. In order to delay or prevent obliteration, the animals were medicated according to a defined protocol. At the beginning of the bronchial allograft reaction, COX-2 induction occurred in airway epithelial cells prior to luminal obliteration, and COX-2 expression in macrophages and fibroblasts paralleled the onset of inflammation and fibroblast proliferation. This study demonstrated for the first time that COX-2 expression is associated with the early stage of post-transplant obliterative airway disease. Tenascin expression in the respiratory epithelium appeared to be predictive of the histologic features observed in human OB and of the influx of immune cells. Expression in the bronchial wall and in the early obliterative lesions coincided with the onset of fibroblast and inflammatory cell proliferation in the early stage of OB and was predictive of a further influx of inflammatory and immune cells. CRP expression in the bronchial wall coincided with the remodelling process. A high grade of bronchial wall CRP staining intensity predicted inflammation, accelerated fibroproliferation and luminal obliteration, all features of OB. In the early obliterative plaque the majority of cells expressed CRP, but in the mature, collagen-rich plaque expression declined. Local CRP expression might be a response to inflammation, and it might promote the development of OB. Early appearance of chimeric (recipient-derived) cells in the graft airway epithelium predicted epithelial cell injury and obliteration of the bronchial lumen, both of which are features of OB. Chimeric cells appeared in the airway epithelium during repair following transplantation-induced ischemic injury. Ingraft chimerism might be a mechanism to repair alloimmune-mediated tissue injury and to protect allografts from rejection after transplantation. The results of this study indicate that COX-2, tenascin, CRP and ingraft chimerism have a role in OB development. These findings increase the understanding of the mechanisms of OB, which may be beneficial in the further development of diagnostic options.
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades, due to their geomorphological importance as the reference surface for gravitation-driven material flow as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors in the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making based on interpretations and applications of terrain analysis, and may also have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented on a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, performing analytical and simulation-based error propagation analyses, and interpreting the results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, a global characterisation of DEM error is a gross generalisation of reality, because the areas in which the assumption of stationarity is not violated are small in extent; this was shown using an exhaustive high-quality reference DEM based on airborne laser scanning together with local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the vertical error of the DEM increases the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the error model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this view is now challenged, because none of the DEM derivatives investigated in the study had maximum variation under spatially uncorrelated random error. A significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
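The simulation-based part of the framework can be sketched as follows (Python; the Gaussian kernel used for the process convolution and all parameter values are illustrative assumptions): a spatially correlated error realisation is generated by smoothing white noise with a kernel, added to the DEM, and propagated into a surface derivative such as slope.

```python
# A simplified sketch of simulation-based error propagation for a DEM,
# assuming `dem` is a 2-D NumPy array on a square grid. The Gaussian
# kernel stands in for the process-convolution kernel of the thesis.
import numpy as np
from scipy.ndimage import gaussian_filter

def error_realisation(shape, sigma_z=1.0, corr_cells=3.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    field = gaussian_filter(rng.standard_normal(shape), corr_cells)
    return field * (sigma_z / field.std())     # rescale to target error std

def slope_deg(dem, cell=25.0):                 # 25 m cell, an assumption
    gy, gx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

def propagated_slope_std(dem, n_sim=100, **err_kw):
    sims = np.stack([slope_deg(dem + error_realisation(dem.shape, **err_kw))
                     for _ in range(n_sim)])
    return sims.std(axis=0)                    # per-cell slope uncertainty
```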
Abstract:
Numerical weather prediction (NWP) models provide the basis for weather forecasting by simulating the evolution of the atmospheric state. A good forecast requires that the initial state of the atmosphere is known accurately and that the NWP model is a realistic representation of the atmosphere. Data assimilation methods are used to produce initial conditions for NWP models: the NWP model background field, typically a short-range forecast, is updated with observations in a statistically optimal way. The objective of this thesis has been to develop methods that allow the assimilation of Doppler radar radial wind observations. The work has been carried out in the High Resolution Limited Area Model (HIRLAM) three-dimensional variational data assimilation framework. Observation modelling is a key element in exploiting indirect observations of the model variables. In radar radial wind observation modelling, the vertical model wind profile is interpolated to the observation location, and the projection of the model wind vector onto the radar pulse path is calculated. The vertical broadening of the radar pulse volume and the bending of the radar pulse path due to atmospheric conditions are taken into account. Radar radial wind observations are modelled to within observation errors, which consist of instrumental, modelling and representativeness errors. Systematic and random modelling errors can be minimized by accurate observation modelling, and the impact of the random part of the instrumental and representativeness errors can be decreased by calculating spatial averages from the raw observations. Model experiments indicate that spatial averaging clearly improves the fit of the radial wind observations to the model in terms of the observation minus model background (OmB) standard deviation. Monitoring the quality of the observations is an important aspect, especially when a new observation type is introduced into a data assimilation system. Calculating the bias for radial wind observations in the conventional way can yield zero even when there are systematic differences in wind speed and/or direction; a bias estimation method designed for this observation type is therefore introduced in the thesis. Doppler radar radial wind observation modelling, together with the bias estimation method, also enables the exploitation of radial wind observations for NWP model validation. One-month model experiments performed with HIRLAM versions differing only in a surface stress parameterization detail indicate that the use of radar wind observations in NWP model validation is very beneficial.
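The geometric core of the observation operator, the projection of the model wind vector onto the radar pulse path, can be sketched as below (Python; this omits the pulse-volume broadening and beam bending that the thesis takes into account).

```python
# A minimal sketch of the radial wind projection; u (east), v (north) and
# w (up) are model wind components in m/s at the observation location.
import numpy as np

def radial_wind(u, v, w, azimuth_deg, elevation_deg):
    az = np.radians(azimuth_deg)    # beam azimuth, clockwise from north
    el = np.radians(elevation_deg)  # beam elevation above the horizon
    return (u * np.sin(az) * np.cos(el)
            + v * np.cos(az) * np.cos(el)
            + w * np.sin(el))
```

For a low-elevation beam pointing east, `radial_wind(5.0, -3.0, 0.0, 90.0, 0.5)` returns approximately the eastward wind component, 5.0 m/s.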
Abstract:
The aim of this dissertation is to provide conceptual tools for the social scientist for clarifying, evaluating and comparing explanations of social phenomena based on formal mathematical models. The focus is on relatively simple theoretical models and simulations, not statistical models. These studies apply a theory of explanation according to which explanation is about tracing objective relations of dependence, knowledge of which enables answers to contrastive why- and how-questions. This theory is developed further by delineating criteria for evaluating competing explanations and by applying the theory to social scientific modelling practices and to the key concepts of equilibrium and mechanism. The dissertation comprises an introductory essay and six published original research articles. The main theses about model-based explanations in the social sciences argued for in the articles are the following. 1) The concept of explanatory power, often used to argue for the superiority of one explanation over another, encompasses five dimensions which are partially independent and involve some systematic trade-offs. 2) Not all equilibrium explanations causally explain the obtaining of the end equilibrium state from the multiple possible initial states; instead, they often constitutively explain the macro property of the system with the micro properties of the parts (together with their organization). 3) There is an important ambivalence in the concept of mechanism used in many model-based explanations, and this difference corresponds to a difference between two alternative research heuristics. 4) Whether unrealistic assumptions in a model (such as a rational choice model) are detrimental to an explanation provided by the model depends on whether the representation of the explanatory dependency in the model is itself dependent on the particular unrealistic assumptions. Thus evaluating whether a literally false assumption in a model is problematic requires specifying exactly what is supposed to be explained and by what. 5) The question of whether an explanatory relationship depends on particular false assumptions can be explored with the process of derivational robustness analysis, and the importance of robustness analysis accounts for some of the puzzling features of the tradition of model-building in economics. 6) The fact that economists have been relatively reluctant to use true agent-based simulations to formulate explanations can partially be explained by the specific ideal of scientific understanding implicit in the practice of orthodox economics.
Abstract:
The forest simulator is a computerized model for predicting forest growth and future development, as well as the effects of forest harvests and treatments. The forest planning system is a decision support tool, usually including a forest simulator and an optimisation model, for finding the optimal forest management actions. The information produced by forest simulators and forest planning systems is used for various analytical purposes and in support of decision making; however, its quality and reliability can often be questioned. Natural variation in forest growth and estimation errors in forest inventory, among other things, cause uncertainty in predictions of forest growth and development. This uncertainty, stemming from different sources, has various undesirable effects: in many cases the outcomes of decisions based on uncertain information are other than desired. The objective of this thesis was to study various sources of uncertainty and their effects in forest simulators and forest planning systems. The study focused on three notable sources of uncertainty: errors in forest growth predictions, errors in forest inventory data, and stochastic fluctuation of timber assortment prices. The effects of uncertainty were studied using two types of forest growth models, individual-tree-level models and stand-level models, with various error simulation methods. A new method for simulating more realistic forest inventory errors was introduced and tested, and the three sources of uncertainty were combined to simulate their joint effects on stand-level net present value estimates. According to the results, the various sources of uncertainty can have distinct effects in different forest growth simulators. The new forest inventory error simulation method proved to produce more realistic errors, and the analysis of the joint effects of the various sources of uncertainty provided new insight into uncertainty in forest simulators.
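As an illustrative toy example (not the thesis's growth models or error simulation methods), the joint effect of inventory error and stochastic timber prices on a stand-level net present value can be simulated along these lines; all parameter values are hypothetical.

```python
# A toy Monte Carlo sketch: inventory error distorts the initial volume,
# the growth prediction compounds it, and a stochastic price is applied
# at harvest. Every parameter below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)

def simulate_npv(true_volume=150.0, growth=0.03, price_mean=55.0,
                 inv_error_cv=0.15, price_cv=0.10, years=10,
                 rate=0.03, n_sim=10_000):
    measured = true_volume * (1 + inv_error_cv * rng.standard_normal(n_sim))
    volume_at_harvest = measured * (1 + growth) ** years
    price = price_mean * (1 + price_cv * rng.standard_normal(n_sim))
    return volume_at_harvest * price / (1 + rate) ** years   # per hectare

npv = simulate_npv()
print(f"NPV mean {npv.mean():.0f}, std {npv.std():.0f} per ha")
```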
Abstract:
Determination of the environmental factors controlling earth surface processes and landform patterns is one of the central themes in physical geography. However, identifying the main drivers of geomorphological phenomena is often challenging, and novel spatial analysis and modelling methods could provide new insights into process-environment relationships. The objective of this research was to map and quantitatively analyse the occurrence of cryogenic phenomena in subarctic Finland. More precisely, utilising a grid-based approach, the distribution and abundance of periglacial landforms were modelled to identify important landscape-scale environmental factors. The study was performed using a comprehensive empirical data set of periglacial landforms from an area of 600 km² at a 25-ha resolution. The statistical methods utilised were generalized linear modelling (GLM) and hierarchical partitioning (HP): GLMs were used to produce distribution and abundance models, and HP to reveal independently the most likely causal variables. The GLM models were assessed utilising statistical evaluation measures, prediction maps, field observations and the results of the HP analyses. A total of 40 different landform types and subtypes were identified. Topographical, soil property and vegetation variables were the primary correlates of the occurrence and cover of active periglacial landforms on the landscape scale. In the model evaluation, most of the GLMs were shown to be robust, although the explanatory power, predictive ability and selected explanatory variables varied between the models. The study demonstrated the great potential of combining a spatial grid system, terrain data and novel statistical techniques to map the occurrence of periglacial landforms. GLM proved to be a useful modelling framework for testing the shapes of the response functions and the significance of the environmental variables, and the HP method helped to make better deductions about the important factors in earth surface processes. Hence, the numerical approach presented in this study can be a useful addition to the current range of techniques available to researchers for mapping and monitoring different geographical phenomena.
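The GLM step of such a grid-based analysis might look like the following sketch (Python with statsmodels; the column names and predictors are assumptions, not the study's actual variables).

```python
# A minimal sketch of a presence/absence GLM on a landform grid, assuming
# a DataFrame `grid` with hypothetical columns `presence` (0/1),
# `elevation`, `slope` and `soil_moisture` for each 25-ha cell.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_occurrence_glm(grid: pd.DataFrame):
    # Logistic GLM for landform occurrence; abundance models would use a
    # different response and family.
    return smf.glm("presence ~ elevation + slope + soil_moisture",
                   data=grid, family=sm.families.Binomial()).fit()
```

Hierarchical partitioning would then decompose the explained deviance over all subsets of the predictors to assess each variable's independent contribution.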
Abstract:
The rupture of a cerebral artery aneurysm causes a devastating subarachnoid hemorrhage (SAH), with a mortality of almost 50% during the first month. Each year, 8-11 per 100 000 people suffer aneurysmal SAH in Western countries, but the figure is twice as high in Finland and Japan. The disease is most common among those of working age, the mean age at rupture being 50-55 years. Unruptured cerebral aneurysms are found in 2-6% of the population, but knowledge about the true risk of rupture is limited. The vast majority of aneurysms should be considered rupture-prone, and treatment for these patients is warranted. Both unruptured and ruptured aneurysms can be treated by either microsurgical clipping or endovascular embolization. In a standard microsurgical procedure, the neck of the aneurysm is closed with a metal clip, sealing off the aneurysm from the circulation. Endovascular embolization is performed by packing the aneurysm from inside the vessel lumen with detachable platinum coils. Coiling is associated with slightly lower morbidity and mortality than microsurgery, but the long-term results of microsurgically treated aneurysms are better. Endovascular treatment methods are therefore constantly being developed further in order to achieve better long-term results, and new coils and novel embolic agents need to be tested in a variety of animal models before they can be used in humans. In this study, we developed an experimental rat aneurysm model and showed its suitability for testing endovascular devices. We optimized noninvasive MRI sequences at 4.7 Tesla for the follow-up of coiled experimental aneurysms and for volumetric measurement of aneurysm neck remnants. We used this model to compare platinum coils with polyglycolic-polylactic acid (PGLA)-coated coils, and showed the benefits of the latter in this model. The experimental aneurysm model and the imaging methods also gave insight into the mechanisms involved in aneurysm formation, and the model can be used in the development of novel imaging techniques. The model is affordable, easily reproducible, reliable and suitable for MRI follow-up; it is also suitable for endovascular treatment and resists spontaneous occlusion.
Abstract:
The aim of this report is to discuss the role of relationship type and communication in two Finnish food chains, namely the pig meat-to-sausage chain (pig meat chain) and the cereal-to-rye bread chain (rye chain). A further objective is to examine the factors influencing the choice of relationship type and the sustainability of a business relationship. Altogether 1808 questionnaires were sent to producers, processors and retailers operating in these two chains, of which 224 usable questionnaires were returned (a response rate of 12.4%). The great majority of the respondents (98.7%) were small businesses employing fewer than 50 people, and almost 70 per cent of the respondents were farmers. In both chains, formal contracts were stated to be the most important relationship type used with business partners. Although written contracts are common business practice for many businesses, their essential role was the security they provide regarding demand/supply and quality issues. Regarding the choice of relationship type, the main difference between the two chains emerged in the prevalence of spot markets and financial participation arrangements: the use of spot markets was significantly more common in the rye chain than in the pig meat chain, while financial participation arrangements were much more common among the businesses in the pig meat chain than in the rye chain. Furthermore, the analysis showed that most of the businesses in the pig meat chain claimed not to be free to choose the relationship type they use; membership in a co-operative and the practices of a business partner in particular were mentioned as reasons limiting this freedom of choice. The main business relations in both chains were described as having a long-term orientation and as being based on formal written contracts. Typically, the main business relationships were also not based on key persons alone; the relationship would remain even if the key people left the business. The quality of these relationships was satisfactory in both chains and across all stakeholder groups, though the downstream processors and the retailers had a slightly more positive view of their main business partners than the farmers and the upstream processors. The businesses operating in the pig meat chain also seemed to be more dependent on their main business relations than the businesses in the rye chain. Although the means of communication were rather similar in both chains (the phone being the most important), there was some variation between the chains in the communication frequency considered necessary to maintain the relationship with the main business partner: the businesses in the pig meat chain seemed to appreciate more frequent communication with their main business partners than those in the rye chain. Personal meetings with the main business partners were quite rare in both chains. All the respondent groups were, however, fairly satisfied with the communication frequency and information quality between them and their main business partner. The business cultures could be argued to be rather uniform among the businesses in both the pig meat and rye chains: avoidance of uncertainty, appreciation of long-term orientation, and independence were considered important elements of the business cultures.
Furthermore, trust, commitment and satisfaction in business partners were considered essential elements of business operations in all the respondent groups. In order to investigate which factors affect the choice of relationship type, several hypotheses were tested using binary and multinomial logit analyses. According to these analyses, avoidance of uncertainty and risk has a certain effect on the relationship type chosen: the willingness to avoid uncertainty increases the probability of choosing stable relationships, such as repeated market transactions and formal written contracts, but not necessarily those requiring high financial commitment (such as financial participation arrangements). The probability of engaging in financial participation arrangements seemed to increase with long-term orientation. The hypotheses concerning the sustainability of the economic relations were tested using a structural equation model (SEM). In the model, five variables were found to have a positive and statistically significant impact on the sustainable economic relationship construct. Ordered by importance, these factors are: (i) communication quality, (ii) personal bonds, (iii) equal power distribution, (iv) local embeddedness and (v) competition.
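The multinomial logit step could be sketched as follows (Python; the variable names and coding are assumptions made for illustration).

```python
# A sketch of the relationship-type choice model, assuming a DataFrame
# `df` where `relationship_type` is coded, e.g., 0 = spot market,
# 1 = written contract, 2 = financial participation, and the regressors
# are survey scores; all names here are hypothetical.
import pandas as pd
import statsmodels.api as sm

def fit_relationship_choice(df: pd.DataFrame):
    X = sm.add_constant(df[["uncertainty_avoidance", "long_term_orientation"]])
    return sm.MNLogit(df["relationship_type"], X).fit()
```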
Abstract:
Mikael Juselius’ doctoral dissertation covers a range of significant issues in modern macroeconomics by empirically testing a number of important theoretical hypotheses. The first essay presents indirect evidence, within the framework of the cointegrated VAR model, on the elasticity of substitution between capital and labor using Finnish manufacturing data. Instead of estimating the elasticity of substitution from the first-order conditions, he develops a new approach that utilizes a CES production function in a model with a three-stage decision process: investment in the long run, wage bargaining in the medium run, and price and employment decisions in the short run. He estimates the elasticity of substitution to be below one. The second essay tests the restrictions implied by the core equations of the New Keynesian Model (NKM) in a vector autoregressive (VAR) model using both Euro area and U.S. data. Both the New Keynesian Phillips curve and the aggregate demand curve are estimated and tested. The restrictions implied by the core equations of the NKM are rejected on both U.S. and Euro area data; these results are important for further research. The third essay is methodologically similar to the second, but it concentrates on Finnish macro data within the theoretical framework of an open economy. Juselius’ results suggest that the open economy NKM framework is too stylized to provide an adequate explanation of Finnish inflation. The final essay provides a macroeconometric model of Finnish inflation and its associated explanatory variables, and estimates the relative importance of different inflation theories. His main finding is that Finnish inflation is primarily determined by excess demand in the product market and by changes in the long-term interest rate. This study is part of the research agenda carried out by the Research Unit of Economic Structure and Growth (RUESG). The aim of RUESG is to conduct theoretical and empirical research on important issues in industrial economics, real option theory, game theory, organization theory and the theory of financial systems, as well as to study problems in labor markets, macroeconomics, natural resources, taxation and time series econometrics. RUESG was established at the beginning of 1995 and is one of the National Centers of Excellence in research selected by the Academy of Finland. It is financed jointly by the Academy of Finland, the University of Helsinki, the Yrjö Jahnsson Foundation, the Bank of Finland and the Nokia Group. This support is gratefully acknowledged.
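The unrestricted VAR that serves as the statistical platform for such restriction tests can be set up as in this sketch (Python; the column names are assumptions, and the actual NKM cross-equation restrictions are considerably more involved than what is shown).

```python
# A minimal sketch of the unrestricted VAR against which theoretical
# restrictions would be tested; `data` is assumed to hold columns such
# as `inflation` and `output_gap`.
import pandas as pd
from statsmodels.tsa.api import VAR

def fit_unrestricted_var(data: pd.DataFrame, lags: int = 4):
    return VAR(data).fit(lags)   # restrictions can then be tested,
                                 # e.g. with Wald tests on the estimates
```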
Abstract:
The aim of this dissertation is to model economic variables with a mixture autoregressive (MAR) model. The MAR model is a generalization of the linear autoregressive (AR) model and consists of K linear autoregressive components. At any given point in time, one of these components is randomly selected to generate a new observation for the time series. The mixing probability can be constant over time or a direct function of some observable variable. Many economic time series have properties that cannot be described by linear, stationary time series models; a nonlinear autoregressive model such as the MAR model can be a plausible alternative for such series. In this dissertation the MAR model is used to model stock market bubbles and the relationship between inflation and the interest rate. For the inflation rate, we arrive at a MAR model in which the inflation process is less mean-reverting under high inflation than under normal inflation, while the interest rate moves one-for-one with expected inflation. We use data from the Livingston survey as a proxy for inflation expectations and find that survey inflation expectations are not perfectly rational. According to our results, information stickiness plays an important role in expectation formation, and survey participants tend to underestimate inflation. A MAR model is also used to model stock market bubbles and crashes. This model has two regimes: a bubble regime and an error-correction regime. In the error-correction regime the price depends on a fundamental factor, the price-dividend ratio, while in the bubble regime the price is independent of fundamentals. In this model a stock market crash is usually caused by a regime switch from the bubble regime to the error-correction regime. According to our empirical results, bubbles are related to low inflation. Our model also implies that bubbles influence the investment return distribution in both the short and the long run.
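A two-component MAR process of the kind described can be simulated in a few lines (Python; the constant mixing probability and all parameter values are illustrative assumptions, not estimates from the dissertation).

```python
# A sketch of a two-component MAR(1) process: at each step one AR(1)
# component is selected at random with constant probability p, a
# simplification of the time-varying mixing probability discussed above.
import numpy as np

def simulate_mar(n=500, p=0.9, phi=(0.98, 0.50),
                 const=(0.0, 0.1), sigma=(0.5, 1.5), rng=None):
    rng = np.random.default_rng() if rng is None else rng
    y = np.zeros(n)
    for t in range(1, n):
        k = 0 if rng.random() < p else 1          # pick a regime
        y[t] = const[k] + phi[k] * y[t - 1] + sigma[k] * rng.standard_normal()
    return y
```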