10 results for Mean Absolute Scaled Error (MASE)

in Helda - Digital Repository of University of Helsinki


Relevance: 100.00%

Abstract:

In recent years, thanks to developments in information technology, large-dimensional datasets have been increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, vector autoregression for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures or gather the information contained in the series to create an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (which is the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed by disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the previously extracted factors. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than the ones obtained in the first case. This work leads to multiple research developments. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of the micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and the creation of flash estimates of macroeconomic indicators (nowcasting).
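The two-step forecasting procedure described in this abstract can be illustrated with a minimal sketch: principal-component factors are extracted from a standardized data matrix, and a forecasting regression on those factors and the lagged target is then estimated. The data, variable names and the simple PCA/OLS setup below are illustrative assumptions, not the code or model specification used in the thesis.

```python
import numpy as np
from numpy.linalg import svd

def extract_factors(X, n_factors):
    """Step 1: estimate common factors by principal components.
    X is a (T x N) matrix of predictor series, standardized column-wise."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    U, S, Vt = svd(X, full_matrices=False)
    # Factors are the leading principal components, scaled by sqrt(T).
    return np.sqrt(X.shape[0]) * U[:, :n_factors]

def factor_forecast(y, F, h=1):
    """Step 2: regress y_{t+h} on the current factors and y_t, then forecast."""
    T = len(y) - h
    Z = np.column_stack([np.ones(T), F[:T], y[:T]])  # regressors dated t
    beta, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)  # targets dated t+h
    z_last = np.concatenate([[1.0], F[-1], [y[-1]]])
    return z_last @ beta

# Simulated example: 200 periods, 50 series driven by 2 common factors.
rng = np.random.default_rng(0)
common = rng.standard_normal((200, 2))
X = common @ rng.standard_normal((2, 50)) + 0.5 * rng.standard_normal((200, 50))
y = common[:, 0] + 0.3 * rng.standard_normal(200)

F = extract_factors(X, n_factors=2)
print("one-step-ahead factor forecast:", factor_forecast(y, F, h=1))
```

A relative mean squared forecast error of the kind used in the thesis would then be obtained by repeating this step in a rolling scheme and dividing the factor model's forecast error by that of a univariate autoregressive benchmark.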

Relevance: 30.00%

Abstract:

This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis, asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that have values on the real line. In the multivariate context, asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and the unconditional distribution. In particular, two special cases of the GARCH-GH model that describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter, the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis, a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results obtained is established when the information content of the derived volatility forecasts is examined.
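As an illustration of the multiplicative error structure that nests GARCH-, ACD- and CARR-type models, the following sketch simulates a first-order MEM for a non-negative variable such as a daily price range, with unit-mean gamma innovations. The parameter values and the gamma error choice are assumptions made here for illustration; they are not the GH, IG or copula specifications studied in the thesis.

```python
import numpy as np

def simulate_mem(T, omega=0.05, alpha=0.2, beta=0.7, shape=4.0, seed=0):
    """Simulate x_t = mu_t * eps_t with mu_t = omega + alpha*x_{t-1} + beta*mu_{t-1}
    and eps_t ~ Gamma(shape, 1/shape), so E[eps_t] = 1 (a CARR-type process)."""
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    mu = omega / (1.0 - alpha - beta)          # start at the unconditional mean
    for t in range(T):
        eps = rng.gamma(shape, 1.0 / shape)    # unit-mean multiplicative error
        x[t] = mu * eps
        mu = omega + alpha * x[t] + beta * mu  # conditional-mean recursion
    return x

ranges = simulate_mem(1000)
print("sample mean:", ranges.mean(), "implied unconditional mean:", 0.05 / 0.1)
```

The same recursion describes squared returns in a GARCH model or trade durations in an ACD model; the models differ mainly in which non-negative variable is modeled and which unit-mean error distribution is assumed.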

Relevance: 30.00%

Abstract:

Visual acuities at the time of referral and on the day before surgery were compared in 124 patients operated on for cataract in Vaasa Central Hospital, Finland. Preoperative visual acuity and the occurrence of ocular and general disease were compared in samples of consecutive cataract extractions performed in 1982, 1985, 1990, 1995 and 2000 in two hospitals in the Vaasa region in Finland. The repeatability and standard deviation of random measurement error in visual acuity and refractive error determination in a clinical environment in cataractous, pseudophakic and healthy eyes were estimated by re-examining the visual acuity and refractive error of patients referred to cataract surgery or consultation by ophthalmic professionals. Altogether 99 eyes of 99 persons (41 cataractous, 36 pseudophakic and 22 healthy eyes) with a visual acuity range of Snellen 0.3 to 1.3 (0.52 to -0.11 logMAR) were examined. During an average waiting time of 13 months, visual acuity in the study eye deteriorated from 0.68 logMAR to 0.96 logMAR (from 0.2 to 0.1 in Snellen decimal values). The average decrease in vision was 0.27 logMAR per year. In the fastest quartile, the visual acuity change per year was 0.75 logMAR, and in the second fastest 0.29 logMAR; the third and fourth quartiles were virtually unaffected. From 1982 to 2000, the incidence of cataract surgery increased from 1.0 to 7.2 operations per 1000 inhabitants per year in the Vaasa region. The average preoperative visual acuity in the operated eye improved by 0.85 logMAR (in decimal values from 0.03 to 0.2) and in the better eye by 0.27 logMAR (in decimal values from 0.23 to 0.43) over this period. The proportion of patients profoundly visually handicapped (VA in the better eye <0.1) before the operation fell from 15% to 4%, and that of patients less profoundly visually handicapped (VA in the better eye 0.1 to <0.3) from 47% to 15%. The repeatability of visual acuity measurement, estimated as a coefficient of repeatability for all 99 eyes, was ±0.18 logMAR, and the standard deviation of measurement error was 0.06 logMAR. Eyes with the lowest visual acuity (0.3-0.45) had the largest variability, with a coefficient of repeatability of ±0.24 logMAR, and eyes with a visual acuity of 0.7 or better had the smallest, ±0.12 logMAR. The repeatability of refractive error measurement was studied in the same patient material as the repeatability of visual acuity. Differences between measurements 1 and 2 were calculated as three-dimensional vector values and spherical equivalents and expressed as coefficients of repeatability. The coefficients of repeatability for all eyes for the vertical, torsional and horizontal vectors were ±0.74 D, ±0.34 D and ±0.93 D, respectively, and for the spherical equivalent for all eyes ±0.74 D. Eyes with lower visual acuity (0.3-0.45) had larger variability in vector and spherical equivalent values (±1.14), but the difference between the visual acuity groups was not statistically significant. The difference in the mean defocus equivalent between measurements 1 and 2 was, however, significantly greater in the lower visual acuity group. If a change of ±0.5 D (measured in defocus equivalents) is accepted as a basis for a change of spectacles for eyes with good vision, the corresponding basis for eyes in the visual acuity range of 0.3-0.65 would be ±1 D. Differences in repeated visual acuity measurements are partly explained by errors in refractive error measurements.
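A coefficient of repeatability of the kind reported above is commonly computed in the Bland-Altman manner from paired repeated measurements. The sketch below uses hypothetical logMAR values and the common 1.96 × SD convention, which may differ in detail from the exact convention used in the study.

```python
import numpy as np

def coefficient_of_repeatability(m1, m2):
    """Bland-Altman style coefficient of repeatability for two repeated
    measurements on the same eyes: 1.96 * SD of the paired differences."""
    d = np.asarray(m1) - np.asarray(m2)
    return 1.96 * d.std(ddof=1)

# Hypothetical repeated logMAR acuity measurements on five eyes.
visit1 = [0.30, 0.10, 0.52, 0.00, 0.22]
visit2 = [0.24, 0.16, 0.46, 0.06, 0.20]
print("CoR (logMAR): +/-", round(coefficient_of_repeatability(visit1, visit2), 2))
```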

Relevance: 20.00%

Abstract:

This paper concentrates on Heraclitus, Parmenides and Lao Zi. The focus is on their ideas on change and on whether the world is essentially One or composed of many entities. In the first chapter I go over some general tendencies in Greek and Chinese philosophy. The differences in cultural background influence the ways philosophy is done, but the paper aims to show that two questions can be brought up when comparing the philosophies of Heraclitus, Parmenides and Lao Zi. The questions are: is the world essentially One or Many? Is change real, and if it is, what is its nature and how does it take place? For Heraclitus change is real and, as will be shown later in the chapter, quite essential for the sustainability of the world-order (kosmos). The key concept in the case of Heraclitus is Logos. Heraclitus uses Logos in several senses, the most well known relating to his element theory. But another important feature of the Logos, the content of real wisdom, is the ability to regard everything as one. This does not mean that the world is essentially one for Heraclitus in the ontological sense, but that we should see the underlying unity of multiple phenomena. Heraclitus regards this as hen panta: All from One, One from All. I characterize Heraclitus as an epistemic monist and an ontological pluralist. It is plausible that Heraclitus' views on change were the focus of Parmenides' severe criticism. Parmenides held the view that the world is essentially one and that to see it as consisting of many entities was the error of mortals, i.e. the common man and his philosophical predecessors. For Parmenides, what-is can be approached by two routes: the Way of Truth (Aletheia) and the Way of Seeming (Doxa). Aletheia essentially sees the world as one, where even time is an illusion. In Doxa, Parmenides gives an explanation of the world seen as consisting of many entities, and this is his contribution to the line of thought of his predecessors. It should be noted that a strong emphasis is given to Aletheia, whereas the world-view given in Doxa is only probable. I go on to describe Parmenides as an ontological monist who grants some plausibility to pluralistic views. In the work of Lao Zi, the world can be seen as One or as consisting of many entities. In my interpretation, Lao Zi uses Dao in two different senses: Dao is the totality of things or the order in change. The wu-aspect (seeing-without-form) attends to the world as one, whereas the you-aspect attends to the world of many entities. In the wu-aspect, Dao refers to the totality of things, while in the you-aspect Dao is the order or law in change. There are two insights in Lao Zi regarding the relationship between the wu- and you-aspects: chapter 1 states that they are two separate aspects of seeing the world, whereas the other chapters hold that you comes from wu. This naturally raises the question whether the One is the peak of seeing the world as many; in other words, whether there is a way from pluralism to monism. All these considerations make it probable that the work attributed to Lao Zi has had new material added to it or is a compilation of oral sayings. At the end of the paper I give some insights on how Logos and Dao can be compared in a relevant manner. I also compare Parmenides' holistic monism to Lao Zi's Dao as nameless totality (i.e. in its wu-aspect). I briefly touch on the issues of Heidegger and the future of comparative philosophy.

Relevance: 20.00%

Abstract:

This study examined year seven students' proactive coping, self-efficacy and social support seeking. Proactive coping was defined as behaviour in which obstacles are seen as a challenge. In proactive coping, individuals set goals, build up resources and regulate their behaviour to achieve the goals. Self-efficacy can be seen as people's beliefs about their capabilities. Social support seeking was divided into instrumental support seeking and emotional support seeking. According to the theoretical frame of this study, self-efficacy and social support seeking were seen as resources for proactive coping (Greenglass 2002). The participants were 445 year seven students (Mo = 13 years) from seven secondary schools. The data was collected in March-May 2008. The survey consisted of 37 Likert-scaled items from the Proactive Coping Inventory and from the General Self-Efficacy Scale. The survey comprised four scales: Proactive Coping, Instrumental Support Seeking, Emotional Support Seeking and General Self-Efficacy. The participants' age, gender and studying in specialist streams were asked as background information. As a result, most of the participants (62 % girls, 38 % boys) reported fairly strong proactive coping: they can see obstacles as a challenge, and they set goals and regulate their behaviour to achieve them. Most of the participants reported that they seek instrumental and emotional support when having troubles. Girls reported more social support seeking than did boys, and the mean difference was statistically significant. Most of the participants had a fairly high sense of self-efficacy. However, 4 % of the participants reported that they do not believe in their capabilities. Some of these participants reported that they neither use proactive coping nor seek informational or emotional support when having troubles. Proactive coping correlated positively with self-efficacy and with social support seeking. In this study, self-efficacy and social support seeking explained 47 % of the variance in proactive coping. It was discussed that children's high sense of self-efficacy and social relationships can act as protective factors in the transition to secondary school. By supporting children's self-efficacy and social relationships, one also supports their proactive coping. Proactive coping can be seen to support children's personal growth.

Relevance: 20.00%

Abstract:

An important challenge in the forest industry is to get the appropriate raw material out of the forests and into the wood-processing industry. Growth and stem reconstruction simulators are therefore increasingly integrated into industrial conversion simulators, linking the properties of wooden products to the three-dimensional structure of stems and their growing conditions. Static simulators predict the wood properties from stem dimensions at the end of a growth simulation period, whereas in dynamic approaches the structural components, e.g. branches, are incremented along with the growth processes. The dynamic approach can be applied to stem reconstruction by predicting the three-dimensional stem structure from external tree variables (i.e. age, height) as a result of growth to the current state. In this study, a dynamic growth simulator, PipeQual, and a stem reconstruction simulator, RetroSTEM, are adapted to Norway spruce (Picea abies [L.] Karst.) to predict the three-dimensional structure of stems (taper, branchiness, wood basic density) over time, such that both simulators can be integrated into a sawing simulator. The parameterisation of the PipeQual and RetroSTEM simulators for Norway spruce relied on a theoretically based description of tree structure that develops in the growth process and follows certain conservative structural regularities while allowing for plasticity in crown development. The crown expressed both regularity and plasticity in its development, as the vertical foliage density peaked regularly at about 5 m from the stem apex, varying below that with tree age and dominance position (Study I). Conservative stem structure was characterized in terms of (1) the pipe ratios between foliage mass and branch and stem cross-sectional areas at the crown base, (2) the allometric relationship between foliage mass and crown length, (3) mean branch length relative to crown length and (4) form coefficients in branches and stem (Study II). The pipe ratio between branch and stem cross-sectional area at the crown base, and mean branch length relative to crown length, may differ in trees before and after canopy closure, but the variation should be further analysed in stands of different ages and densities with varying site fertilities and climates. The predictions of the PipeQual and RetroSTEM simulators were evaluated by comparing the simulated values to measured ones (Studies III, IV). Both simulators predicted stem taper and branch diameter at the individual tree level with a small bias. RetroSTEM predictions of wood density were accurate. To obtain even more accurate predictions of stem diameters and branchiness along the stem, both simulators should be further improved by revising the following aspects: the relationship between foliage and stem sapwood area in the upper stem, the error source in branch sizes, the crown base development and the height growth models in RetroSTEM. In Study V, the RetroSTEM simulator was integrated into the InnoSIM sawing simulator, and according to the pilot simulations this turned out to be an efficient tool for readily producing stand-scale information about stem sizes and structure when approximating the available assortments of wood products.
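As one example of the conservative structural regularities listed above, an allometric relationship between foliage mass and crown length can be estimated from sample-tree data by a log-log fit. The data values and the simple power-law form below are illustrative assumptions, not the PipeQual or RetroSTEM parameterisation itself.

```python
import numpy as np

# Hypothetical sample-tree data: crown length (m) and foliage dry mass (kg).
crown_length = np.array([3.5, 5.2, 7.8, 10.1, 12.4])
foliage_mass = np.array([2.1, 4.9, 12.3, 22.7, 35.0])

# Fit the allometric model foliage_mass = a * crown_length**b by
# linear regression on the log-log scale.
b, log_a = np.polyfit(np.log(crown_length), np.log(foliage_mass), 1)
print(f"foliage_mass ~ {np.exp(log_a):.3f} * crown_length^{b:.2f}")
```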

Relevance: 20.00%

Abstract:

Inadvertent climate modification has led to an increase in urban temperatures compared to the surrounding rural areas. The main reason for the temperature rise is the altered partitioning of input net radiation into heat storage and sensible and latent heat fluxes, in addition to the anthropogenic heat flux. The heat storage flux and the anthropogenic heat flux have not yet been determined for Helsinki, and they are not directly measurable. In contrast, the turbulent fluxes of sensible and latent heat, as well as net radiation, can be measured, and the anthropogenic heat flux together with the heat storage flux can be solved as a residual. As a result, all inaccuracies in the determination of the energy balance components propagate to the residual term, and special attention must be paid to the accurate determination of the components. One cause of error in the turbulent fluxes is the attenuation of fluctuations at high frequencies, which can be accounted for by high-frequency spectral corrections. The aim of this study is twofold: to assess the relevance of high-frequency corrections to water vapor fluxes and to assess the temporal variation of the energy fluxes. Turbulent fluxes of sensible and latent heat have been measured at the SMEAR III station, Helsinki, since December 2005 using the eddy covariance technique. In addition, net radiation measurements have been ongoing since July 2007. The calculation methods used in this study consist of widely accepted eddy covariance post-processing methods in addition to Fourier and wavelet analysis. The high-frequency spectral correction using the traditional transfer function method is highly dependent on relative humidity and has an 11% effect on the latent heat flux. This method is based on an assumption of spectral similarity, which is shown not to be valid. A new correction method using wavelet analysis is therefore introduced, and it seems to account for the high-frequency variation deficit. However, the resulting wavelet correction remains minimal in contrast to the traditional transfer function correction. The energy fluxes exhibit behavior characteristic of urban environments: the energy input is channeled into sensible heat, as the latent heat flux is restricted by water availability. The monthly mean residual of the energy balance ranges from 30 W m-2 in summer to -35 W m-2 in winter, indicating heat storage in the ground during summer. Furthermore, the anthropogenic heat flux is approximated to be 50 W m-2 during winter, when residential heating is important.
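The residual bookkeeping described above, in which the sum of the storage and anthropogenic heat fluxes is solved from measured net radiation and the turbulent fluxes, amounts to a simple subtraction per averaging period. A minimal sketch with hypothetical half-hourly values (not SMEAR III data) is:

```python
import numpy as np

# Hypothetical 30-min flux averages (W m^-2): net radiation, sensible and latent heat.
net_radiation = np.array([420.0, 380.0, 150.0, -60.0])
sensible_heat = np.array([250.0, 230.0, 90.0, -10.0])
latent_heat   = np.array([80.0, 75.0, 40.0, 5.0])

# Residual of the measured balance: heat storage plus anthropogenic heat.
# All measurement errors in the three measured terms also end up here.
residual = net_radiation - sensible_heat - latent_heat
print("residual (W m^-2):", residual)
print("Bowen ratio H/LE:", (sensible_heat / latent_heat).round(2))
```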

Relevance: 20.00%

Abstract:

Multi- and intralake datasets of fossil midge assemblages in surface sediments of small shallow lakes in Finland were studied to determine the most important environmental factors explaining trends in midge distribution and abundance. The aim was to develop palaeoenvironmental calibration models for the most important environmental variables for the purpose of reconstructing past environmental conditions. The developed models were applied to three high-resolution fossil midge stratigraphies from southern and eastern Finland to interpret environmental variability over the past 2000 years, with special focus on the Medieval Climate Anomaly (MCA), the Little Ice Age (LIA) and recent anthropogenic changes. The midge-based results were compared with physical properties of the sediment, historical evidence and environmental reconstructions based on diatoms (Bacillariophyta), cladocerans (Crustacea: Cladocera) and tree rings. The results showed that the most important environmental factor controlling midge distribution and abundance along a latitudinal gradient in Finland was the mean July air temperature (TJul). However, when the dataset was environmentally screened to include only pristine lakes, water depth at the sampling site became more important. Furthermore, when the dataset was geographically scaled to southern Finland, hypolimnetic oxygen conditions became the dominant environmental factor. The results from an intralake dataset from eastern Finland showed that the most important environmental factors controlling midge distribution within a lake basin were river contribution, water depth and submerged vegetation patterns. In addition, the results of the intralake dataset showed that the fossil midge assemblages represent fauna that lived in close proximity to the sampling sites, thus enabling the exploration of within-lake gradients in midge assemblages. Importantly, this within-lake heterogeneity in midge assemblages may have effects on midge-based temperature estimations, because samples taken from the deepest point of a lake basin may yield considerably colder inferred temperatures than expected, as shown by the present test results. Therefore, it is suggested here that the samples in fossil midge studies involving shallow boreal lakes should be taken from the sublittoral, where the assemblages are most representative of the whole-lake fauna. Transfer functions between midge assemblages and the environmental forcing factors that were significantly related to the assemblages, including mean air TJul, water depth, hypolimnetic oxygen, stream flow and distance to littoral vegetation, were developed using weighted averaging (WA) and weighted averaging-partial least squares (WA-PLS) techniques, which outperformed all the other tested numerical approaches. Application of the models in downcore studies showed mostly consistent trends. Based on the present results, which agreed with previous studies and historical evidence, the Medieval Climate Anomaly between ca. 800 and 1300 AD in eastern Finland was characterized by warm temperature conditions and dry summers, but probably humid winters. The Little Ice Age (LIA) prevailed in southern Finland from ca. 1550 to 1850 AD, with the coldest conditions occurring at ca. 1700 AD, whereas in eastern Finland the cold conditions prevailed over a longer time period, from ca. 1300 until 1900 AD. The recent climatic warming was clearly represented in all of the temperature reconstructions.
In terms of long-term climatology, the present results provide support for the concept that the North Atlantic Oscillation (NAO) index has a positive correlation with winter precipitation and annual temperature and a negative correlation with summer precipitation in eastern Finland. In general, the results indicate a relatively warm climate with dry summers but snowy winters during the MCA and a cool climate with rainy summers and dry winters during the LIA. The results of the present reconstructions and the forthcoming applications of the models can be used in assessments of long-term environmental dynamics to refine the understanding of the past environmental reference conditions and natural variability that environmental scientists, ecologists and policy makers require to make decisions concerning ongoing global, regional and local changes. The midge-based models for temperature, hypolimnetic oxygen, water depth, littoral vegetation shift and stream flow developed and presented in this thesis are open for scientific use on request.
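The weighted averaging (WA) technique mentioned above can be sketched in a few lines: taxon optima are estimated as abundance-weighted averages of the environmental variable over the calibration lakes, and the value for a fossil sample is inferred as the abundance-weighted average of those optima. The toy calibration set below is invented for illustration, and the deshrinking step and the WA-PLS components used in practice are omitted.

```python
import numpy as np

def wa_optima(Y, x):
    """WA regression: taxon optima as abundance-weighted averages of the
    environmental variable over the calibration lakes.
    Y is a (lakes x taxa) abundance matrix, x the observed variable."""
    return (Y * x[:, None]).sum(axis=0) / Y.sum(axis=0)

def wa_reconstruct(y_fossil, optima):
    """WA calibration: infer the variable for a fossil sample as the
    abundance-weighted average of the taxon optima."""
    return (y_fossil * optima).sum() / y_fossil.sum()

# Toy calibration set: 4 lakes, 3 midge taxa, mean July air temperature (deg C).
Y = np.array([[10., 5., 0.],
              [ 6., 8., 2.],
              [ 2., 7., 6.],
              [ 0., 3., 9.]])
temps = np.array([11.0, 13.0, 15.0, 17.0])

optima = wa_optima(Y, temps)
print("inferred TJul:", round(wa_reconstruct(np.array([1., 6., 5.]), optima), 2))
```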

Relevance: 20.00%

Abstract:

Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades due to their geomorphological importance as the reference surface for gravitation-driven material flow, as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, the errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on the decision-making process based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine-toposcale DEMs, which are typically represented in a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of DEM error, performing analytical and simulation-based error propagation analyses and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine-toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, a global characterisation of DEM error is a gross generalisation of reality due to the small extent of the areas in which the assumption of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error will increase the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. A significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
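Simulation-based (Monte Carlo) error propagation of the kind described above can be sketched by generating spatially autocorrelated error realisations, adding them to the DEM and summarising the spread of a derivative such as slope. The synthetic DEM, the Gaussian-smoothing shortcut for generating correlated error fields and the parameter values below are assumptions for illustration, not the geostatistical error model developed in the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def error_realisation(shape, sigma_z, corr_range_cells, rng):
    """One realisation of a spatially autocorrelated DEM error field,
    generated by smoothing white noise (a simple process-convolution idea)
    and rescaling to the target error standard deviation."""
    noise = rng.standard_normal(shape)
    field = gaussian_filter(noise, corr_range_cells)
    return field * (sigma_z / field.std())

def slope_deg(dem, cell_size):
    """Slope (degrees) from a gridded DEM via finite differences."""
    dzdy, dzdx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

rng = np.random.default_rng(1)
# Synthetic 100 x 100 DEM with a gentle trend and some relief (10 m cells).
dem = np.fromfunction(lambda r, c: 0.5 * r + 2.0 * np.sin(c / 5.0), (100, 100))

# Monte Carlo: propagate a 1 m vertical error (10-cell correlation range)
# into the slope derivative and summarise the per-cell spread.
slopes = np.stack([slope_deg(dem + error_realisation(dem.shape, 1.0, 10, rng), 10.0)
                   for _ in range(100)])
print("mean slope std due to DEM error (deg):", slopes.std(axis=0).mean().round(3))
```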