15 results for ERROR rates

in Helda - Digital Repository of University of Helsinki


Relevance:

60.00%

Publisher:

Abstract:

Delay and disruption tolerant networks (DTNs) are computer networks in which round-trip delays and error rates are high and disconnections frequent. Examples of these extreme networks are space communications, sensor networks, connecting rural villages to the Internet and even interconnecting commodity portable wireless devices and mobile phones. The basic elements of delay tolerant networks are store-and-forward message transfer resembling traditional mail delivery, opportunistic and intermittent routing, and an extensible cross-region resource naming service. Individual nodes of the network take an active part in routing the traffic and provide in-network storage for application data that flows through the network. Application architectures for delay tolerant networks also differ from those used in traditional networks. It has become feasible to design applications that are network-aware and opportunistic, taking advantage of different network connection speeds and capabilities. This might change some of the basic paradigms of network application design. DTN protocols also support the design of applications whose processes must persist over reboots and power failures. DTN protocols could likewise be applicable to traditional networks in cases where high tolerance to delays or errors is desired. It is apparent that challenged networks also challenge the traditional, strictly layered model of network application design. This thesis provides an extensive introduction to delay tolerant networking concepts and applications. Most attention is given to the challenging problems of routing and application architecture. Finally, future prospects of DTN applications and implementations are envisioned through recent research results and an interview with an active DTN researcher.
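
The store-and-forward, opportunistic transfer sketched in this abstract can be illustrated with a minimal simulation (hypothetical Message and Node classes; this is not the Bundle Protocol or any implementation discussed in the thesis): each node buffers messages it cannot yet deliver and replicates or hands them over whenever a contact with another node occurs.

    # Minimal store-and-forward sketch: nodes keep messages in a local buffer
    # and forward copies opportunistically whenever a contact occurs.
    from dataclasses import dataclass, field

    @dataclass
    class Message:
        msg_id: str
        destination: str
        payload: str

    @dataclass
    class Node:
        name: str
        buffer: dict = field(default_factory=dict)     # msg_id -> Message
        delivered: list = field(default_factory=list)

        def originate(self, msg: Message):
            self.buffer[msg.msg_id] = msg

        def contact(self, other: "Node"):
            """Opportunistic exchange during a contact: replicate undelivered
            messages to the peer (epidemic-style); deliver those addressed to it."""
            for msg in list(self.buffer.values()):
                if msg.destination == other.name:
                    other.delivered.append(msg)
                    del self.buffer[msg.msg_id]        # custody handed over
                else:
                    other.buffer.setdefault(msg.msg_id, msg)

    # Usage: A meets B, later B meets C; the message reaches C although
    # no end-to-end path ever exists.
    a, b, c = Node("A"), Node("B"), Node("C")
    a.originate(Message("m1", destination="C", payload="hello"))
    a.contact(b)   # first contact: B stores the message
    b.contact(c)   # second contact: delivery to C
    print([m.msg_id for m in c.delivered])             # ['m1']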

Relevance:

20.00%

Publisher:

Abstract:

Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades, due to their geomorphological importance as the reference surface for gravitation-driven material flow as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented as a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, performing analytical and simulation-based error propagation analysis, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of a global characterisation of DEM error is a gross generalisation of reality, owing to the small extent of the areas in which the assumption of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error increases the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this view is now challenged because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. Significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
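
As a rough illustration of the simulation-based error propagation and the process-convolution idea mentioned above, the sketch below (all parameter values are assumed; this is not the thesis implementation) generates spatially autocorrelated DEM error fields by smoothing white noise with a Gaussian kernel, adds them to a DEM, and summarises how the error propagates into slope.

    # Monte Carlo error propagation sketch for a DEM derivative (slope).
    # Spatially correlated error realisations are produced by process convolution:
    # white noise smoothed with a Gaussian kernel, then rescaled to a target sigma.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    cell = 10.0                   # grid spacing, m (assumed)
    sigma_z = 1.5                 # DEM vertical error standard deviation, m (assumed)
    corr_range_cells = 5          # kernel width controlling autocorrelation (assumed)

    def slope_deg(dem, cell):
        dzdy, dzdx = np.gradient(dem, cell)
        return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

    # A synthetic "true" surface stands in for a real DEM here.
    x, y = np.meshgrid(np.arange(100), np.arange(100))
    dem = 50 * np.sin(x / 30.0) + 30 * np.cos(y / 20.0)

    realisations = []
    for _ in range(200):
        noise = rng.standard_normal(dem.shape)
        corr = gaussian_filter(noise, corr_range_cells)   # process convolution
        corr *= sigma_z / corr.std()                      # rescale to sigma_z
        realisations.append(slope_deg(dem + corr, cell))

    slope_sd = np.std(realisations, axis=0)   # per-cell propagated slope error
    print("mean slope std-dev across the grid: %.2f deg" % slope_sd.mean())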

Relevance:

20.00%

Publisher:

Abstract:

This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as the GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that take values on the real line. In the multivariate context asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and the unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results obtained is established when the information content of the derived volatility forecasts is examined.
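
The multiplicative error structure shared by the GARCH, ACD and CARR models mentioned above can be written as x_t = mu_t * eps_t, where mu_t is the conditional mean and eps_t a positive innovation with unit mean. The sketch below simulates a first-order MEM of this kind with a unit-mean gamma innovation; the parameter values are illustrative and this is not the GARCH-GH, MCARR or VMEM specification of the thesis.

    # Simulate a first-order multiplicative error model (MEM):
    #   x_t = mu_t * eps_t,   mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1},
    # with eps_t ~ Gamma(k, 1/k) so that E[eps_t] = 1.
    import numpy as np

    rng = np.random.default_rng(1)
    omega, alpha, beta, k = 0.05, 0.15, 0.80, 4.0   # assumed illustrative values
    n = 1000

    x = np.empty(n)
    mu = np.empty(n)
    mu[0] = omega / (1.0 - alpha - beta)            # unconditional mean as start value
    x[0] = mu[0] * rng.gamma(shape=k, scale=1.0 / k)
    for t in range(1, n):
        mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
        x[t] = mu[t] * rng.gamma(shape=k, scale=1.0 / k)

    print("sample mean %.3f vs implied unconditional mean %.3f"
          % (x.mean(), omega / (1.0 - alpha - beta)))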

Relevance:

20.00%

Publisher:

Abstract:

Visual acuities at the time of referral and on the day before surgery were compared in 124 patients operated on for cataract in Vaasa Central Hospital, Finland. Preoperative visual acuity and the occurrence of ocular and general disease were compared in samples of consecutive cataract extractions performed in 1982, 1985, 1990, 1995 and 2000 in two hospitals in the Vaasa region in Finland. The repeatability and standard deviation of random measurement error in visual acuity and refractive error determination in a clinical environment in cataractous, pseudophakic and healthy eyes were estimated by re-examining the visual acuity and refractive error of patients referred for cataract surgery or consultation by ophthalmic professionals. Altogether 99 eyes of 99 persons (41 cataractous, 36 pseudophakic and 22 healthy eyes) with a visual acuity range of Snellen 0.3 to 1.3 (0.52 to -0.11 logMAR) were examined. During an average waiting time of 13 months, visual acuity in the study eye deteriorated from 0.68 logMAR to 0.96 logMAR (from 0.2 to 0.1 in Snellen decimal values). The average decrease in vision was 0.27 logMAR per year. In the fastest quartile the visual acuity change per year was 0.75 logMAR, and in the second fastest 0.29 logMAR; the third and fourth quartiles were virtually unaffected. From 1982 to 2000, the incidence of cataract surgery increased from 1.0 to 7.2 operations per 1000 inhabitants per year in the Vaasa region. The average preoperative visual acuity in the operated eye improved by 0.85 logMAR (in decimal values from 0.03 to 0.2) and in the better eye by 0.27 logMAR (in decimal values from 0.23 to 0.43) over this period. The proportion of patients profoundly visually handicapped (VA in the better eye <0.1) before the operation fell from 15% to 4%, and that of patients less profoundly visually handicapped (VA in the better eye 0.1 to <0.3) from 47% to 15%. The repeatability of visual acuity measurement, estimated as a coefficient of repeatability for all 99 eyes, was ±0.18 logMAR, and the standard deviation of measurement error was 0.06 logMAR. Eyes with the lowest visual acuity (0.3-0.45) had the largest variability, with a coefficient of repeatability of ±0.24 logMAR, and eyes with a visual acuity of 0.7 or better had the smallest, ±0.12 logMAR. The repeatability of refractive error measurement was studied in the same patient material as the repeatability of visual acuity. Differences between measurements 1 and 2 were calculated as three-dimensional vector values and spherical equivalents and expressed as coefficients of repeatability. Coefficients of repeatability for all eyes for the vertical, torsional and horizontal vectors were ±0.74 D, ±0.34 D and ±0.93 D, respectively, and for the spherical equivalent for all eyes ±0.74 D. Eyes with lower visual acuity (0.3-0.45) had larger variability in vector and spherical equivalent values (±1.14 D), but the difference between visual acuity groups was not statistically significant. The difference in the mean defocus equivalent between measurements 1 and 2 was, however, significantly greater in the lower visual acuity group. If a change of ±0.5 D (measured in defocus equivalents) is accepted as a basis for a change of spectacles for eyes with good vision, the corresponding basis for eyes in the visual acuity range of 0.3-0.65 would be ±1 D. Differences in repeated visual acuity measurements are partly explained by errors in refractive error measurements.
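
For readers unfamiliar with the units, the sketch below shows the standard conversion from Snellen decimal acuity to logMAR and a Bland-Altman-style coefficient of repeatability computed from test-retest pairs; the exact statistical definitions used in the thesis may differ, and the example measurements are hypothetical.

    # LogMAR from Snellen decimal acuity, and a Bland-Altman-style test-retest
    # repeatability summary. This is only a sketch of the standard calculations,
    # not necessarily the definitions applied in the thesis.
    import math

    def logmar(snellen_decimal: float) -> float:
        """logMAR = -log10(decimal acuity); e.g. 0.2 decimal -> 0.70 logMAR."""
        return -math.log10(snellen_decimal)

    def repeatability(pairs):
        """Coefficient of repeatability (1.96 * SD of test-retest differences)
        and within-subject SD (SD of differences / sqrt(2))."""
        diffs = [a - b for a, b in pairs]
        mean_d = sum(diffs) / len(diffs)
        sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1))
        return 1.96 * sd_d, sd_d / math.sqrt(2)

    print(round(logmar(0.2), 2), round(logmar(0.1), 2))   # 0.7, 1.0

    # Hypothetical test-retest logMAR measurements of the same eyes:
    pairs = [(0.52, 0.46), (0.30, 0.40), (0.10, 0.12), (0.00, -0.08), (0.22, 0.18)]
    cr, s_w = repeatability(pairs)
    print("coefficient of repeatability ±%.2f logMAR, within-subject SD %.2f" % (cr, s_w))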

Relevance:

20.00%

Publisher:

Abstract:

Upper airway surgery in children (removal of the adenoids and tonsils and tympanostomy tube insertion) is very common in Western countries. Surgery rates vary both nationally and internationally, but no clear reason for these differences is known. The influence of clinical practice guidelines on actual practice has been questioned, and it may be that the guidelines are not followed. The operations may cause psychological trauma to child patients, and they also carry a risk of complications, even death. To avoid harm, it is important to identify the children who benefit from surgery. The problem is not only medical but also economic: upper airway surgery incurs considerable costs. Assessing surgery rates is important so that surgical practices can be rationalised. This doctoral thesis investigated the rates of upper airway surgery within Finland and Norway and between the two countries. No previous research on the topic has been carried out in either country. The numbers of adenoidectomies, tympanostomy tube insertions, myringotomies, tonsillectomies and adenotonsillectomies were obtained from national databases. The figures were related to the number of children in each country, their geographical distribution, and the children's age and sex. In addition, surgery rates were assessed in relation to the number, geographical distribution, age and sex of otorhinolaryngologists and general practitioners. Large variation in surgery rates was observed both in Finland and in Norway. In Finland, the largest differences in surgery rates were found between the western and eastern university hospital catchment areas: almost twice as many operations were performed in the western district as in the eastern district. In Norway, the largest differences were between the northern and eastern regions, with twice as many operations performed in the northern region as in the eastern one. Throughout the study period, more adenoidectomies were performed in Finland than in Norway, but the number of these operations was clearly declining in Finland. In 2002, 2.5 times as many adenoidectomies were performed in Finland as in Norway. (Adeno)tonsillectomies, however, were performed less often in Finland than in Norway; the numbers of these operations remained at the same level in Finland during the study period, whereas in Norway they increased slightly. Finnish children were, on average, operated on at a much younger age than Norwegian children. The study found no explanation for the large variation in upper airway surgery rates within Finland and Norway or between the countries. However, as the number of adenoidectomies performed in Finland declined markedly, the upper airway surgery rates of the two countries converged.

Relevance:

20.00%

Publisher:

Abstract:

A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from the atmospheric sciences to nanotechnology and even cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier. This barrier is crossed in a process called nucleation. Nowadays a significant fraction of all atmospheric particles is considered to be produced by vapor-to-liquid nucleation. In the atmospheric sciences, as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place at given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to the Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that the Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapour density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. We also show how the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
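
The Classical Nucleation Theory barrier that the simulations are compared against has a standard closed form for a spherical liquid-drop cluster. The sketch below evaluates it for rough, argon-like parameter values chosen only for illustration; they are not the values or the method used in the thesis.

    # Classical Nucleation Theory (CNT) barrier and critical cluster size for
    # homogeneous vapour-to-liquid nucleation. Parameter values below are rough,
    # argon-like illustrations, not those used in the thesis.
    import math

    k_B = 1.380649e-23          # Boltzmann constant, J/K

    def cnt_barrier(sigma, v_mol, T, S):
        """Return (Delta G* in J, critical radius in m, molecules in the critical
        cluster) from the standard CNT expressions for a liquid-drop cluster."""
        ln_s = math.log(S)
        r_star = 2.0 * sigma * v_mol / (k_B * T * ln_s)
        dg_star = 16.0 * math.pi * sigma**3 * v_mol**2 / (3.0 * (k_B * T * ln_s) ** 2)
        n_star = (4.0 / 3.0) * math.pi * r_star**3 / v_mol
        return dg_star, r_star, n_star

    sigma = 0.012               # liquid surface tension, N/m (assumed)
    v_mol = 4.7e-29             # molecular volume in the liquid, m^3 (assumed)
    T, S = 60.0, 10.0           # temperature (K) and saturation ratio (assumed)

    dg, r, n = cnt_barrier(sigma, v_mol, T, S)
    print("barrier %.1f kT, critical radius %.2f nm, ~%.0f molecules"
          % (dg / (k_B * T), r * 1e9, n))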

Relevance:

20.00%

Publisher:

Abstract:

This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite-sample properties of the LM test are considered with simulation experiments. The results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
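
A minimal sketch of the autoregressive probit structure referred to above is given below: the probit index pi_t follows an autoregression driven by a lagged predictor, and the log-likelihood is built from the implied success probabilities. The parameters, the single predictor and the recursion details are illustrative assumptions, not the exact specifications estimated in the thesis.

    # Sketch of a dynamic probit model with an autoregressive structure:
    #   P(y_t = 1 | F_{t-1}) = Phi(pi_t),  pi_t = omega + alpha*pi_{t-1} + beta*x_{t-1}.
    # Parameter values and the single predictor x are illustrative.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    n = 400
    x = rng.standard_normal(n)                 # stand-in predictor (e.g. a term spread)
    omega, alpha, beta = -0.2, 0.8, 0.6        # assumed parameter values

    def simulate(omega, alpha, beta, x):
        pi = np.zeros(len(x))
        y = np.zeros(len(x), dtype=int)
        for t in range(1, len(x)):
            pi[t] = omega + alpha * pi[t - 1] + beta * x[t - 1]
            y[t] = rng.random() < norm.cdf(pi[t])
        return y, pi

    def log_likelihood(params, y, x):
        omega, alpha, beta = params
        pi, ll = 0.0, 0.0
        for t in range(1, len(y)):
            pi = omega + alpha * pi + beta * x[t - 1]
            p = norm.cdf(pi)
            ll += y[t] * np.log(p) + (1 - y[t]) * np.log(1.0 - p)
        return ll

    y, _ = simulate(omega, alpha, beta, x)
    print("log-likelihood at the true parameters: %.1f"
          % log_likelihood((omega, alpha, beta), y, x))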

Relevance:

20.00%

Publisher:

Abstract:

Pristine peatlands are carbon (C) accumulating wetland ecosystems sustained by a high water level (WL) and the consequent anoxia that slows down decomposition. Persistent WL drawdown as a response to climate and/or land-use change directly affects decomposition: increased oxygenation stimulates decomposition of the old C (peat) sequestered under previously anoxic conditions. The responses of the new C (plant litter) in terms of quality, production and decomposability, and the consequences for the whole C cycle of peatlands, are not fully understood. WL drawdown induces changes in the plant community, resulting in a shift in dominance from Sphagnum and graminoids to shrubs and trees. There is increasing evidence that the indirect effects of WL drawdown, via the changes in plant communities, will have more impact on ecosystem C cycling than any direct effects. The aim of this study is to disentangle the direct and indirect effects of WL drawdown on the new C by measuring the relative importance of 1) environmental parameters (WL depth, temperature, soil chemistry) and 2) plant community composition on litter production, microbial activity, litter decomposition rates and, consequently, C accumulation. This information is crucial for modelling the C cycle under a changing climate and/or land use. The effects of WL drawdown were tested in a large-scale experiment with manipulated WL at two time scales and three nutrient regimes. Furthermore, the effect of climate on litter decomposability was tested along a north-south gradient. Additionally, a novel method for estimating litter chemical quality and decomposability was explored by combining near infrared spectroscopy with multivariate modelling. WL drawdown had direct effects on litter quality, microbial community composition and activity, and litter decomposition rates. However, the direct effects of WL drawdown were overruled by the indirect effects via changes in litter type composition and production. Short-term (years) responses to WL drawdown were small. In the long term (decades), dramatically increased litter inputs resulted in a large accumulation of organic matter in spite of increased decomposition rates. Furthermore, the quality of the accumulated matter differed greatly from that accumulated under pristine conditions. The response of a peatland ecosystem to persistent WL drawdown was more pronounced at sites with more nutrients. The study demonstrates that the shift in vegetation composition in response to climate and/or land-use change is the main factor affecting the peatland ecosystem C cycle, and thus dynamic vegetation is a necessity in any model applied to estimating the responses of C fluxes to changes in the environment. The time scale for vegetation changes caused by hydrological changes needs to extend to decades. This study provides a grouping of litter types (plant species and parts) into functional types, based on their chemical quality and/or decomposability, that the models could utilize. Furthermore, the results clearly show a drop in soil temperature as a response to WL drawdown when an initially open peatland converts into a forest ecosystem, something that has not yet been considered in existing models.
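
The combination of near infrared spectroscopy with multivariate modelling mentioned above is, in generic form, a multivariate calibration problem. The sketch below fits a partial least squares regression to synthetic spectra as a stand-in for that kind of calibration; the data, preprocessing and model choice are assumptions and not the chemometric method actually used in the study.

    # Generic NIR calibration sketch: predict a litter quality variable (e.g. mass
    # loss in a decomposition assay) from spectra with partial least squares (PLS).
    # Spectra and responses below are synthetic stand-ins.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    n_samples, n_wavelengths = 80, 200
    # Random-walk rows give smooth-ish, strongly collinear fake spectra.
    spectra = rng.standard_normal((n_samples, n_wavelengths)).cumsum(axis=1)
    true_loadings = rng.standard_normal(n_wavelengths) * 0.05
    mass_loss = spectra @ true_loadings + rng.standard_normal(n_samples) * 0.5

    # Calibrate on the first 60 samples, validate on the rest.
    pls = PLSRegression(n_components=5).fit(spectra[:60], mass_loss[:60])
    print("held-out R^2: %.2f" % pls.score(spectra[60:], mass_loss[60:]))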

Relevance:

20.00%

Publisher:

Abstract:

This paper is concerned with using the bootstrap to obtain improved critical values for the error correction model (ECM) cointegration test in dynamic models. In the paper we investigate the effects of dynamic specification on the size and power of the ECM cointegration test with bootstrap critical values. The results from a Monte Carlo study show that the size of the bootstrap ECM cointegration test is close to the nominal significance level. We find that overspecification of the lag length results in a loss of power, while underspecification results in size distortion. The performance of the test thus deteriorates if the correct lag length is not used in the ECM; the bootstrap ECM cointegration test is therefore not robust to model misspecification.
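
The general idea of the test can be sketched as follows (a simplified illustration only: the lag lengths, deterministic terms and bootstrap data-generating process of the paper are not reproduced here): estimate an unrestricted ECM, take the t-ratio of the error-correction coefficient as the test statistic, and obtain its critical value by resampling the residuals under the null of no cointegration.

    # Simplified sketch of an ECM cointegration test with bootstrap critical values.
    # The statistic is the t-ratio of gamma in the unrestricted ECM
    #   dy_t = a + gamma*y_{t-1} + d*x_{t-1} + c*dx_t + e_t.
    # Bootstrap samples are generated under the null of no cointegration by
    # resampling the residuals and building y as a pure random walk.
    import numpy as np

    rng = np.random.default_rng(4)

    def ecm_tstat(y, x):
        dy, dx = np.diff(y), np.diff(x)
        X = np.column_stack([np.ones(len(dy)), y[:-1], x[:-1], dx])
        coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
        resid = dy - X @ coef
        s2 = resid @ resid / (len(dy) - X.shape[1])
        cov = s2 * np.linalg.inv(X.T @ X)
        return coef[1] / np.sqrt(cov[1, 1]), resid

    def bootstrap_critical_value(y, x, resid, n_boot=499, level=0.05):
        stats = []
        for _ in range(n_boot):
            e = rng.choice(resid, size=len(resid), replace=True)
            yb = np.concatenate([[y[0]], y[0] + np.cumsum(e)])   # null: no error correction
            stats.append(ecm_tstat(yb, x)[0])
        return np.quantile(stats, level)

    # Example data: y and x cointegrated by construction.
    n = 200
    x = np.cumsum(rng.standard_normal(n))
    y = 0.8 * x + rng.standard_normal(n)

    t_obs, resid = ecm_tstat(y, x)
    crit = bootstrap_critical_value(y, x, resid)
    print("t = %.2f, 5%% bootstrap critical value = %.2f, reject no-cointegration: %s"
          % (t_obs, crit, t_obs < crit))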

Relevance:

20.00%

Publisher:

Abstract:

According to a large body of evidence, carotid endarterectomy (CEA) can prevent strokes, provided that appropriate inclusion criteria and high-quality perioperative treatment methods are used, with low complication rates. From the patient's perspective, it is of paramount importance that the operation is as safe and effective as possible. From the community's point of view, it is important that CEA provision prevents as many strokes as possible. In order to define the stroke-preventing potential of CEA in different communities, a comparison between eight European countries and Australia was performed, including 53 077 carotid interventions. A more detailed evaluation was performed in Finland, the United Kingdom and Egypt. It could be estimated that many potentially preventable strokes occur due to insufficient diagnostics and CEA provision. The number of CEAs should be at least doubled in the Helsinki region. The theoretical power of CEA provision in stroke prevention varied significantly between the countries. Delay from symptom to surgery has been identified as one of the most important factors influencing the effectiveness of CEA. In 2008 only 11% of CEAs in Helsinki University Central Hospital (HUCH) were performed within the recommended 14 days. Registered data on 673 CEAs performed in HUCH during 2000-2005 were analyzed. There was no systematic error that would have changed the outcome analysis. However, it is important that registries are audited regularly and that cross-matching of different registries is possible. A previously unpublished method combining medial mandibulotomy, neck incision and carotid artery interposition was carried out as a collaboration of maxillofacial, ear, nose and throat, and vascular surgeons. Five patients were operated on with a technique that was feasible and possible to perform with little morbidity, but due to the significant risks involved, this technique should be reserved for carefully selected cases. In stroke prevention, organisational decisions seem far more important than details of the interventional procedures when CEA is performed with low complication rates, as was the case in the present study. A TIA clinic approach, with close co-operation between the on-call vascular surgeons, neurologists and radiologists, should be available at all centres treating these patients. Patients should have direct and fast admission to the hospital performing CEA.

Relevance:

20.00%

Publisher:

Abstract:

In this paper we present simple methods for the construction and evaluation of finite-state spell-checking tools using an existing finite-state lexical automaton, freely available finite-state tools and Internet corpora acquired from projects such as Wikipedia. As an example, we use a freely available open-source implementation of Finnish morphology, made with traditional finite-state morphology tools, and demonstrate the rapid building of Northern Sámi and English spell checkers from tools and resources available from the Internet.
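
The pipeline described above composes a finite-state lexical automaton with a weighted error model (for example with HFST-style tools); the plain-Python sketch below mirrors only the logic of that composition, using a word set as the lexicon and an edit-distance-1 candidate generator as the error model. The word-list file name is hypothetical, e.g. word types extracted from a Wikipedia dump.

    # Plain-Python stand-in for a finite-state spell-checking pipeline: a lexicon
    # accepted as-is plus an "error model" generating candidate corrections within
    # edit distance 1. The real pipeline composes a lexical transducer with a
    # weighted error-model transducer; this only mirrors the logic.
    # 'wordlist.txt' is a hypothetical corpus-derived word list.
    import string

    def load_lexicon(path="wordlist.txt"):
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}

    def edits1(word, alphabet=string.ascii_lowercase + "äöå"):
        """All strings within one deletion, transposition, substitution or insertion."""
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = {l + r[1:] for l, r in splits if r}
        transposes = {l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1}
        substitutes = {l + c + r[1:] for l, r in splits if r for c in alphabet}
        inserts = {l + c + r for l, r in splits for c in alphabet}
        return deletes | transposes | substitutes | inserts

    def suggest(word, lexicon):
        if word in lexicon:
            return [word]                        # accepted by the "lexical automaton"
        return sorted(edits1(word) & lexicon)    # candidates licensed by the error model

    # Usage (requires the word list to exist):
    # lexicon = load_lexicon()
    # print(suggest("helsingin", lexicon))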

Relevance:

20.00%

Publisher:

Abstract:

Benthic processes were measured at a coastal deposition area in the northern Baltic Sea, covering all seasons. The N₂ production rates, 90-400 µmol N m⁻² d⁻¹, were highest in autumn-early winter and lowest in spring. Heterotrophic bacterial production peaked unexpectedly late in the year, indicating that in addition to temperature, the availability of carbon compounds suitable for the heterotrophic bacteria also plays a major role in regulating the denitrification rate. Anaerobic ammonium oxidation (anammox) was measured in spring and autumn and contributed 10% and 15%, respectively, to the total N₂ production. The low percentage did, however, result in a significant error in the total N₂ production rate estimate calculated using the isotope pairing technique. Anammox must be taken into account in future sediment nitrogen cycling research in the Gulf of Finland.
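
The isotope pairing technique referred to above rests on the assumption of random pairing of ¹⁴N and ¹⁵N in the N₂ produced; anammox violates it by generating ²⁹N₂ from ¹⁴NH₄⁺ and ¹⁵NO₂⁻, which is why even a 10-15% contribution can bias the estimate. The sketch below applies the classical equations (Nielsen 1992) to hypothetical rates to show the direction of the bias; it is not the calculation performed in the study.

    # Classical isotope pairing technique (IPT, Nielsen 1992) arithmetic:
    # with 15NO3- added, denitrification of the added tracer is
    #   D15 = p29 + 2 * p30
    # and, assuming random isotope pairing, denitrification of ambient 14NO3- is
    #   D14 = D15 * p29 / (2 * p30).
    # Anammox produces extra 29N2 (14NH4+ paired with 15NO2-), inflating p29
    # relative to the binomial expectation, so D14 is overestimated.
    # Rates below are hypothetical, in umol N m-2 d-1.
    def ipt_total_n2(p29, p30):
        d15 = p29 + 2.0 * p30
        d14 = d15 * p29 / (2.0 * p30)
        return d14 + d15          # total N2 production attributed to denitrification

    p29, p30 = 60.0, 40.0
    print("total N2 production, random pairing assumed: %.0f" % ipt_total_n2(p29, p30))
    # If, say, 15% of the 29N2 actually came from anammox rather than random
    # pairing, applying the same formulas overstates the denitrification-derived N2:
    print("with anammox-derived 29N2 removed: %.0f" % ipt_total_n2(p29 * 0.85, p30))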