898 results for Threshold regression


Relevance:

20.00%

Publisher:

Abstract:

In this article, we illustrate experimentally an important consequence of the stochastic component in choice behaviour that has not been acknowledged so far: its potential to produce ‘regression to the mean’ (RTM) effects. We employ a novel approach to individual choice under risk, based on repeated multiple-lottery choices (i.e. choices among many lotteries), to show how the high degree of stochastic variability present in individual decisions can crucially distort certain results through RTM effects. We demonstrate the point in the context of a social comparison experiment.
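
The RTM mechanism the authors point to can be reproduced with a few lines of simulation. The sketch below (synthetic numbers, not the authors' experimental data) gives each subject a stable attitude plus independent measurement noise, groups subjects by their noisy first-round score, and shows the same group's mean drifting back toward the population mean in a second round.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
true_attitude = rng.normal(0.0, 1.0, n)            # stable component of behaviour
round1 = true_attitude + rng.normal(0.0, 1.0, n)   # noisy first measurement
round2 = true_attitude + rng.normal(0.0, 1.0, n)   # same truth, independent noise

# Group subjects by the *observed* first-round score, as a social-comparison
# design might, then look at the same subjects in round 2.
top = round1 > np.quantile(round1, 0.8)
print("round 1 mean (top 20%):", round1[top].mean())
print("round 2 mean (same subjects):", round2[top].mean())  # reverts toward 0
```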

Relevance:

20.00%

Publisher:

Abstract:

We test the expectations theory of the term structure of U.S. interest rates in nonlinear systems. These models allow the response of the change in short rates to past values of the spread to depend upon the level of the spread. The nonlinear system is tested against a linear system, and the results of testing the expectations theory in both models are contrasted. We find that the results of tests of the implications of the expectations theory depend on the size and sign of the spread. The long maturity spread predicts future changes of the short rate only when it is high.
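
A minimal illustration of the kind of threshold specification described here is sketched below: the short-rate change is regressed on the lagged spread separately in a low-spread and a high-spread regime, with the threshold chosen by a grid search over candidate values. The data are simulated and the single-equation setup is a simplification of the paper's nonlinear system.

```python
import numpy as np

def fit_threshold_regression(d_short, spread_lag, trim=0.15):
    """Grid-search a threshold on the lagged spread and fit OLS in each regime.
    Returns the best threshold and the two (intercept, slope) estimates."""
    candidates = np.quantile(spread_lag, np.linspace(trim, 1 - trim, 50))
    best = None
    for c in candidates:
        ssr, betas = 0.0, []
        for mask in (spread_lag <= c, spread_lag > c):
            X = np.column_stack([np.ones(mask.sum()), spread_lag[mask]])
            beta, *_ = np.linalg.lstsq(X, d_short[mask], rcond=None)
            ssr += np.sum((d_short[mask] - X @ beta) ** 2)
            betas.append(beta)
        if best is None or ssr < best[0]:
            best = (ssr, c, betas)
    return best[1], best[2]

# Simulated data: the spread predicts the short-rate change only when it is high.
rng = np.random.default_rng(1)
spread = rng.normal(0.0, 1.0, 500)
dy = np.where(spread > 0.5, 0.8 * spread, 0.0) + rng.normal(0.0, 0.2, 500)
threshold, (beta_low, beta_high) = fit_threshold_regression(dy, spread)
print(threshold, beta_low, beta_high)
```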

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND. To use spectra acquired by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry (MS) from pre- and post-digital rectal examination (DRE) urine samples to search for discriminating peaks that can adequately distinguish between benign and malignant prostate conditions, and to identify the peaks’ underlying biomolecules. METHODS. Twenty-five participants with prostate cancer (PCa) and 27 participants with a variety of benign prostatic conditions, as confirmed by a 10-core tissue biopsy, were included. Pre- and post-DRE urine samples were prepared for MALDI MS profiling using an automated clean-up procedure. Following mass spectra collection and processing, peak mass and intensity were extracted and subjected to statistical analysis to identify peaks capable of distinguishing between benign and cancerous conditions. Logistic regression was used to combine markers to create a sensitive and specific test. RESULTS. A peak at m/z 10,760 was identified as β-microseminoprotein (β-MSMB) and, based on the peak’s average areas, found to be statistically lower in urine from PCa participants. By combining serum prostate-specific antigen (PSA) levels with MALDI MS-measured β-MSMB levels, optimum threshold values obtained from receiver operating characteristic (ROC) curves gave an increased sensitivity of 96% at a specificity of 26%. CONCLUSIONS. These results demonstrate that, with a simple sample clean-up followed by MALDI MS profiling, significant differences in MSMB abundance were found in post-DRE urine samples. In combination with serum PSA levels obtained from a classic clinical assay, this led to high classification accuracy for PCa in the studied sample set. Our results need to be validated in a larger multicenter prospective randomized clinical trial.
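
The marker-combination step can be pictured with a small sketch: a logistic regression on two markers and a threshold read off the ROC curve. The data below are synthetic stand-ins for the study's PSA and MSMB measurements, and the Youden index is used only as one common way of picking an operating point.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
y = np.r_[np.ones(25), np.zeros(27)]                     # 25 cancer, 27 benign
psa = rng.lognormal(mean=np.where(y == 1, 1.6, 1.1), sigma=0.5)
msmb = rng.normal(loc=np.where(y == 1, 8.0, 11.0), scale=2.5)   # lower in cancer

X = np.column_stack([psa, msmb])
clf = LogisticRegression().fit(X, y)                     # combine the two markers
score = clf.predict_proba(X)[:, 1]

fpr, tpr, thresholds = roc_curve(y, score)
i = np.argmax(tpr - fpr)                                 # Youden index, one common choice
print("sensitivity:", tpr[i], "specificity:", 1 - fpr[i], "threshold:", thresholds[i])
```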

Relevance:

20.00%

Publisher:

Abstract:

We explore the large spatial variation in the relationship between population density and burned area, using continental-scale Geographically Weighted Regression (GWR) based on 13 years of satellite-derived burned area maps from the Global Fire Emissions Database (GFED) and human population density from the Gridded Population of the World (GPW 2005). Significant relationships are observed over 51.5% of the global land area, and the area affected varies from continent to continent: population density has a significant impact on fire over most of Asia and Africa but is important in explaining fire over less than 22% of Europe and Australia. Increasing population density is associated with both increases and decreases in fire. The nature of the relationship depends on land use: increasing population density is associated with increased burned area in rangelands but with decreased burned area in croplands. Overall, the relationship between population density and burned area is non-monotonic: burned area initially increases with population density and then decreases when population density exceeds a threshold. These thresholds vary regionally. Our study contributes to improved understanding of how human activities relate to burned area, and should contribute to a better estimate of atmospheric emissions from biomass burning.
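
Geographically weighted regression amounts to fitting a separate, distance-weighted least squares model at every location. The sketch below uses a Gaussian spatial kernel and synthetic grid cells (the GFED and GPW data are not included) to show how the local population-density coefficient can change sign across space, something a single global regression would average away.

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """GWR sketch: weighted least squares at each location with Gaussian weights."""
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])
    betas = np.empty((n, p + 1))
    for i in range(n):
        d2 = np.sum((coords - coords[i]) ** 2, axis=1)
        w = np.sqrt(np.exp(-d2 / (2.0 * bandwidth ** 2)))
        betas[i], *_ = np.linalg.lstsq(w[:, None] * Xd, w * y, rcond=None)
    return betas            # one local intercept and slope per grid cell

rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, (400, 2))
popden = rng.lognormal(0, 1, 400)
effect = np.where(coords[:, 0] < 5, 0.6, -0.6)           # regional sign change
burned = effect * np.log1p(popden) + rng.normal(0, 0.1, 400)
local = gwr_coefficients(coords, np.log1p(popden)[:, None], burned, bandwidth=2.0)
print("local slope range:", local[:, 1].min(), "to", local[:, 1].max())
```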

Relevance:

20.00%

Publisher:

Abstract:

Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both provide superior descriptions of the data compared with models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
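
For the Markov switching side of the comparison, statsmodels ships a two-regime switching regression that can be fitted to a return series in a few lines. The sketch below uses a synthetic series with a calm and a volatile regime rather than the paper's real-estate indices, and switches both the mean and the variance.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
state = np.zeros(600, dtype=int)
for t in range(1, 600):                                   # persistent hidden regimes
    stay = 0.97 if state[t - 1] == 0 else 0.92
    state[t] = state[t - 1] if rng.random() < stay else 1 - state[t - 1]
returns = np.where(state == 0,
                   rng.normal(0.005, 0.01, 600),          # calm regime
                   rng.normal(-0.010, 0.04, 600))         # volatile regime

model = sm.tsa.MarkovRegression(returns, k_regimes=2, switching_variance=True)
result = model.fit()
print(result.summary())
print(result.smoothed_marginal_probabilities[:10])        # P(regime) per observation
```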

Relevance:

20.00%

Publisher:

Abstract:

This paper combines and generalizes a number of recent time series models of daily exchange rate series by using a SETAR model which also allows the variance equation of a GARCH specification for the error terms to be drawn from more than one regime. An application of the model to the French Franc/Deutschmark exchange rate demonstrates that out-of-sample forecasts of exchange rate volatility are also improved when the restriction that the data are drawn from a single regime is removed. This result highlights the importance of considering both types of regime shift (i.e. thresholds in variance as well as in mean) when analysing financial time series.
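
The idea of letting both the mean and the error variance switch at a threshold can be seen in a stripped-down maximum-likelihood sketch: an AR(1) coefficient and an error standard deviation that each depend on the sign of the lagged observation. Constant regime variances stand in for the paper's GARCH specification, and the series is simulated rather than the French Franc/Deutschmark rate.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
y = np.zeros(800)
for t in range(1, 800):                                   # two regimes in mean and variance
    if y[t - 1] <= 0:
        y[t] = 0.5 * y[t - 1] + rng.normal(0, 0.5)
    else:
        y[t] = -0.3 * y[t - 1] + rng.normal(0, 1.5)

def neg_loglik(params, y):
    phi_lo, phi_hi, log_s_lo, log_s_hi = params
    lag, cur = y[:-1], y[1:]
    low = lag <= 0
    mu = np.where(low, phi_lo * lag, phi_hi * lag)
    sd = np.where(low, np.exp(log_s_lo), np.exp(log_s_hi))
    return 0.5 * np.sum(np.log(2 * np.pi * sd ** 2) + ((cur - mu) / sd) ** 2)

res = minimize(neg_loglik, x0=[0.0, 0.0, 0.0, 0.0], args=(y,), method="BFGS")
print("AR coefficients:", res.x[0], res.x[1],
      "regime std devs:", np.exp(res.x[2]), np.exp(res.x[3]))
```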

Relevance:

20.00%

Publisher:

Abstract:

Taste and smell detection threshold measurements are frequently time-consuming, especially when the method involves reversing the concentrations presented in order to replicate and improve the accuracy of results. These multiple replications are likely to cause sensory and cognitive fatigue, which may be more pronounced in elderly populations. A new rapid detection threshold methodology was developed that quickly locates the likely position of each individual's sensory detection threshold and then refines this by presenting multiple concentrations around that point. This study evaluates the reliability and validity of the method. Findings indicate that this new rapid detection threshold methodology was appropriate for identifying differences in sensory detection thresholds between different populations and has the benefit of providing a shorter assessment of detection thresholds. The results indicated that this method is appropriate for determining individual as well as group detection thresholds.
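
One way to picture a two-stage rapid procedure of this kind is sketched below: a coarse ascending series first locates the approximate detection point, and a finer set of concentrations around it refines the estimate. The staircase details and the simulated panellist are illustrative, not the published protocol.

```python
import numpy as np

def rapid_threshold(detects, concentrations, refine_steps=5):
    """Two-stage sketch: coarse ascending pass, then a finer bracketing grid.
    `detects(c)` returns True if the panellist reports detecting level c."""
    coarse = next((c for c in concentrations if detects(c)), concentrations[-1])
    fine = np.linspace(coarse / 2.0, coarse * 1.5, refine_steps)
    detected = [c for c in fine if detects(c)]
    return min(detected) if detected else coarse

rng = np.random.default_rng(6)
panellist = lambda c: c + rng.normal(0, 0.3) > 3.2        # true threshold ~3.2 units
print(rapid_threshold(panellist, [0.5, 1, 2, 4, 8, 16]))
```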

Relevance:

20.00%

Publisher:

Abstract:

An efficient two-level model identification method aiming at maximising a model's generalisation capability is proposed for a large class of linear-in-the-parameters models from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularisation parameters in the elastic net are optimised using a particle swarm optimisation (PSO) algorithm at the upper level by minimising the leave-one-out (LOO) mean square error (LOOMSE). There are two original contributions. Firstly, an elastic net cost function is defined and applied based on orthogonal decomposition, which facilitates automatic model structure selection without the need for a predetermined error tolerance to terminate the forward selection process. Secondly, it is shown that the LOOMSE for the resultant ENOFR models can be computed analytically without actually splitting the data set, and the associated computational cost is small owing to the ENOFR procedure. Consequently, a fully automated procedure is achieved without resort to any separate validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
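
The two-level structure can be mocked up with off-the-shelf pieces: an inner elastic-net fit and an outer search over the two regularisation parameters scored by leave-one-out MSE. In this sketch a plain grid search stands in for the paper's PSO, and scikit-learn's ElasticNet stands in for the orthogonal-forward-regression formulation with its analytic LOOMSE.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]                          # sparse true model
y = X @ beta_true + rng.normal(0, 0.5, 60)

best = None
for alpha in [0.01, 0.05, 0.1, 0.5, 1.0]:                 # outer search over the two
    for l1_ratio in [0.1, 0.5, 0.9]:                      # regularisation parameters
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=10000)
        loo_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                                   scoring="neg_mean_squared_error").mean()
        if best is None or loo_mse < best[0]:
            best = (loo_mse, alpha, l1_ratio)
print("LOO-MSE:", best[0], "alpha:", best[1], "l1_ratio:", best[2])
```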

Relevance:

20.00%

Publisher:

Abstract:

An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each of which is associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike the LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and regularization parameters within a single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known approaches of the support vector machine and the least absolute shrinkage and selection operator, as well as the LROLS algorithm.
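
A much-simplified version of the forward-selection idea is sketched below: Gaussian RBF candidate centres are taken from the data, the analytic leave-one-out formula for linear-in-the-parameters models (e_i / (1 - h_ii)) scores each candidate without refitting, and selection stops when the LOOMSE stops improving. A single shared kernel width is used here, whereas the paper tunes a width and a regularization parameter per kernel inside the same loop.

```python
import numpy as np

def loo_mse(Phi, y):
    """Analytic LOO mean square error for a least-squares linear-in-parameters model."""
    H = Phi @ np.linalg.pinv(Phi)                 # hat matrix
    e = y - H @ y
    return np.mean((e / (1.0 - np.diag(H))) ** 2)

def forward_rbf_selection(X, y, width, max_terms=10):
    dist2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-dist2 / (2.0 * width ** 2))       # candidate RBF columns
    selected, cols = [], [np.ones((len(y), 1))]
    for _ in range(max_terms):
        scores = [(loo_mse(np.hstack(cols + [K[:, [j]]]), y), j)
                  for j in range(len(y)) if j not in selected]
        best_score, best_j = min(scores)
        if selected and best_score >= loo_mse(np.hstack(cols), y):
            break                                  # LOOMSE no longer improves
        selected.append(best_j)
        cols.append(K[:, [best_j]])
    return selected

rng = np.random.default_rng(8)
X = rng.uniform(-3, 3, (80, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 80)
print("selected centres:", forward_rbf_selection(X, y, width=1.0))
```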

Relevance:

20.00%

Publisher:

Abstract:

A new class of parameter estimation algorithms is introduced for Gaussian process regression (GPR) models. It is shown that the integration of the GPR model with two probability distance measures, (i) the integrated square error and (ii) the Kullback–Leibler (K–L) divergence, is analytically tractable. An efficient coordinate descent algorithm is proposed to iteratively estimate the kernel width using golden section search, which includes a fast gradient descent algorithm as an inner loop to estimate the noise variance. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
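
The alternating one-dimensional searches can be illustrated with a small stand-alone sketch: golden-section search over the kernel width and then over the noise variance, repeated a few times. The negative log marginal likelihood is used here as a stand-in objective for the paper's probability-distance measures, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(9)
X = rng.uniform(-3, 3, (40, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 40)

def neg_log_marginal(width, noise, X, y):
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-d2 / (2.0 * width ** 2)) + (noise + 1e-8) * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))

width, noise = 1.0, 0.1
for _ in range(5):                                # coordinate descent over two scalars
    log_w = minimize_scalar(lambda w: neg_log_marginal(np.exp(w), noise, X, y),
                            bracket=(-2, 2), method="golden").x
    width = np.exp(log_w)
    log_n = minimize_scalar(lambda s: neg_log_marginal(width, np.exp(s), X, y),
                            bracket=(-6, 0), method="golden").x
    noise = np.exp(log_n)
print("kernel width:", width, "noise variance:", noise)
```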

Relevance:

20.00%

Publisher:

Abstract:

Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising solution for reducing the model's dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods independently estimate each individual regression problem based on tensor decomposition, which allows the simultaneous projection of an input tensor onto more than one direction along each mode. In practice, multi-dimensional data are collected under the same or very similar conditions, so that the data share some common latent components but can also have their own independent parameters for each regression task. Therefore, it is beneficial to analyse regression parameters across all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies not only the common components of parameters across all the regression tasks, but also the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, with lower memory cost than tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
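
The parameter saving that motivates tensor regression can be seen already in the simplest structured case: a rank-one coefficient matrix for matrix-valued covariates, estimated by alternating least squares. The paper's Tucker-based model with shared and task-specific factors is considerably richer; this sketch, on synthetic data, only illustrates how p*q coefficients collapse to p+q parameters.

```python
import numpy as np

rng = np.random.default_rng(10)
p, q, n = 8, 6, 200
u_true, v_true = rng.normal(size=p), rng.normal(size=q)
X = rng.normal(size=(n, p, q))                            # matrix covariates
y = np.einsum("ipq,p,q->i", X, u_true, v_true) + rng.normal(0, 0.1, n)

u, v = np.ones(p), np.ones(q)
for _ in range(30):                                        # alternating least squares
    Zu = np.einsum("ipq,q->ip", X, v)                      # design for u given v
    u, *_ = np.linalg.lstsq(Zu, y, rcond=None)
    Zv = np.einsum("ipq,p->iq", X, u)                      # design for v given u
    v, *_ = np.linalg.lstsq(Zv, y, rcond=None)

B_hat, B_true = np.outer(u, v), np.outer(u_true, v_true)  # 48 entries, 14 parameters
print("relative error:", np.linalg.norm(B_hat - B_true) / np.linalg.norm(B_true))
```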

Relevance:

20.00%

Publisher:

Abstract:

Objectives: This study provides the first large scale analysis of the age at which adolescents in medieval England entered and completed the pubertal growth spurt. This new method has implications for expanding our knowledge of adolescent maturation across different time periods and regions. Methods: In total, 994 adolescent skeletons (10-25 years) from four urban sites in medieval England (AD 900-1550) were analysed for evidence of pubertal stage using new osteological techniques developed from the clinical literature (i.e. hamate hook development, CVM, canine mineralisation, iliac crest ossification, radial fusion). Results: Adolescents began puberty at a similar age to modern children at around 10-12 years, but the onset of menarche in girls was delayed by up to 3 years, occurring around 15 for most in the study sample and 17 years for females living in London. Modern European males usually complete their maturation by 16-18 years; medieval males took longer with the deceleration stage of the growth spurt extending as late as 21 years. Conclusions: This research provides the first attempt to directly assess the age of pubertal development in adolescents during the tenth to seventeenth centuries. Poor diet, infections, and physical exertion may have contributed to delayed development in the medieval adolescents, particularly for those living in the city of London. This study sheds new light on the nature of adolescence in the medieval period, highlighting an extended period of physical and social transition.

Relevance:

20.00%

Publisher:

Abstract:

Forecasting wind power is an important part of a successful integration of wind power into the power grid. Forecasts with lead times longer than 6 h are generally made by using statistical methods to post-process forecasts from numerical weather prediction systems. Two major problems that complicate this approach are the non-linear relationship between wind speed and power production and the limited range of power production between zero and nominal power of the turbine. In practice, these problems are often tackled by using non-linear non-parametric regression models. However, such an approach ignores valuable and readily available information: the power curve of the turbine's manufacturer. Much of the non-linearity can be directly accounted for by transforming the observed power production into wind speed via the inverse power curve so that simpler linear regression models can be used. Furthermore, the fact that the transformed power production has a limited range can be taken care of by employing censored regression models. In this study, we evaluate quantile forecasts from a range of methods: (i) using parametric and non-parametric models, (ii) with and without the proposed inverse power curve transformation and (iii) with and without censoring. The results show that with our inverse (power-to-wind) transformation, simpler linear regression models with censoring perform equally or better than non-linear models with or without the frequently used wind-to-power transformation.
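
The power-to-wind transformation plus censoring can be sketched end to end: observed power is pushed through the inverse of an assumed cubic power curve, and a censored (Tobit-style) linear regression on the wind-speed forecast is fitted by maximum likelihood, with observations at rated power treated as upper-censored. The power curve, the synthetic data, and the one-sided censoring are all simplifications of the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rated_speed, rated_power = 12.0, 1.0
power_curve = lambda v: np.clip((v / rated_speed) ** 3, 0.0, 1.0)      # assumed curve
inv_power_curve = lambda p: rated_speed * np.clip(p, 0.0, 1.0) ** (1.0 / 3.0)

rng = np.random.default_rng(11)
forecast = rng.uniform(3, 15, 500)                        # NWP wind-speed forecast
true_wind = 0.9 * forecast + rng.normal(0, 1.0, 500)      # actual hub-height wind
power_obs = power_curve(true_wind)

w = inv_power_curve(power_obs)                            # transformed "wind speed"
censored = power_obs >= rated_power                       # only know wind >= rated speed

def tobit_nll(theta):
    a, b, log_s = theta
    mu, s = a + b * forecast, np.exp(log_s)
    ll_obs = norm.logpdf(w[~censored], mu[~censored], s)
    ll_cen = norm.logsf(rated_speed, mu[censored], s)     # P(wind >= rated speed)
    return -(ll_obs.sum() + ll_cen.sum())

res = minimize(tobit_nll, x0=[0.0, 1.0, 0.0], method="BFGS")
print("intercept:", res.x[0], "slope:", res.x[1], "sigma:", np.exp(res.x[2]))
```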

Relevance:

20.00%

Publisher:

Abstract:

This work is an assessment of the frequency of extreme values (EVs) of daily rainfall in the city of Sao Paulo, Brazil, over the period 1933-2005, based on the peaks-over-threshold (POT) and Generalized Pareto Distribution (GPD) approach. Usually, a GPD model is fitted to a sample of POT values selected with a constant threshold. However, in this work we use time-dependent thresholds, composed of relatively large p quantiles (for example p of 0.97) of daily rainfall amounts computed from all available data. Samples of POT values were extracted with several values of p. Four different GPD models (GPD-1, GPD-2, GPD-3, and GPD-4) were fitted to each one of these samples by the maximum likelihood (ML) method. The shape parameter was assumed constant for the four models, but time-varying covariates were incorporated into the scale parameter of GPD-2, GPD-3, and GPD-4, describing an annual cycle in GPD-2, a linear trend in GPD-3, and both annual cycle and linear trend in GPD-4. GPD-1, with constant scale and shape parameters, is the simplest model. For identification of the best model among the four we used the rescaled Akaike Information Criterion (AIC) with second-order bias correction. This criterion isolates GPD-3 as the best model, i.e. the one with a positive linear trend in the scale parameter. The slope of this trend is significant compared to the null hypothesis of no trend, at about the 98% confidence level. The non-parametric Mann-Kendall test also showed the presence of a positive trend in the annual frequency of excesses over high thresholds, with the p-value being virtually zero. Therefore, there is strong evidence that high quantiles of daily rainfall in the city of Sao Paulo have been increasing in magnitude and frequency over time. For example, 0.99 quantiles of daily rainfall amount have increased by about 40 mm between 1933 and 2005. Copyright (C) 2008 Royal Meteorological Society
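
The peaks-over-threshold step with a time-dependent threshold can be sketched with scipy: a high quantile computed per calendar month serves as the threshold, and the excesses are fitted with a Generalized Pareto Distribution by maximum likelihood. Synthetic daily rainfall stands in for the Sao Paulo record, and the stationary fit below corresponds to GPD-1; the paper's GPD-2 to GPD-4 add covariates to the scale parameter.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(12)
days = np.arange(25 * 365)
month = (days // 30) % 12
seasonal = 6.0 + 4.0 * np.sin(2 * np.pi * days / 365.0)   # wetter and drier seasons
rain = rng.gamma(shape=0.4, scale=seasonal)                # synthetic daily totals (mm)

p = 0.97
threshold = np.zeros_like(rain)
for m in range(12):                                        # month-specific threshold
    threshold[month == m] = np.quantile(rain[month == m], p)

exceed = rain > threshold
excess = rain[exceed] - threshold[exceed]
shape, loc, scale = genpareto.fit(excess, floc=0.0)        # ML fit of the GPD
print("shape:", shape, "scale:", scale, "exceedances:", excess.size)
```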