68 results for Recursive logit
Abstract:
Background: Poor diet quality is a major public health concern that has prompted governments to introduce a range of measures to promote healthy eating. For these measures to be effective, they should target segments of the population with messages relevant to their needs, aspirations and circumstances. The present study investigates the extent to which attitudes and constraints influence healthy eating, as well as how these vary by demographic characteristics of the UK population. It further considers how such information may be used in segmented diet and health policy messages. Methods: A survey of 250 UK adults elicited information on conformity to dietary guidelines, attitudes towards healthy eating, constraints to healthy eating and demographic characteristics. Ordered logit regressions were estimated to determine the importance of attitudes and constraints in determining how closely respondents follow healthy eating guidelines. Further regressions explored the demographic characteristics associated with the attitudinal and constraint variables. Results: People who attach high importance to their own health and appearance eat more healthily than those who do not. Risk-averse people and those able to resist temptation also eat more healthily. Shortage of time is considered an important barrier to healthy eating, although the cost of a healthy diet is not. These variables are associated with a number of demographic characteristics of the population; for example, young adults are more motivated to eat healthily by concerns over their appearance than their health. Conclusions: The approach employed in the present study could be used to inform future healthy eating campaigns. For example, messages to encourage the young to eat more healthily could focus on the impact of diets on their appearance rather than health.
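The ordered logit used above maps a linear index into probabilities over ordered response categories (here, how closely respondents follow the guidelines). A minimal sketch of the prediction step, with illustrative cutpoints rather than the study's estimates:

```python
import math

def ordered_logit_probs(xb, cutpoints):
    """Category probabilities from an ordered logit:
    P(y <= j) = 1 / (1 + exp(-(tau_j - x'beta)))."""
    cdf = [1.0 / (1.0 + math.exp(-(tau - xb))) for tau in cutpoints] + [1.0]
    return [cdf[0]] + [cdf[j] - cdf[j - 1] for j in range(1, len(cdf))]
```

With a single cutpoint at zero and a zero linear index, the two categories split evenly; with more cutpoints the probabilities still sum to one by construction.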
Abstract:
Expectations of future market conditions are generally acknowledged to be crucial for the development decision and hence for shaping the built environment. This empirical study of the Central London office market from 1987 to 2009 tests for evidence of adaptive and naive expectations. Applying VAR models and a recursive OLS regression with one-step forecasts, we find evidence of adaptive and naive, rather than rational, expectations of developers. Although the magnitude of the errors and the length of time lags vary over time and development cycles, the results confirm that developers' decisions are explained to a large extent by contemporaneous and past conditions in both London submarkets. The corollary of this finding is that developers may be able to generate excess profits by exploiting market inefficiencies, but this may be hindered in practice by the long periods necessary for planning and construction of the asset. More generally, the results of this study suggest that real estate cycles are largely generated endogenously rather than being the result of unexpected exogenous shocks.
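Recursive OLS with one-step forecasts, as applied here, refits the regression on an expanding window of data up to time t and predicts the observation at t+1. A minimal sketch (variable names are illustrative, not from the study):

```python
import numpy as np

def recursive_ols_forecasts(X, y, start):
    """Expanding-window OLS: refit on observations [0, t),
    then forecast observation t, for each t >= start."""
    preds = []
    for t in range(start, len(y)):
        beta, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
        preds.append(X[t] @ beta)
    return np.array(preds)
```

If the data are exactly linear in the regressors, every one-step forecast is exact; with real data the sequence of forecast errors is what the adaptive/naive expectations tests examine.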
Abstract:
This study investigates whether commercial offices designed by signature architects in the United States achieve rental premiums compared to commercial offices designed by non-signature architects. Focusing on buildings designed by winners of the Pritzker Prize and the Gold Medal awarded by the American Institute of Architects, we create a sample of commercial office buildings designed by signature architects drawing on CoStar's national database. We use a combination of a hedonic regression model and a logit model to estimate the various rent determinants. While the first stage measures the typical rental price differential above the typical building in a particular sub-market over a specific timeframe, the second stage identifies a potential price differential over a set of buildings closely matched on important characteristics (such as age, size and location). We find that in both stages offices designed by signature architects exhibit a premium. However, these results are preliminary. The premium could indeed be an effect of the name of the architect, but other factors, such as micro-market conditions, might be the cause. Further tests are needed to confirm the validity of our results.
Abstract:
Logistic models are studied as a tool to convert dynamical forecast information (deterministic and ensemble) into probability forecasts. A logistic model is obtained by setting the logarithmic odds ratio equal to a linear combination of the inputs. As with any statistical model, logistic models will suffer from overfitting if the number of inputs is comparable to the number of forecast instances. Computational approaches to avoid overfitting by regularization are discussed, and efficient techniques for model assessment and selection are presented. A logit version of the lasso (originally a linear regression technique) is discussed. In lasso models, less important inputs are identified and the corresponding coefficients are set to zero, providing an efficient and automatic model reduction procedure. For the same reason, lasso models are particularly appealing for diagnostic purposes.
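Setting the log odds ratio equal to a linear combination of the inputs, as described above, gives the forecast probability through the logistic link. A minimal sketch:

```python
import math

def logistic_forecast(inputs, coeffs, intercept=0.0):
    """Probability forecast from a logistic model:
    log(p / (1 - p)) = intercept + sum(c * x)."""
    log_odds = intercept + sum(c * x for c, x in zip(coeffs, inputs))
    return 1.0 / (1.0 + math.exp(-log_odds))
```

A zero log-odds value maps to probability 0.5; the lasso variant discussed in the abstract would additionally shrink some of the `coeffs` exactly to zero during estimation.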
Abstract:
This paper describes a novel adaptive noise cancellation system with a fast tunable radial basis function (RBF) network. The weight coefficients of the RBF network are adapted by the multi-innovation recursive least square (MRLS) algorithm. If the RBF network performs poorly despite the weight adaptation, an insignificant node with little contribution to the overall performance is replaced with a new node without changing the model size. Otherwise, the RBF network structure remains unchanged and only the weight vector is adapted. The simulation results show that the proposed approach can effectively cancel the noise in both stationary and nonstationary ANC systems.
Abstract:
In this paper, we propose a novel online modeling algorithm for nonlinear and nonstationary systems using a radial basis function (RBF) neural network with a fixed number of hidden nodes. Each of the RBF basis functions has a tunable center vector and an adjustable diagonal covariance matrix. A multi-innovation recursive least square (MRLS) algorithm is applied to update the weights of RBF online, while the modeling performance is monitored. When the modeling residual of the RBF network becomes large in spite of the weight adaptation, a node identified as insignificant is replaced with a new node, for which the tunable center vector and diagonal covariance matrix are optimized using the quantum particle swarm optimization (QPSO) algorithm. The major contribution is to combine the MRLS weight adaptation and QPSO node structure optimization in an innovative way so that it can track well the local characteristic in the nonstationary system with a very sparse model. Simulation results show that the proposed algorithm has significantly better performance than existing approaches.
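The multi-innovation RLS (MRLS) algorithm used in the two abstracts above generalizes the standard recursive least squares update by processing the most recent p innovations jointly. A sketch of the single-innovation RLS step it builds on (a generic textbook form, not the authors' code):

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One recursive least squares step with forgetting factor lam:
    update weights w and inverse-correlation matrix P given
    regressor x and target d."""
    x = x.reshape(-1, 1)
    k = P @ x / (lam + x.T @ P @ x)   # gain vector
    e = d - (w.T @ x).item()          # a priori error
    w = w + k * e
    P = (P - k @ x.T @ P) / lam
    return w, P

# Track a fixed linear map y = 2*x0 - x1 from streaming data
rng = np.random.default_rng(0)
w, P = np.zeros((2, 1)), np.eye(2) * 1000.0
for _ in range(200):
    x = rng.standard_normal(2)
    w, P = rls_update(w, P, x, 2.0 * x[0] - x[1])
```

The forgetting factor `lam < 1` discounts old data, which is what lets the recursion track the local characteristics of a nonstationary system.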
Abstract:
Attribute non-attendance in choice experiments affects WTP estimates and therefore the validity of the method. A recent strand of literature uses attenuated estimates of marginal utilities of ignored attributes. Following this approach, we propose a generalisation of the mixed logit model whereby the distribution of marginal utility coefficients of a stated non-attender has a potentially lower mean and lower variance than those of a stated attender. Model comparison shows that our shrinkage approach fits the data better and produces more reliable WTP estimates. We further find that while reliability of stated attribute non-attendance increases in successive choice experiments, it does not increase when respondents report having ignored the same attribute twice.
Abstract:
We develop a new sparse kernel density estimator using a forward constrained regression framework, within which the nonnegative and summing-to-unity constraints of the mixing weights can easily be satisfied. Our main contribution is to derive a recursive algorithm to select significant kernels one at a time based on the minimum integrated square error (MISE) criterion for both the selection of kernels and the estimation of mixing weights. The proposed approach is simple to implement and the associated computational cost is very low. Specifically, the complexity of our algorithm is in the order of the number of training data N, which is much lower than the order of N² offered by the best existing sparse kernel density estimators. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with comparable accuracy to those of the classical Parzen window estimate and other existing sparse kernel density estimators.
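For reference, the classical Parzen window estimate against which the sparse estimator is benchmarked places one Gaussian kernel on every training point; a minimal sketch:

```python
import math

def parzen_density(x, data, h):
    """Classical Parzen window (Gaussian kernel) density estimate:
    the full-sample baseline that sparse estimators approximate
    with far fewer kernels."""
    norm = 1.0 / (len(data) * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)
```

Evaluating this baseline costs one kernel per training point per query, which is the O(N) per-query (and O(N²) in construction-style comparisons) burden that motivates sparse alternatives.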
Abstract:
Mobile-to-mobile (M-to-M) communications are expected to play a crucial role in future wireless systems and networks. In this paper, we consider M-to-M multiple-input multiple-output (MIMO) maximal ratio combining system and assess its performance in spatially correlated channels. The analysis assumes double-correlated Rayleigh-and-Lognormal fading channels and is performed in terms of average symbol error probability, outage probability, and ergodic capacity. To obtain the receive and transmit spatial correlation functions needed for the performance analysis, we used a three-dimensional (3D) M-to-M MIMO channel model, which takes into account the effects of fast fading and shadowing. The expressions for the considered metrics are derived as a function of the average signal-to-noise ratio per receive antenna in closed-form and are further approximated using the recursive adaptive Simpson quadrature method. Numerical results are provided to show the effects of system parameters, such as distance between antenna elements, maximum elevation angle of scatterers, orientation angle of antenna array in the x–y plane, angle between the x–y plane and the antenna array orientation, and degree of scattering in the x–y plane, on the system performance. Copyright © 2011 John Wiley & Sons, Ltd.
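The recursive adaptive Simpson quadrature mentioned above bisects an interval until the two-panel Simpson estimate agrees with the one-panel estimate to within tolerance; a generic sketch of the method (not the paper's implementation):

```python
def _simpson(f, a, b):
    """One-panel Simpson rule on [a, b]."""
    c = (a + b) / 2.0
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

def adaptive_simpson(f, a, b, tol=1e-8):
    """Recursive adaptive Simpson quadrature: subdivide until the
    refined estimate agrees with the coarse one, then apply the
    standard Richardson correction."""
    def recurse(a, b, whole, tol):
        c = (a + b) / 2.0
        left, right = _simpson(f, a, c), _simpson(f, c, b)
        if abs(left + right - whole) <= 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return recurse(a, c, left, tol / 2.0) + recurse(c, b, right, tol / 2.0)
    return recurse(a, b, _simpson(f, a, b), tol)
```

Because Simpson's rule is exact for cubics, smooth integrands like the SNR expressions in the abstract need only a few levels of subdivision.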
Abstract:
Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions on the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables. © 2013, Society for Industrial and Applied Mathematics
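The EnKF's linear transformation of Monte Carlo samples, which the proposed transport method replaces, can be sketched as a stochastic analysis step (a generic textbook form, not the paper's method):

```python
import numpy as np

def enkf_update(ensemble, obs, H, R, rng):
    """Stochastic EnKF analysis step: shift each ensemble member
    toward a perturbed observation using the sample Kalman gain."""
    X = ensemble                          # shape (n_state, n_members)
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True) # anomalies
    Pf = A @ A.T / (n - 1)                # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
    y_pert = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), R, size=n).T  # perturbed observations
    return X + K @ (y_pert - H @ X)
```

The update is an affine map of the prior samples based on a linear regression ansatz, which is the source of the inconsistency for non-Gaussian priors noted in the abstract.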
Abstract:
The use of Bayesian inference for time-frequency representations has, thus far, been limited to offline analysis of signals, using a smoothing-spline-based model of the time-frequency plane. In this paper we introduce a new framework that allows the routine use of Bayesian inference for online estimation of the time-varying spectral density of a locally stationary Gaussian process. The core of our approach is the use of a likelihood inspired by a local Whittle approximation. This choice, along with the use of a recursive algorithm for non-parametric estimation of the local spectral density, permits the use of a particle filter for estimating the time-varying spectral density online. We provide demonstrations of the algorithm through tracking chirps and the analysis of musical data.
Abstract:
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
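The quadratic and logarithmic scoring rules referred to above are, for a binary event with forecast probability p and outcome y in {0, 1}:

```python
import math

def quadratic_score(p, outcome):
    """Brier (quadratic) score for a binary probability forecast;
    lower is better."""
    return (p - outcome) ** 2

def log_score(p, outcome):
    """Logarithmic score (negative log-likelihood of the outcome);
    lower is better."""
    return -math.log(p if outcome == 1 else 1.0 - p)
```

Forecast encompassing tests of the kind proposed in the abstract compare the average of such scores for one forecast against a combination of competing forecasts.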
Abstract:
A new sparse kernel density estimator is introduced. Our main contribution is to develop a recursive algorithm for the selection of significant kernels one at a time, using the minimum integrated square error (MISE) criterion for both kernel selection and the estimation of mixing weights. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with competitive accuracy to existing kernel density estimators.
Abstract:
Quantile forecasts are central to risk management decisions because of the widespread use of Value-at-Risk. A quantile forecast is the product of two factors: the model used to forecast volatility, and the method of computing quantiles from the volatility forecasts. In this paper we calculate and evaluate quantile forecasts of the daily exchange rate returns of five currencies. The forecasting models that have been used in recent analyses of the predictability of daily realized volatility permit a comparison of the predictive power of different measures of intraday variation and intraday returns in forecasting exchange rate variability. The methods of computing quantile forecasts include making distributional assumptions for future daily returns as well as using the empirical distribution of predicted standardized returns with both rolling and recursive samples. Our main findings are that the Heterogeneous Autoregressive model provides more accurate volatility and quantile forecasts for currencies which experience shifts in volatility, such as the Canadian dollar, and that the use of the empirical distribution to calculate quantiles can improve forecasts when there are shifts in volatility.
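The empirical-distribution method described above scales a quantile of past standardized returns by the volatility forecast; a minimal sketch (names illustrative, not from the paper):

```python
import numpy as np

def empirical_var_forecast(past_returns, past_vols, vol_forecast, alpha=0.01):
    """Quantile (Value-at-Risk) forecast: take the empirical
    alpha-quantile of past standardized returns and scale it by
    the volatility forecast for the next period."""
    z = np.asarray(past_returns) / np.asarray(past_vols)  # standardized returns
    return vol_forecast * np.quantile(z, alpha)
```

A rolling sample would pass only the most recent window of returns and volatilities, while a recursive sample passes the full expanding history.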
Abstract:
Using a choice experiment survey, this study examines the UK public's willingness to pay to conserve insect pollinators in relation to the levels of two pollination service benefits: maintaining local produce supplies and the aesthetic benefits of diverse wildflower assemblages. Willingness to pay was estimated using a Bayesian mixed logit with two contrasting controls for attribute non-attendance, exclusion and shrinkage. The results suggest that the UK public have an extremely strong preference to avoid a status quo scenario where pollinator populations and pollination services decline. Total willingness to pay was high and did not significantly vary between the two pollination service outputs, producing a conservative total of £379M over a sample of the tax-paying population of the UK, equivalent to £13.4 per UK taxpayer. Using a basic production function approach, the marginal value of pollination services to these attributes is also extrapolated. The study discusses the implications of these findings and directions for related future research into the non-market value of pollination and other ecosystem services.