950 results for accuracy of estimation


Relevance: 100.00%

Abstract:

Existing empirical evidence has frequently observed that professional forecasters are conservative and display herding behaviour. Whilst a large number of papers have considered equities as well as macroeconomic series, few have considered the accuracy of forecasts in alternative asset classes such as real estate. We consider the accuracy of forecasts for the UK commercial real estate market over the period 1999-2011. The results illustrate that forecasters tend to underestimate growth rates during strong market conditions and overestimate them when the market is performing poorly. This conservatism not only results in smoothed estimates but also implies that forecasters display herding behaviour. There is also a marked difference in the relative accuracy of capital and total returns versus rental figures. Whilst rental growth forecasts are relatively accurate, considerable inaccuracy is observed with respect to capital values and total returns.

Relevance: 100.00%

Abstract:

4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis: the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised 'inner-loop' objective function which, upon convergence, updates the solution of the non-linear 'outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter when iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to input data, and can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using these bounds, we show that both formulations' sensitivities are related to the error variance balance, the assimilation window length and the correlation length-scales. We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
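
To make the role of the Hessian concrete, the sketch below builds the sc4DVAR Hessian for a small linear toy model and reports its condition number. All dimensions, error variances and the cyclic-shift model are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch: condition number of the sc4DVAR Hessian for a linear
# toy model with partial observations. Illustrative values throughout.
import numpy as np

n, p, n_obs = 10, 5, 4                  # state size, observed components, obs times
sigma_b, sigma_o = 1.0, 0.5             # background / observation error std devs

M = np.roll(np.eye(n), 1, axis=0)       # linear model: cyclic shift per step
H = np.eye(n)[:p]                       # observe only the first p components
B_inv = np.eye(n) / sigma_b**2          # inverse background error covariance
R_inv = np.eye(p) / sigma_o**2          # inverse observation error covariance

# Hessian of J(x0) = ||x0 - xb||^2_{B^-1} + sum_i ||H M^i x0 - y_i||^2_{R^-1}
hess = B_inv.copy()
Mi = np.eye(n)
for _ in range(n_obs):
    hess += Mi.T @ H.T @ R_inv @ H @ Mi
    Mi = M @ Mi                         # propagate the linear model one step

print("condition number of the sc4DVAR Hessian:", np.linalg.cond(hess))
```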

Relevance: 100.00%

Abstract:

In this paper we present a novel approach for multispectral image contextual classification by combining iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach to combine two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, to regularize the solution in the presence of noisy data. Hence, the classification problem is stated according to a Maximum a Posteriori (MAP) framework. In order to approximate the MAP solution we apply several combinatorial optimization methods using multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time in comparison to Simulated Annealing, which is often infeasible in real image processing applications. Markov Random Field model parameters are estimated by the Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustment of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's Kappa coefficient), allowing a robust and accurate statistical analysis. The obtained results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology.
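
As a concrete illustration of MAP estimation with a Potts prior, here is a minimal Iterated Conditional Modes (ICM) sketch, one of the standard sub-optimal combinatorial optimizers in this family. The single-band Gaussian data term is a simplifying stand-in for the paper's GMRF likelihood, and all parameters are illustrative.

```python
# Minimal ICM sketch for MAP labelling with a Potts prior.
# Not the paper's full method: single-band Gaussian likelihood only.
import numpy as np

def icm_potts(image, means, sigma, beta, n_iter=5):
    """image: (H, W) grayscale; means: per-class means; beta: Potts weight."""
    # Initialize labels with the pixel-wise maximum-likelihood estimate.
    labels = np.argmin((image[..., None] - means) ** 2, axis=-1)
    H, W = image.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                # Data term: negative Gaussian log-likelihood per class.
                data = (image[i, j] - means) ** 2 / (2 * sigma ** 2)
                # Prior term: Potts penalty for disagreeing 4-neighbours.
                prior = np.zeros_like(means)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        prior += beta * (np.arange(len(means)) != labels[ni, nj])
                labels[i, j] = np.argmin(data + prior)
    return labels
```

A call like `icm_potts(noisy_img, np.array([0.2, 0.5, 0.8]), sigma=0.1, beta=1.0)` smooths a noisy three-class maximum-likelihood labelling; a larger `beta` gives stronger spatial regularization.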

Relevance: 100.00%

Abstract:

The issue of smoothing in kriging has been addressed either by estimation or by simulation. The solution via estimation calls for postprocessing kriging estimates in order to correct the smoothing effect. Stochastic simulation provides equiprobable images that present no smoothing and reproduce the covariance model; consequently, these images reproduce both the sample histogram and the sample semivariogram. However, simulated images lack local accuracy. In this paper, a postprocessing algorithm for correcting the smoothing effect of ordinary kriging estimates is compared with sequential Gaussian simulation realizations. Based on samples drawn from exhaustive data sets, the postprocessing algorithm is shown to be superior to any individual simulation realization, albeit at the expense of providing a single deterministic estimate of the random function.
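
For readers unfamiliar with why kriging smooths, the sketch below computes ordinary kriging weights at one target location: the estimate is a weighted average of the data, which is what damps extreme values. The exponential covariance model and the example data are illustrative assumptions, not part of the paper.

```python
# Minimal sketch of ordinary kriging at a single target location.
import numpy as np

def ordinary_kriging(coords, values, target, sill=1.0, corr_len=10.0):
    n = len(coords)
    cov = lambda h: sill * np.exp(-h / corr_len)   # exponential covariance
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    # Augmented system: the extra row/column is the Lagrange multiplier
    # enforcing that the weights sum to one (unbiasedness).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(coords - target, axis=-1))
    w = np.linalg.solve(A, b)[:n]
    return w @ values, w

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
est, w = ordinary_kriging(coords, np.array([1.0, 2.0, 3.0]), np.array([0.4, 0.4]))
print("estimate:", est, "weights:", w)
```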

Relevance: 100.00%

Abstract:

The dielectric behaviour of in-situ polymerized thin polypyrrole (PPy) films on synthetic textile substrates was obtained in the 1–18 GHz region using free-space transmission and reflection methods. The PPy/para-toluene-2-sulphonic acid (pTSA) coated fabrics exhibited an absorption-dominated total shielding effectiveness (SE) of up to −7.34 dB, which corresponds to the shielding of more than 80% of the incident radiation. The permittivity response is significantly influenced by changes in ambient conditions, sample size and diffraction around the sample. Mathematical diffraction removal, time-gating tools and high-gain horns were utilized to improve the permittivity response. A narrow time-gate of 0.15 ns produced an accurate response for frequencies above 6.7 GHz, and the high-gain horns further improved the response in the 7.5–18 GHz range. Errors between calculated and measured values of reflection were most commonly within 2%, indicating good accuracy of the method.
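
As a quick sanity check of the quoted figures (a worked example, not from the paper): an SE of −7.34 dB implies a transmitted power fraction of 10^(−7.34/10) ≈ 0.18, i.e. roughly 82% of the incident radiation shielded, consistent with the "more than 80%" claim.

```python
# Transmitted power fraction implied by the -7.34 dB shielding figure.
transmitted = 10 ** (-7.34 / 10)
print(f"transmitted: {transmitted:.1%}, shielded: {1 - transmitted:.1%}")
```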


Relevance: 100.00%

Abstract:

This brief addresses the problem of estimating both the states and the unknown inputs of a class of systems that are subject to a time-varying delay in their state variables, to an unknown input, and to an additive, uncertain nonlinear disturbance. Conditions are derived for the solvability of the design matrices of a reduced-order observer for state and input estimation, and for the stability of its dynamics. To improve computational efficiency, a delay-dependent asymptotic stability condition is then developed using the linear matrix inequality formulation. A design procedure is proposed and illustrated by a numerical example.
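
The brief's delay-dependent condition is not reproduced here, but the sketch below shows the general LMI-formulation style it builds on: a classical delay-independent stability test for x'(t) = A x(t) + A_d x(t − τ), checked for feasibility with CVXPY. The matrices are illustrative.

```python
# Minimal sketch (not the brief's condition): a standard delay-independent
# Lyapunov-Krasovskii stability LMI, solved as a feasibility problem.
import cvxpy as cp
import numpy as np

A = np.array([[-2.0, 0.0], [0.0, -2.0]])
Ad = np.array([[-1.0, 0.0], [-1.0, -1.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
# LMI: [[A'P + PA + Q, P Ad], [Ad'P, -Q]] must be negative definite.
lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
               [Ad.T @ P, -Q]])
eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n),
                   Q >> eps * np.eye(n),
                   lmi << -eps * np.eye(2 * n)])
prob.solve()
print("delay-independent stability LMI feasible:", prob.status == "optimal")
```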

Relevance: 100.00%

Abstract:

During knowledge acquisition, multiple alternative potential rules all appear equally credible. This paper addresses the dearth of formal analysis about how to select between such alternatives. It presents two hypotheses about the expected impact of selecting between classification rules of differing levels of generality in the absence of other evidence about their likely relative performance on unseen data. It is argued that the accuracy on unseen data of the more general rule will tend to be closer to that of a default rule for the class than will that of the more specific rule. It is also argued that, in comparison to the more general rule, the accuracy of the more specific rule on unseen cases will tend to be closer to the accuracy obtained on training data. Experimental evidence is provided in support of these hypotheses. We argue that these hypotheses can be of use in selecting between rules in order to achieve specific knowledge acquisition objectives.

Relevance: 100.00%

Abstract:

Objectives: To describe an alternate approach for calculating sensitivity and specificity when analyzing the accuracy of screening tools, for use when the standard calculations are inappropriate. SensitivityER (ER denoting event rate) is the number of events correctly predicted, divided by the total number of events. SpecificityER is the amount of time that study participants are predicted to be event-negative, divided by the total amount of participant observed time. Variance estimates for these statistics are constructed by bootstrap resampling, taking into account event dependence.
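
A minimal sketch of the two event-rate statistics exactly as defined above; the per-patient data layout and the example numbers are assumptions for illustration, not the study's data, and the bootstrap variance step is omitted.

```python
def sensitivity_er(events_predicted, events_total):
    """Events correctly predicted / total number of events."""
    return events_predicted / events_total

def specificity_er(event_negative_time, observed_time):
    """Time predicted event-negative / total participant observed time."""
    return event_negative_time / observed_time

# Hypothetical per-patient records:
# (observed_days, days_flagged_at_risk, events, events_while_flagged)
patients = [(30, 10, 2, 2), (14, 0, 0, 0), (60, 25, 3, 1)]

events_total = sum(p[2] for p in patients)
events_hit = sum(p[3] for p in patients)
neg_time = sum(p[0] - p[1] for p in patients)
obs_time = sum(p[0] for p in patients)
print("sensitivityER:", sensitivity_er(events_hit, events_total))
print("specificityER:", specificity_er(neg_time, obs_time))
```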

Methods: Standard and alternate approaches for calculating sensitivity and specificity were applied to hospital falls risk screening tool data. In this application, the outcome of interest was a recurrent event; there were multiple applications of the screening tool and delays in its completion; and patients' follow-up durations were unequal.

Results: Application of sensitivityER and specificityER to these data not only provided a clearer description of the screening tool's overall accuracy, but also allowed examination of accuracy over time, accuracy in predicting specific event numbers, and evaluation of the added value that screening tool reapplications may have.

Conclusion: SensitivityER and specificityER provide a valuable approach to screening tool evaluation in the clinical setting.

Relevance: 100.00%

Abstract:

The effective management of our marine ecosystems requires the capability to identify, characterise and predict the distribution of benthic biological communities within the overall seascape architecture. The rapid expansion of seabed mapping studies has seen an increase in the application of automated classification techniques to efficiently map benthic habitats, and in the need for techniques to assess the confidence of model outputs. We use towed video observations and 11 seafloor complexity variables derived from multibeam echosounder (MBES) bathymetry and backscatter to predict the distribution of 8 dominant benthic biological communities in a 54 km² site off the central coast of Victoria, Australia. The same training and evaluation datasets were used to compare the accuracies of a Maximum Likelihood Classifier (MLC) and two new-generation decision-tree methods, QUEST (Quick Unbiased Efficient Statistical Tree) and CRUISE (Classification Rule with Unbiased Interaction Selection and Estimation), for predicting dominant biological communities. The QUEST classifier produced significantly better results than the CRUISE and MLC model runs, with an overall accuracy of 80% (Kappa 0.75). We found that how accuracy varies with training-set size differs between algorithms: QUEST accuracy generally increased in a linear fashion, CRUISE performed well with smaller training data sets, and MLC performed least favourably overall, generating anomalous results with changes to training size. We also demonstrate how predicted habitat maps can provide insights into habitat spatial complexity on the continental shelf. Significant variation in patch size between habitat types, and significant correlations between patch size and depth, were also observed.
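
QUEST and CRUISE have no common Python implementations, so the sketch below uses scikit-learn's CART decision tree as a stand-in simply to show the evaluation pattern used in the study: train on seafloor-complexity predictors, then score habitat predictions with overall accuracy and Cohen's Kappa. The synthetic data are illustrative, not the Victorian MBES dataset.

```python
# Decision-tree habitat classification scored with accuracy and kappa.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 11))          # 11 seafloor complexity variables
y = (X[:, 0] + X[:, 1] > 0).astype(int) + 2 * (X[:, 2] > 0.5)  # fake communities

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("kappa:   ", cohen_kappa_score(y_te, pred))
```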

Relevance: 100.00%

Abstract:

Bone is known to adapt to the prevalent strain environment, while the variation in strains, e.g., due to mechanical loading, modulates bone remodeling and modeling. Dynamic strains rather than static strains provide the primary stimulus of bone functional adaptation. The finite element method can generally be used for estimating bone strains, but it may be limited to the static analysis of bone strains since dynamic analysis requires expensive computation. Direct in vivo strain measurement, in turn, is an invasive procedure, is limited to certain superficial bone sites, and requires surgical implantation of strain gauges, thus involving risks (e.g., infection). Therefore, to overcome the difficulties associated with the finite element method and in vivo strain measurement, the flexible multibody simulation approach has recently been introduced as a feasible method to estimate dynamic bone strains during physical activity. The purpose of the present study is to further strengthen the case for using the flexible multibody approach in the analysis of dynamic bone strains. Besides discussing the background theory, magnetic resonance imaging is integrated into the flexible multibody framework so that the actual bone geometry can be better accounted for and the accuracy of prediction improved.

Relevance: 100.00%

Abstract:

This paper describes an application of camera motion estimation to indexing cricket games. Each shot is labelled with its type: glance left, glance right, left drive, right drive, left cut, right pull or straight drive. The method has the advantages that it is fast and avoids complex image segmentation. The cricket shots are classified using an incremental learning algorithm. We tested the method on over 600 shots, and the results show that the system has a classification accuracy of 74%.
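
As an illustration of the kind of global camera-motion estimate such a system starts from (the paper's exact method and its incremental classifier are not reproduced here), the sketch below recovers a frame-to-frame translation with OpenCV's phase correlation on a synthetically shifted image.

```python
# Global camera-motion (translation) estimate via phase correlation.
import cv2
import numpy as np

rng = np.random.default_rng(0)
frame1 = rng.random((240, 320)).astype(np.float32)
frame2 = np.roll(frame1, shift=(3, -5), axis=(0, 1))   # simulated pan/tilt

(dx, dy), _ = cv2.phaseCorrelate(frame1, frame2)
print(f"estimated camera motion: dx={dx:.1f}, dy={dy:.1f}")
```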

Relevance: 100.00%

Abstract:

In this note, we examine the size and power properties and the break-date estimation accuracy of the Lee and Strazicich (LS, 2003) two-break endogenous unit root test, based on two different break-date selection methods: minimising the test statistic and minimising the sum of squared residuals (SSR). Our results show that both Model A and Model C of the LS test perform better when the minimising-SSR procedure is used.
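
To illustrate the minimising-SSR selection principle in its simplest form (a single break in mean, rather than the LS test's two breaks with unit-root dynamics), the sketch below grid-searches candidate break dates on synthetic data and picks the date with the smallest residual sum of squares.

```python
# Break-date selection by minimising SSR over candidate dates.
import numpy as np

rng = np.random.default_rng(1)
T, true_break = 200, 120
y = np.r_[np.zeros(true_break), 2.0 * np.ones(T - true_break)] + rng.normal(size=T)

def ssr(break_date):
    # Regress y on a constant and a post-break level-shift dummy.
    X = np.column_stack([np.ones(T), (np.arange(T) >= break_date).astype(float)])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return resid @ resid

candidates = range(10, T - 10)            # trim the sample ends
best = min(candidates, key=ssr)
print("estimated break date:", best, "(true:", true_break, ")")
```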

Relevance: 100.00%

Abstract:

In conventional two-phase channel estimation algorithms for dual-hop multiple-input multiple-output (MIMO) relay systems, the relay-destination channel estimated in the first phase is used for the source-relay channel estimation in the second phase. For these algorithms, the mismatch between the estimated and the true relay-destination channel affects the accuracy of the source-relay channel estimation. In this paper, we investigate the impact of such channel state information (CSI) mismatch on the performance of the two-phase channel estimation algorithm. By explicitly taking the CSI mismatch into account, we develop a robust algorithm to estimate the source-relay channel. Numerical examples demonstrate the improved performance of the proposed algorithm.
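
The first-phase building block is ordinary least-squares MIMO channel estimation from pilot symbols; a minimal sketch follows. The robust second-phase estimator that models the CSI mismatch is the paper's contribution and is not reproduced here; all dimensions and noise levels are illustrative.

```python
# Least-squares MIMO channel estimation from known pilots: Y = H X + N.
import numpy as np

rng = np.random.default_rng(0)
nr, nt, L = 4, 4, 32                      # rx antennas, tx antennas, pilots
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
X = (rng.normal(size=(nt, L)) + 1j * rng.normal(size=(nt, L))) / np.sqrt(2)
N = 0.1 * (rng.normal(size=(nr, L)) + 1j * rng.normal(size=(nr, L)))
Y = H @ X + N

H_ls = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)   # LS estimate
print("relative error:", np.linalg.norm(H_ls - H) / np.linalg.norm(H))
```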

Relevance: 100.00%

Abstract:

Neural network (NN) models have been widely used in the literature for short-term load forecasting. Their popularity is mainly due to their excellent learning and approximation capability. However, their forecasting performance significantly depends on several factors, including the initial parameters, the training algorithm, and the NN structure. To minimize the negative effects of these factors, this paper proposes a practically simple yet effective and efficient method to combine forecasts generated by NN models. The proposed method comprises three main phases: (i) training NNs with different structures, (ii) selecting the best NN models based on their forecasting performance on a validation set, and (iii) combining the forecasts of the selected NNs. Forecast combination is performed by calculating the mean of the forecasts generated by the best NN models. The performance of the proposed method is examined using a real-world data set. Comparative studies demonstrate that the accuracy of the combined forecasts is significantly superior to that obtained from individual NN models.
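
A minimal sketch of the three-phase scheme described above, using scikit-learn MLPs on synthetic data; the architectures, split sizes and data are illustrative assumptions, not the paper's setup.

```python
# Train several NN structures, keep the best on validation, average forecasts.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(600, 8))
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=600)
X_tr, y_tr = X[:400], y[:400]
X_va, y_va = X[400:500], y[400:500]
X_te, y_te = X[500:], y[500:]

# Phase (i): train NNs with different hidden-layer structures.
nets = [MLPRegressor(hidden_layer_sizes=h, max_iter=2000, random_state=0).fit(X_tr, y_tr)
        for h in [(4,), (8,), (16,), (8, 4)]]
# Phase (ii): rank by validation MSE and keep the best models.
best = sorted(nets, key=lambda m: np.mean((m.predict(X_va) - y_va) ** 2))[:2]
# Phase (iii): combine by taking the mean of the selected forecasts.
combined = np.mean([m.predict(X_te) for m in best], axis=0)
print("combined test MSE:", np.mean((combined - y_te) ** 2))
```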

Relevance: 100.00%

Abstract:

Recent advances in telemetry technology have created a wealth of tracking data available for many animal species moving over spatial scales from tens of meters to tens of thousands of kilometers. Increasingly, such data sets are being used for quantitative movement analyses aimed at extracting fundamental biological signals such as optimal searching behavior and scale-dependent foraging decisions. We show here that the location error inherent in various tracking technologies reduces the ability to detect patterns of behavior within movements. Our analyses endeavored to set out a series of initial ground rules for ecologists to help ensure that sampling noise is not misinterpreted as a real biological signal. We simulated animal movement tracks using specialized random walks known as Lévy flights at three spatial scales of investigation: 100-km, 10-km, and 1-km maximum daily step lengths. The locations generated in the simulations were then blurred using known error distributions associated with commonly applied tracking methods: the Global Positioning System (GPS), Argos polar-orbiting satellites, and light-level geolocation. Deviations from the idealized Lévy flight pattern were assessed for each track after increasing levels of location error were applied at each spatial scale, with additional assessments of the effect of error on scale-dependent movement patterns measured using fractal mean dimension and first-passage time (FPT) analyses. The accuracy of parameter estimation (Lévy μ, fractal mean D, and variance in FPT) declined precipitously at threshold errors relative to each spatial scale. At 100-km maximum daily step lengths, error standard deviations of ≥10 km seriously eroded the biological patterns evident in the simulated tracks, with analogous thresholds at the 10-km and 1-km scales (error SD ≥ 1.3 km and 0.07 km, respectively). Temporal subsampling of the simulated tracks maintained some elements of the biological signals depending on error level and spatial scale. Failure to account for large errors relative to the scale of movement can produce substantial biases in the interpretation of movement patterns. This study provides researchers with a framework for understanding the limitations of their data and identifies how temporal subsampling can help to reduce the influence of spatial error on their conclusions.
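
A minimal sketch of the simulation idea: generate a 2-D Lévy flight with power-law step lengths, then blur the positions with Gaussian location error of a chosen standard deviation (a simplified stand-in for the GPS, Argos and geolocation error distributions used in the study). All parameter values are illustrative.

```python
# Levy-flight track generation plus Gaussian location-error blurring.
import numpy as np

rng = np.random.default_rng(0)
mu, n_steps, x_min = 2.0, 1000, 0.01        # Levy exponent, steps, min step
# Inverse-transform sampling of P(l) ~ l^(-mu), truncated below at x_min.
lengths = x_min * (1 - rng.uniform(size=n_steps)) ** (-1 / (mu - 1))
angles = rng.uniform(0, 2 * np.pi, n_steps)
track = np.cumsum(np.c_[lengths * np.cos(angles), lengths * np.sin(angles)], axis=0)

error_sd = 0.5                              # location error standard deviation
blurred = track + rng.normal(scale=error_sd, size=track.shape)
step_obs = np.linalg.norm(np.diff(blurred, axis=0), axis=1)
print("mean observed step vs true:", step_obs.mean(), lengths.mean())
```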