933 results for Return-based pricing kernel


Relevance: 40.00%

Publisher:

Abstract:

A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used to achieve mathematical tractability and algorithmic efficiency. Under the mild condition that the design matrix is positive definite, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality based selection algorithm as a preprocessing step to select a small significant subset design matrix, the proposed zero-norm based approach offers an effective means of constructing very sparse kernel density estimates with excellent generalisation performance.
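
To make the moving parts concrete, here is a minimal 1-D sketch of the two ingredients the abstract names: the Parzen window target and the multiplicative nonnegative quadratic programming (MNQP) update. It is an illustration rather than the authors' exact zero-norm formulation; the bandwidths (0.3 and 0.5) and the per-iteration simplex renormalisation are assumptions made for the sketch.

```python
import numpy as np

def parzen(x_eval, data, h):
    """Classical Parzen window estimate with Gaussian kernels,
    used here as the desired response for the sparse estimator."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

def mnqp_weights(K, d, n_iter=500, eps=1e-12):
    """Multiplicative nonnegative quadratic programming (MNQP) update for
    min ||d - K @ w||^2 subject to w >= 0 and sum(w) == 1.
    For Gaussian kernels, B = K.T @ K and c = K.T @ d are elementwise
    positive, so the multiplicative update keeps w nonnegative."""
    n = K.shape[1]
    w = np.full(n, 1.0 / n)           # start at the centre of the simplex
    B = K.T @ K
    c = K.T @ d
    for _ in range(n_iter):
        w = w * c / (B @ w + eps)     # multiplicative update, w stays >= 0
        w = w / w.sum()               # renormalise onto the unit simplex
    return w

# Toy usage: fit nonnegative kernel weights to a Parzen target on 1-D data.
rng = np.random.default_rng(0)
data = rng.standard_normal(200)
K = np.exp(-0.5 * ((data[:, None] - data[None, :]) / 0.3) ** 2) \
    / (0.3 * np.sqrt(2.0 * np.pi))   # Gaussian design matrix
d = parzen(data, data, 0.5)          # desired response at the training points
w = mnqp_weights(K, d)
```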

Relevance: 40.00%

Publisher:

Abstract:

This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects a small subset of the most significant kernels associated with the largest eigenvalues of the kernel design matrix, which account for most of the energy in the kernel training data; this also guarantees the most accurate kernel weight estimate. The proposed method is also computationally attractive in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
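
A hedged sketch of the selection step: the determinant of the subset design matrix factorises into the squared norms of its Gram-Schmidt-orthogonalised columns, so a greedy orthogonal forward regression that picks the largest residual column at each step increases the D-optimality criterion as much as possible per step. The weight-estimation stage (the paper's modified MNQP, similar in spirit to the sketch above) is omitted here.

```python
import numpy as np

def d_optimality_ofr(K, n_select):
    """Greedy D-optimality selection by orthogonal forward regression.
    det(Ks.T @ Ks) equals the product of the squared norms of the
    orthogonalised columns, so choosing the candidate with the largest
    residual norm at each step greedily maximises the determinant."""
    R = K.astype(float).copy()        # candidate columns, deflated in place
    available = np.ones(K.shape[1], dtype=bool)
    chosen = []
    for _ in range(n_select):
        norms = np.where(available, (R ** 2).sum(axis=0), -np.inf)
        j = int(np.argmax(norms))
        chosen.append(j)
        available[j] = False
        q = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(q, q @ R)    # orthogonalise all remaining candidates
    return chosen
```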

Relevance: 40.00%

Publisher:

Abstract:

Multi-factor approaches to the analysis of real estate returns have, since the pioneering work of Chan, Hendershott and Sanders (1990), emphasised a macro-variables approach in preference to the latent factor approach that formed the original basis of the arbitrage pricing theory. With the increasing use of high-frequency data and trading strategies, and with a growing emphasis on the risks of extreme events, the macro-variable procedure has some deficiencies. This paper explores a third way, using an alternative to the standard principal components approach: independent components analysis (ICA). ICA seeks higher-moment independence and maximises in relation to a chosen risk parameter. We apply a kurtosis-maximising ICA algorithm to weekly US REIT data. The results show that ICA is successful in capturing the kurtosis characteristics of REIT returns, offering possibilities for the development of risk management strategies that are sensitive to extreme events and tail distributions.
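
A generic sketch of kurtosis-maximising ICA in the FastICA style; the abstract does not specify the authors' exact algorithm, so treat the cubic-nonlinearity fixed point and the PCA whitening step below as standard stand-ins, and the name weekly_reit_returns as a hypothetical input (assets in rows, weekly observations in columns).

```python
import numpy as np

def whiten(X):
    """Demean and PCA-whiten a returns matrix X (assets x observations)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)            # assumes full-rank covariance
    return (vecs / np.sqrt(vals)).T @ Xc        # D^(-1/2) E^T applied to Xc

def ica_kurtosis(Z, n_iter=200, tol=1e-10, seed=0):
    """One-unit FastICA with the cubic nonlinearity; its fixed points are
    local maximisers of |kurtosis| of the whitened projection w @ Z."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        w_new = (Z * (w @ Z) ** 3).mean(axis=1) - 3.0 * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:     # converged up to sign
            return w_new
        w = w_new
    return w

# e.g. component = ica_kurtosis(whiten(weekly_reit_returns))
```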

Relevance: 40.00%

Publisher:

Abstract:

Purpose: To quantify to what extent the new registration method, DARTEL (Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra), may reduce the smoothing kernel width required, and to investigate the minimum group size necessary for voxel-based morphometry (VBM) studies. Materials and Methods: A simulated atrophy approach was employed to explore the roles of smoothing kernel, group size, and their interactions in VBM detection accuracy. Group sizes of 10, 15, 25, and 50 were compared for kernel widths between 0 and 12 mm. Results: A smoothing kernel of 6 mm achieved the highest atrophy detection accuracy for groups of 50 participants, and 8–10 mm for groups of 25, at P < 0.05 with familywise correction. The results further demonstrated that a group size of 25 was the lower limit when two different groups of participants were compared, whereas a group size of 15 was the minimum for longitudinal comparisons, although only at P < 0.05 with false discovery rate correction. Conclusion: Our data confirm that DARTEL-based VBM generally benefits from smaller kernels and that different kernels perform best for different group sizes, with a tendency toward smaller kernels for larger groups. Importantly, the kernel selection was also affected by the statistical threshold applied. This highlights that the choice of kernel in relation to group size should be considered with care.
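
For reference, the kernel widths discussed are Gaussian FWHM values in millimetres. A short sketch of the standard conversion and application (the 1.5 mm voxel size and the name gm_volume in the usage comment are assumptions for illustration, not details from the study):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(volume, fwhm_mm, voxel_mm):
    """Apply isotropic Gaussian smoothing specified by FWHM in millimetres,
    as in VBM preprocessing: sigma = FWHM / (2 * sqrt(2 * ln 2)),
    converted to voxel units along each axis."""
    sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) \
        / np.asarray(voxel_mm, dtype=float)
    return gaussian_filter(volume, sigma=sigma_vox)

# e.g. an 8 mm kernel on 1.5 mm isotropic voxels:
# smoothed = smooth_fwhm(gm_volume, 8.0, (1.5, 1.5, 1.5))
```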

Relevance: 40.00%

Publisher:

Abstract:

This article examines the role of idiosyncratic volatility in explaining the cross-sectional variation of size- and value-sorted portfolio returns. We show that the premium for bearing idiosyncratic volatility varies inversely with the number of stocks included in the portfolios. This conclusion is robust within various multifactor models based on size, value, past performance, liquidity and total volatility, and also holds within an ICAPM specification of the risk–return relationship. Our findings thus indicate that investors demand an additional return for bearing the idiosyncratic volatility of poorly diversified portfolios.
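
A brief sketch of the standard construction behind the premium being tested: idiosyncratic volatility measured as the residual standard deviation from a factor regression. The abstract does not specify the exact factor set or estimation window, so those are left as inputs here.

```python
import numpy as np

def idiosyncratic_volatility(excess_returns, factor_returns):
    """Idiosyncratic volatility as the standard deviation of residuals from
    an OLS regression of portfolio excess returns on factor returns
    (e.g. a size/value/momentum factor set)."""
    X = np.column_stack([np.ones(len(factor_returns)), factor_returns])
    beta, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    residuals = excess_returns - X @ beta
    # ddof corrects for the intercept and the k fitted factor loadings
    return residuals.std(ddof=X.shape[1])
```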

Relevance: 40.00%

Publisher:

Abstract:

Compared with conventional two-class learning schemes, one-class classification uses only a single class in the classifier training phase. Applying one-class classification to learn from unbalanced data sets is known as recognition-based learning and has been shown to have the potential to achieve better performance. As in two-class learning, parameter selection is a significant issue, especially when the classifier is sensitive to its parameters. For one-class learning schemes with a kernel function, such as the one-class Support Vector Machine and Support Vector Data Description, besides the parameters involved in the kernel there is another one-class specific parameter: the rejection rate ν. In this paper, we propose a general framework that involves the majority class in solving the parameter selection problem. In this framework, we first use the minority (target) class for training in the one-class classification stage; we then use both the minority and majority classes to estimate the generalization performance of the constructed classifier, and this estimate serves as the optimization criterion. We employ grid search and experiment-design search to explore various parameter settings. Experiments on UCI and Reuters text data show that the parameter-optimized one-class classifiers outperform all the standard one-class learning schemes we examined.
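
The framework described translates almost directly into code: train on the minority class alone, then score each candidate (ν, γ) pair on held-out data containing both classes. A sketch using scikit-learn's OneClassSVM, with F1 on the target class as an assumed stand-in for the paper's generalization criterion:

```python
import numpy as np
from sklearn.metrics import f1_score
from sklearn.svm import OneClassSVM

def select_ocsvm_params(X_target, X_eval, y_eval, nus, gammas):
    """Grid search over (nu, gamma): train a one-class SVM on the minority
    (target) class only, then estimate generalization performance on a
    held-out set containing both classes (y_eval: 1 = target, 0 = other)."""
    best_params, best_score = None, -np.inf
    for nu in nus:
        for gamma in gammas:
            clf = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(X_target)
            pred = (clf.predict(X_eval) == 1).astype(int)   # +1 means "target"
            score = f1_score(y_eval, pred)                  # assumed criterion
            if score > best_score:
                best_params, best_score = (nu, gamma), score
    return best_params, best_score

# Hypothetical usage with arrays already split by class:
# params, f1 = select_ocsvm_params(X_min_train, X_val, y_val,
#                                  nus=[0.01, 0.05, 0.1, 0.2],
#                                  gammas=[0.01, 0.1, 1.0])
```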

Relevance: 40.00%

Publisher:

Abstract:

The classification of breast cancer patients is of great importance in cancer diagnosis. Most classical cancer classification methods are clinical-based and have limited diagnostic ability. Recent advances in machine learning techniques have made a great impact on cancer diagnosis. In this research, we develop a new algorithm, Kernel-Based Naive Bayes (KBNB), to classify breast cancer tumors based on mammography data. The performance of the proposed algorithm is compared with that of the classical naive Bayes algorithm and the kernel-based decision tree algorithm C4.5. The proposed algorithm is found to outperform both. We recommend that the proposed algorithm be used as a tool to classify breast cancer patients for early diagnosis.
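
A generic sketch of the idea behind a kernel-based naive Bayes classifier: replace the usual Gaussian class-conditional densities with per-feature kernel density estimates. This illustrates the general technique, not the authors' KBNB algorithm, whose details the abstract does not give.

```python
import numpy as np
from scipy.stats import gaussian_kde

class KernelNaiveBayes:
    """Naive Bayes with per-feature kernel density estimates replacing the
    usual Gaussian class-conditional densities."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: float(np.mean(y == c)) for c in self.classes_}
        # One univariate KDE per (class, feature) pair.
        self.kdes_ = {c: [gaussian_kde(X[y == c, j]) for j in range(X.shape[1])]
                      for c in self.classes_}
        return self

    def predict(self, X):
        log_post = np.array([
            [np.log(self.priors_[c])
             + sum(np.log(kde(x[j:j + 1])[0] + 1e-300)   # guard against log(0)
                   for j, kde in enumerate(self.kdes_[c]))
             for c in self.classes_]
            for x in X])
        return self.classes_[log_post.argmax(axis=1)]
```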

Relevance: 40.00%

Publisher:

Abstract:

Software reliability growth models (SRGMs) are extensively employed in software engineering to assess the reliability of software before its release for operational use. These models are usually parametric functions obtained by statistically fitting parametric curves, using maximum likelihood estimation or the least-squares method, to plots of the cumulative number of failures observed, N(t), against a period of systematic testing time t. Since the 1970s, a very large number of SRGMs have been proposed in the reliability and software engineering literature; these are often very complex, reflecting the involved testing regimes that often took place during the software development process. In this paper we extend some of our previous work by adopting a nonparametric approach to SRGM modelling based on local polynomial modelling with kernel smoothing. These models require very few assumptions, thereby facilitating the estimation process and also rendering them more relevant under a wide variety of situations. Finally, we provide numerical examples in which these models are evaluated and compared.
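
A minimal sketch of the core technique: a first-order local polynomial (local linear) smoother with a Gaussian kernel, fitted to the cumulative failure counts. Bandwidth selection, on which such nonparametric SRGMs depend heavily, is left to the caller.

```python
import numpy as np

def local_linear_srgm(t_eval, t, N, h):
    """Local linear (first-order local polynomial) smoother with a Gaussian
    kernel: at each point t0, fit a weighted straight line to the cumulative
    failure counts N(t) and take its intercept as the estimate."""
    t = np.asarray(t, dtype=float)
    N = np.asarray(N, dtype=float)
    out = np.empty(len(t_eval))
    for i, t0 in enumerate(t_eval):
        w = np.exp(-0.5 * ((t - t0) / h) ** 2)      # Gaussian kernel weights
        X = np.column_stack([np.ones_like(t), t - t0])
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * N))
        out[i] = beta[0]                            # fitted value at t0
    return out
```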

Relevance: 40.00%

Publisher:

Abstract:

Objectives: To establish the association between the patient's perception of fault for the crash and 12-month outcomes after non-fatal road traffic injury. Setting: Two adult major trauma centres, one regional trauma centre and one metropolitan trauma centre in Victoria, Australia. Participants: 2605 adult orthopaedic trauma patients covered by the state's no-fault third party insurer for road traffic injury, injured between September 2010 and February 2014. Outcome measures: EQ-5D-3L, return to work and functional recovery (Glasgow Outcome Scale-Extended score of upper good recovery) at 12 months postinjury. Results: After adjusting for key confounders, the adjusted relative risks (ARRs) of functional recovery (0.57, 95% CI 0.46 to 0.69) and return to work (0.92, 95% CI 0.86 to 0.99) were lower for the not-at-fault group compared with the at-fault group. The ARR of reporting problems on EQ-5D items was 1.20–1.35 times higher in the not-at-fault group. Conclusions: Patients who were not at fault, or who denied being at fault despite a police report of fault, experienced poorer outcomes than the at-fault group. Attributing fault to others was associated with poorer outcomes. Interventions to improve coping, or to resolve negative feelings from the crash, could facilitate better outcomes in the future.