41 results for Minimum Variance Model

in Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us to understand the impact of estimation error on the performance of in-sample optimal portfolios. Key Words: minimum-variance frontier; efficiency set constants; finite sample distribution
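For context, the frontier referred to here is usually written in terms of the efficiency set constants named in the keywords. A minimal plug-in sketch (standard textbook notation, not the paper's improved estimator) is:

# Plug-in sample minimum-variance frontier via the efficiency set
# constants a, b, c; the sample estimates inherit exactly the bias
# the paper analyses.
import numpy as np

def frontier_variance(returns, target_mean):
    # returns: (T, N) array of asset returns
    mu = returns.mean(axis=0)
    sigma_inv = np.linalg.inv(np.cov(returns, rowvar=False))
    ones = np.ones_like(mu)
    a = mu @ sigma_inv @ mu
    b = mu @ sigma_inv @ ones
    c = ones @ sigma_inv @ ones
    return (a - 2 * b * target_mean + c * target_mean**2) / (a * c - b**2)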

Relevance:

90.00%

Publisher:

Abstract:

This paper presents the design of self-tuning controllers for a two-terminal HVDC link. The controllers are designed using a novel discrete-time converter model based on multirate sampling. The nature of the converter firing system necessitates the development of a two-step-ahead self-tuning control strategy. A two-terminal HVDC system study has been carried out to show the effectiveness of the proposed control strategies, which include the design of a minimum variance controller, a pole-assignment controller and a PLQG controller. Coordinated control of the two-terminal HVDC system has been established by deriving the control signal from the inverter-end current and voltage, which are estimated from measurements of rectifier-end quantities alone through a robust reduced-order observer. Data for a well-known scaled-down sample system have been selected for the studies, and the controllers designed have been tested under worst-case conditions. The performance of the self-tuning controllers has been evaluated through digital simulation.
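As a rough illustration of the self-tuning idea (not the authors' two-terminal design), a first-order ARX plant with recursive least squares estimation and a one-step-ahead minimum variance law might look like:

import numpy as np

def self_tuning_mv(measurements, y_sp=0.0, lam=0.98):
    # theta = [a_hat, b_hat] for the model y(t+1) = a*y(t) + b*u(t) + e(t+1)
    theta = np.zeros(2)
    P = np.eye(2) * 1e3                       # RLS covariance
    y_prev = u_prev = 0.0
    controls = []
    for y in measurements:
        phi = np.array([y_prev, u_prev])
        k = P @ phi / (lam + phi @ P @ phi)   # RLS gain
        theta = theta + k * (y - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
        a_hat, b_hat = theta
        # minimum variance law: set the one-step prediction to the setpoint
        u = (y_sp - a_hat * y) / b_hat if abs(b_hat) > 1e-6 else 0.0
        controls.append(u)
        y_prev, u_prev = y, u
    return controls

The paper's two-step-ahead strategy (needed because of the converter firing system) would predict one further step ahead before solving for u.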

Relevance:

90.00%

Publisher:

Abstract:

The importance of modelling correlation has long been recognised in the field of portfolio management, with large-dimensional multivariate problems increasingly becoming the focus of research. This paper provides a straightforward and commonsense approach toward investigating a number of models used to generate forecasts of the correlation matrix for large-dimensional problems. We find evidence in favour of assuming equicorrelation across various portfolio sizes, particularly during times of crisis. During periods of market calm, however, the suitability of the constant conditional correlation model cannot be discounted, especially for large portfolios. A portfolio allocation problem is used to compare the forecasting methods: the global minimum variance portfolio and the Model Confidence Set are used to compare methods, while portfolio weight stability and relative economic value are also considered.
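As an illustration of how an equicorrelation forecast feeds the allocation problem, the global minimum variance weights for a common correlation rho might be computed as follows (a generic sketch, not the paper's estimation procedure):

import numpy as np

def gmv_weights(vols, rho):
    # vols: (N,) forecast volatilities; rho: common pairwise correlation
    n = len(vols)
    R = np.full((n, n), rho) + (1.0 - rho) * np.eye(n)  # equicorrelation matrix
    cov = np.outer(vols, vols) * R
    w = np.linalg.solve(cov, np.ones(n))                # proportional to Sigma^{-1} 1
    return w / w.sum()                                  # weights sum to one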

Relevance:

80.00%

Publisher:

Abstract:

Evidence on the performance of socially responsible investment (SRI) funds remains inconclusive. Hence, more studies are needed to determine whether SRI funds systematically underperform or outperform conventional funds. This paper employs a dynamic mean-variance model using the shortage function approach to evaluate the performance of SRI and environmentally friendly (EF) funds. Unlike traditional methods, this approach estimates fund performance by considering both return and risk at the same time. The empirical results show that SRI funds outperformed conventional funds in the EU and US, with the EU funds among the top-performing categories. EF funds do not perform as well as SRI funds, but perform as well as or better than conventional funds. These results are statistically significant in some cases.
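For intuition, the shortage function measures how far a fund can be moved simultaneously toward higher return and lower risk while staying inside the attainable mean-variance set. A minimal bisection sketch, assuming the frontier is available as a function and the direction vector is (g_ret, -g_var), is:

def shortage(fund_ret, fund_var, frontier_var, g_ret=1.0, g_var=1.0,
             lo=0.0, hi=10.0, tol=1e-8):
    # frontier_var: callable giving the minimum attainable variance for a
    # target mean (hypothetical interface, not the paper's implementation)
    while hi - lo > tol:
        beta = 0.5 * (lo + hi)
        feasible = frontier_var(fund_ret + beta * g_ret) <= fund_var - beta * g_var
        lo, hi = (beta, hi) if feasible else (lo, beta)
    return lo   # a shortage of zero means the fund already sits on the frontier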

Relevance:

30.00%

Publisher:

Abstract:

With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Given the large number of available Web services, finding an appropriate Web service for a user's requirements is a challenge, which warrants the need for an effective and reliable process of Web service discovery. A considerable body of research has emerged on methods to improve the accuracy of Web service discovery in matching the best service. The discovery process typically suggests many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used to describe services, together with their input and output parameters, can lead to accurate Web service discovery, and appropriate linking of individually matched services should then fully satisfy the user's requirements.

This research proposes integrating a semantic model and a data mining technique to enhance the accuracy of Web service discovery, through a novel three-phase methodology. The first phase performs match-making to find semantically similar Web services for a user query. To perform semantic analysis on the content of the Web service description language document, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of query terms that otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the user's requirement; in such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In this link-analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum traversal cost. The third phase, system integration, integrates the results of the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, an integral part of the system integration phase, makes the final recommendations of individual and composite Web services to the user.

To evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with those of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery; the proposed method outperforms both. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase I for linking. Empirical results further show that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from the semantic analysis (phase I) and the link analysis (phase II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
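The link-analysis step can be pictured with a standard all-pairs shortest-path pass; the thesis does not name the algorithm, so Floyd-Warshall is used here purely for illustration:

import math

def all_pairs_shortest_paths(cost):
    # cost: dict of dicts, cost[u][v] = traversal cost between services u and v
    nodes = list(cost)
    d = {u: {v: (0.0 if u == v else cost[u].get(v, math.inf)) for v in nodes}
         for u in nodes}
    for k in nodes:                         # classic O(n^3) relaxation
        for i in nodes:
            for j in nodes:
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d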

Relevance:

30.00%

Publisher:

Abstract:

This paper deals with the problem of using data mining models in a real-world situation where the user cannot provide all the inputs with which the predictive model was built. A learning system framework, the Query Based Learning System (QBLS), is developed for improving the performance of predictive models in practice, where not all inputs are available for querying the system. An automatic feature selection algorithm, Query Based Feature Selection (QBFS), is developed for selecting features to obtain a balance between a relatively minimal subset of features and a relatively maximal classification accuracy. The performance of the QBLS system and the QBFS algorithm is successfully demonstrated with a real-world application.
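A generic greedy sketch of the size-versus-accuracy trade-off QBFS targets (the published QBFS algorithm itself is more involved) might read:

def greedy_select(features, accuracy_of, min_gain=0.005):
    # accuracy_of: callable scoring a feature subset, e.g. cross-validated accuracy
    chosen, remaining = [], list(features)
    best = accuracy_of(chosen)
    while remaining:
        gain, f = max((accuracy_of(chosen + [f]) - best, f) for f in remaining)
        if gain < min_gain:       # stop once extra features stop paying for themselves
            break
        chosen.append(f)
        remaining.remove(f)
        best += gain
    return chosen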

Relevance:

30.00%

Publisher:

Abstract:

Harmful Algal Blooms (HABs) are a worldwide problem that has been increasing in frequency and extent over the past several decades. HABs severely damage aquatic ecosystems by destroying benthic habitat, reducing invertebrate and fish populations, and affecting larger species such as dugong that rely on seagrasses for food. Few statistical models for predicting HAB occurrences have been developed and, in common with most predictive models in ecology, those that have been developed do not fully account for uncertainties in parameters and model structure. This makes management decisions based on these predictions more risky than might be supposed. We used a probit time series model and Bayesian Model Averaging (BMA) to predict occurrences of blooms of Lyngbya majuscula, a toxic cyanophyte, in Deception Bay, Queensland, Australia. We found a suite of useful predictors for HAB occurrence, with temperature figuring prominently in the models receiving the majority of posterior support; a model consisting of the single covariate average monthly minimum temperature showed by far the greatest posterior support. A comparison of model averaging strategies was made between one strategy using the full posterior distribution and a simpler approach that used the majority of the posterior distribution for predictions but with vastly fewer models. Both BMA approaches showed excellent predictive performance, with little difference in their predictive capacity. Applications of BMA are still rare in ecology, particularly in management settings. This study demonstrates the power of BMA as an important management tool capable of high predictive performance while fully accounting for both parameter and model uncertainty.
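The model-averaging step can be sketched generically: each candidate model's bloom forecasts are weighted by (approximate) posterior model probabilities. The BIC weighting below is a common shortcut, not the paper's exact posterior computation:

import numpy as np

def bma_predict(bics, preds):
    # bics: (M,) BIC per candidate model; preds: (M, T) per-model P(bloom)
    w = np.exp(-0.5 * (np.asarray(bics) - np.min(bics)))
    w /= w.sum()                   # approximate posterior model weights
    return w @ np.asarray(preds)   # model-averaged bloom probabilities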

Relevance:

30.00%

Publisher:

Abstract:

Biased estimation has the advantage of reducing the mean squared error (MSE) of an estimator. The question of interest is how biased estimation affects model selection. In this paper, we introduce biased estimation to a range of model selection criteria. Specifically, we analyze the performance of the minimum description length (MDL) criterion based on biased and unbiased estimation, and compare it against modern model selection criteria such as Kay's conditional model order estimator (CME), the bootstrap, and the more recently proposed hook-and-loop resampling based model selection. The advantages and limitations of the considered techniques are discussed. The results indicate that, in some cases, biased estimators can slightly improve the selection of the correct model. We also give an example for which the CME with an unbiased estimator fails but regains its power when a biased estimator is used.
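To make the comparison concrete, an MDL score for a Gaussian linear model with either an unbiased or a biased (maximum likelihood) noise variance estimate can be sketched as follows; this is a generic form, not necessarily the paper's exact formulation:

import numpy as np

def mdl_score(y, X, biased=False):
    n, k = X.shape
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    rss = resid @ resid
    sigma2 = rss / n if biased else rss / (n - k)   # ML (biased) vs unbiased
    return 0.5 * n * np.log(sigma2) + 0.5 * k * np.log(n)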

Relevance:

30.00%

Publisher:

Abstract:

Objectives: To explore whether people's organ donation consent decisions occur via a reasoned and/or social reaction pathway.

Design: We prospectively examined students' and community members' decisions to register consent on a donor register and to discuss organ donation wishes with family.

Method: Participants completed items assessing the theory of planned behaviour (TPB; attitude, subjective norm, perceived behavioural control (PBC)), the prototype/willingness model (PWM; donor prototype favourability/similarity, past behaviour), and proposed additional influences (moral norm, self-identity, recipient prototypes) for registering (N=339) and discussing (N=315) intentions/willingness. Participants self-reported their registering (N=177) and discussing (N=166) behaviour 1 month later. The utility of (1) the TPB, (2) the PWM, (3) the TPB augmented with the PWM, and (4) the augmented TPB with PWM and extensions was tested using structural equation modelling for registering and discussing intentions/willingness, and logistic regression for behaviour.

Results: While the TPB proved a more parsimonious model, fit indices suggested that the other proposed models offered viable options, explaining greater variance in communication intentions/willingness. The TPB, the augmented TPB with PWM, and the extended augmented TPB with PWM best explained registering and discussing decisions. The proposed and revised PWM also proved an adequate fit for discussing decisions. Respondents with stronger intentions (and PBC for registering) had a higher likelihood of registering and discussing.

Conclusions: People's decisions to communicate donation wishes may be better explained via a reasoned pathway (especially for registering); however, discussing involves more reactive elements. The roles of moral norm, self-identity, and prototypes as influences predicting communication decisions were also highlighted.

Relevance:

30.00%

Publisher:

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals).

Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, among them a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients).

We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations.

Using these analytical results, we show a new property of adaptive lattice filters: the polynomial order reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals compared with the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison with the adaptive line enhancer.

The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite variance input signals (like frequency modulated signals in noise), does not converge quickly for infinite variance stable processes, because it relies on the minimum mean-square error criterion. To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes, and propose two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, based on fractional lower order moments. Simulation results show that, using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than with many other algorithms. We also discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
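For reference, a single stage of the standard stochastic gradient lattice update (the least-mean-square form that the thesis generalizes to least-mean p-norm updates) can be sketched as:

import numpy as np

def gal_stage(f_in, b_in, mu=0.01):
    # f_in, b_in: forward/backward prediction errors from the previous stage
    k = 0.0                                    # reflection coefficient
    f_out, b_out = np.zeros_like(f_in), np.zeros_like(b_in)
    b_prev = 0.0                               # b_{m-1}(n-1), assumed zero at start
    for n in range(len(f_in)):
        f_out[n] = f_in[n] + k * b_prev        # forward prediction error
        b_out[n] = b_prev + k * f_in[n]        # backward prediction error
        k -= mu * (f_out[n] * b_prev + b_out[n] * f_in[n])   # gradient step
        b_prev = b_in[n]
    return f_out, b_out, k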

Relevance:

30.00%

Publisher:

Abstract:

Cloninger’s psychobiological model of temperament and character is a general model of personality that has been widely used in clinical psychology, but has seldom been applied in other domains. In this research we apply Cloninger’s model to the study of leadership. Our study comprised 81 participants who took part in a diverse range of small group tasks. Participants rotated through tasks and groups and rated each other on “emergent leadership.” As hypothesized, leader emergence tended to be consistent regardless of the specific tasks and groups. It was found that personality factors from Cloninger, Svrakic, and Przybeck’s (1993) model could explain trait-based variance in emergent leadership. Results also highlight the role of “cooperativeness” in the prediction of leadership emergence. Implications are discussed in terms of our theoretical understanding of trait-based leadership, and more generally in terms of the utility of Cloninger’s model in leadership research.

Relevance:

30.00%

Publisher:

Abstract:

The traditional searching method for model-order selection in linear regression is a nested full-parameter-set searching procedure over the desired orders, which we call full-model order selection. On the other hand, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracies than the traditional one, especially for low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). We also show that for some models the performance of the bootstrap-based criterion improves significantly by using the proposed partial-model selection searching method.

Index Terms: model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model with a given order one can find an example for which one of the methods performs favourably well or fails [6, 8]. Our aim is to improve the performance of the model-order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial-model order search within the given model order. Understandably, the improvement in the performance of the model-order estimation comes at the expense of additional computational complexity.
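A minimal sketch of the partial-model search, with an MDL-style criterion standing in for whichever criterion is applied, is:

from itertools import combinations
import numpy as np

def partial_model_order(y, X, max_order):
    # score every regressor subset of each size, not just the nested full model
    n = len(y)
    best_score, best_order = np.inf, 0
    for p in range(1, max_order + 1):
        for cols in combinations(range(X.shape[1]), p):
            Xs = X[:, cols]
            resid = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
            score = 0.5 * n * np.log(resid @ resid / n) + 0.5 * p * np.log(n)
            if score < best_score:
                best_score, best_order = score, p
    return best_order

The exhaustive inner loop makes the extra computational cost mentioned above explicit: each order p now requires scoring all subsets of size p rather than a single nested model.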

Relevance:

30.00%

Publisher:

Abstract:

Background: There has been increasing interest in assessing the impacts of temperature on mortality. However, few studies have used a case-crossover design to examine non-linear and distributed lag effects of temperature on mortality. Additionally, little evidence is available on the temperature-mortality relationship in China, or on which temperature measure is the best predictor of mortality.

Objectives: To use a distributed lag non-linear model (DLNM) as part of a case-crossover design; to examine the non-linear and distributed lag effects of temperature on mortality in Tianjin, China; and to explore which temperature measure is the best predictor of mortality.

Methods: The DLNM was applied within a case-crossover design to assess the non-linear and delayed effects of temperatures (maximum, mean and minimum) on deaths (non-accidental, cardiopulmonary, cardiovascular and respiratory).

Results: A U-shaped relationship was consistently found between temperature and mortality. Cold effects (significantly increased mortality associated with low temperatures) were delayed by 3 days and persisted for 10 days. Hot effects (significantly increased mortality associated with high temperatures) were acute, lasted for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary, and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature.

Conclusions: In Tianjin, extreme cold and hot temperatures increased the risk of mortality, and the effects of cold lasted longer than the effects of heat. It is possible to combine the case-crossover design with DLNMs, allowing the case-crossover design to flexibly estimate the non-linear and delayed effects of temperature (or air pollution) whilst controlling for season.

Relevance:

30.00%

Publisher:

Abstract:

Analytical expressions are derived for the mean and variance of estimates of the bispectrum of a real time series, assuming a cosinusoidal model. The effects of spectral leakage, inherent in the discrete Fourier transform operation when the modes present in the signal have a nonintegral number of wavelengths in the record, are included in the analysis. A single phase-coupled triad of modes can cause the bispectrum to have a nonzero mean value over the entire region of computation owing to leakage. The variance of bispectral estimates in the presence of leakage has contributions from individual modes and from triads of phase-coupled modes. Time-domain windowing reduces the leakage. The theoretical expressions for the mean and variance of bispectral estimates are derived in terms of a function dependent on an arbitrary symmetric time-domain window applied to the record, the number of data, and the statistics of the phase coupling among triads of modes. The theoretical results are verified by numerical simulations for simple test cases and applied to laboratory data to examine phase coupling in a hypothesis testing framework.
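For orientation, a direct FFT-based bispectrum estimate with a time-domain window, averaged over segments, can be sketched as follows (a generic estimator, not the paper's analytical development):

import numpy as np

def bispectrum_estimate(x, seg_len, window=np.hanning):
    w = window(seg_len)
    segs = [x[i:i + seg_len] * w
            for i in range(0, len(x) - seg_len + 1, seg_len)]
    m = seg_len // 2
    B = np.zeros((m, m), dtype=complex)
    for s in segs:
        X = np.fft.fft(s)
        for f1 in range(m):
            for f2 in range(m):
                # triple product X(f1) X(f2) X*(f1+f2), averaged over segments
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return B / len(segs)   # nonzero values away from coupled triads reflect leakage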

Relevance:

30.00%

Publisher:

Abstract:

Objective: The current study evaluated part of the Multifactorial Model of Driving Safety to elucidate the relative importance of cognitive function and a limited range of standard measures of visual function in relation to the Capacity to Drive Safely. The Capacity to Drive Safely was operationalized using three validated screening measures for older drivers: an adaptation of the well-validated Useful Field of View (UFOV) and two newer measures, a Hazard Perception Test (HPT) and a Hazard Change Detection Task (HCDT).

Method: Community-dwelling drivers (n = 297) aged 65–96 were assessed using a battery of measures of cognitive and visual function.

Results: Factor analysis of these predictor variables yielded factors including Executive/Speed, Vision (measured by visual acuity and contrast sensitivity), Spatial, Visual Closure, and Working Memory. The cognitive and Vision factors explained 83–95% of the age-related variance in the Capacity to Drive Safely. Spatial and Working Memory were associated with the UFOV, HPT and HCDT; Executive/Speed was associated with the UFOV and HCDT; and Vision was associated with the HPT.

Conclusion: The Capacity to Drive Safely declines with chronological age, and this decline is associated with age-related declines in several higher-order cognitive abilities involving the manipulation and storage of visuospatial information under speeded conditions. There are also age-independent effects of cognitive function and vision that determine driving safety.