931 results for Bootstrap weights approach
Abstract:
In this paper we propose a new multivariate GARCH model with a time-varying conditional correlation structure. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to a predetermined or exogenous transition variable. An LM test is derived to test the constancy of correlations, and LM and Wald tests to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make computations feasible. An empirical example based on daily return series of five frequently traded stocks in the S&P 500 stock index completes the paper.
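A worked sketch of the smooth-transition correlation structure described above, assuming (as one common choice, not necessarily the paper's exact parameterisation) a logistic transition function:

    R_t = \bigl(1 - G(s_t)\bigr) R_{(1)} + G(s_t) R_{(2)}, \qquad
    G(s_t) = \frac{1}{1 + \exp\{-\gamma (s_t - c)\}}, \quad \gamma > 0,

where R_{(1)} and R_{(2)} are the two extreme constant correlation matrices and s_t is the transition variable; constancy of correlations corresponds to the null R_{(1)} = R_{(2)} (equivalently \gamma = 0), which is what the LM test targets.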
Abstract:
Abnormal event detection has attracted a lot of attention in the computer vision research community in recent years due to the increased focus on automated surveillance systems to improve security in public places. Because training data are scarce and the definition of an abnormality depends on context, abnormal event detection is generally formulated as a data-driven approach in which activities are modeled in an unsupervised fashion during the training phase. In this work, we use a Gaussian mixture model (GMM) to cluster the activities during the training phase, and propose a Gaussian mixture model based Markov random field (GMM-MRF) to estimate the likelihood scores of new videos in the testing phase. Furthermore, we propose two new features, optical acceleration and the histogram of optical flow gradients, to detect the presence of abnormal objects and speed violations in the scene. We show that our proposed method outperforms other state-of-the-art abnormal event detection algorithms on the publicly available UCSD dataset.
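A minimal sketch of one of the proposed features, optical acceleration, read here as the frame-to-frame change in dense optical flow; this reading, and every function and parameter choice below, are our assumptions, not the paper's code:

    import cv2
    import numpy as np

    def dense_flow(prev_gray, curr_gray):
        # Farneback dense optical flow: one (dx, dy) vector per pixel.
        return cv2.calcOpticalFlowFarneback(
            prev_gray, curr_gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    def optical_acceleration(f0, f1, f2):
        # Flow over consecutive frame pairs, then its temporal difference.
        flow_a = dense_flow(f0, f1)
        flow_b = dense_flow(f1, f2)
        accel = flow_b - flow_a               # per-pixel acceleration vectors
        return np.linalg.norm(accel, axis=2)  # acceleration magnitude map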
Abstract:
In the past few years, there has been a steady increase in the attention, importance and focus given to green initiatives in data centers. While various energy-aware measures have been developed for data centers, the concurrent requirement of improving the performance efficiency of application assignment has yet to be met. For instance, many energy-aware measures applied to data centers trade off energy consumption against Quality of Service (QoS). To address this problem, this paper presents a novel concept of profiling to facilitate offline optimization of a deterministic assignment of applications to virtual machines. A profile-based model is then established for obtaining near-optimal allocations of applications to virtual machines with respect to three major objectives: energy cost, CPU utilization efficiency and application completion time. From this model, a scalable profile-based matching algorithm is developed to solve it. The assignment efficiency of our algorithm is then compared with that of the Hungarian algorithm, which gives the optimal solution but does not scale well.
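The Hungarian baseline mentioned above can be sketched with SciPy's linear_sum_assignment; the way the three objectives are combined into a single cost matrix below is an illustrative assumption:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)

    # Hypothetical profiled costs of assigning application i to VM j
    # (all names and weights here are illustrative assumptions).
    energy    = rng.random((4, 4))   # energy cost
    cpu_waste = rng.random((4, 4))   # 1 - CPU utilisation efficiency
    makespan  = rng.random((4, 4))   # application completion time

    # Scalarise the three objectives with assumed weights.
    cost = 0.5 * energy + 0.3 * cpu_waste + 0.2 * makespan

    apps, vms = linear_sum_assignment(cost)  # optimal, but O(n^3): scales poorly
    print(list(zip(apps, vms)), cost[apps, vms].sum())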
Abstract:
Driver training is one of the interventions aimed at reducing the number of crashes that involve novice drivers. Our failure to understand what is really important for learners, in terms of risky driving, is one of the many drawbacks restraining us from building better training programs. Currently, there is a need to develop and evaluate Advanced Driving Assistance Systems that can comprehensively assess driving competencies. The aim of this paper is to present a novel Intelligent Driver Training System (IDTS) that analyses crash risks for a given driving situation, providing avenues for the improvement and personalisation of driver training programs. The analysis takes into account numerous variables acquired synchronously from the Driver, the Vehicle and the Environment (DVE). The system then segments out the manoeuvres within a drive. This paper further presents the use of fuzzy set theory to develop the safety inference rules for each manoeuvre executed during the drive. Finally, it presents a framework and associated prototype that can be used to view and assess complex driving manoeuvres, and then provide a comprehensive analysis of the drive that is used to give feedback to novice drivers.
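A minimal sketch of how a fuzzy safety rule for a single manoeuvre might look, assuming triangular membership functions; the variables, breakpoints and rule below are illustrative, not taken from the IDTS:

    def tri(x, a, b, c):
        # Triangular membership function rising from a, peaking at b, falling to c.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def risk_of_lane_change(speed_kmh, headway_s):
        high_speed = tri(speed_kmh, 80, 110, 140)   # "speed is high"
        short_gap = tri(headway_s, 0.0, 0.5, 1.5)   # "headway is short"
        # Rule: IF speed is high AND headway is short THEN risk is high
        # (min is the standard fuzzy AND operator).
        return min(high_speed, short_gap)

    print(risk_of_lane_change(115, 0.6))  # membership degree of "high risk"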
Abstract:
Background: Multi-attribute utility instruments (MAUIs) are preference-based measures that comprise a health state classification system (HSCS) and a scoring algorithm that assigns a utility value to each health state in the HSCS. When developing a MAUI from a health-related quality of life (HRQOL) questionnaire, a HSCS must first be derived. This typically involves selecting a subset of domains and items, because HRQOL questionnaires typically have too many items to be amenable to the valuation task required to develop the scoring algorithm for a MAUI. Currently, exploratory factor analysis (EFA) followed by Rasch analysis is recommended for deriving a MAUI from a HRQOL measure. Aim: To determine whether confirmatory factor analysis (CFA) is more appropriate and efficient than EFA for deriving a HSCS from the European Organisation for Research and Treatment of Cancer's core HRQOL questionnaire, the Quality of Life Questionnaire (QLQ-C30), given its well-established domain structure. Methods: QLQ-C30 (Version 3) data were collected from 356 patients receiving palliative radiotherapy for recurrent/metastatic cancer (various primary sites). The dimensional structure of the QLQ-C30 was tested with EFA and CFA, the latter informed by the established QLQ-C30 structure and the views of both patients and clinicians on which items are most relevant. Dimensions determined by EFA or CFA were then subjected to Rasch analysis. Results: CFA results generally supported the proposed QLQ-C30 structure (comparative fit index = 0.99, Tucker-Lewis index = 0.99, root mean square error of approximation = 0.04). EFA revealed fewer factors, and some items cross-loaded on multiple factors. Further assessment of dimensionality with Rasch analysis allowed better alignment of the EFA dimensions with those detected by CFA. Conclusion: CFA was more appropriate and efficient than EFA in producing clinically interpretable results for the HSCS for a proposed new cancer-specific MAUI. Our findings suggest that CFA should generally be recommended when deriving a preference-based measure from a HRQOL measure that has an established domain structure.
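For context, the reported fit statistics are standard functions of the model chi-square; for example, RMSEA is commonly defined (textbook formula, not quoted from the paper) as

    \text{RMSEA} = \sqrt{ \frac{\max(\chi^2 - df,\, 0)}{df\,(N - 1)} },

with values below about 0.05 (here 0.04) conventionally read as close fit, and CFI/TLI near 1 indicating good fit relative to a baseline model.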
Abstract:
Heterogeneous health data are a critical issue when managing health information for quality decision-making processes. In this paper we examine the efficient aggregation of lifestyle information through a data warehousing architecture lens. We present a proof of concept for a clinical data warehouse architecture that enables evidence-based decision-making processes by integrating and organising disparate data silos in support of healthcare service improvement paradigms.
Abstract:
Interpolation techniques for spatial data have been applied frequently in various fields of the geosciences. Although most conventional interpolation methods assume that it is sufficient to use first- and second-order statistics to characterize random fields, researchers have now realized that these methods cannot always provide reliable interpolation results, since geological and environmental phenomena tend to be very complex, presenting non-Gaussian distributions and/or non-linear inter-variable relationships. This paper proposes a new approach to the interpolation of spatial data, which can be applied with great flexibility. Suitable cross-variable higher-order spatial statistics are developed to measure the spatial relationship between the random variable at an unsampled location and those in its neighbourhood. Given the computed cross-variable higher-order spatial statistics, the conditional probability density function (CPDF) is approximated via polynomial expansions, which is then utilized to determine the interpolated value at the unsampled location as an expectation. In addition, the uncertainty associated with the interpolation is quantified by constructing prediction intervals of interpolated values. The proposed method is applied to a mineral deposit dataset, and the results demonstrate that it outperforms kriging methods in uncertainty quantification. The introduction of the cross-variable higher-order spatial statistics noticeably improves the quality of the interpolation, since it enriches the information that can be extracted from the observed data; this benefit is substantial when working with data that are sparse or have non-trivial dependence structures.
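In generic notation (ours, not necessarily the paper's), the interpolated value at an unsampled location u_0 is taken as the conditional expectation under the approximated CPDF:

    \hat{z}(\mathbf{u}_0)
      = E\bigl[\, Z(\mathbf{u}_0) \mid z(\mathbf{u}_1), \ldots, z(\mathbf{u}_n) \,\bigr]
      = \int z\, \hat{f}_{Z(\mathbf{u}_0) \mid \text{data}}(z)\, dz,

where \hat{f} is the CPDF approximated by polynomial expansions whose coefficients are computed from the cross-variable higher-order spatial statistics; prediction intervals follow from the quantiles of the same \hat{f}.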
Abstract:
Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with the multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock the value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better-informed decision making.
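A schematic of such a hierarchical model, in deliberately simplified notation (the paper's four-tier specification is richer than this sketch, and the tier names below are assumptions):

    y_{it} \sim \text{Binomial}(n_{it},\, p_{it}), \qquad
    \text{logit}(p_{it}) = f(t) + u_{\text{sector}(i)} + v_{\text{reef}(i)} + w_{\text{site}(i)},

where f(t) is a semi-parametric (e.g., spline) trend and u, v, w are independent normal random effects whose estimated variances quantify scale-specific variability at successive tiers of the spatial hierarchy.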
Abstract:
This study explores how preservice teachers with non-Australian educational backgrounds and prerequisite qualifications make their way into and through a local teacher education program. It is informed by Margaret Archer's sociology of reflexivity to understand the interplay between these people's personal resources and institutional constraints and enablements. Data were collected from seven participants through narrative interviews. A narrative analysis identified big and small stories. Findings show that these preservice teachers purposefully exercise their agency as they invest in a common project for a variety of transnational goals. The outcome of that project emerges from the interaction between structure and agency.
Abstract:
A critical stage in open-pit mining is determining the optimal extraction sequence of blocks, which has a significant impact on mining profitability. In this paper, a more comprehensive block sequencing optimisation model is developed for open-pit mines. In the model, the material characteristics of blocks, grade control, excavators and block sequencing are investigated and integrated to maximise the short-term benefit of mining. Several case studies are modeled and solved with the CPLEX MIP and CP engines. Numerical investigations are presented to illustrate and validate the proposed methodology.
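A stylised core of such a short-term block-sequencing MIP (a sketch of the standard formulation, not the paper's full model):

    \max \sum_{b}\sum_{t} v_{bt}\, x_{bt}
    \quad \text{s.t.} \quad
    \sum_{t} x_{bt} \le 1 \;\; \forall b, \qquad
    x_{bt} \le \sum_{\tau \le t} x_{a\tau} \;\; \forall (a,b) \in \mathcal{P},\ \forall t, \qquad
    \sum_{b} r_{b}\, x_{bt} \le C_t \;\; \forall t, \qquad
    x_{bt} \in \{0,1\},

where x_{bt} = 1 if block b is extracted in period t, v_{bt} is its (discounted) value, \mathcal{P} is the set of precedence pairs, and C_t is the excavator capacity in period t; grade-control and material-characteristic constraints enter in the same way as the capacity constraint.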
Abstract:
Bayesian networks (BNs) are graphical probabilistic models used for reasoning under uncertainty. These models are becoming increasingly popular in a range of fields, including ecology, computational biology, medical diagnosis, and forensics. In most of these cases, the BNs are quantified using information from experts or from user opinions. An interest therefore lies in the way in which multiple opinions can be represented and used in a BN. This paper proposes the use of a measurement error model to combine opinions for the quantification of a BN. The multiple opinions are treated as realisations of measurement error, and the model uses the posterior probabilities ascribed to each node in the BN, computed from the prior information given by each expert. The proposed model addresses the issues associated with current methods of combining opinions, such as the absence of a coherent probability model, failure to maintain the conditional independence structure of the BN, and the provision of only a point estimate for the consensus. The proposed model is applied to an existing Bayesian network and performed well when compared to existing methods of combining opinions.
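One way to read the measurement error formulation (our generic sketch, not the paper's exact specification): each expert's elicited probability p_e for a quantified BN entry is a noisy observation of the true value \theta, e.g.

    \text{logit}(p_e) = \text{logit}(\theta) + \varepsilon_e, \qquad
    \varepsilon_e \sim N(0, \sigma_e^2), \quad e = 1, \ldots, E,

so that the posterior for \theta coherently pools the E opinions and yields a full distribution for each quantified probability, rather than a single point consensus, while leaving the BN's conditional independence structure intact.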
Abstract:
In this paper, we present a dynamic model to identify influential users of micro-blogging services. Micro-blogging services, such as Twitter, allow their users (twitterers) to publish tweets and to follow other users in order to receive their tweets. Previous work on user influence on Twitter has concentrated on the following-link structure and the content users publish, and has seldom emphasised the importance of interactions among users. We argue that, by focusing on user actions in the micro-blogging platform, user influence can be measured more accurately. Since micro-blogging is a powerful social media and communication platform, identifying influential users according to user interactions has more practical meaning; for example, advertisers may care about how many actions (buying, in this scenario) influential users can initiate, rather than how many advertisements they spread. Building on the idea of the PageRank algorithm, we propose a model over an action-based network that captures the influence users exert when interacting with the micro-blogging platform. Taking the evolving popularity of micro-blogging into consideration, we extend our action-based user influence model into a dynamic one, which can distinguish influential users in different time periods. Simulation results demonstrate that our models support, and give reasonable explanations for, the scenarios we considered.
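A minimal sketch of PageRank over an action-weighted network, in the spirit of the model described above; the edge encoding and damping factor are illustrative assumptions, not the paper's specification:

    import numpy as np

    def action_pagerank(actions, d=0.85, iters=100):
        # actions[i][j] = number of actions user i directed at user j
        # (retweets, replies, mentions, ... -- an assumed encoding).
        A = np.asarray(actions, dtype=float)
        n = A.shape[0]
        outdeg = A.sum(axis=1, keepdims=True)
        # Row-normalise; users with no actions spread rank uniformly.
        P = np.divide(A, outdeg, out=np.full_like(A, 1.0 / n), where=outdeg > 0)
        rank = np.full(n, 1.0 / n)
        for _ in range(iters):
            # Rank flows from actors to the users their actions target,
            # so frequently-targeted users accumulate influence.
            rank = (1 - d) / n + d * (rank @ P)
        return rank

    actions = [[0, 3, 1],
               [0, 0, 2],
               [5, 0, 0]]
    print(action_pagerank(actions))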
Abstract:
This research aims to explore and identify political risks on a large infrastructure project in an exaggerated environment to ascertain whether sufficient objective information can be gathered by project managers to utilise risk modelling techniques. During the study, the author proposes a new definition of political risk; performs a detailed project study of the Neelum Jhelum Hydroelectric Project in Pakistan; implements a probabilistic model using the principle of decomposition and Bayes probabilistic theorem and answers the question: was it possible for project managers to obtain all the relevant objective data to implement a probabilistic model?
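The Bayesian machinery referred to is the standard update, shown here generically (the decomposition of specific political risk events is particular to the project studied):

    P(R \mid E) = \frac{P(E \mid R)\, P(R)}
                       {P(E \mid R)\, P(R) + P(E \mid \bar{R})\, P(\bar{R})},

where R is a decomposed political risk event and E the available evidence; decomposition breaks a complex risk into components whose conditional probabilities are easier to elicit objectively.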
Abstract:
To enhance the efficiency of regression parameter estimation by modeling the correlation structure of correlated binary error terms in quantile regression with repeated measurements, we propose a Gaussian pseudolikelihood approach for estimating correlation parameters and selecting the most appropriate working correlation matrix simultaneously. The induced smoothing method is applied to estimate the covariance of the regression parameter estimates, which can bypass density estimation of the errors. Extensive numerical studies indicate that the proposed method performs well in selecting an accurate correlation structure and improving regression parameter estimation efficiency. The proposed method is further illustrated by analyzing a dental dataset.
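A Gaussian pseudolikelihood for the correlation parameter \alpha, written in generic form (our sketch; the paper's criterion and residual definition may differ):

    \ell(\alpha) = -\frac{1}{2} \sum_{i=1}^{n}
      \Bigl( \log\,\lvert R_i(\alpha) \rvert
             + \mathbf{e}_i^{\top} R_i(\alpha)^{-1} \mathbf{e}_i \Bigr),

where \mathbf{e}_i collects the standardised binary residual indicators for subject i and R_i(\alpha) is a candidate working correlation matrix; maximising \ell over both \alpha and the candidate structures (independence, exchangeable, AR(1), ...) estimates the correlation parameters and selects the working structure in a single step.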
Abstract:
The ability of a mortgagor to lodge a caveat, where an allegation is raised of serious impropriety by a mortgagee in exercising power of sale, may be a critical protective tool for a mortgagor. The mortgagor’s capacity, as caveator, to lodge a caveat over the mortgagor’s own title and the nature of the interest necessary to support the caveat are contentious issues. This article examines the different judicial approaches that have been adopted to date and states a case for a uniform approach to this issue across Australia in the future.