984 results for Effectiveness Estimation
Abstract:
Many traffic situations require drivers to cross or merge into a stream having higher priority. Gap acceptance theory enables us to model such processes to analyse traffic operation. This discussion demonstrates that a numerical search, fine-tuned by statistical analysis, can be used to determine the most likely critical gap for a sample of drivers, based on their largest rejected gap and accepted gap. The method shares some features with the Maximum Likelihood Estimation technique (Troutbeck 1992) but lends itself well to contemporary analysis tools such as spreadsheets and is particularly transparent analytically. It is considered not to bias the critical gap estimate towards very small or very large rejected gaps. However, it requires a sample large enough that largest rejected gap/accepted gap pairs are reasonably represented within a fairly narrow highest-likelihood search band.
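The abstract does not spell out the search procedure; the sketch below is one plausible reading, assuming log-normally distributed critical gaps and a plain grid search over the distribution parameters (both assumptions, as are the illustrative data).

```python
# Hedged sketch of a likelihood-based critical-gap search, not the authors'
# exact procedure. Each driver i contributes a pair (largest rejected gap r_i,
# accepted gap a_i), with the critical gap assumed to lie between them.
import numpy as np
from scipy.stats import norm

def log_likelihood(mu, sigma, rejected, accepted):
    """Sum of log P(r_i < critical gap < a_i) under lognormal(mu, sigma)."""
    upper = norm.cdf(np.log(accepted), loc=mu, scale=sigma)
    lower = norm.cdf(np.log(rejected), loc=mu, scale=sigma)
    return np.sum(np.log(np.clip(upper - lower, 1e-12, None)))

def search_critical_gap(rejected, accepted,
                        mu_grid=np.linspace(0.5, 2.5, 201),
                        sigma_grid=np.linspace(0.05, 1.0, 96)):
    """Plain grid search over (mu, sigma); transparent enough for a spreadsheet."""
    best = max(((log_likelihood(m, s, rejected, accepted), m, s)
                for m in mu_grid for s in sigma_grid), key=lambda t: t[0])
    _, mu, sigma = best
    return np.exp(mu + sigma**2 / 2)  # mean critical gap of the fitted lognormal

# Illustrative data (seconds): largest rejected gap and accepted gap per driver.
rejected = np.array([2.1, 3.0, 2.8, 1.9, 3.5])
accepted = np.array([4.5, 5.2, 4.1, 3.8, 6.0])
print(search_critical_gap(rejected, accepted))
```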
Abstract:
Markov chain Monte Carlo (MCMC) estimation provides a solution to the complex integration problems that are faced in the Bayesian analysis of statistical problems. The implementation of MCMC algorithms is, however, code-intensive and time consuming. We have developed a Python package, called PyMCMC, that aids in the construction of MCMC samplers and helps to substantially reduce the likelihood of coding error and the amount of repetitive code. PyMCMC contains classes for Gibbs, Metropolis-Hastings, independent Metropolis-Hastings, random walk Metropolis-Hastings, orientational bias Monte Carlo and slice samplers, as well as specific modules for common models, such as a module for Bayesian regression analysis. PyMCMC is straightforward to optimise, taking advantage of the Python libraries NumPy and SciPy, as well as being readily extensible with C or Fortran.
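PyMCMC's own class-based interface is not reproduced here; as a hedged illustration of the kind of sampler the package constructs, here is a minimal random-walk Metropolis-Hastings routine in plain NumPy.

```python
# A minimal random-walk Metropolis-Hastings sampler, illustrating the
# algorithm PyMCMC packages up; this is NOT PyMCMC's API, just a sketch
# of the underlying technique.
import numpy as np

def random_walk_mh(log_post, theta0, n_samples=5000, step=0.5, rng=None):
    """Sample from a target with log density log_post, starting at theta0."""
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        # Accept with probability min(1, pi(proposal) / pi(theta)).
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain

# Example target: standard bivariate normal.
chain = random_walk_mh(lambda t: -0.5 * np.dot(t, t), theta0=[2.0, -2.0])
print(chain.mean(axis=0))  # should be near [0, 0]
```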
Abstract:
Organisations are increasingly investing in complex technological innovations, such as enterprise information systems, with the aim of improving the operation of the business and thereby gaining competitive advantage. However, the implementation of technological innovations tends to focus excessively on either technology innovation effectiveness or the resulting operational effectiveness, and focusing on either one alone is detrimental to long-term performance. Cross-functional teams have been used by many organisations as a way of involving expertise from different functional areas in the implementation of technologies. The role of boundary-spanning actors is discussed, as they bring a common language to the cross-functional teams. Multiple regression analysis has been used to identify the structural relationships and provide an explanation for the influence of cross-functional teams, technology innovation effectiveness and operational effectiveness on the continuous improvement of operational performance. The findings indicate that cross-functional teams have an indirect influence on continuous improvement of operational performance through the alignment between technology innovation effectiveness and operational effectiveness.
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
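To make the closing remark concrete: for binary 0-1 loss, flipping the labels of the first half of the sample and running empirical risk minimization on the modified data yields the maximal discrepancy. A minimal sketch follows, with a depth-2 decision tree standing in for the ERM learner over the function class (an assumption, not the paper's setup).

```python
# Hedged sketch of the maximal-discrepancy computation for binary
# classification with 0-1 loss, via label flipping on half the sample.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def maximal_discrepancy(X, y, make_erm=lambda: DecisionTreeClassifier(max_depth=2)):
    """max_f (2/n) * [errors on first half - errors on second half].

    For labels in {0, 1}, this equals 1 - 2 * (minimal empirical error
    on the sample with the first half's labels flipped).
    """
    n = len(y) // 2 * 2                            # use an even number of points
    X, y = X[:n], y[:n]
    y_flipped = y.copy()
    y_flipped[: n // 2] = 1 - y_flipped[: n // 2]  # flip first-half labels
    clf = make_erm().fit(X, y_flipped)             # ERM over the modified sample
    err = np.mean(clf.predict(X) != y_flipped)
    return 1.0 - 2.0 * err

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X[:, 0] > 0).astype(int)
print(maximal_discrepancy(X, y))
```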
Abstract:
Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0,1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter β is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains; continuous state, observation and control spaces; multiple agents; higher-order derivatives; and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward.
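A minimal sketch of the estimator described above, under an assumed environment interface (reset/step) and caller-supplied policy functions; this illustrates the published algorithm's structure, not the authors' code.

```python
# Hedged sketch of GPOMDP: a discounted eligibility trace z and a running
# average delta of reward * trace give a biased average-reward gradient.
import numpy as np

rng = np.random.default_rng(1)

def gpomdp(env, theta, grad_log_policy, sample_action, beta=0.9, T=20_000):
    """Single-trajectory estimate of the average-reward gradient.

    Needs storage for only two parameter-sized vectors (z and delta),
    matching the storage claim in the abstract. Bias shrinks as beta -> 1,
    at the price of higher variance.
    """
    z = np.zeros_like(theta)       # discounted eligibility trace
    delta = np.zeros_like(theta)   # running average of reward * trace
    obs = env.reset()
    for t in range(T):
        action = sample_action(theta, obs)
        z = beta * z + grad_log_policy(theta, obs, action)
        obs, reward = env.step(action)
        delta += (reward * z - delta) / (t + 1)
    return delta

# Toy usage: a two-armed bandit (a memoryless POMDP) with a softmax policy.
class Bandit:
    def reset(self):
        return 0                                   # a single dummy observation
    def step(self, action):
        return 0, (1.0 if action == 1 else 0.2)    # (next obs, reward)

def softmax(theta):
    p = np.exp(theta - theta.max())
    return p / p.sum()

def sample_action(theta, obs):
    return rng.choice(len(theta), p=softmax(theta))

def grad_log_policy(theta, obs, action):
    g = -softmax(theta)
    g[action] += 1.0                               # grad of log pi(action)
    return g

print(gpomdp(Bandit(), np.zeros(2), grad_log_policy, sample_action))
```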
Abstract:
One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
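The margin defined above is easy to compute directly; a minimal sketch for a weighted voting classifier (vote weights normalized per example so margins lie in [-1, 1]) follows.

```python
# Margins of training examples under a voting classifier, per the abstract's
# definition: votes for the correct label minus the maximum votes received
# by any incorrect label, here normalized by the total vote weight.
import numpy as np

def voting_margins(votes, y):
    """votes: (n_examples, n_classes) array of (weighted) vote totals.
    y: (n_examples,) correct labels. Returns margins in [-1, 1]."""
    votes = votes / votes.sum(axis=1, keepdims=True)  # normalize per example
    n = len(y)
    correct = votes[np.arange(n), y]
    wrong = votes.copy()
    wrong[np.arange(n), y] = -np.inf                  # mask out the correct label
    return correct - wrong.max(axis=1)

votes = np.array([[7.0, 2.0, 1.0],    # confidently correct -> large margin
                  [4.0, 5.0, 1.0]])   # misclassified -> negative margin
print(voting_margins(votes, y=np.array([0, 0])))  # approx [0.5, -0.1]
```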
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
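In symbols, the selection rule and the resulting oracle inequality take roughly the following form; the notation is illustrative rather than taken from the paper.

```latex
% Generic penalized model selection over a nested sequence F_1 ⊆ F_2 ⊆ ...,
% with empirical risk \hat{R}_n and a data-based penalty pen_n(k).
\[
  \hat{f}_k = \arg\min_{f \in \mathcal{F}_k} \hat{R}_n(f), \qquad
  \hat{k} = \arg\min_{k} \Big( \hat{R}_n(\hat{f}_k) + \mathrm{pen}_n(k) \Big),
\]
% and the oracle inequality bounds the risk of the selected function by the
% best trade-off of approximation error and penalty:
\[
  R\big(\hat{f}_{\hat{k}}\big) \;\le\;
  \min_{k} \Big( \inf_{f \in \mathcal{F}_k} R(f) + C \, \mathrm{pen}_n(k) \Big).
\]
```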
Abstract:
We present a technique for estimating the 6DOF pose of a PTZ camera by tracking, in the image, a single moving target with known 3D position. This is useful in situations where it is not practical to measure the camera pose directly. Our application domain is estimating the pose of a PTZ camera so that it can be used for automated GPS-based tracking and filming of UAV flight trials. We present results which show that the technique is able to localize a PTZ after a short vision-tracked flight, and that the estimated pose is sufficiently accurate for the PTZ to then actively track a UAV based on GPS position data.
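The abstract does not detail the estimation algorithm. As a hedged illustration of the general approach, once the tracked target has supplied several 2D-3D correspondences (image detections paired with GPS-derived 3D positions), a standard perspective-n-point (PnP) solve recovers a 6DOF pose; all numeric values and the intrinsic matrix K below are invented for illustration.

```python
# Hedged illustration, not the paper's algorithm: pose recovery from
# accumulated 2D-3D correspondences via OpenCV's PnP solver.
import numpy as np
import cv2

# Accumulated over the flight: 3D target positions (world frame, metres)
# and the matching image detections (pixels). Values are illustrative.
object_points = np.array([[0, 0, 30], [10, 0, 32], [20, 5, 35],
                          [15, 15, 31], [5, 20, 33], [0, 10, 34]], dtype=np.float64)
image_points = np.array([[320, 240], [400, 235], [470, 228],
                         [430, 300], [350, 330], [310, 290]], dtype=np.float64)

# Intrinsics from a prior calibration of the PTZ at this zoom setting (assumed).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)       # rotation matrix from Rodrigues vector
    camera_position = -R.T @ tvec    # camera centre in world coordinates
    print(camera_position.ravel())
```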
Abstract:
The objective of this thesis is to investigate the corporate governance attributes of smaller listed Australian firms. This study is motivated by evidence that these firms are associated with more regulatory concerns, the introduction of the ASX Corporate Governance Recommendations in 2004, and a paucity of research to guide regulators and stakeholders of smaller firms. While there is an extensive body of literature examining the effectiveness of corporate governance, the literature principally focuses on larger companies, resulting in a deficiency in the understanding of the nature and effectiveness of corporate governance in smaller firms. Based on a review of agency theory literature, a theoretical model is developed that posits that agency costs are mitigated by internal governance mechanisms and transparency. The model includes external governance factors, but in many smaller firms these factors are potentially absent, increasing the reliance on the internal governance mechanisms of the firm. Based on the model, the observed greater regulatory intervention in smaller companies may be due to sub-optimal internal governance practices. Accordingly, this study addresses four broad research questions (RQs). First, what is the extent and nature of the ASX Recommendations that have been adopted by smaller firms (RQ1)? Second, what firm characteristics explain differences in the recommendations adopted by smaller listed firms (RQ2), and third, what firm characteristics explain changes in the governance of smaller firms over time (RQ3)? Fourth, how effective are the corporate governance attributes of smaller firms (RQ4)? Six hypotheses are developed to address the RQs. The first two hypotheses explore the extent and nature of corporate governance, while the remaining hypotheses evaluate its effectiveness. A time-series, cross-sectional approach is used to evaluate the effectiveness of governance. Three models, based on individual governance attributes, an index of six items derived from the literature, and an index based on the full list of ASX Recommendations, are developed and tested using a sample of 298 smaller firms with annual observations over a five-year period (2002-2006) before and after the introduction of the ASX Recommendations in 2004. With respect to RQ1, the results reveal that the overall adoption of the recommendations increased from 66 per cent in 2004 to 74 per cent in 2006. Interestingly, the adoption rate for recommendations regarding the structure of the board and formation of committees is significantly lower than the rates for other categories of recommendations. With respect to RQ2, the results reveal that variations in rates of adoption are explained by key firm differences including firm size, profitability, board size, audit quality, and ownership dispersion, while the results for RQ3 were inconclusive. With respect to RQ4, the results provide support for the association between better governance and superior accounting-based performance. In particular, the results highlight the importance of the independence of both the board and audit committee chairs, and of greater accounting-based expertise on the audit committee. In contrast, while there is little evidence that a majority independent board is associated with superior outcomes, there is evidence linking board independence with adverse audit opinion outcomes.
These results suggest that board and chair independence are substitutes; in the presence of an independent chair a majority independent board may be an unnecessary and costly investment for smaller firms. The findings make several important contributions. First, the findings contribute to the literature by providing evidence on the extent, nature and effectiveness of governance in smaller firms. The findings also contribute to the policy debate regarding future development of Australia’s corporate governance code. The findings regarding board and chair independence, and audit committee characteristics, suggest that policy-makers could consider providing additional guidance for smaller companies. In general, the findings offer support for the “if not, why not?” approach of the ASX, rather than a prescriptive rules-based approach.
Abstract:
Estimates of the half-life to convergence of prices across a panel of cities are subject to bias from three potential sources: inappropriate cross-sectional aggregation of heterogeneous coefficients, presence of lagged dependent variables in a model with individual fixed effects, and time aggregation of commodity prices. This paper finds no evidence of heterogeneity bias in annual CPI data for 17 U.S. cities from 1918 to 2006, but correcting for the “Nickell bias” and time aggregation bias produces a half-life of 7.5 years, shorter than estimates from previous studies.
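For context, the half-life reported here follows the standard AR(1) convention; the notation below is illustrative rather than taken from the paper.

```latex
% Half-life h of a shock when relative city prices follow an AR(1)
% process q_t = \rho q_{t-1} + \varepsilon_t: the shock halves when
\[
  \rho^{h} = \tfrac{1}{2}
  \quad\Longrightarrow\quad
  h = \frac{\ln(1/2)}{\ln \rho} ,
\]
% so the reported half-life of 7.5 years corresponds to \rho \approx 0.912
% with annual data.
```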
Abstract:
Background: Providing ongoing family-centred support is an integral part of childhood cancer care. For families living in regional and remote areas, opportunities to receive specialist support are limited by the availability of health care professionals and by accessibility, which is often reduced due to distance, time, cost and transport. The primary aim of this work is to investigate the cost-effectiveness of videotelephony to support regional and remote families returning home for the first time with a child newly diagnosed with cancer.
Methods/design: We will recruit 162 paediatric oncology patients and their families to a single-centre randomised controlled trial. Patients from regional and remote areas, classified by an Accessibility/Remoteness Index of Australia (ARIA+) score greater than 0.2, will be randomised to a videotelephone support intervention or a usual support control group. Metropolitan families (ARIA+ ≤ 0.2) will be recruited as an additional usual support control group. Families allocated to the videotelephone support intervention will have access to usual support plus education, communication, counselling and monitoring with specialist multidisciplinary team members via a videotelephone service for a 12-week period following first discharge home. Families in the usual support control group will receive standard care, i.e., specialist multidisciplinary team members provide support either face-to-face during inpatient stays, outpatient clinic visits or home visits, or via telephone for families who live far away from the hospital. The primary outcome measure is parental health-related quality of life, as measured using the Medical Outcomes Study (MOS) Short Form SF-12 at baseline, 4 weeks, 8 weeks and 12 weeks. The secondary outcome measures are: parental informational and emotional support; parental perceived stress; parent-reported patient quality of life and parent-reported sibling quality of life; parental satisfaction with care; cost of providing improved support; health care utilisation; and financial burden for families.
Discussion: This investigation will establish the feasibility, acceptability and cost-effectiveness of using videotelephony to improve the clinical and psychosocial support provided to regional and remote paediatric oncology patients and their families.