51 results for Jacob, P. L., 1806-1884.

in the Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

In just one of the many extraordinary moments during the spectacular Opening Ceremony of the 2012 London Olympic Games, thirty Mary Poppinses floated into the stadium on their umbrellas to battle a 40-foot-long inflatable Lord Voldemort. This multi-million pound extravaganza was telecast to a global audience of over one billion people, highlighting to great effect the grandeur and eccentricities of the host nation, and featuring uniquely British icons such as Mr Bean, James Bond, The Beatles and Harry Potter, as well as those quintessential icons of Englishness, the Royal Family, double-decker red buses and the National Health Service.

Relevance: 100.00%

Abstract:

Background: Given escalating rates of chronic disease, broad-reach and cost-effective interventions to increase physical activity and improve dietary intake are needed. The cost-effectiveness of a Telephone Counselling intervention to improve physical activity and diet, targeting adults with established chronic diseases in a low socio-economic area of a major Australian city, was examined. Methodology/Principal Findings: A cost-effectiveness modelling study was conducted using data collected between February 2005 and November 2007 from a cluster-randomised trial that compared Telephone Counselling with a “Usual Care” (brief intervention) alternative. Economic outcomes were assessed using a state-transition Markov model, which predicted the progress of participants through five health states relating to physical activity and dietary improvement, for ten years after recruitment. The costs and health benefits of Telephone Counselling, Usual Care and an existing practice (Real Control) group were compared. Telephone Counselling compared to Usual Care was not cost-effective ($78,489 per quality adjusted life year gained). However, the Usual Care group did not represent existing practice and is not a useful comparator for decision making. Comparing Telephone Counselling outcomes to existing practice (Real Control), the intervention was found to be cost-effective ($29,375 per quality adjusted life year gained). Usual Care (brief intervention) compared to existing practice (Real Control) was also cost-effective ($12,153 per quality adjusted life year gained). Conclusions/Significance: This modelling study shows that a decision to adopt a Telephone Counselling program over existing practice (Real Control) is likely to be cost-effective. Choosing the ‘Usual Care’ brief intervention over existing practice (Real Control) shows a lower cost per quality adjusted life year, but the lack of supporting evidence for its efficacy or sustainability is an important consideration for decision makers. The economics of behavioural approaches to improving health must be made explicit if decision makers are to be convinced that allocating resources toward such programs is worthwhile.
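
As a rough illustration of the kind of state-transition (Markov) cohort model described above, the sketch below advances a hypothetical cohort through five health states over a ten-year horizon and accumulates discounted costs and quality adjusted life years. All state names, transition probabilities, costs and utility weights are illustrative placeholders, not the study's inputs.

    import numpy as np

    # Minimal cohort-style state-transition (Markov) model sketch. Every number
    # below is an illustrative placeholder, not a value from the trial.
    states = ["inactive/poor diet", "improving", "meets guidelines", "relapsed", "dead"]
    P = np.array([  # annual transition probabilities (each row sums to 1)
        [0.60, 0.25, 0.05, 0.05, 0.05],
        [0.10, 0.50, 0.30, 0.05, 0.05],
        [0.05, 0.10, 0.70, 0.10, 0.05],
        [0.30, 0.20, 0.10, 0.35, 0.05],
        [0.00, 0.00, 0.00, 0.00, 1.00],
    ])
    annual_cost = np.array([1200.0, 900.0, 600.0, 1100.0, 0.0])  # cost per person-year
    utility     = np.array([0.70,   0.78,  0.85,  0.72,  0.0])   # QALY weights

    cohort = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # everyone starts in state 0
    total_cost, total_qaly, discount = 0.0, 0.0, 0.05
    for year in range(10):                         # ten-year horizon, as in the study
        df = 1.0 / (1.0 + discount) ** year
        total_cost += df * cohort @ annual_cost
        total_qaly += df * cohort @ utility
        cohort = cohort @ P                        # advance the cohort one annual cycle

    print(f"Expected cost ${total_cost:,.0f}, QALYs {total_qaly:.2f} per person")

Comparing two such models (intervention vs. comparator) and dividing the cost difference by the QALY difference gives the cost per quality adjusted life year figures quoted above.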

Relevance: 100.00%

Abstract:

The construction industry is dynamic in nature, and the concept of project success remains ambiguously defined within it. Project success is the ultimate goal of every project, yet it means different things to different people. While some writers consider time, cost and quality the predominant criteria, others suggest that success is something more complex. The aim of this paper is to develop a framework for measuring the success of construction projects. A set of key performance indicators (KPIs), measured both objectively and subjectively, is developed through a comprehensive literature review. The validity of the proposed KPIs is then tested by three case studies, and the limitations of the suggested KPIs are discussed. With the development of KPIs, a benchmark for measuring the performance of a construction project can be set. The framework also provides significant insights into developing a general and comprehensive base for further research.

Relevance: 100.00%

Abstract:

Background. The objective is to estimate the cost-effectiveness of an intervention that reduces hospital readmission among older people at high risk. A cost-effectiveness model to estimate the costs and health benefits of the intervention was implemented. Methodology/Principal Findings. The model used data from a randomised controlled trial conducted in an Australian tertiary metropolitan hospital. Participants were acute medical admissions aged >65 years with at least one risk factor for readmission: multiple comorbidities, impaired functionality, aged >75 years, recent multiple admissions, poor social support, history of depression. The intervention was a comprehensive nursing and physiotherapy assessment and an individually tailored program of exercise strategies and nurse home visits with telephone follow-up, commencing in hospital and continuing following discharge for 24 weeks. The change to cost outcomes, including the costs of implementing the intervention and all subsequent use of health care services, and the change to health benefits, represented by quality adjusted life years, were estimated for the intervention as compared to existing practice. The mean change to total costs and quality adjusted life years for an average individual participating in the intervention over 24 weeks were: cost savings of $333 (95% Bayesian credible interval $-1,932:1,282) and 0.118 extra quality adjusted life years (95% Bayesian credible interval 0.1:0.136). The mean net monetary benefit per individual for the intervention group compared to the usual care condition was $7,907 (95% Bayesian credible interval $5,959:$9,995) for the 24 week period. Conclusions/Significance. The estimation model that describes this intervention predicts cost savings and improved health outcomes. A decision to remain with existing practices causes unnecessary costs and reduced health. Decision makers should consider adopting this program for elderly hospitalised patients.
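
The net monetary benefit figure reported above combines the QALY gain and the cost change at a willingness-to-pay threshold. The sketch below shows that arithmetic using the mean values from the abstract, but with an assumed threshold of $50,000 per QALY; the threshold actually used in the study is not stated here.

    # Hedged sketch of a net monetary benefit (NMB) calculation.
    # The willingness-to-pay threshold is an illustrative assumption.

    def net_monetary_benefit(delta_qalys, delta_cost, wtp_per_qaly):
        """NMB = (QALYs gained x willingness to pay) - incremental cost."""
        return delta_qalys * wtp_per_qaly - delta_cost

    # Mean values reported above: 0.118 QALYs gained and a cost *saving* of $333
    # over 24 weeks, i.e. an incremental cost of -$333.
    print(net_monetary_benefit(delta_qalys=0.118, delta_cost=-333.0,
                               wtp_per_qaly=50_000.0))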

Relevance: 100.00%

Abstract:

Queensland’s labour dispute case history does not reflect the trend observed in Canada and Switzerland (Gravel & Delpech, 2008), where cases citing International Labour Standards (ILSs) are often successful. The two Queensland cases that have drawn on ILSs (Kuhler v. Inghams Enterprises P/L & Anor, 1997 and Bale v. Seltsam Pty Ltd, 1996) were both lost. Australia is a member state of the International Labour Organization (ILO) and a signatory to many ILSs, yet ILSs are rarely used in their legal capacity compared with other international standards in other areas of law; they are uniquely underutilized in labour law. Australian environmental, criminal, and industrial disputes consistently draw on international standards, so why not for the plight of workers? ILSs draw their power from supranational influence: when a case cites an ILS, the barrister or solicitor is going beyond legal precedent and invoking international peer pressure. An ILS can appropriately be used to highlight that Australian or Queensland legislation does not conform to a Convention or Recommendation, and where a case deals with a breach of existing law based on or modified by an ILS, citing the ILS is a good way to remind the court of its origin. This is a new legal paradigm critically lacking in Queensland’s labour law practice. The paper first describes the research methodology, then presents a comparative discussion of the prevalence of ILSs and other international standards in Queensland case history, and finally discusses evidence of the international trend of labour disputes successfully invoking ILSs.

Relevance: 100.00%

Abstract:

The impact of technology on the health and well-being of workers has been a topic of interest since computers and computerized technology were widely introduced in the 1980s. Of recent concern is the impact of rapid technological advances on individuals’ psychological well-being, especially due to advancements in mobile technology which have increased many workers’ accessibility and expected productivity. In this chapter we focus on the associations between occupational stress and technology, especially behavioral and psychological reactions. We discuss some key facilitators and barriers associated with users’ acceptance of and engagement with information and communication technology. We conclude with recommendations for ongoing research on managing occupational health and well-being in conjunction with technological advancements.

Relevance: 100.00%

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
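
The bound above can be read as a complexity penalty that depends on the per-unit weight magnitude bound A rather than on the number of weights. The small sketch below simply evaluates the A³√((log n)/m) term, omitting constants and the log A and log m factors, as in the statement of the result.

    import math

    # Illustrative evaluation of the weight-dependent complexity term from the
    # bound above: A^3 * sqrt(log(n) / m), with constants and log factors omitted.

    def complexity_term(A, n, m):
        """A: bound on the sum of weight magnitudes per unit,
        n: input dimension, m: number of training patterns."""
        return A ** 3 * math.sqrt(math.log(n) / m)

    # The term shrinks as the sample size m grows, independently of network size.
    for m in (1_000, 10_000, 100_000):
        print(m, round(complexity_term(A=2.0, n=100, m=m), 4))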

Relevance: 100.00%

Abstract:

This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension, and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.

Relevance: 100.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
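
The abstract notes that the maximal discrepancy penalty can be computed by empirical risk minimization over the training data with some labels flipped. The sketch below shows one way that computation could look for 0-1 loss on binary labels, with erm standing in for any scikit-learn-style classifier; the interface and normalisation are illustrative assumptions, not the paper's code.

    import numpy as np

    # Hedged sketch: estimate the maximal-discrepancy penalty by running the same
    # empirical risk minimiser on the training set with the labels of the second
    # half flipped. `erm` is assumed to expose fit/predict (scikit-learn style).

    def maximal_discrepancy_penalty(erm, X, y):
        n = len(y)
        y_flipped = y.copy()
        y_flipped[n // 2:] *= -1               # labels in {-1, +1}; flip second half
        model = erm.fit(X, y_flipped)
        errs = (model.predict(X) != y_flipped)
        # If erm minimises error on the flipped problem, the discrepancy between the
        # two halves equals 1 - 2 * (training error on the flipped problem).
        return 1.0 - 2.0 * errs.mean()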

Relevance: 100.00%

Abstract:

Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0,1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter β is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward.
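
A schematic sketch of the kind of gradient estimator described above: an eligibility trace of log-policy gradients discounted by β, averaged against observed rewards, using only twice the parameter storage. The env and grad_log_policy interfaces below are assumptions made for illustration and are not taken from the paper.

    import numpy as np

    # Schematic GPOMDP-style gradient estimate. `env.step` and `grad_log_policy`
    # are assumed interfaces: the policy samples an action and returns the gradient
    # of the log action probability with respect to the parameters.

    def gpomdp_gradient(env, policy_params, grad_log_policy, beta=0.9, T=10_000):
        z = np.zeros_like(policy_params)      # eligibility trace
        delta = np.zeros_like(policy_params)  # running gradient estimate
        obs = env.reset()
        for t in range(1, T + 1):
            action, glp = grad_log_policy(policy_params, obs)
            obs, reward = env.step(action)
            z = beta * z + glp                 # only 2x |theta| storage is needed
            delta += (reward * z - delta) / t  # running average of reward * trace
        return delta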

Relevance: 100.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space, which are classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data, one can learn an embedding for the unlabeled part as well. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
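
A minimal sketch of one convex formulation in this spirit, using cvxpy: learn a positive semidefinite combination of given base kernel matrices over the combined training and test points by maximizing alignment between the training block of the kernel and the label matrix, subject to a trace constraint. The alignment objective, the inputs K_list, y and n_tr, and the default solver are illustrative assumptions rather than the paper's exact formulation.

    import numpy as np
    import cvxpy as cp

    # Hedged sketch: learn a PSD combination K = sum_i mu_i * K_i of base kernels
    # (each K_i covers training + test points) by maximizing training-label
    # alignment <K_tr, y y^T> under trace(K) = c. Not the paper's full SDP.

    def learn_kernel(K_list, y, n_tr, c=1.0):
        n = K_list[0].shape[0]
        mu = cp.Variable(len(K_list))
        K = cp.Variable((n, n), PSD=True)            # combined kernel, PSD by construction
        combo = sum(mu[i] * K_list[i] for i in range(len(K_list)))
        objective = cp.Maximize(cp.sum(cp.multiply(K[:n_tr, :n_tr], np.outer(y, y))))
        constraints = [K == combo, cp.trace(K) == c]
        cp.Problem(objective, constraints).solve()
        return K.value                               # learned kernel over train + test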

Relevance: 100.00%

Abstract:

One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
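
Following the margin definition above, the sketch below computes per-example margins for a voting classifier from an array of accumulated vote weights; the inputs are hypothetical and only illustrate the definition.

    import numpy as np

    # Margin of an example under a voting classifier: normalized vote weight on the
    # correct label minus the largest vote weight on any incorrect label.

    def voting_margins(votes, y):
        votes = votes / votes.sum(axis=1, keepdims=True)  # normalize vote weights
        n = len(y)
        correct = votes[np.arange(n), y]
        masked = votes.copy()
        masked[np.arange(n), y] = -np.inf                 # exclude the correct label
        best_wrong = masked.max(axis=1)
        return correct - best_wrong                       # positive means correctly classified

    margins = voting_margins(np.array([[0.7, 0.2, 0.1],
                                       [0.3, 0.5, 0.2]]), np.array([0, 2]))
    print(margins)  # [ 0.5  -0.3 ]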

Relevance: 100.00%

Abstract:

We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of n·B(n−1, d−1)/B(n, d) < d, where B(n, d) denotes the partial binomial sum C(n, 0) + … + C(n, d), which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling) that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
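
The quoted density bound is easy to check numerically. The sketch below verifies n·B(n−1, d−1)/B(n, d) < d for small n and d, where B(n, d) is the partial binomial sum; this is only a sanity check of the stated inequality, not part of the paper's proof.

    from math import comb

    # Numerical check of the density bound quoted above:
    #     n * B(n-1, d-1) / B(n, d) < d,  where B(n, d) = sum_{i <= d} C(n, i).

    def below(n, d):
        """Partial binomial sum C(n, 0) + ... + C(n, d)."""
        return sum(comb(n, i) for i in range(d + 1))

    for n in range(2, 30):
        for d in range(1, n):
            ratio = n * below(n - 1, d - 1) / below(n, d)
            assert ratio < d, (n, d, ratio)
    print("density bound holds on all tested (n, d) pairs")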

Relevance: 100.00%

Abstract:

Machine learning has become a valuable tool for detecting and preventing malicious activity. However, as more applications employ machine learning techniques in adversarial decision-making situations, increasingly powerful attacks become possible against machine learning systems. In this paper, we present three broad research directions towards the end of developing truly secure learning. First, we suggest that finding bounds on adversarial influence is important to understand the limits of what an attacker can and cannot do to a learning system. Second, we investigate the value of adversarial capabilities: the success of an attack depends largely on what types of information and influence the attacker has. Finally, we propose directions in technologies for secure learning and suggest lines of investigation into secure techniques for learning in adversarial environments. We intend this paper to foster discussion about the security of machine learning, and we believe that the research directions we propose represent the most important directions to pursue in the quest for secure learning.

Relevance: 100.00%

Abstract:

The topic of fault detection and diagnostics (FDD) is studied from the perspective of proactive testing. Unlike most research in the diagnosis area, which analyses system outputs for diagnosis purposes, this paper focuses on the other side of the problem: manipulating system inputs for better diagnostic reasoning. In other words, the question addressed here is how diagnostic mechanisms can direct system inputs to improve diagnosis analysis. It is shown how the problem can be formulated as a decision-making problem coupled with a Bayesian network based diagnostic mechanism. The developed mechanism is applied to the problem of supervised testing in HVAC systems.
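
As a simplified stand-in for the decision-making formulation described above, the sketch below picks the candidate test input whose outcome is expected to shrink the entropy of a Bayesian fault posterior the most. The fault hypotheses, prior and likelihood tables are hypothetical and are not taken from the paper's HVAC case study.

    import numpy as np

    # Choose the test input expected to be most informative about the fault state,
    # using a simple discrete Bayesian update. All numbers are hypothetical.

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    def expected_posterior_entropy(prior, likelihood):
        """likelihood[o, f] = P(outcome o | fault f) for one candidate test input."""
        p_outcome = likelihood @ prior                        # P(o)
        posteriors = likelihood * prior / p_outcome[:, None]  # P(f | o), one row per outcome
        return sum(p_o * entropy(post) for p_o, post in zip(p_outcome, posteriors))

    prior = np.array([0.5, 0.3, 0.2])                         # three fault hypotheses
    tests = {
        "raise setpoint": np.array([[0.9, 0.2, 0.5], [0.1, 0.8, 0.5]]),
        "close damper":   np.array([[0.6, 0.5, 0.1], [0.4, 0.5, 0.9]]),
    }
    best = min(tests, key=lambda t: expected_posterior_entropy(prior, tests[t]))
    print("most informative test input:", best)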