859 results for Conditional expected utility
Abstract:
Background: Initiatives to promote utility cycling in countries like Australia and the US, which have low rates of utility cycling, may be more effective if they first target recreational cyclists. This study aimed to describe patterns of utility cycling and examine its correlates among cyclists in Queensland, Australia. Methods: An online survey was administered to adult members of a state-based cycling community and advocacy group (n=1813). The survey asked about demographic characteristics and about cycling behavior, motivators and constraints. Utility cycling patterns were described, and logistic regression modeling was used to examine associations between utility cycling and other variables. Results: Forty-seven percent of respondents reported utility cycling; most did so to commute (86%). Most journeys (83%) were >5 km. Being male, younger, employed full-time, or university-educated increased the likelihood of utility cycling (p<0.05). Perceiving cycling to be a cheap or convenient form of transport was also associated with utility cycling (p<0.05). Conclusions: The moderate rate of utility cycling among recreational cyclists highlights the potential to promote utility cycling among this group. To increase utility cycling, strategies should target female and older recreational cyclists and focus on making cycling a cheap and convenient mode of transport.
Abstract:
The study proposes to test the ‘IS-Impact’ index as Analytic Theory (AT), with five aims: (a) to methodically evaluate the ‘relevance’ qualities of IS-Impact, namely Utility and Intuitiveness; (b) in doing so, to document an exemplar of ‘a rigorous approach to relevance’; (c) to treat the overarching study as a higher-order case study with AT as the unit of analysis, assessing the adequacy of the six AT qualities both for IS-Impact and for similar taxonomies; (d) to look beyond IS-Impact to other forms of Design Science, considering the generality of the AT qualities; and (e) to further validate IS-Impact in new system and organisational contexts, taking account of contemporary understandings of construct theorisation, operationalisation and validation.
Abstract:
We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.
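A minimal numerical sketch of the kind of surrogate described above, using the piecewise-linear ‘generalized hinge’ loss commonly studied for classification with a reject option; the exact φ analysed in the paper may differ, and the rejection cost d, threshold, and function names here are illustrative assumptions:

```python
import numpy as np

def generalized_hinge(z, d=0.3):
    """Piecewise-linear convex surrogate for classification with a reject
    option (illustrative; the exact phi studied in the paper may differ).
    z = y * f(x) is the margin and d in (0, 1/2] is the cost of abstaining.
    The steeper slope (1 - d) / d for negative margins penalises confident
    mistakes more heavily than abstentions."""
    a = (1.0 - d) / d
    return np.maximum.reduce([1.0 - a * z, 1.0 - z, np.zeros_like(z)])

def predict_with_reject(f_x, threshold=0.5):
    """Classify by the sign of the score; abstain (return 0) when |f(x)| is small."""
    return np.where(np.abs(f_x) < threshold, 0, np.sign(f_x))

# Minimal usage on a few hypothetical margins and scores.
margins = np.array([-1.0, 0.2, 1.5])
print(generalized_hinge(margins, d=0.3))
print(predict_with_reject(np.array([-0.9, 0.1, 0.8])))
```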
Abstract:
Log-linear and maximum-margin models are two commonly-used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; the dual in both the log-linear and max-margin cases corresponds to minimizing a convex function with simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires a factor of n less computation to reach a desired accuracy than the batch EG algorithm, where n is the number of training examples. Our experiments confirm that the online algorithms are much faster than the batch algorithms in practice. We describe how the EG updates factor in a convenient way for structured prediction problems, allowing the algorithms to be efficiently applied to problems such as sequence learning or natural language parsing. We perform extensive evaluation of the algorithms, comparing them to L-BFGS and stochastic gradient descent for log-linear models, and to SVM-Struct for max-margin models. The algorithms are applied to a multi-class problem as well as to a more complex large-scale parsing task. In all these settings, the EG algorithms presented here outperform the other methods.
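As an illustration of the core update, the following sketch applies a generic exponentiated-gradient step to a convex objective over the probability simplex; the step size and the toy quadratic problem are placeholders, not the paper's actual log-linear or max-margin duals:

```python
import numpy as np

def eg_update(u, grad, eta):
    """One exponentiated-gradient step on the probability simplex:
    multiply by exp(-eta * gradient), then renormalise so the dual
    variables stay non-negative and sum to one."""
    w = u * np.exp(-eta * grad)
    return w / w.sum()

def minimize_on_simplex(grad_fn, dim, eta=0.5, iters=200):
    """Batch EG: repeatedly apply the update with the full gradient.
    grad_fn maps a point on the simplex to the gradient of the convex
    objective at that point (hypothetical placeholder)."""
    u = np.full(dim, 1.0 / dim)            # start at the uniform distribution
    for _ in range(iters):
        u = eg_update(u, grad_fn(u), eta)
    return u

# Toy usage: minimise the quadratic <u, Au>/2 over the simplex.
rng = np.random.default_rng(0)
A = rng.random((5, 5))
A = A @ A.T                                # random positive semidefinite matrix
u_star = minimize_on_simplex(lambda u: A @ u, dim=5)
print(u_star, u_star.sum())
```

An online variant of the same sketch would replace the full gradient with a per-example gradient at each step, which is the source of the factor-of-n saving discussed above.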
Abstract:
One of the nice properties of kernel classifiers such as SVMs is that they often produce sparse solutions. However, the decision functions of these classifiers cannot always be used to estimate the conditional probability of the class label. We investigate the relationship between these two properties and show that these are intimately related: sparseness does not occur when the conditional probabilities can be unambiguously estimated. We consider a family of convex loss functions and derive sharp asymptotic results for the fraction of data that becomes support vectors. This enables us to characterize the exact trade-off between sparseness and the ability to estimate conditional probabilities for these loss functions.
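The trade-off can be seen pointwise by comparing minimisers of the conditional risk for two convex losses: the hinge loss, whose minimiser saturates at ±1 and therefore cannot reveal P(Y=1|X) (but yields sparse solutions), and the logistic loss, whose minimiser equals the log-odds (so the conditional probability is recoverable, at the price of sparseness). A small sketch, with the grid of probabilities chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def conditional_risk(loss, f, eta):
    """Expected loss at a point with P(Y=1|X=x) = eta, for score f."""
    return eta * loss(f) + (1.0 - eta) * loss(-f)

hinge = lambda z: np.maximum(0.0, 1.0 - z)
logistic = lambda z: np.log1p(np.exp(-z))

for eta in (0.2, 0.5, 0.8):
    f_hinge = minimize_scalar(lambda f: conditional_risk(hinge, f, eta),
                              bounds=(-5, 5), method="bounded").x
    f_logit = minimize_scalar(lambda f: conditional_risk(logistic, f, eta),
                              bounds=(-20, 20), method="bounded").x
    # Hinge: the minimiser sits at +/-1 whenever eta != 1/2, so eta itself is
    # not identifiable from f.  Logistic: the minimiser is log(eta/(1-eta)),
    # so eta can be recovered -- but the fitted solution is not sparse.
    print(eta, round(f_hinge, 2), round(f_logit, 2),
          round(np.log(eta / (1 - eta)), 2))
```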
Abstract:
Background: Not all cancer patients receive state-of-the-art care, and providing regular feedback to clinicians might reduce this problem. The purpose of this study was to assess the utility of various data sources in providing feedback on the quality of cancer care. Methods: Published clinical practice guidelines were used to obtain a list of processes-of-care of interest to clinicians. These were assigned to one of four data categories according to their availability and the marginal cost of using them for feedback. Results: Only 8 (3%) of 243 processes-of-care could be measured using population-based registry or administrative inpatient data (lowest cost). A further 119 (49%) could be measured using a core clinical registry, which contains information on important prognostic factors (e.g., clinical stage, physiological reserve, hormone-receptor status). Another 88 (36%) required an expanded clinical registry or medical record review, mainly because they concerned long-term management of disease progression (recurrences and metastases), and 28 (11.5%) required patient interview or audio-taping of consultations because they involved information sharing between clinician and patient. Conclusion: The advantages of population-based cancer registries and administrative inpatient data are wide coverage and low cost. The disadvantage is that they currently contain information on only a few processes-of-care. In most jurisdictions, clinical cancer registries, which can be used to report on many more processes-of-care, do not cover smaller hospitals. If we are to provide feedback about all patients, not just those in larger academic hospitals with the most developed data systems, then we need to develop sustainable population-based data systems that capture information on prognostic factors at the time of initial diagnosis and information on management of disease progression.
Abstract:
The measurement error model is a well-established statistical method for regression problems in the medical sciences, although it is rarely used in ecological studies. While the situations in which it is appropriate may be less common in ecology, there are instances in which there may be benefits in its use for prediction and estimation of parameters of interest. We have chosen to explore this topic using a conditional independence model in a Bayesian framework with a Gibbs sampler, as this gives a great deal of flexibility, allowing us to analyse a number of different models without losing generality. Using simulations and two examples, we show how the conditional independence model can be used in ecology, and when it is appropriate.
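A minimal sketch of a Gibbs sampler for one simple version of such a model, assuming a classical normal measurement-error structure with known variances and no intercept; the model, priors, and parameter values are illustrative assumptions, not those analysed in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate data from the assumed model: true covariate x, error-prone
# observation w = x + u, outcome y = beta * x + e (variances treated as
# known for brevity; a fuller analysis would also place priors on them).
n, beta_true = 200, 0.8
sigma_u, sigma_e, tau_x, mu_x, sigma_b = 0.5, 0.3, 1.0, 0.0, 10.0
x_true = rng.normal(mu_x, tau_x, n)
w = x_true + rng.normal(0.0, sigma_u, n)
y = beta_true * x_true + rng.normal(0.0, sigma_e, n)

beta, x = 0.0, w.copy()
draws = []
for it in range(3000):
    # 1. beta | x, y: conjugate normal update for the regression slope.
    prec_b = x @ x / sigma_e**2 + 1.0 / sigma_b**2
    mean_b = (x @ y / sigma_e**2) / prec_b
    beta = rng.normal(mean_b, np.sqrt(1.0 / prec_b))
    # 2. x_i | beta, w_i, y_i: combines the prior, the measurement model
    #    and the outcome model (all normal, so the update is closed form).
    prec_x = 1.0 / tau_x**2 + 1.0 / sigma_u**2 + beta**2 / sigma_e**2
    mean_x = (mu_x / tau_x**2 + w / sigma_u**2 + beta * y / sigma_e**2) / prec_x
    x = rng.normal(mean_x, np.sqrt(1.0 / prec_x))
    if it >= 1000:                      # discard burn-in draws
        draws.append(beta)

print("posterior mean of beta:", np.mean(draws))
```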
Abstract:
In recent years a great deal of case law has been generated in relation to mortgages where the mortgagee has not engaged in adequate identity verification of the mortgagor and the mortgage has subsequently been found to be forged. As a result, careless mortgagee provisions operate in Queensland as an exception to indefeasibility. Similar provisions are expected to commence soon in New South Wales. This article examines the mortgagee’s position with the benefit of indefeasibility and then considers the impact of the careless mortgagee provisions on the rights of a mortgagee under a forged mortgage, concluding that the provisions significantly change the dynamic between a registered mortgagee and a registered owner who has not signed the mortgage. These provisions appear to give the mortgagee a conditional indefeasibility, with the intention of reducing the State’s exposure to the payment of compensation in the case of identity fraud. They are, however, more successful in the case of forgery by a third party than in the case of forgery by a co-owner.
Abstract:
We consider a robust filtering problem for uncertain discrete-time, homogeneous, first-order, finite-state hidden Markov models (HMMs). The class of uncertain HMMs considered is described by a conditional relative entropy constraint on measures perturbed from a nominal regular conditional probability distribution given the previous posterior state distribution and the latest measurement. Under this class of perturbations, a robust infinite-horizon filtering problem is first formulated as a constrained optimization problem before being transformed, via variational results, into an unconstrained optimization problem; the latter can be elegantly solved using risk-sensitive, information-state-based filtering.
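For context, the sketch below implements only the nominal conditional-probability (HMM filter) recursion that the robust formulation perturbs; the relative-entropy-constrained, risk-sensitive machinery described above is not shown, and the transition matrix and observation likelihoods are toy values:

```python
import numpy as np

def hmm_filter_step(pi_prev, A, likelihood):
    """Nominal HMM filter recursion: propagate the posterior through the
    transition matrix A, weight by the observation likelihood, renormalise.
    pi_prev: posterior over states after the previous measurement.
    likelihood: vector of P(y_k | state) for the latest measurement y_k."""
    predicted = A.T @ pi_prev          # one-step state prediction
    unnorm = likelihood * predicted    # Bayes update with the new measurement
    return unnorm / unnorm.sum()

# Toy usage with a 2-state chain and hypothetical observation likelihoods.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])             # A[i, j] = P(next state j | state i)
pi = np.array([0.5, 0.5])
for lik in ([0.7, 0.2], [0.1, 0.6], [0.8, 0.3]):
    pi = hmm_filter_step(pi, A, np.array(lik))
print(pi)
```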
Abstract:
Real-time networked control systems (NCSs) over data networks are increasingly being implemented on a massive scale in industrial applications. Along with this trend, wireless network technologies have been promoted for modern wireless NCSs (WNCSs). However, popular wireless network standards such as IEEE 802.11/15/16 are not designed for real-time communications. Key issues in real-time applications include limited transmission reliability and poor transmission delay performance. Considering the unique features of real-time control systems, this paper develops a conditional retransmission enabled transport protocol (CRETP) to improve the delay performance of the transmission control protocol (TCP) and the reliability performance of the user datagram protocol (UDP) and its variants. Key features of the CRETP include a connectionless mechanism with acknowledgement (ACK), conditional retransmission, and detection of ineffective data packets on the receiver side.
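A toy sketch of the idea behind conditional retransmission, under the assumption that a packet is only worth resending if it can still arrive before its control deadline; the rule, names, and timing values are illustrative and not taken from the CRETP specification:

```python
import time

def should_retransmit(deadline, rtt_estimate, now=None):
    """Conditional retransmission rule (illustrative sketch, not the CRETP
    specification): resend an unacknowledged packet only if it can still
    arrive before its control deadline; otherwise treat it as ineffective
    and drop it, so stale samples do not delay fresher ones."""
    now = time.monotonic() if now is None else now
    return (deadline - now) > rtt_estimate / 2.0  # one-way delivery must still fit

# Toy usage: 60 ms and 10 ms left until the deadline, estimated RTT 30 ms.
t0 = time.monotonic()
print(should_retransmit(deadline=t0 + 0.060, rtt_estimate=0.030))  # True
print(should_retransmit(deadline=t0 + 0.010, rtt_estimate=0.030))  # False
```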
Abstract:
Previously, we have shown that foods differ markedly in the satiety that they are expected to confer (compared calorie-for-calorie). In the present study we tested the hypothesis that ‘expected satiety’ plays a causal role in the satiety that is experienced after a food has been consumed. Before lunch, participants (N = 32) were shown the ingredients of a fruit smoothie. Half were shown a small portion of fruit and half were shown a large portion. Participants then assessed the expected satiety of the smoothie and provided appetite ratings before, and for three hours after, its consumption. As anticipated, expected satiety was significantly higher in the ‘large portion’ condition. Moreover, and consistent with our hypothesis, participants reported significantly less hunger and significantly greater fullness in the large portion condition. Importantly, this effect endured throughout the test period (three hours). Together, these findings confirm previous reports indicating that beliefs and expectations can have marked effects on satiety, and they show that this effect can persist well into the inter-meal interval. Potential explanations are discussed, including the prospect that satiety is moderated by memories of expected satiety that are encoded around the time that a meal is consumed.
Abstract:
Expected satiety has been shown to play a key role in decisions around meal size. Recently it has become clear that these expectations can also influence the satiety that is experienced after a food has been consumed. As such, increasing the expected and actual satiety a food product confers without increasing its caloric content is of importance. In this study we sought to determine whether this could be achieved via product labelling. Female participants (N=75) were given a 223-kcal yoghurt smoothie for lunch. In separate conditions the smoothie was labelled as a diet brand, a highly-satiating brand, or an ‘own brand’ control. Expected satiety was assessed using rating scales and a computer-based ‘method of adjustment’, both prior to consuming the smoothie and 24 hours later. Hunger and fullness were assessed at baseline, immediately after consuming the smoothie, and for a further three hours. Despite the fact that all participants consumed the same food, the smoothie branded as highly-satiating was consistently expected to deliver more satiety than the other ‘brands’; this difference was sustained 24 hours after consumption. Furthermore, post-consumption and over three hours, participants consuming this smoothie reported significantly less hunger and significantly greater fullness. These findings demonstrate that the satiety that a product confers depends in part on information that is present around the time of consumption. We suspect that this process is mediated by changes to expected satiety. These effects may potentially be utilised in the development of successful weight-management products.