972 results for One-inclusion mistake bounds
Abstract:
Background: By 2025, it is estimated that approximately 1.8 million Australian adults (approximately 8.4% of the adult population) will have diabetes, with the majority having type 2 diabetes. Weight management via improved physical activity and diet is the cornerstone of type 2 diabetes management. However, the majority of weight loss trials in diabetes have evaluated short-term, intensive clinic-based interventions that, while producing positive short-term outcomes, have failed to address issues of maintenance and broad population reach. Telephone-delivered interventions have the potential to address these gaps.
Methods/Design: Using a two-arm randomised controlled design, this study will evaluate an 18-month, telephone-delivered, behavioural weight loss intervention focussing on physical activity, diet and behavioural therapy, versus usual care, with follow-up at 24 months. Three hundred adult participants, aged 20-75 years, with type 2 diabetes, will be recruited from 10 general practices via an electronic medical records search. The Social Cognitive Theory-driven intervention involves a six-month intensive phase (4 weekly calls and 11 fortnightly calls) and a 12-month maintenance phase (one call per month). Primary outcomes, assessed at 6, 18 and 24 months, are weight loss, physical activity and glycaemic control (HbA1c), with weight loss and physical activity also measured at 12 months. Incremental cost-effectiveness will also be examined. Study recruitment began in February 2009, with final data collection expected by February 2013.
Discussion: This is the first study to evaluate the telephone as the primary method of delivering a behavioural weight loss intervention in type 2 diabetes. The evaluation of maintenance outcomes (6 months following the end of intervention), the use of accelerometers to objectively measure physical activity, and the inclusion of a cost-effectiveness analysis will advance the science of broad-reach approaches to weight control and health behaviour change, and will build the evidence base needed to advocate for the translation of this work into population health practice.
Abstract:
Many luxury heritage brands operate on the misconception that heritage is interchangeable with history, rather than representative of the emotional response they originally developed in their customers. This idea of heritage as static history inhibits innovation, prevents dynamic renewal and impedes their ability to redefine, strengthen and position their brand in current and emerging marketplaces. This paper examines a number of heritage luxury brands that have successfully identified the original emotional responses they developed in their customers and, through innovative approaches in design, marketing, branding and distribution, evoke these responses in contemporary consumers. Using heritage and innovation hand-in-hand, these brands have continued to grow and develop a vision of heritage that incorporates both historical and contemporary ideas to meet emerging customer needs. While what constitutes a ‘luxury’ item is constantly challenged in this era of accessible luxury products, upscaling and aspirational spending, this paper sees consumers’ emotional needs as the key element in defining the concept of luxury. These emotional qualities consistently remain relevant due to their ability to enhance a positive sense of identity for the brand user. Luxury is about the ‘experience’, not just the product, providing the consumer with a sense of enhanced status or identity through invoked feelings of exclusivity, authenticity, quality, uniqueness and culture. This paper will analyse luxury heritage brands that have successfully combined these emotional values with those of their ‘heritage’ to create an aura of authenticity and nostalgia that appeals to contemporary consumers. Like luxury, the line where clothing becomes fashion is blurred in the contemporary fashion industry; however, consumer emotion again plays an important role. For example, clothing becomes ‘fashion’ for consumers when it affects their self-perception rather than merely fulfilling the basic functions of shelter and protection. Successful luxury heritage brands can enhance consumers’ sense of self by involving them in the ‘experience’ and ‘personality’ of the brand, so they see it as a reflection of their own exclusiveness, authentic uniqueness, belonging and cultural value. Innovation is a valuable tool for heritage luxury brands to successfully generate these desired emotional responses and meet the evolving needs of contemporary consumers. While traditionally fashion has been a monologue from brand to consumer, new technology has given consumers a voice to engage brands in a conversation and express their evolving needs, ideas and feedback. As a result, in this consumer-empowered era of information sharing, this paper defines innovation as the ability of heritage luxury brands to develop new design and branding strategies in response to this consumer feedback while retaining the emotional core values of their heritage. This paper analyses how luxury heritage brands can effectively position themselves in the contemporary marketplace by separating heritage from history and incorporating innovative strategies that will appeal to consumer needs of today and tomorrow.
Abstract:
The worldwide rise in numbers of refugees and asylum seekers suggests the need to examine the practices of those institutions charged with their resettlement in host countries. In this paper we investigate the role of one important institution – schooling – and its contribution to the successful resettlement of refugee children. We begin with an examination of forced migration and its links with globalisation, and the barriers to inclusion confronting refugees. A discussion of the educational challenges confronting individual refugee youth and schools is followed by case studies of four schools and the approaches they had developed to meet the needs of young people from a refugee background. Using our findings and other research, we outline a model of good practice in refugee education. We conclude by discussing how educational institutions might play a more active role in facilitating transitions to citizenship for refugee youth through an inclusive approach.
Abstract:
Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
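The relationship described above can be sketched as follows (an illustrative statement of the standard ψ-transform bound for margin-based surrogates; the notation here is assumed, not taken verbatim from the paper):

```latex
% Sketch: margin-based surrogate \phi(y f(x)); R is the 0-1 risk, R_\phi the
% surrogate risk, and starred quantities are the corresponding minimal risks.
% The transform \psi is well behaved exactly when \phi is classification-
% calibrated (the pointwise Fisher-consistency condition mentioned above).
\[
  \psi\bigl( R(f) - R^{*} \bigr) \;\le\; R_{\phi}(f) - R_{\phi}^{*}
\]
% Classical examples of the transform:
\[
  \text{hinge } \phi(\alpha) = \max(0,\,1-\alpha):\ \psi(\theta) = |\theta|,
  \qquad
  \text{exponential } \phi(\alpha) = e^{-\alpha}:\ \psi(\theta) = 1 - \sqrt{1-\theta^{2}}.
\]
```

Inverting ψ turns any bound on the excess surrogate risk into a bound on the excess 0-1 risk, which is what makes the transform easy to use once it has been computed.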
Abstract:
We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and Gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes. We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines.
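As a concrete illustration of these data-dependent quantities (a minimal sketch, not code from the paper; the finite threshold class and the sample below are hypothetical), the empirical Rademacher complexity of a finite class can be estimated by Monte Carlo over random sign vectors:

```python
# Monte Carlo estimate of the empirical Rademacher complexity of a finite
# function class F on a fixed sample x_1, ..., x_n:
#   R_hat(F) = E_sigma [ sup_{f in F} (1/n) * sum_i sigma_i f(x_i) ],
# where the sigma_i are i.i.d. uniform {-1, +1} signs.
import numpy as np

def empirical_rademacher(values, n_draws=2000, seed=None):
    """values: (|F|, n) array with values[j, i] = f_j(x_i) on the sample."""
    rng = np.random.default_rng(seed)
    n = values.shape[1]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))  # random sign vectors
    # correlations[d, j] = (1/n) * sum_i sigma[d, i] * f_j(x_i)
    correlations = sigma @ values.T / n
    return correlations.max(axis=1).mean()  # average of the sup over F

# Toy usage: 21 threshold functions f_t(x) = sign(x - t) on 50 sample points.
x = np.linspace(-1.0, 1.0, 50)
thresholds = np.linspace(-1.0, 1.0, 21)
F = np.sign(x[None, :] - thresholds[:, None])
print(empirical_rademacher(F, seed=0))
```

The Gaussian complexity is estimated the same way, drawing standard normal variables in place of the random signs.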
Abstract:
We investigate the behavior of the empirical minimization algorithm using various methods. We first analyze it by comparing the empirical (random) structure of the class with the original one, either in an additive sense, via the uniform law of large numbers, or in a multiplicative sense, using isomorphic coordinate projections. We then show that a direct analysis of the empirical minimization algorithm yields a significantly better bound, and that the estimates we obtain are essentially sharp. The method of proof we use is based on Talagrand’s concentration inequality for empirical processes.
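Schematically, the two modes of comparison contrast as follows (illustrative notation: P is the true measure, P_n the empirical one, and the constants are for illustration only):

```latex
% Additive comparison, via the uniform law of large numbers: for all f in F,
\[
  \bigl| P f - P_n f \bigr| \;\le\; \varepsilon_n .
\]
% Multiplicative (isomorphic) comparison: for every f in F whose mean lies
% above a critical level \lambda_n,
\[
  \tfrac{1}{2}\, P f \;\le\; P_n f \;\le\; \tfrac{3}{2}\, P f ,
\]
% i.e. the empirical and true structures agree up to constant factors above
% the critical level.
```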
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
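The method has the generic shape sketched below (assumed notation, not the paper's exact statement): pick the model whose penalized empirical risk is smallest, then conclude an oracle inequality.

```latex
% Models F_1 \subseteq F_2 \subseteq \cdots ordered by inclusion, empirical
% risk minimizers \hat f_k over F_k, complexity penalties \gamma_k. Select
\[
  \hat{k} \;=\; \arg\min_{k}\ \Bigl( \widehat{R}_n(\hat f_k) + \gamma_k \Bigr).
\]
% The oracle inequality described above then has the shape: with high
% probability,
\[
  R(\hat f_{\hat{k}}) \;\le\; \min_{k}\ \Bigl( \inf_{f \in F_k} R(f) + c\,\gamma_k \Bigr)
\]
% for a constant c, where \gamma_k is a tight bound on the estimation error
% in F_k rather than on the maximal deviation between empirical and true risks.
```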
Abstract:
We study sample-based estimates of the expectation of the function produced by the empirical minimization algorithm. We investigate the extent to which one can estimate the rate of convergence of the empirical minimizer in a data-dependent manner. We establish three main results. First, we provide an algorithm that upper bounds the expectation of the empirical minimizer in a completely data-dependent manner. This bound is based on a structural result due to Bartlett and Mendelson, which relates expectations to sample averages. Second, we show that these structural upper bounds can be loose compared with previous bounds. In particular, we demonstrate a class for which the expectation of the empirical minimizer decreases as O(1/n) for sample size n, although the upper bound based on structural properties is Ω(1). Third, we show that this looseness of the bound is inevitable: we present an example showing that a sharp bound cannot be universally recovered from empirical data.
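For context, one standard structural bound of this kind can be sketched as follows (illustrative; the paper's bound is localized and sharper). With probability at least 1 − δ, uniformly over the class,

```latex
% P is the true mean, P_n the empirical mean, and \widehat{\mathfrak{R}}_n(F)
% the empirical Rademacher complexity (computable from the sample alone);
% c is a constant depending on the range of the functions.
\[
  P f \;\le\; P_n f \;+\; 2\,\widehat{\mathfrak{R}}_n(F)
        \;+\; c\,\sqrt{\frac{\log(1/\delta)}{n}} .
\]
```

Applied to the empirical minimizer, this yields a fully data-dependent upper bound; but as the second and third results above show, such a bound can be Ω(1) even when the expectation of the empirical minimizer is O(1/n), and this gap cannot be closed by any sample-based estimate.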
Abstract:
A number of learning problems can be cast as an Online Convex Game: on each round, a learner makes a prediction x from a convex set, the environment plays a loss function f, and the learner’s long-term goal is to minimize regret. Algorithms with provably low regret have been proposed by Zinkevich, when f is assumed to be convex, and by Hazan et al., when f is assumed to be strongly convex. We consider these two settings and analyze such games from a minimax perspective, exhibiting minimax strategies and proving lower bounds in each case. These results show that the existing algorithms are essentially optimal.
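For reference, here is a minimal sketch of the algorithmic side of this setting (assumptions: decisions from a Euclidean ball, bounded gradients; the quadratic losses and parameters below are hypothetical). Zinkevich's online gradient descent with step sizes of order 1/√t attains O(√T) regret for convex losses; with step sizes of order 1/t it attains O(log T) regret for strongly convex losses, and the minimax analysis shows these rates are essentially unimprovable.

```python
# Online gradient descent (Zinkevich-style) over a Euclidean ball.
import numpy as np

def project_ball(x, radius):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(grad_fns, dim, radius=1.0):
    """grad_fns[t](x) returns the gradient of the round-t loss at x."""
    x = np.zeros(dim)
    plays = []
    for t, grad in enumerate(grad_fns, start=1):
        plays.append(x.copy())
        eta = radius / np.sqrt(t)               # step size of order 1/sqrt(t)
        x = project_ball(x - eta * grad(x), radius)
    return plays

# Toy usage: quadratic losses f_t(x) = ||x - z_t||^2 with random targets z_t.
rng = np.random.default_rng(0)
targets = [rng.uniform(-0.5, 0.5, size=2) for _ in range(100)]
grads = [lambda x, z=z: 2.0 * (x - z) for z in targets]
plays = online_gradient_descent(grads, dim=2)
u_star = np.mean(targets, axis=0)  # best fixed decision in hindsight (the mean)
best = sum(np.sum((u_star - z) ** 2) for z in targets)
regret = sum(np.sum((x - z) ** 2) for x, z in zip(plays, targets)) - best
print(f"regret after {len(targets)} rounds: {regret:.3f}")
```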