904 results for Bounds
Abstract:
We study sample-based estimates of the expectation of the function produced by the empirical minimization algorithm. We investigate the extent to which one can estimate the rate of convergence of the empirical minimizer in a data dependent manner. We establish three main results. First, we provide an algorithm that upper bounds the expectation of the empirical minimizer in a completely data-dependent manner. This bound is based on a structural result due to Bartlett and Mendelson, which relates expectations to sample averages. Second, we show that these structural upper bounds can be loose, compared to previous bounds. In particular, we demonstrate a class for which the expectation of the empirical minimizer decreases as O(1/n) for sample size n, although the upper bound based on structural properties is Ω(1). Third, we show that this looseness of the bound is inevitable: we present an example that shows that a sharp bound cannot be universally recovered from empirical data.
Abstract:
Machine learning has become a valuable tool for detecting and preventing malicious activity. However, as more applications employ machine learning techniques in adversarial decision-making situations, increasingly powerful attacks become possible against machine learning systems. In this paper, we present three broad research directions toward the goal of developing truly secure learning. First, we suggest that finding bounds on adversarial influence is important to understand the limits of what an attacker can and cannot do to a learning system. Second, we investigate the value of adversarial capabilities: the success of an attack depends largely on what types of information and influence the attacker has. Finally, we propose directions in technologies for secure learning and suggest lines of investigation into secure techniques for learning in adversarial environments. We intend this paper to foster discussion about the security of machine learning, and we believe that the research directions we propose represent the most important directions to pursue in the quest for secure learning.
Abstract:
We propose new bounds on the error of learning algorithms in terms of a data-dependent notion of complexity. The estimates we establish give optimal rates and are based on a local and empirical version of Rademacher averages, in the sense that the Rademacher averages are computed from the data, on a subset of functions with small empirical error. We present some applications to classification and prediction with convex function classes, and with kernel classes in particular.
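As a concrete illustration of the central quantity, here is a minimal Monte Carlo sketch of an empirical Rademacher average over a finite function class, with the localization step (restricting to functions with small empirical error) indicated in a comment. The function and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def empirical_rademacher(F, n_rounds=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher average
    R_hat = E_sigma[ sup_f (1/n) * sum_i sigma_i * f(x_i) ],
    where F is a (num_functions, n) array: each row holds one
    function's values on the sample x_1, ..., x_n."""
    rng = np.random.default_rng(seed)
    num_f, n = F.shape
    total = 0.0
    for _ in range(n_rounds):
        sigma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
        total += np.max(F @ sigma) / n           # sup over the class
    return total / n_rounds

# A "local" version restricts the sup to functions with small
# empirical error, e.g. empirical_rademacher(F[errors <= r]) for
# a radius r, which is what yields the faster rates.
```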
Abstract:
Log-linear and maximum-margin models are two commonly-used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; the dual in both the log-linear and max-margin cases corresponds to minimizing a convex function with simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires a factor of n less computation to reach a desired accuracy than the batch EG algorithm, where n is the number of training examples. Our experiments confirm that the online algorithms are much faster than the batch algorithms in practice. We describe how the EG updates factor in a convenient way for structured prediction problems, allowing the algorithms to be efficiently applied to problems such as sequence learning or natural language parsing. We perform extensive evaluation of the algorithms, comparing them to L-BFGS and stochastic gradient descent for log-linear models, and to SVM-Struct for max-margin models. The algorithms are applied to a multi-class problem as well as to a more complex large-scale parsing task. In all these settings, the EG algorithms presented here outperform the other methods.
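For readers unfamiliar with EG, here is a minimal sketch of the generic exponentiated-gradient step on the probability simplex, the update underlying both the log-linear and max-margin duals described above. The step size `eta` and the gradient argument are illustrative assumptions; the paper's specific dual objectives and convergence analysis are not reproduced here.

```python
import numpy as np

def eg_update(w, grad, eta):
    """One exponentiated-gradient step on the probability simplex:
    w_i <- w_i * exp(-eta * grad_i), followed by renormalization,
    so the iterate always satisfies the simplex constraints.
    w: current dual point; grad: gradient of the convex dual
    objective at w; eta: step size."""
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

# Batch EG applies this step with the full dual gradient; the online
# variant uses the gradient contribution of one training example.
```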
Abstract:
Online learning algorithms have recently risen to prominence due to their strong theoretical guarantees and an increasing number of practical applications for large-scale data analysis problems. In this paper, we analyze a class of online learning algorithms based on fixed potentials and nonlinearized losses, which yields algorithms with implicit update rules. We show how to efficiently compute these updates, and we prove regret bounds for the algorithms. We apply our formulation to several special cases where our approach has benefits over existing online learning methods. In particular, we provide improved algorithms and bounds for the online metric learning problem, and show improved robustness for online linear prediction problems. Results over a variety of data sets demonstrate the advantages of our framework.
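To make "implicit update rule" concrete, here is one well-known member of this family: the implicit (proximal) online update for squared loss under the Euclidean potential, which admits a closed form. This is a hedged sketch of the general idea, not the paper's full fixed-potential framework; the names are illustrative.

```python
import numpy as np

def implicit_update(w, x, y, eta):
    """Implicit online update for squared loss with the Euclidean
    potential:
        w' = argmin_v  0.5*||v - w||^2 + (eta/2)*(v @ x - y)**2.
    The loss is evaluated at the *new* point v, which is what makes
    the rule implicit (and more robust to large step sizes).
    Setting the gradient to zero and solving gives the closed form:
    """
    residual = w @ x - y
    return w - eta * residual / (1.0 + eta * (x @ x)) * x
```

Note the damping factor 1/(1 + eta*||x||^2): unlike the explicit gradient step, the implicit step cannot overshoot the target even for very large `eta`.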
Abstract:
We demonstrate a modification of the algorithm of Dani et al. for the online linear optimization problem in the bandit setting, which allows us to achieve an O(√(T ln T)) regret bound in high probability against an adaptive adversary, as opposed to the in-expectation result against an oblivious adversary of Dani et al. We obtain the same dependence on the dimension as that exhibited by Dani et al. The results of this paper rest firmly on those of Dani et al. and the remarkable technique of Auer et al. for obtaining high-probability bounds via optimistic estimates. This paper answers an open question: it eliminates the gap between the high-probability bounds obtained in the full-information vs. bandit settings.
Abstract:
In the context of learning paradigms of identification in the limit, we address the question: why is uncertainty sometimes desirable? We use mind change bounds on the output hypotheses as a measure of uncertainty and interpret ‘desirable’ as reduction in data memorization, also defined in terms of mind change bounds. The resulting model is closely related to iterative learning with bounded mind change complexity, but the dual use of mind change bounds — for hypotheses and for data — is a key distinctive feature of our approach. We show that situations exist where the more mind changes the learner is willing to accept, the less the amount of data it needs to remember in order to converge to the correct hypothesis. We also investigate relationships between our model and learning from good examples, set-driven, monotonic and strong-monotonic learners, as well as class-comprising versus class-preserving learnability.
Abstract:
As the Australian Journal of Music Therapy celebrates its 20th year of publication, it is evident that the profession of music therapy in Australia has made substantial progress over these last 20 years. Jobs are regularly advertised on the website, there is a greater public awareness of what music therapy is, there are government recognised salary awards applicable in several states of the country, working conditions have generally improved, and many Australian music therapists are recognised on the international stage as leaders in their field of expertise. You can even go to a party and tell someone you are a music therapist and there is a good chance they will say 'oh yeah, I know someone who does that at the hospital / school / community centre / nursing home' instead of saying 'oh, so like, a what?'. Despite the impressive leaps and bounds that have been made, and the success of many programs in Australia to date, there is still a great deal of room for improvement. What are the critical issues ahead for the development of music therapy in Australia? In particular, how do music therapists move the profession forward and secure funding for clinical initiatives? In reflecting on this question, this article identifies two key areas, amongst the many, that can be addressed by music therapists over the next 20 years: funding and employment conditions. Examples from the national early intervention music therapy program 'Sing and Grow' are used to illustrate the potential impact of addressing these two issues on the positive development of the profession into the future.
Abstract:
We provide an algorithm that achieves the optimal regret rate in an unknown weakly communicating Markov Decision Process (MDP). The algorithm proceeds in episodes where, in each episode, it picks a policy using regularization based on the span of the optimal bias vector. For an MDP with S states and A actions whose optimal bias vector has span bounded by H, we show a regret bound of Õ(HS√(AT)). We also relate the span to various diameter-like quantities associated with the MDP, demonstrating how our results improve on previous regret bounds.
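The key quantity in the bound, the span of a bias vector, is simple to state. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def span(h):
    """Span seminorm of a bias/value vector:
    sp(h) = max_s h(s) - min_s h(s).
    The regret bound above scales with this quantity H, which can be
    much smaller than diameter-like quantities of the MDP."""
    h = np.asarray(h, dtype=float)
    return h.max() - h.min()
```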
Abstract:
Most learning paradigms impose a particular syntax on the class of concepts to be learned; the chosen syntax can dramatically affect whether the class is learnable or not. For classification paradigms, where the task is to determine whether the underlying world does or does not have a particular property, how that property is represented has no bearing on the power of a classifier that just outputs 1's or 0's. But is it possible to give a canonical syntactic representation of the class of concepts that are classifiable according to the particular criteria of a given paradigm? We provide a positive answer to this question for classification in the limit paradigms in a logical setting, with ordinal mind change bounds as a measure of complexity. The syntactic characterization that emerges enables us to derive that if a possibly noncomputable classifier can perform the task assigned to it by the paradigm, then a computable classifier can also perform the same task. The syntactic characterization is strongly related to the difference hierarchy over the class of open sets of some topological space; this space is naturally defined from the class of possible worlds and possible data of the learning paradigm.
Abstract:
This work examines the effect of landmark placement on the efficiency and accuracy of risk-bounded searches over probabilistic costmaps for mobile robot path planning. In previous work, risk-bounded searches were shown to offer efficiency increases in excess of 70% over normal heuristic search methods. The technique relies on precomputing distance estimates to landmarks, which are then used to produce probability distributions over exact heuristics for use in heuristic searches such as A* and D*. The location and number of these landmarks therefore greatly influence the efficiency of the search and the quality of the risk bounds. Here four new methods of selecting landmarks for risk-based search are evaluated. Results are shown which demonstrate that landmark selection needs to take into account the centrality of the landmark, and that diminishing rewards are obtained from using large numbers of landmarks.
Abstract:
Statement: Jams, Jelly Beans and the Fruits of Passion Let us search, instead, for an epistemology of practice implicit in the artistic, intuitive processes which some practitioners do bring to situations of uncertainty, instability, uniqueness, and value conflict. (Schön 1983, p40) Game On was born out of the idea of creative community; finding, networking, supporting and inspiring the people behind the face of an industry, those in the mist of the machine and those intending to join. We understood this moment to be a pivotal opportunity to nurture a new emerging form of game making, in an era of change, where the old industry models were proving to be unsustainable. As soon as we started putting people into a room under pressure, to make something in 48hrs, a whole pile of evolutionary creative responses emerged. People refashioned their craft in a moment of intense creativity that demanded different ways of working, an adaptive approach to the craft of making games – small – fast – indie. An event like the 48hrs forces participants' attention onto the process as much as the outcome. As one game industry professional taking part in a challenge for the first time observed: there are three paths in the genesis from idea to finished work: the path that focuses on mechanics; the path that focuses on team structure and roles; and the path that focuses on the idea, the spirit – and the more successful teams put the spirit of the work first and foremost. The spirit drives the adaptation; it becomes improvisation. As Schön says: "Improvisation consists in varying, combining and recombining a set of figures within the schema which bounds and gives coherence to the performance." (1983, p55). This improvisational approach is all about those making the games: the people and the principles of their creative process.
This documentation evidences the intensity of their passion, determination and the shit that they are prepared to put themselves through to achieve their goal – to win a cup full of jellybeans and make a working game in 48hrs. 48hr is a project where, on all levels, analogue meets digital. This concept was further explored through the documentation process. All of these pictures were taken with a 1945 Leica III camera. The use of this classic, film-based camera gives the images a granularity and depth; this older, slower technology exposes the very human moments of digital creativity. ____________________________ Schön, D. A. 1983, The Reflective Practitioner: How Professionals Think in Action, Basic Books, New York
Abstract:
This study examined the everyday practices of families within the context of family mealtime to investigate how members accomplished mealtime interactions. Using an ethnomethodological approach, conversation analysis and membership categorization analysis, the study investigated the interactional resources that family members used to assemble their social orders moment by moment during family mealtimes. While there is interest in mealtimes within educational policy, health research and the media, there remain few studies that provide fine-grained detail about how members produce the social activity of having a family meal. Findings from this study contribute empirical understandings about families and family mealtime. Two families with children aged 2 to 10 years were observed as they accomplished their everyday mealtime activities. Data collection took place in the family homes where family members video recorded their naturally occurring mealtimes. Each family was provided with a video camera for a one-month period and they decided which mealtimes they recorded, a method that afforded participants greater agency in the data collection process and made available to the analyst a window into the unfolding of the everyday lives of the families. A total of 14 mealtimes across the two families were recorded, capturing 347 minutes of mealtime interactions. Selected episodes from the data corpus, which includes centralised breakfast and dinnertime episodes, were transcribed using the Jeffersonian system. Three data chapters examine extended sequences of family talk at mealtimes, to show the interactional resources used by members during mealtime interactions. The first data chapter explores multiparty talk to show how the uniqueness of the occasion of having a meal influences turn design. 
It investigates the ways in which members accomplish two-party talk within a multiparty setting, showing how one child "tells" a funny story to accomplish the drawing together of his brothers as an audience. As well, this chapter identifies the interactional resources used by the mother to cohort her children to accomplish the choralling of grace. The second data chapter draws on sequential and categorical analysis to show how members are mapped to a locally produced membership category. The chapter shows how the mapping of members into particular categories is consequential for social order; for example, aligning members who belong to the membership category "had haircuts" and keeping out those who "did not have haircuts". Additional interactional resources such as echoing, used here to refer to the use of exactly the same words, similar prosody and physical action, and increasing physical closeness, are identified as important to the unfolding talk particularly as a way of accomplishing alignment between the grandmother and grand-daughter. The third and final data analysis chapter examines topical talk during family mealtimes. It explicates how members introduce topics of talk with an orientation to their co-participant and the way in which the take up of a topic is influenced both by the sequential environment in which it is introduced and the sensitivity of the topic. Together, these three data chapters show aspects of how family members participated in family mealtimes. The study contributes four substantive themes that emerged during the analytic process and, as such, the themes reflect what the members were observed to be doing. The first theme identified how family knowledge was relevant and consequential for initiating and sustaining interaction during mealtime with, for example, members buying into the talk of other members or being requested to help out with knowledge about a shared experience. 
Knowledge about members and their activities was evident, with the design of questions evidencing an orientation to co-participants' knowledge. The second theme found how members used topic as a resource for social interaction. The third theme concerned the way in which members utilised membership categories for producing and making sense of social action. The fourth theme, evident across all episodes selected for analysis, showed how children's competence is an ongoing interactional accomplishment as they manipulated interactional resources to manage their participation in family mealtime. The way in which children initiated interactions challenges previous understandings about children's restricted rights as conversationalists. As well as making a theoretical contribution, the study offers methodological insight by working with families as research participants. The study shows the procedures involved as the study moved from one where the researcher undertook the decisions about what to video record to offering this decision making to the families, who chose when and what to video record of their mealtime practices. Evident also are the ways in which participants orient both to the video camera and to the absent researcher. For the duration of the mealtime the video camera was positioned by the adults as out of bounds to the children; however, it was offered as a "treat" to view after the mealtime was recorded. While situated within family mealtimes and reporting on the experiences of two families, this study illuminates how mealtimes are not just about food and eating; they are social. The study showed the constant and complex work of establishing and maintaining social orders and the rich array of interactional resources that members draw on during family mealtimes. The family's interactions involved members contributing to building the social orders of family mealtime.
With mealtimes occurring in institutional settings involving young children, such as long day care centres and kindergartens, the findings of this study may help educators working with young children to see the rich interactional opportunities mealtimes afford children, the interactional competence that children demonstrate during mealtimes, and the important roles that adults may assume as co-participants in interactions with children within institutional settings.
Abstract:
Fusion techniques have received considerable attention for achieving lower error rates with biometrics. A fused classifier architecture based on sequential integration of multi-instance and multi-sample fusion schemes allows controlled trade-off between false alarms and false rejects. Expressions for each type of error for the fused system have previously been derived for the case of statistically independent classifier decisions. It is shown in this paper that the performance of this architecture can be improved by modelling the correlation between classifier decisions. Correlation modelling also enables better tuning of fusion model parameters, 'N', the number of classifiers, and 'M', the number of attempts/samples, and facilitates the determination of error bounds for false rejects and false accepts for each specific user. Error trade-off performance of the architecture is evaluated using HMM based speaker verification on utterances of individual digits. Results show that performance is improved for the case of favourably correlated decisions. The architecture investigated here is directly applicable to speaker verification from spoken digit strings such as credit card numbers in telephone or voice over internet protocol based applications. It is also applicable to other biometric modalities such as fingerprints and handwriting samples.
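Under the statistical-independence assumption that the paper improves upon, the two fusion stages have simple closed-form error rates: an AND rule over N classifiers drives down false accepts, while an OR rule over M attempts/samples drives down false rejects. A minimal sketch under that independence assumption (helper names are illustrative; the paper's correlation modelling is not reproduced):

```python
def and_fusion(far, frr, n):
    """N independent classifiers, accept only if all accept:
    false accepts shrink geometrically, false rejects grow."""
    return far ** n, 1.0 - (1.0 - frr) ** n

def or_fusion(far, frr, m):
    """M independent attempts/samples, accept if any one accepts:
    false rejects shrink geometrically, false accepts grow."""
    return 1.0 - (1.0 - far) ** m, frr ** m
```

Chaining the two stages gives the controlled trade-off the abstract describes; modelling correlated decisions replaces these product forms with joint probabilities.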
Abstract:
This paper establishes sufficient conditions to bound the error in perturbed conditional mean estimates derived from a perturbed model (only the scalar case is shown in this paper but a similar result is expected to hold for the vector case). The results established here extend recent stability results on approximating information state filter recursions to stability results on the approximate conditional mean estimates. The presented filter stability results provide bounds for a wide variety of model error situations.