Abstract:
The CIGRE WGs A3.20 and A3.24 identify the requirements that simulation tools must meet to predict the various stresses arising during the development and operational phases of medium-voltage vacuum circuit breaker (VCB) testing. This paper reviews the modelling methodology [13], VCB models, and tools in order to identify future research, including the application of the VCB model to impending-failure detection using an electromagnetic transients program (EMTP) together with diagnostic and prognostic algorithm development. The methodology developed for the VCB degradation model modifies the dielectric equation to cover restriking behaviour at contact gaps of more than 1 millimetre.
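A common ingredient of EMTP-style VCB models of the kind reviewed here is a dielectric-recovery check: after contact separation the gap withstand voltage is approximated by a ramp, and a restrike is flagged whenever the transient recovery voltage (TRV) exceeds it. The sketch below is only illustrative; the ramp slope `A` and opening time `t_open` are hypothetical values, not parameters from the paper.

```python
def restrike_flags(trv, times, A=2.0e7, t_open=0.0):
    """Toy dielectric-recovery check in the spirit of EMTP-style VCB
    models: after contact separation at t_open the gap withstand
    voltage is taken as a ramp U_b = A * (t - t_open); a restrike is
    flagged whenever |TRV| exceeds the withstand level.
    A (V/s) and t_open (s) are hypothetical illustration values."""
    return [abs(v) > A * max(t - t_open, 0.0) for v, t in zip(trv, times)]
```

A degradation model along the lines described in the abstract would then adjust the withstand curve (rather than the ramp shown here) as the contact gap and condition evolve.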
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the error on the second half, as well as the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
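The label-flipping equivalence mentioned in the abstract can be illustrated with a tiny, hypothetical finite class of 1-D threshold classifiers: the maximal discrepancy between the two sample halves equals 1 − 2·(minimum empirical risk on the sample with the first half's labels flipped). The class and data below are chosen purely for illustration.

```python
import numpy as np

def thresholds(X):
    """Hypothetical finite hypothesis class: 1-D threshold classifiers."""
    return [lambda x, t=t: (x > t).astype(int) for t in np.unique(X)]

def max_discrepancy(X, y, hyps):
    """max over h of [err_half1(h) - err_half2(h)] over the two halves."""
    n = len(y) // 2
    return max(np.mean(h(X)[:n] != y[:n]) - np.mean(h(X)[n:2*n] != y[n:2*n])
               for h in hyps)

def min_flipped_erm(X, y, hyps):
    """Minimum empirical risk after flipping the first half's labels;
    for 0/1 labels, max_discrepancy = 1 - 2 * min_flipped_erm."""
    n = len(y) // 2
    yf = y[:2*n].copy()
    yf[:n] = 1 - yf[:n]
    return min(np.mean(h(X)[:2*n] != yf) for h in hyps)
```

Because the flipped-half error of a hypothesis is ((1 − e1) + e2)/2, minimizing it over the class is the same as maximizing e1 − e2, which is the discrepancy.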
Abstract:
Binary classification methods can be generalized in many ways to handle multiple classes. It turns out that not all generalizations preserve the nice property of Bayes consistency. We provide a necessary and sufficient condition for consistency which applies to a large class of multiclass classification methods. The approach is illustrated by applying it to some multiclass methods proposed in the literature.
Abstract:
Bounded parameter Markov Decision Processes (BMDPs) address the issue of dealing with uncertainty in the parameters of a Markov Decision Process (MDP). Unlike the case of an MDP, the notion of an optimal policy for a BMDP is not entirely straightforward. We consider two notions of optimality based on optimistic and pessimistic criteria. These have been analyzed for discounted BMDPs. Here we provide results for average reward BMDPs. We establish a fundamental relationship between the discounted and the average reward problems, prove the existence of Blackwell optimal policies and, for both notions of optimality, derive algorithms that converge to the optimal value function.
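The paper treats average-reward BMDPs, but the discounted case it builds on has a compact algorithmic form: interval value iteration, where each Bellman backup chooses transition probabilities within their bounds to favour (optimistic) or hurt (pessimistic) the agent. The sketch below uses hypothetical transition bounds and rewards; it is not the paper's average-reward algorithm.

```python
import numpy as np

def best_transition(p_lo, p_hi, order):
    """Distribution inside the box [p_lo, p_hi] that pushes as much
    probability as possible toward states appearing early in `order`."""
    p = p_lo.copy()
    slack = 1.0 - p.sum()
    for s in order:
        add = min(p_hi[s] - p_lo[s], slack)
        p[s] += add
        slack -= add
    return p

def interval_value_iteration(P_lo, P_hi, R, gamma=0.9, optimistic=True,
                             iters=500):
    """Discounted interval value iteration for a BMDP: at each backup,
    transition probabilities are chosen within their bounds, ordered by
    the current value estimates (descending if optimistic)."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        order = np.argsort(-V if optimistic else V)
        Q = np.empty_like(R)
        for s in range(n_states):
            for a in range(n_actions):
                p = best_transition(P_lo[s, a], P_hi[s, a], order)
                Q[s, a] = R[s, a] + gamma * (p @ V)
        V = Q.max(axis=1)
    return V
```

With zero-width intervals this reduces to ordinary value iteration, and the optimistic value function always dominates the pessimistic one.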
Abstract:
We consider the problem of prediction with expert advice in the setting where a forecaster is presented with several online prediction tasks. Instead of competing against the best expert separately on each task, we assume the tasks are related, and thus we expect that a few experts will perform well on the entire set of tasks. That is, our forecaster would like, on each task, to compete against the best expert chosen from a small set of experts. While we describe the “ideal” algorithm and its performance bound, we show that the computation required for this algorithm is as hard as computation of a matrix permanent. We present an efficient algorithm based on mixing priors, and prove a bound that is nearly as good for the sequential task presentation case. We also consider a harder case where the task may change arbitrarily from round to round, and we develop an efficient approximate randomized algorithm based on Markov chain Monte Carlo techniques.
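The mixing-priors algorithm builds on the standard exponentially weighted average forecaster for a single prediction-with-expert-advice task. A minimal single-task sketch follows; the step size and the loss matrix are hypothetical, and the multitask construction in the paper replaces the uniform prior used here with a mixture over small expert subsets.

```python
import numpy as np

def exponentially_weighted_forecast(loss, eta=0.5):
    """Exponentially weighted average forecaster over N experts.
    loss[t, i] is expert i's loss at round t (assumed in [0, 1]);
    returns the forecaster's cumulative expected loss."""
    T, N = loss.shape
    w = np.ones(N)                    # uniform prior over experts
    total = 0.0
    for t in range(T):
        p = w / w.sum()               # current mixture over experts
        total += float(p @ loss[t])   # forecaster's loss this round
        w = w * np.exp(-eta * loss[t])  # downweight poor experts
    return total
```

Its cumulative loss exceeds that of the best single expert by at most ln(N)/η + ηT/8, the regret bound the "ideal" multitask algorithm refines.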
Abstract:
Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature by making worst-case assumptions about the attacker: we grant the attacker complete knowledge of the defender’s strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker’s incentives and knowledge.
Abstract:
Recent research on multiple kernel learning has led to a number of approaches for combining kernels in regularized risk minimization. The proposed approaches include different formulations of objectives and varying regularization strategies. In this paper we present a unifying optimization criterion for multiple kernel learning and show how existing formulations are subsumed as special cases. We also derive the criterion's dual representation, which is suitable for general smooth optimization algorithms. Finally, we evaluate multiple kernel learning in this framework analytically, using a Rademacher complexity bound on the generalization error, and empirically in a set of experiments.
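At the core of most of the formulations being unified is a convex combination of base Gram matrices. The sketch below fixes the kernel weights by hand for illustration; in the unified criterion they would be optimized jointly with the classifier, and the weight constraint shown (the probability simplex) is only one of the regularization strategies the abstract alludes to.

```python
import numpy as np

def combined_gram(grams, theta):
    """Convex combination K = sum_m theta_m * K_m of base Gram
    matrices, the basic object in multiple kernel learning.  theta is
    assumed to lie on the probability simplex (one common choice)."""
    theta = np.asarray(theta, dtype=float)
    if theta.min() < 0 or abs(theta.sum() - 1.0) > 1e-9:
        raise ValueError("theta must lie on the probability simplex")
    return sum(t * K for t, K in zip(theta, grams))
```

A convex combination of positive semidefinite Gram matrices is itself positive semidefinite, so the result is a valid kernel matrix for any downstream regularized risk minimizer.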
Abstract:
In many prediction problems, including those that arise in computer security and computational finance, the process generating the data is best modelled as an adversary with whom the predictor competes. Even decision problems that are not inherently adversarial can be usefully modeled in this way, since the assumptions are sufficiently weak that effective prediction strategies for adversarial settings are very widely applicable.
Abstract:
We investigate the behavior of the empirical minimization algorithm using various methods. We first analyze it by comparing the empirical (random) structure on the class with the original one, either in an additive sense, via the uniform law of large numbers, or in a multiplicative sense, using isomorphic coordinate projections. We then show that a direct analysis of the empirical minimization algorithm yields a significantly better bound, and that the estimates we obtain are essentially sharp. The method of proof we use is based on Talagrand's concentration inequality for empirical processes.
Abstract:
Numerous tools and techniques have been developed to eliminate or reduce waste and apply Lean concepts in the manufacturing environment. In practice, however, manufacturers find it difficult to clearly identify the weaknesses of existing processes so that they can be addressed by implementing Lean tools. Moreover, selecting and implementing appropriate Lean strategies for the problems identified is a challenging task. To the best of the authors' knowledge, no method is available to quantitatively evaluate the costs and benefits of implementing a Lean strategy to address the weaknesses in a manufacturing process, so the benefits of Lean approaches cannot be clearly established. The authors developed a methodology to quantitatively measure the performance of a manufacturing system, detect the causes of inefficiencies, and select appropriate Lean strategies to address the problems identified. The proposed methodology demonstrates that Lean strategies should be implemented according to the context of the organization and the identified problem in order to achieve maximum cost benefits. Finally, a case study is presented to demonstrate how the procedure developed works in a practical situation.
Abstract:
Lean product design has the potential to reduce overall product development time and cost and to improve the quality of a product. However, little or no work has been carried out to provide an integrated framework for "lean design" or to quantitatively evaluate the effectiveness of lean practices/principles in the product development process. This research proposes an integrated framework for the lean design process and develops a dynamic decision-making tool, based on the Methods Time Measurement (MTM) approach, for assessing the impact of lean design on the assembly process. The proposed integrated lean framework sets out the lean processes to be followed in product design and assembly in order to achieve overall leanness. The decision tool consists of a central database, the lean design guidelines, and MTM analysis. Microsoft Access and C# are used to develop the user interface that applies the MTM analysis as a decision-making tool. The MTM-based dynamic tool can estimate the assembly time and the part and labour costs of various design alternatives, and hence supports selection of an optimum design. A case study is conducted to test and validate the functionality of the MTM analysis and to verify the lean guidelines proposed for product development.
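The MTM idea of estimating assembly time from predetermined motion times can be sketched in a few lines. The motion-time table below is hypothetical and heavily abbreviated (a real MTM-1 table indexes times by distance and case); only the unit conversion, 1 TMU = 0.036 s, is standard.

```python
# Hypothetical, abbreviated motion-time table in TMUs (1 TMU = 0.036 s).
MTM_TMU = {
    "reach_30cm": 9.5,
    "grasp_simple": 2.0,
    "move_30cm": 10.3,
    "position_loose": 5.6,
    "release": 2.0,
}

def assembly_time_seconds(operations):
    """Estimate assembly time for one design alternative by summing
    predetermined motion times, as an MTM-based decision tool would."""
    return sum(MTM_TMU[op] for op in operations) * 0.036

def cheaper_design(ops_a, ops_b):
    """Compare two hypothetical design alternatives by estimated time."""
    return "A" if assembly_time_seconds(ops_a) <= assembly_time_seconds(ops_b) else "B"
```

A decision tool like the one described would draw these operation sequences and motion times from its central database rather than a hard-coded table.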
Abstract:
This paper presents a group maintenance scheduling case study for a water distribution network. This water pipeline network presents the challenge of maintaining aging pipelines and the associated increases in annual maintenance costs. The case study focuses on developing an effective maintenance plan for the water utility. Current replacement planning is difficult, as it needs to balance replacement needs against limited budgets. A Maintenance Grouping Optimization (MGO) model based on a modified genetic algorithm was used to develop an optimum group maintenance schedule over a 20-year cycle. The adjacent geographical distribution of pipelines was used as a grouping criterion to control the search space of the MGO model through a Judgment Matrix. Based on the optimum group maintenance schedule, the total cost was effectively reduced compared with schedules that do not group maintenance jobs. This optimum result can be used as a guide to optimize the current maintenance plan for the water utility.
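The cost structure behind grouping can be illustrated with a small fitness-style function: pipelines replaced in the same year share one setup cost only when they form a geographically adjacent cluster, which is the role the Judgment Matrix plays in constraining the MGO model's search. The setup and per-job cost figures below are hypothetical, and a genetic algorithm would evaluate candidate schedules with a function of this shape.

```python
def grouped_cost(schedule, adjacency, setup=100.0, job=20.0):
    """Cost of a group maintenance schedule.  schedule[i] is the year
    assigned to pipeline i; adjacency[u][v] is truthy when pipelines u
    and v are geographically adjacent.  Jobs in the same year share a
    single setup cost per adjacent cluster (hypothetical cost figures)."""
    by_year = {}
    for i, year in enumerate(schedule):
        by_year.setdefault(year, []).append(i)
    total = 0.0
    for jobs in by_year.values():
        seen = set()
        for i in jobs:
            if i in seen:
                continue
            stack, cluster = [i], set()
            while stack:                 # flood-fill one adjacent cluster
                u = stack.pop()
                if u in cluster:
                    continue
                cluster.add(u)
                stack.extend(v for v in jobs
                             if adjacency[u][v] and v not in cluster)
            seen |= cluster
            total += setup + job * len(cluster)
    return total
```

Grouping three mutually adjacent pipelines into one year pays one setup instead of three, which is the saving the optimum schedule in the case study exploits.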
Abstract:
A range of terms is used in Australian higher education institutions to describe learning approaches and teaching models that provide students with opportunities to engage in learning connected to the world of work. The umbrella term currently in wide use is Work Integrated Learning (WIL). The common aim of approaches captured under the term WIL is to integrate discipline-specific knowledge learnt in a university setting with that learnt in the practice of work, through purposefully designed curriculum. In endeavours to extend WIL opportunities for students, universities are currently exploring authentic learning experiences, both within and outside of university settings. Some universities describe these approaches as 'real world learning' or 'professional learning'. Others refer to 'social engagement' with the community and focus on building social capital and citizenship through curriculum design that enables students to engage with the professions through a range of learning experiences. This chapter discusses the context for, and the scope, purposes, characteristics, and effectiveness of, WIL across Australian universities, as derived from a national scoping study. This study, undertaken in response to a high level of interest in WIL, involved data collection from academic and professional staff, and students, at nearly all Australian universities. Participants in the study consistently reported the benefits, especially in relation to the student learning experience. Responses highlight the importance of strong partnerships between stakeholders in facilitating effective learning outcomes, and a range of issues that shape the quality of the approaches and models being adopted to promote professional learning.