125 results for Set functions.
Abstract:
When searching for characteristic subpatterns in potentially noisy graph data, it appears self-evident that having multiple observations would be better than having just one. However, it turns out that the inconsistencies introduced when different graph instances have different edge sets pose a serious challenge. In this work we address this challenge for the problem of finding maximum weighted cliques. We introduce the concept of the most persistent soft-clique: a subset of vertices that 1) is almost fully, or at least densely, connected, 2) occurs in all or almost all graph instances, and 3) has maximum weight. We present a measure of clique-ness that essentially counts the number of edges missing to make a subset of vertices into a clique. With this measure, we show that the problem of finding the most persistent soft-clique can be cast either as a) a max-min two-person game optimization problem, or b) a min-min soft-margin optimization problem. Both formulations lead to the same solution when a partial Lagrangian method is used to solve the optimization problems. Through experiments on synthetic data and on real social network data, we show that the proposed method reliably finds soft cliques in graph data, even when the data are distorted by random noise or unreliable observations. Copyright 2012 by the author(s)/owner(s).
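As an illustration of the clique-ness measure described above (a minimal sketch, not the authors' implementation; the graph representation and function names are assumptions), the deficit below counts the edges missing for a vertex subset to be a clique, summed over the observed graph instances:

```python
from itertools import combinations

def missing_edges(vertices, edges):
    """Number of pairs in `vertices` not joined by an edge; `edges` is a
    set of frozensets {u, v} representing one undirected graph instance."""
    return sum(1 for u, v in combinations(sorted(vertices), 2)
               if frozenset((u, v)) not in edges)

def cliqueness_deficit(vertices, graph_instances):
    """Total missing edges across all instances; 0 means `vertices`
    forms a clique in every observed graph."""
    return sum(missing_edges(vertices, g) for g in graph_instances)

# Two noisy observations of the triangle {0, 1, 2}:
g1 = {frozenset(e) for e in [(0, 1), (1, 2), (0, 2)]}
g2 = {frozenset(e) for e in [(0, 1), (1, 2)]}   # edge (0, 2) unobserved
print(cliqueness_deficit({0, 1, 2}, [g1, g2]))  # -> 1
```

The most persistent soft-clique then trades this deficit off against vertex weight, which is where the max-min and min-min optimization formulations come in.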
Abstract:
In a wind-turbine gearbox, planet bearings exhibit a high failure rate and are considered one of the most critical components. Development of efficient vibration-based fault detection methods for these bearings requires a thorough understanding of their vibration signature. Much work has been done to study the vibration properties of healthy planetary gear sets and to identify fault frequencies in fixed-axis bearings. However, the vibration characteristics of planetary gear sets containing localized planet bearing defects (spalls or pits) have not been studied so far. In this paper, we propose a novel analytical model of a planetary gear set with two key features: ring gear flexibility and localized bearing defects. The model is used to simulate the vibration response of a planetary system in the presence of a defective planet bearing with faults on the inner or outer raceway. The characteristic fault signature of a planetary bearing defect is determined and the sources of modulation sidebands are identified. The findings from this work will be useful for improving existing sensor placement strategies and for developing more sophisticated fault detection algorithms. Copyright © 2011 by ASME.
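The paper's analytical model is not reproduced in the abstract; as background for the fault signatures it refers to, the classical ball-pass frequencies for a localized defect on a fixed-axis rolling-element bearing (textbook formulas, with illustrative parameter values) can be computed as:

```python
import math

def ball_pass_frequencies(n_rollers, d_roller, d_pitch, f_shaft, phi_deg=0.0):
    """Textbook fault frequencies for a rolling-element bearing with a
    stationary outer race: BPFO for an outer-raceway defect, BPFI for an
    inner-raceway defect. phi_deg is the contact angle."""
    ratio = (d_roller / d_pitch) * math.cos(math.radians(phi_deg))
    bpfo = 0.5 * n_rollers * f_shaft * (1.0 - ratio)
    bpfi = 0.5 * n_rollers * f_shaft * (1.0 + ratio)
    return bpfo, bpfi

# Illustrative values only: 8 rollers, 12 mm roller, 60 mm pitch, 25 Hz shaft
print(ball_pass_frequencies(8, 0.012, 0.060, 25.0))  # -> (80.0, 120.0)
```

In a planetary gear set these frequencies are further modulated by carrier rotation, which is the sideband structure the model above is built to explain.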
Abstract:
The object of this paper is to give a complete treatment of the realizability of positive-real biquadratic impedance functions by six-element series-parallel networks comprising resistors, capacitors, and inductors. This question was studied but not fully resolved in the classical electrical circuit literature. Renewed interest in this question arises in the synthesis of passive mechanical impedances. Recent work by the authors has introduced the concept of a regular positive-real function. It was shown that five-element networks are capable of realizing all regular and some (but not all) nonregular biquadratic positive-real functions. Accordingly, the focus of this paper is on the realizability of nonregular biquadratics. It will be shown that the only six-element series-parallel networks capable of realizing nonregular biquadratic impedances are those with three or four reactive elements. We identify a set of networks that can realize all the nonregular biquadratic functions for each of the two cases. The realizability conditions for the networks are expressed in terms of a canonical form for biquadratics. The nonregular realizable region for each of the networks is explicitly characterized. © 2004-2012 IEEE.
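For reference, the biquadratic impedance in question, and the classical positive-realness test it must satisfy (a standard result from the circuit literature, not the paper's canonical form), can be written as:

```latex
Z(s) = \frac{A s^{2} + B s + C}{D s^{2} + E s + F}, \qquad A, \dots, F \ge 0,
\qquad \text{positive-real} \iff BE \ \ge\ \bigl(\sqrt{AF} - \sqrt{CD}\bigr)^{2}.
```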
Abstract:
The brain encodes visual information with limited precision. Contradictory evidence exists as to whether the precision with which an item is encoded depends on the number of stimuli in a display (set size). Some studies have found evidence that precision decreases with set size, but others have reported constant precision. These groups of studies differed in two ways: the studies that reported a decrease used displays with heterogeneous stimuli and tasks with a short-term memory component, while those that reported constancy used homogeneous stimuli and tasks that did not require short-term memory. To disentangle the effects of heterogeneity and short-term memory involvement, we conducted two main experiments. In Experiment 1, stimuli were heterogeneous, and we compared a condition in which target identity was revealed before the stimulus display with one in which it was revealed afterward. In Experiment 2, target identity was fixed, and we compared heterogeneous and homogeneous distractor conditions. In both experiments, we compared an optimal-observer model in which precision is constant with set size with one in which it depends on set size. We found that precision decreases with set size when the distractors are heterogeneous, regardless of whether short-term memory is involved, but not when they are homogeneous. This suggests that heterogeneity, not short-term memory, is the critical factor. In addition, we found that precision exhibits variability across items and trials, which may partly be caused by attentional fluctuations.
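The two observer models being compared can be sketched as follows (a schematic, not the authors' code; the power-law form for the set-size-dependent case is a common parameterization in this literature and an assumption here):

```python
import numpy as np

def precision(set_size, J1=20.0, alpha=1.0, constant=False):
    """Encoding precision (inverse variance) per item: either independent
    of set size N, or decreasing as J = J1 * N**(-alpha) (assumed form)."""
    return J1 if constant else J1 * set_size ** (-alpha)

def noisy_measurements(true_values, rng=np.random.default_rng(0), **kw):
    """One trial: each displayed item is encoded with Gaussian noise whose
    standard deviation follows from the chosen precision model."""
    true_values = np.asarray(true_values, dtype=float)
    sigma = 1.0 / np.sqrt(precision(true_values.size, **kw))
    return true_values + sigma * rng.standard_normal(true_values.size)

print(noisy_measurements([0.1, 0.4, -0.2, 0.9]))                 # N-dependent
print(noisy_measurements([0.1, 0.4, -0.2, 0.9], constant=True))  # constant
```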
Abstract:
Accurate and efficient computation of the distance function d for a given domain is important for many areas of numerical modeling. Distance function algorithms based on partial differential equations (e.g., of Hamilton-Jacobi type) have desirable computational efficiency and accuracy. In this study, as an alternative, a Poisson equation based level set (distance function) is considered and solved using the meshless boundary element method (BEM). Its application to shape topology analysis, including the medial axis for domain decomposition, geometric de-featuring, and other aspects of numerical modeling, is assessed. © 2011 Elsevier Ltd. All rights reserved.
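One widely used form of a Poisson-based distance function (a known approximation; the paper's exact BEM formulation may differ) solves a Poisson problem and recovers the distance from its solution and gradient:

```latex
\nabla^{2}\phi = -1 \ \text{in } \Omega, \qquad \phi = 0 \ \text{on } \partial\Omega,
\qquad d \approx -\lvert\nabla\phi\rvert + \sqrt{\lvert\nabla\phi\rvert^{2} + 2\phi}.
```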
Abstract:
Looking for a target in a visual scene becomes more difficult as the number of stimuli increases. In a signal detection theory view, this is due to the cumulative effect of noise in the encoding of the distractors and, potentially on top of that, to an increase of the noise (i.e., a decrease of precision) per stimulus with set size, reflecting divided attention. It has long been argued that human visual search behavior can be accounted for by the first factor alone. While such an account seems adequate for search tasks in which all distractors have the same, known feature value (i.e., are maximally predictable), we recently found a clear effect of set size on encoding precision when distractors are drawn from a uniform distribution (i.e., when they are maximally unpredictable). Here we interpolate between these two extreme cases to examine which of the two conclusions holds more generally as distractor statistics are varied. In one experiment we vary the level of distractor heterogeneity; in another we dissociate distractor homogeneity from predictability. In all conditions in both experiments, we found a strong decrease of precision with increasing set size, suggesting that set-size-independent precision is the exception rather than the rule.
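A max-of-outputs signal detection observer of the kind discussed above can be sketched as follows (a minimal sketch; the exponent alpha controlling how noise grows with set size is an assumed parameterization, with alpha = 0 recovering the constant-precision account):

```python
import numpy as np

def max_rule_decision_variable(target_present, set_size, sigma0=1.0,
                               alpha=0.5, signal=1.0, n_trials=10000,
                               rng=np.random.default_rng(0)):
    """Each item is encoded with Gaussian noise sigma = sigma0 * N**alpha;
    the observer decides on the basis of the maximum output."""
    sigma = sigma0 * set_size ** alpha
    x = rng.normal(0.0, sigma, size=(n_trials, set_size))
    if target_present:
        x[:, 0] += signal          # one display item carries the target
    return x.max(axis=1)

criterion = 1.5
hit_rate = (max_rule_decision_variable(True, 8) > criterion).mean()
fa_rate = (max_rule_decision_variable(False, 8) > criterion).mean()
print(hit_rate, fa_rate)           # performance degrades as set_size grows
```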
Abstract:
In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix, or more components at once, respectively. While the initial formulations involve nonconvex functions and are therefore computationally intractable, we rewrite them in the form of an optimization program involving maximization of a convex function on a compact set. The dimension of the search space is reduced enormously if the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. It appears that our algorithm has the best convergence properties when either the objective function or the feasible set is strongly convex, which is the case with our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically on a set of random and gene expression test problems that our approach outperforms existing algorithms both in quality of the obtained solution and in computational speed. © 2010 Michel Journée, Yurii Nesterov, Peter Richtárik and Rodolphe Sepulchre.
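A single-unit iteration of the kind the abstract describes, maximizing the convex function sum_i (|a_i^T x| - gamma)_+^2 over the unit sphere by a generalized power step, can be sketched as follows (parameter names and the stopping rule are assumptions, not the authors' reference code):

```python
import numpy as np

def sparse_pc_single_unit(A, gamma, n_iter=200, seed=0):
    """Generalized power iteration for one sparse principal component.
    A is p-by-n with variable vectors a_i as columns; gamma >= 0
    trades explained variance for sparsity of the returned loading."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        t = A.T @ x                                          # a_i^T x for all i
        w = np.sign(t) * np.maximum(np.abs(t) - gamma, 0.0)  # soft threshold
        if not w.any():                                      # gamma too large
            return w
        x = A @ w
        x /= np.linalg.norm(x)
    z = A.T @ x
    z = np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)
    n = np.linalg.norm(z)
    return z / n if n else z                                 # sparse loading

A = np.random.default_rng(1).standard_normal((20, 50))
z = sparse_pc_single_unit(A, gamma=1.0)
print(np.count_nonzero(z), "of", z.size, "loadings are nonzero")
```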
Abstract:
This paper proposes a design methodology to stabilize isolated relative equilibria in a model of all-to-all coupled identical particles moving in the plane at unit speed. Isolated relative equilibria correspond to either parallel motion of all particles with fixed relative spacing or to circular motion of all particles with fixed relative phases. The stabilizing feedbacks derive from Lyapunov functions that prove exponential stability and suggest almost global convergence properties. The results of the paper provide a low-order parametric family of stabilizable collectives that offer a set of primitives for the design of higher-level tasks at the group level. © 2007 IEEE.
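The particle model underlying this design is commonly written in complex notation (consistent with the planar unit-speed setting of the abstract) as:

```latex
\dot r_k = e^{i\theta_k}, \qquad \dot\theta_k = u_k, \qquad k = 1, \dots, N,
```

where $r_k \in \mathbb{C}$ is the position and $\theta_k$ the heading of particle $k$; parallel motion corresponds to synchronized headings, and circular motion to all headings rotating at a common rate $\omega_0 \neq 0$.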
Abstract:
This paper generalizes recent Lyapunov constructions for a cascade of two nonlinear systems, one of which is stable rather than asymptotically stable. A new cross-term construction in the Lyapunov function allows us to replace earlier growth conditions by a necessary boundedness condition. This method is instrumental in the global stabilization of feedforward systems, and new stabilization results are derived from the generalized construction.
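In the standard cascade notation, the construction the abstract refers to has the following shape (a sketch of the well-known cross-term idea; the paper's precise assumptions and statement differ):

```latex
\dot z = f(z) + \psi(z, \xi), \qquad \dot\xi = a(\xi),
\qquad V(z, \xi) = W(z) + \Psi(z, \xi) + U(\xi),
```

where $W$ and $U$ are Lyapunov functions for the two subsystems and the cross-term $\Psi(z,\xi) = \int_0^\infty \frac{\partial W}{\partial z}\bigl(\tilde z(s)\bigr)\, \psi\bigl(\tilde z(s), \tilde\xi(s)\bigr)\, ds$ is evaluated along the solution $(\tilde z, \tilde\xi)$ of the cascade starting from $(z, \xi)$.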
Abstract:
When considering the potential uptake and utilization of technology management tools by industry, it must be recognized that companies face the difficult challenges of selecting, adopting and integrating individual tools into a toolkit that must be implemented within their current organizational processes and systems. This situation is compounded by the lack of sound advice on integrating well-founded individual tools into a robust toolkit that is flexible enough to be tailored to the specific problems faced by individual organizations. As an initial stepping stone to offering a toolkit with empirically proven utility, this paper provides a conceptual foundation for the development of toolkits by outlining an underlying philosophical position based on observations from multiple research and commercial collaborations with industry. This stance is underpinned by a set of operationalized principles that can offer guidance to organizations when deciding upon the appropriate form, functions and features that should be embodied by any potential tool/toolkit. For example, a key objective of any tool is to aid decision-making, and a core set of powerful, flexible, scalable and modular tools should be sufficient to allow users to generate, explore, shape and implement possible solutions across a wide array of strategic issues. From our philosophical stance, the preferred mode of engagement is facilitated workshops with a participatory process that enables multiple perspectives and structures the conversation through visual representations in order to manage the cognitive load in the collaborative environment. The generic form of the tools should be configurable for the given context and utilized in a lightweight manner based on the premise of 'start small and iterate fast'. © 2012 Elsevier Inc.
Abstract:
A new version of the Multi-objective Alliance Algorithm (MOAA) is described. The MOAA's performance is compared with that of NSGA-II, using the epsilon and hypervolume indicators to evaluate the results. The benchmark functions chosen for the comparison are drawn from the ZDT and DTLZ families and from the main classical multi-objective (MO) problems. The results show that the new MOAA version is able to outperform NSGA-II on almost all of the problems.
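The additive epsilon indicator used in such comparisons has a compact standard definition, sketched here for minimization problems (not the exact evaluation code used in the paper):

```python
import numpy as np

def additive_epsilon(front_a, front_b):
    """I_eps(A, B): smallest eps such that every point of B is weakly
    dominated by some point of A shifted down by eps in every objective."""
    A = np.asarray(front_a, dtype=float)       # shape (|A|, n_objectives)
    B = np.asarray(front_b, dtype=float)       # shape (|B|, n_objectives)
    diffs = A[:, None, :] - B[None, :, :]      # pairwise objective gaps
    return diffs.max(axis=2).min(axis=0).max()

pareto_a = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
pareto_b = [[0.1, 0.9], [0.9, 0.1]]
print(additive_epsilon(pareto_a, pareto_b))    # ~0.1: A nearly covers B
```

Smaller values indicate that the approximation front A comes closer to (or dominates) the reference front B.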