986 results for Similarity measure


Relevance: 20.00%

Abstract:

In this paper, a novel approach to building a Fuzzy Inference System (FIS) that preserves the monotonicity property is proposed. A new fuzzy relabeling technique to relabel the consequents of fuzzy rules in the database (before the Similarity Reasoning process) and a monotonicity index for use in FIS modeling are introduced. The proposed approach overcomes several restrictions of our previous work, which relied on mathematical conditions to build monotonicity-preserving FIS models. We show that the proposed approach is applicable to different FIS models, including the zero-order Sugeno and Mamdani models. In addition, the approach can be extended to problems involving the local monotonicity property of FIS models. A number of examples are presented, and the results indicate the usefulness of the proposed approach in constructing monotonicity-preserving FIS models.
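The abstract does not define the monotonicity index. As a rough illustration of the general idea only (the function name, sampling scheme, and scoring are our assumptions, not the paper's definition), a single-input model can be scored by the fraction of adjacent sample pairs whose outputs do not decrease as the input increases:

```python
# Hypothetical monotonicity index for a single-input model: the fraction of
# adjacent sample pairs (needs at least two samples) whose outputs do not
# decrease as the input increases. 1.0 means fully monotone on the samples.
def monotonicity_index(model, xs):
    ys = [model(x) for x in sorted(xs)]
    pairs = list(zip(ys, ys[1:]))
    ok = sum(1 for a, b in pairs if b >= a)
    return ok / len(pairs)
```

A monotone model scores 1.0, while any local decrease in the sampled outputs lowers the score; a multi-input FIS could be scored per input dimension in the same spirit.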

Relevance: 20.00%

Abstract:

In this paper, an Evolutionary-based Similarity Reasoning (ESR) scheme for preserving the monotonicity property of the multi-input Fuzzy Inference System (FIS) is proposed. Similarity reasoning (SR) is a useful solution for addressing the incomplete rule-base problem in FIS modeling. However, SR may not be a direct solution to designing monotonic multi-input FIS models, owing to the difficulty of obtaining a set of monotonically-ordered conclusions. The proposed ESR scheme, which is a synthesis of evolutionary computing, sufficient conditions, and SR, provides a useful solution to modeling and preserving the monotonicity property of multi-input FIS models. A case study on Failure Mode and Effect Analysis (FMEA) is used to demonstrate the effectiveness of the proposed ESR scheme on real-world problems that require the monotonicity property of FIS models.
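The abstract gives no algorithmic detail, so the following is only a toy sketch of the general idea of combining evolutionary search with a monotonicity constraint; it is not the authors' ESR algorithm, and the penalty weight, mutation step, and fitness form are invented for illustration. Unknown rule consequents are evolved so that the completed one-dimensional rule base becomes monotonically ordered while staying close to the values suggested by similarity reasoning:

```python
import random

# Toy sketch only (NOT the paper's ESR scheme): hill-climb the unknown
# consequents so the rule base becomes monotonically non-decreasing while
# staying close to the similarity-reasoning guesses. Weights are arbitrary.
def evolve_consequents(sr_guess, known, gens=500, seed=0):
    rng = random.Random(seed)

    def fitness(c):
        # heavily penalize monotonicity violations, lightly penalize
        # deviation from the similarity-reasoning guesses
        viol = sum(max(0.0, c[i] - c[i + 1]) for i in range(len(c) - 1))
        dist = sum(abs(a - b) for a, b in zip(c, sr_guess))
        return -(10.0 * viol + dist)

    best = list(sr_guess)
    for k, v in known.items():     # consequents fixed by domain experts
        best[k] = v
    for _ in range(gens):
        i = rng.randrange(len(best))
        if i in known:
            continue               # never mutate expert-given consequents
        cand = list(best)
        cand[i] += rng.uniform(-0.1, 0.1)
        if fitness(cand) > fitness(best):
            best = cand
    return best
```

A real scheme would use a population, crossover, and the paper's sufficient conditions rather than a simple soft penalty, but the sketch shows how the monotone ordering can be imposed as part of the fitness function.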

Relevance: 20.00%

Abstract:

The problem of nonnegative blind source separation (NBSS) is addressed in this paper, where both the sources and the mixing matrix are nonnegative. Because many real-world signals are sparse, we deal with NBSS by sparse component analysis. First, a determinant-based sparseness measure, named D-measure, is introduced to gauge the temporal and spatial sparseness of signals. Based on this measure, a new NBSS model is derived, and an iterative sparseness maximization (ISM) approach is proposed to solve this model. In the ISM approach, the NBSS problem can be cast into row-to-row optimizations with respect to the unmixing matrix, and then the quadratic programming (QP) technique is used to optimize each row. Furthermore, we analyze the source identifiability and the computational complexity of the proposed ISM-QP method. The new method requires relatively weak conditions on the sources and the mixing matrix, has high computational efficiency, and is easy to implement. Simulation results demonstrate the effectiveness of our method.
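The abstract does not give the D-measure's formula. One plausible reading of a determinant-based sparseness score, offered purely as an illustration of why a determinant can gauge spatial sparseness (the normalization and definition below are our assumptions, not the paper's), is the determinant of the Gram matrix of row-normalized sources:

```python
import numpy as np

# Illustrative determinant-based sparseness score (our reading, not the
# paper's exact D-measure): L2-normalize each source row, then take the
# determinant of the Gram matrix. It equals 1 when row supports are
# disjoint (maximally sparse spatially) and shrinks toward 0 as rows overlap.
def d_measure(S):
    S = np.asarray(S, dtype=float)
    R = S / np.linalg.norm(S, axis=1, keepdims=True)
    return float(np.linalg.det(R @ R.T))
```

Maximizing such a score over the rows of the unmixing matrix, one row at a time, is the flavor of the row-to-row optimization the abstract describes.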

Relevance: 20.00%

Abstract:

Identification of the most central node within a network is one of the primary problems in network analysis. Among the various centrality measures for weighted networks, most are based on the assumption that information spreads only through the shortest paths. Physarum centrality relaxes this assumption by using a mathematical model of an amoeboid organism. However, its computational complexity is relatively high, because it finds competing paths between all pairs of nodes in the network. In this paper, using the idea of a ground node, an improved Physarum centrality is proposed that maintains the features of the original measure while greatly enhancing its performance. Examples and applications are given to show the efficiency and effectiveness of the proposed measure in weighted networks.
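To make the shortest-path assumption concrete, here is a minimal sketch of a classical measure in that family: weighted closeness centrality, where a node is central if its shortest-path distances to all other nodes are small. (This is the kind of baseline the Physarum model relaxes; the adjacency-list representation is our choice.)

```python
import heapq

# Single-source shortest paths (Dijkstra) over an adjacency list
# {node: [(neighbor, weight), ...]} with nonnegative weights.
def dijkstra(adj, src):
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Closeness centrality: reachable-node count divided by total shortest-path
# distance; higher means more central under the shortest-path assumption.
def closeness(adj, node):
    dist = dijkstra(adj, node)
    others = [d for n, d in dist.items() if n != node]
    return (len(others) / sum(others)) if others else 0.0
```

On a path graph a–b–c with unit weights, the middle node b scores highest, as expected.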

Relevance: 20.00%

Abstract:

Background: The Beck Depression Inventory (BDI) has frequently been employed as a measure of depression in studies of obesity, with the majority of studies reporting an improvement in scores following weight loss. Given the potential similarity between obesity-related and depressive symptoms, it is uncertain whether all components of depression improve equally with weight loss.

Method: The study included obese patients who had undergone laparoscopic adjustable gastric banding (LAGB) surgery and had completed BDIs at baseline and 1 year after surgery. Two groups of patients were included: a general background group (N = 191, mean age = 41 ± 9, mean BMI = 43 ± 8) and a group identified as experiencing elevated depressive symptoms based on BDI scores ≥ 23 (EDS group; N = 67, mean age = 40 ± 9, mean BMI = 45 ± 7).

Results: Overall, BDI scores fell in both groups: from 17 ± 9 at baseline to 8 ± 7 at 1 year in the background group, and from 30 ± 5 at baseline to 14 ± 10 at 1 year in the EDS group. Scores on the negative self-attitude subscale were significantly greater than on the two other subscales and showed the greatest improvement 1 year following LAGB. Preexisting antidepressant therapy had little or no association with BDI scores or with their change following weight loss.

Conclusion: High rates of depression are consistently reported in obesity, as is a marked decrease in depressive symptoms following weight loss. Negative attitudes towards one's self, rather than the overlap in physical symptoms between obesity and depression, appear to be driving the elevated BDI scores.

Relevance: 20.00%

Abstract:

Software similarity and classification is an emerging topic with wide applications. It is applicable to the areas of malware detection, software theft detection, plagiarism detection, and software clone detection. Extracting program features, processing those features into suitable representations, and constructing distance metrics to define similarity and dissimilarity are the key methods to identify software variants, clones, derivatives, and classes of software. Software Similarity and Classification reviews the literature of those core concepts, in addition to relevant literature in each application and demonstrates that considering these applied problems as a similarity and classification problem enables techniques to be shared between areas. Additionally, the authors present in-depth case studies using the software similarity and classification techniques developed throughout the book.
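As a small, hedged illustration of the pipeline the abstract describes (extract features, represent them, compare with a similarity metric), one common technique in this literature is to take n-grams of a program's token or instruction stream and compare the resulting sets with the Jaccard index. The token choice and n are illustrative assumptions, not the book's specific method:

```python
# Feature extraction: the set of n-grams (length-n windows) over a
# program's token sequence, e.g. disassembled mnemonics or source tokens.
def ngrams(tokens, n=3):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

# Set-based similarity: Jaccard index of two feature sets
# (|intersection| / |union|); 1.0 means identical feature sets.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0
```

Two variants of the same program share most n-grams and score close to 1, while unrelated programs score near 0, which is the basic signal used in clone, plagiarism, and malware-variant detection.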

Relevance: 20.00%

Abstract:

Although economists have developed a series of approaches to modelling the existence of labour market discrimination, rarely is this topic examined by analysing self-report survey data. After reviewing theories and empirical models of labour market discrimination, we examine self-reported experience of discrimination at different stages in the labour market, among three racial groups utilising U.S. data from the 2001-2003 National Survey of American Life. Our findings indicate that African Americans and Caribbean blacks consistently report more experience of discrimination in the labour market than their non-Hispanic white counterparts. At different stages of the labour market, including hiring, termination and promotion, these groups are more likely to report discrimination than non-Hispanic whites. After controlling for social desirability bias and several human capital and socio-demographic covariates, the results remain robust for African Americans. However, the findings for Caribbean blacks were no longer significant after adjusting for social desirability bias. Although self-report data is rarely utilised to assess racial discrimination in labour economics, our study confirms the utility of this approach as demonstrated in similar research from other disciplines. Our results indicate that after adjusting for relevant confounders self-report survey data is a viable approach to estimating racial discrimination in the labour market. Implications of the study and directions for future research are provided.

Relevance: 20.00%

Abstract:

Presently, no consensus has been reached with regards to measuring workplace cohesion. Cohesion measures often allude to abstract concepts rather than tangible features, therefore this study identified the tangible features and specific practices that epitomize cohesive workgroups. Specifically, 28 individuals were interviewed and asked to reflect upon two workgroups in which they had been employed before, only one of which was cohesive. Participants identified tangible features, practices, or characteristics that typified each of these workgroups. Content analysis uncovered 14 features of cohesion, such as shared emotional events in the past, friendly and welcoming greetings, and a feeling of pride when other people in the team excel on some task. A provisional measure of cohesion was then distilled from these items.

Relevance: 20.00%

Abstract:

A complete and monotonically-ordered fuzzy rule base is necessary to maintain the monotonicity property of a Fuzzy Inference System (FIS). In this paper, a new monotone fuzzy rule relabeling technique to relabel a non-monotone fuzzy rule base provided by domain experts is proposed. Even though a Genetic Algorithm (GA)-based monotone fuzzy rule relabeling technique was investigated in our previous work [7], the optimality of that approach could not be guaranteed. The new fuzzy rule relabeling technique adopts a simple brute-force search, and it can produce an optimal result. We also formulate a new two-stage framework that encompasses a GA-based rule selection scheme, the optimization-based Similarity Reasoning (SR) scheme, and the proposed monotone fuzzy rule relabeling technique for preserving the monotonicity property of the FIS model. Applicability of the two-stage framework to a real-world problem, i.e., failure mode and effect analysis, is further demonstrated. The results clearly demonstrate the usefulness of the proposed framework.
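The abstract does not spell out the brute-force search, so the following is only a sketch of the idea for a one-dimensional rule base, under our own assumptions: consequent labels are indices into an ordered label set, and "optimal" means the monotone relabeling that changes the fewest expert-given labels. Enumerating all label assignments is exponential, which is why it only suits small rule bases:

```python
from itertools import product

# Hedged sketch (our formulation, not the paper's exact algorithm):
# enumerate every assignment of labels 0..n_labels-1 to the rules, keep
# only monotonically non-decreasing ones, and return the assignment that
# disagrees with the experts' labels in the fewest positions.
def relabel_monotone(labels, n_labels):
    best, best_cost = None, None
    for cand in product(range(n_labels), repeat=len(labels)):
        if any(cand[i] > cand[i + 1] for i in range(len(cand) - 1)):
            continue  # discard non-monotone candidates
        cost = sum(a != b for a, b in zip(cand, labels))
        if best is None or cost < best_cost:
            best, best_cost = cand, cost
    return list(best), best_cost
```

Because every monotone candidate is examined, the returned relabeling is optimal by construction, which is the guarantee the GA-based approach lacked.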

Relevance: 20.00%

Abstract:

Illumination and pose invariance are the most challenging aspects of face recognition. In this paper we describe a fully automatic face recognition system that uses video information to achieve illumination and pose robustness. In the proposed method, highly nonlinear manifolds of face motion are approximated using three Gaussian pose clusters. Pose robustness is achieved by comparing the corresponding pose clusters and probabilistically combining the results to derive a measure of similarity between two manifolds. Illumination is normalized on a per-pose basis. Region-based gamma intensity correction is used to correct for coarse illumination changes, while further refinement is achieved by combining a learnt linear manifold of illumination variation with constraints on face pattern distribution, derived from video. Comparative experimental evaluation is presented and the proposed method is shown to greatly outperform state-of-the-art algorithms. Consistent recognition rates of 94-100% are achieved across dramatic changes in illumination.
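As a hedged sketch of the region-based gamma intensity correction step (the target mean, region handling, and exact mapping are our assumptions; the paper's regions and parameters are not given in the abstract), each region's intensities can be raised to the power that maps the region's mean to a common target, normalizing coarse illumination differences between dark and bright regions:

```python
import math

# Illustrative region-wise gamma correction: choose gamma so that the
# region's mean intensity m satisfies m ** gamma == target_mean, then apply
# that power law to every pixel. Note the corrected region's mean only
# approximately equals the target in general (mean of powers != power of
# the mean), and is exact for a constant region.
def gamma_correct_region(pixels, target_mean=0.5):
    # pixels: intensities in (0, 1]; returns a gamma-corrected copy
    mean = sum(pixels) / len(pixels)
    gamma = math.log(target_mean) / math.log(mean)
    return [p ** gamma for p in pixels]
```

Applying this per region rather than globally is what lets the method correct illumination that varies across the face, with the learnt illumination manifold handling the finer residual variation.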