40 results for the Fuzzy Colour Segmentation Algorithm
Abstract:
Patients with Bipolar Disorder (BD) perform poorly on tasks of selective attention and inhibitory control. Although similar behavioural deficits have been noted in their relatives, it remains unclear whether they reflect dysfunction in the same neural circuits. We used functional magnetic resonance imaging and the Stroop Colour Word Task to compare task-related neural activity between 39 euthymic BD patients, 39 of their first-degree relatives (25 with no Axis I disorders and 14 with Major Depressive Disorder) and 48 healthy controls. Compared to controls, all individuals with a familial predisposition to BD, irrespective of diagnosis, showed similar reductions in neural responsiveness in regions involved in selective attention within the posterior and inferior parietal lobules. In contrast, hypoactivation within fronto-striatal regions, implicated in inhibitory control, was observed only in BD patients and MDD relatives. Although striatal deficits were comparable between BD patients and their MDD relatives, right ventrolateral prefrontal dysfunction was uniquely associated with BD. Our findings suggest that while reduced parietal engagement relates to genetic risk, fronto-striatal dysfunction reflects processes underpinning disease expression for mood disorders. © 2011 Elsevier Inc.
Abstract:
Linear programming (LP) is the most widely used optimization technique for solving real-life problems because of its simplicity and efficiency. Although conventional LP models require precise data, managers and decision makers dealing with real-world optimization problems often do not have access to exact values. Fuzzy sets have been used in the fuzzy LP (FLP) problems to deal with the imprecise data in the decision variables, objective function and/or the constraints. The imprecisions in the FLP problems could be related to (1) the decision variables; (2) the coefficients of the decision variables in the objective function; (3) the coefficients of the decision variables in the constraints; (4) the right-hand-side of the constraints; or (5) all of these parameters. In this paper, we develop a new stepwise FLP model where fuzzy numbers are considered for the coefficients of the decision variables in the objective function, the coefficients of the decision variables in the constraints and the right-hand-side of the constraints. In the first step, we use the possibility and necessity relations for fuzzy constraints without considering the fuzzy objective function. In the subsequent step, we extend our method to the fuzzy objective function. We use two numerical examples from the FLP literature for comparison purposes and to demonstrate the applicability of the proposed method and the computational efficiency of the procedures and algorithms. © 2013-IOS Press and the authors. All rights reserved.
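The possibility and necessity relations mentioned for the fuzzy constraints can be illustrated for triangular fuzzy numbers, where the possibility degree that one fuzzy quantity exceeds another has a simple closed form. This is a generic textbook sketch, not the paper's exact formulation:

```python
def possibility_geq(a, b):
    """Possibility degree that triangular fuzzy number A >= B.

    A and B are triangles (left, mode, right). Standard formula used in
    possibility-based comparisons of fuzzy constraint sides.
    """
    a1, a2, a3 = a
    b1, b2, b3 = b
    if a2 >= b2:
        return 1.0          # A's mode already dominates B's mode
    if a3 <= b1:
        return 0.0          # the supports do not even overlap
    # height of the crossing point of the two membership functions
    return (a3 - b1) / ((a3 - a2) + (b2 - b1))

print(possibility_geq((1, 2, 3), (2, 3, 4)))  # 0.5
```

A fuzzy constraint such as "Ã x ≤ b̃" can then be enforced at a chosen possibility threshold instead of requiring crisp feasibility.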
Abstract:
Selecting the best alternative in group decision making is the subject of many recent studies. The most popular method proposed for ranking the alternatives is based on the distance of each alternative to the ideal alternative. The ideal alternative may never exist; hence the ranking results are biased toward the ideal point. The main aim of this study is to calculate a fuzzy ideal point that is more realistic than the crisp ideal point. Recently, Data Envelopment Analysis (DEA) has also been used to find the optimum weights for ranking the alternatives. This paper proposes a four-stage approach based on DEA in a fuzzy environment to aggregate preference rankings. An application to a preferential voting system shows how the new model can be applied to rank a set of alternatives. Two further examples demonstrate the advantage of the proposed method over some other suggested methods.
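The distance-to-ideal ranking idea that the paper fuzzifies can be sketched in its crisp form: take the column-wise maxima as the ideal point, the minima as the anti-ideal, and rank alternatives by relative closeness. A minimal illustration with hypothetical benefit-type scores, not the paper's four-stage DEA model:

```python
import math

def closeness_ranking(scores):
    """Rank alternatives by relative closeness to the ideal point.

    scores[i][j] is the (benefit-type) score of alternative i on criterion j.
    Returns alternative indices, best first.
    """
    ncrit = len(scores[0])
    ideal = [max(row[j] for row in scores) for j in range(ncrit)]
    anti = [min(row[j] for row in scores) for j in range(ncrit)]
    closeness = []
    for row in scores:
        d_plus = math.dist(row, ideal)    # distance to the ideal point
        d_minus = math.dist(row, anti)    # distance to the anti-ideal point
        total = d_plus + d_minus
        closeness.append(d_minus / total if total else 1.0)
    return sorted(range(len(scores)), key=lambda i: -closeness[i])

print(closeness_ranking([[1, 2], [3, 4], [2, 3]]))  # [1, 2, 0]
```

Replacing the crisp `ideal` vector with a fuzzy ideal point, as the paper proposes, removes the bias toward a point that may never be attainable.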
Abstract:
Data Envelopment Analysis (DEA) is recognized as a modern approach to the assessment of the performance of a set of homogeneous Decision Making Units (DMUs) that use similar inputs to produce similar outputs. While DEA is commonly used with precise data, several approaches have recently been introduced for evaluating DMUs with uncertain data. In the existing approaches, much information about the uncertainties is lost. For example, in defuzzification, the α-level and fuzzy ranking approaches are not considered. In the tolerance approach, the inequality or equality signs are fuzzified but the fuzzy coefficients (inputs and outputs) are not treated directly. The purpose of this paper is to develop a new model to evaluate DMUs under uncertainty using fuzzy DEA and to include the α-level in the model under a fuzzy environment. An example is given to illustrate the method in detail.
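The α-level idea the model builds on reduces each fuzzy coefficient to a closed interval at a chosen membership level; for a triangular fuzzy number the cut has a simple closed form. A minimal sketch of that building block only, not the full fuzzy DEA model:

```python
def alpha_cut(tri, alpha):
    """Interval obtained by cutting a triangular fuzzy number (l, m, r)
    at membership level alpha in [0, 1]."""
    l, m, r = tri
    return (l + alpha * (m - l), r - alpha * (r - m))

print(alpha_cut((0, 1, 2), 0.5))  # (0.5, 1.5)
```

At α = 1 the interval collapses to the modal value, while α = 0 recovers the full support, so sweeping α trades off optimism about the data against robustness of the efficiency scores.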
Abstract:
The purpose of this paper is to delineate a green supply chain (GSC) performance measurement framework using an intra-organisational collaborative decision-making (CDM) approach. A fuzzy analytic network process (ANP)-based green-balanced scorecard (GrBSc) has been used within the CDM approach to assist in arriving at a consistent, accurate and timely data flow across all cross-functional areas of a business. A green causal relationship is established and linked to the fuzzy ANP approach. The causal relationship involves organisational commitment, eco-design, GSC process, social performance and sustainable performance constructs. Sub-constructs and sub-sub-constructs are also identified and linked to the causal relationship to form a network. The fuzzy ANP approach suitably handles the vagueness of the linguistic information in the CDM approach. The CDM approach is implemented in a UK-based carpet-manufacturing firm. The performance measurement approach, in addition to traditional financial performance and accounting measures, aids a firm's decision-making with regard to the overall organisational goals. The implemented approach assists the firm in identifying further requirements for collaborative data across the supply chain and information about customers and markets. Overall, the CDM-based GrBSc approach assists managers in deciding whether suppliers' performance meets industry and environmental standards with effective human resources. © 2013 Taylor & Francis.
Abstract:
In this paper, we present syllable-based duration modelling in the context of a prosody model for Standard Yorùbá (SY) text-to-speech (TTS) synthesis applications. Our prosody model is conceptualised around a modular holistic framework. This framework is implemented using the Relational Tree (R-Tree) technique. An important feature of our R-Tree framework is its flexibility: it facilitates the independent implementation of the different dimensions of prosody, i.e. duration, intonation, and intensity, using different techniques, and their subsequent integration. We applied the Fuzzy Decision Tree (FDT) technique to model the duration dimension. In order to evaluate the effectiveness of FDT in duration modelling, we also developed a Classification And Regression Tree (CART)-based duration model using the same speech data. Each of these models was integrated into our R-Tree-based prosody model. We performed both quantitative (i.e. Root Mean Square Error (RMSE) and Correlation (Corr)) and qualitative (i.e. intelligibility and naturalness) evaluations of the two duration models. The results show that CART models the training data more accurately than FDT. The FDT model, however, shows a better ability to extrapolate from the training data, since it achieved a better accuracy on the test data set. Our qualitative evaluation results show that our FDT model produces synthesised speech that is perceived to be more natural than that of our CART model. We also observed that the expressiveness of FDT is much better than that of CART, because the representation in FDT is not restricted to a set of piecewise or discrete constant approximations. We therefore conclude that the FDT approach is a practical approach for duration modelling in SY TTS applications. © 2006 Elsevier Ltd. All rights reserved.
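The two quantitative metrics used to compare the duration models, RMSE and Pearson correlation between predicted and observed syllable durations, are standard and can be computed directly (a generic implementation, with made-up duration values in the example):

```python
import math

def rmse(pred, target):
    """Root mean square error between predicted and observed durations."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred))

def corr(pred, target):
    """Pearson correlation between predicted and observed durations."""
    n = len(pred)
    mp, mt = sum(pred) / n, sum(target) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, target))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in target))
    return cov / (sp * st)
```

A lower RMSE on the training set with a higher RMSE on held-out data, as reported for CART versus FDT here, is the usual signature of overfitting versus better generalisation.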
Abstract:
A key problem with IEEE 802.11 technology is adapting the transmission rates to changing channel conditions, which is more challenging in vehicular networks. Although the rate adaptation problem has been extensively studied for static residential and enterprise network scenarios, there is little work dedicated to IEEE 802.11 rate adaptation in vehicular networks. Here, the authors study the IEEE 802.11 rate adaptation problem in infrastructure-based vehicular networks. First, several existing rate adaptation algorithms that have been widely used in static network scenarios are evaluated under vehicular network scenarios. Then, a new rate adaptation algorithm is proposed to improve network performance. The new algorithm samples candidate transmission modes and uses the effective throughput associated with a transmission mode as the metric for choosing among the possible modes. The proposed algorithm is compared to several existing rate adaptation algorithms by simulations, which show significant performance improvement under various system and channel configurations. An ideal signal-to-noise ratio (SNR)-based rate adaptation algorithm, in which accurate channel SNR is assumed to be always available, is also implemented for benchmark performance comparison.
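The core selection rule of such a sampling-based scheme, picking the mode that maximises estimated effective throughput rather than raw PHY rate, can be sketched as follows. The rates and counters are hypothetical, and the paper's algorithm additionally governs when and which candidate modes to probe:

```python
def best_mode(stats):
    """Pick the transmission mode with the highest estimated effective
    throughput (PHY rate x observed frame success ratio).

    stats maps a PHY rate in Mbit/s to (successes, attempts) gathered by
    occasionally sampling candidate transmission modes.
    """
    def effective(item):
        rate, (ok, tries) = item
        return rate * (ok / tries) if tries else 0.0
    return max(stats.items(), key=effective)[0]

# 6 Mbit/s at 95% success beats 54 Mbit/s at 10% success:
print(best_mode({6: (95, 100), 54: (10, 100)}))  # 6
```

This is why a lower nominal rate can win under fast-varying vehicular channels: the success ratio collapses for aggressive modes and drags their effective throughput below that of conservative ones.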
Abstract:
Descriptions of vegetation communities are often based on vague semantic terms describing species presence and dominance. For this reason, some researchers advocate the use of fuzzy sets in the statistical classification of plant species data into communities. In this study, spatially referenced vegetation abundance values collected from Greek phrygana were analysed by ordination (DECORANA), and classified on the resulting axes using fuzzy c-means to yield a point data-set representing local memberships in characteristic plant communities. The fuzzy clusters matched vegetation communities noted in the field, which tended to grade into one another rather than occupying discrete patches. The fuzzy set representation of the community exploited the strengths of detrended correspondence analysis while retaining richer information than a TWINSPAN classification of the same data. Thus, in the absence of phytosociological benchmarks, meaningful and manageable habitat information could be derived from complex, multivariate species data. We also analysed the influence of the reliability of different surveyors' field observations by multiple sampling at a selected sample location. We show that the impact of surveyor error was more severe in the Boolean than in the fuzzy classification. © 2007 Springer.
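The fuzzy c-means step assigns each sample a graded membership in every cluster rather than a single label, which is what lets communities "grade into one another". A minimal one-dimensional textbook implementation (deterministic initialisation, illustrative data; not the DECORANA pipeline itself):

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=100, eps=1e-9):
    """Minimal fuzzy c-means for 1-D data: returns (centers, memberships).

    memberships[k][i] is the degree (in [0, 1]) to which point k belongs to
    cluster i; the degrees for each point sum to 1.
    """
    lo, hi = min(points), max(points)
    # spread the initial centers evenly over the data range
    centers = [lo + i * (hi - lo) / (c - 1) for i in range(c)]
    u = [[0.0] * c for _ in points]
    for _ in range(iters):
        # update memberships from inverse relative distances
        for k, x in enumerate(points):
            d = [abs(x - v) + eps for v in centers]
            for i in range(c):
                u[k][i] = 1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                    for j in range(c))
        # update centers as membership-weighted means
        for i in range(c):
            w = [u[k][i] ** m for k in range(len(points))]
            centers[i] = sum(wk * xk for wk, xk in zip(w, points)) / sum(w)
    return centers, u
```

Points near a cluster core get memberships close to 1, while points in a transition zone split their membership between clusters, retaining the gradient information a hard (Boolean) classifier would discard.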
Abstract:
An iterative method for computing the channel capacity of both discrete- and continuous-input, continuous-output channels is proposed. The efficiency of the new method is demonstrated in comparison with the classical Blahut-Arimoto algorithm for several known channels. Moreover, we also present a hybrid method combining the advantages of both the Blahut-Arimoto algorithm and our iterative approach. The new method is especially efficient for channels with an a priori unknown discrete input alphabet.
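For reference, the classical Blahut-Arimoto baseline against which the new method is compared alternates between updating the induced output distribution and reweighting the input distribution. A compact implementation for a discrete memoryless channel:

```python
import math

def blahut_arimoto(p_y_given_x, iters=200):
    """Blahut-Arimoto iteration for the capacity (in bits per channel use)
    of a discrete memoryless channel with transition matrix p_y_given_x[x][y].
    """
    nx, ny = len(p_y_given_x), len(p_y_given_x[0])
    q = [1.0 / nx] * nx                      # input distribution, start uniform
    for _ in range(iters):
        # output distribution induced by the current input distribution
        r = [sum(q[x] * p_y_given_x[x][y] for x in range(nx)) for y in range(ny)]
        # c[x] = exp( KL divergence of p(.|x) from r ), in nats
        c = []
        for x in range(nx):
            s = sum(p * math.log(p / r[y])
                    for y, p in enumerate(p_y_given_x[x]) if p > 0)
            c.append(math.exp(s))
        z = sum(qx * cx for qx, cx in zip(q, c))
        q = [qx * cx / z for qx, cx in zip(q, c)]  # reweight the inputs
    return math.log2(z)

# Binary symmetric channel with crossover 0.1: capacity = 1 - H2(0.1)
print(blahut_arimoto([[0.9, 0.1], [0.1, 0.9]]))  # ~0.531
```

Note that this baseline needs the input alphabet fixed in advance, which is exactly the limitation the abstract targets for channels whose discrete input alphabet is a priori unknown.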
Abstract:
Non-orthogonal multiple access (NOMA) is emerging as a promising multiple access technology for fifth-generation cellular networks to address the fast-growing mobile data traffic. It applies superposition coding at the transmitters, allowing simultaneous allocation of the same frequency resource to multiple intra-cell users. Successive interference cancellation is used at the receivers to cancel intra-cell interference. User pairing and power allocation (UPPA) is a key design aspect of NOMA. Existing UPPA algorithms are mainly based on exhaustive search with high computational complexity, which can severely affect NOMA performance. A fast proportional fairness (PF) scheduling based UPPA algorithm is proposed to address this problem. The novel idea is to form user pairs around the users with the highest PF metrics using a pre-configured fixed power allocation. System-level simulation results show that the proposed algorithm is significantly faster than the existing exhaustive search algorithm (seven times faster for the scenario with 20 users) with negligible throughput loss.
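The rate model underlying such a two-user downlink NOMA pair with a fixed power split can be sketched as follows: the far (weak) user decodes its signal treating the near user's as noise, while the near (strong) user first removes the far user's signal by successive interference cancellation. The power split and channel gains below are illustrative assumptions, not the paper's configuration:

```python
import math

def noma_pair_rates(g_near, g_far, p_near=0.2, p_far=0.8, noise=1.0):
    """Achievable spectral efficiencies (bit/s/Hz) of a two-user downlink
    NOMA pair under a pre-configured fixed power split (p_near + p_far = 1).
    g_near, g_far are the users' channel power gains.
    """
    # weak user: inter-user interference from the near user's signal remains
    rate_far = math.log2(1.0 + p_far * g_far / (p_near * g_far + noise))
    # strong user: far user's signal cancelled by SIC, only noise remains
    rate_near = math.log2(1.0 + p_near * g_near / noise)
    return rate_near, rate_far
```

A PF-based pairing scheme would evaluate such rate pairs only for candidate partners of the highest-PF-metric users, instead of exhaustively searching all pairs and power levels.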