866 results for the Fuzzy Colour Segmentation Algorithm
Abstract:
We examined the relations between selection for perception and selection for action in a patient, FK, with bilateral damage to the temporal and medial frontal cortices. The task required a simple grasp response to a common object (a cup) in the presence of a distractor (another cup). The target was cued by colour or location, and FK made manual responses. We examined the effects on performance of cued and uncued dimensions of both the target and the distractor. FK was impaired at perceptually selecting the target when it was cued by colour and the target's colour, but not its location, changed on successive trials. The effect was sensitive to the relative orientations of targets and distractors, indicating an effect of action selection on perceptual selection when perceptual selection was weakly instantiated. The dimension-specific carry-over effect on reaching was enhanced when there was a temporal delay between the cue and the response, and it disappeared when there was a between-trial delay. The results indicate that perceptual and action selection systems interact to determine the efficiency with which actions are selected for particular objects.
Abstract:
The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.
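The multimodal-posterior sampling problem above can be illustrated with a plain Metropolis-Hastings sampler on a toy bimodal target (a 1-D stand-in assumed purely for illustration; it is not the paper's scatterometer model or its enhanced MCMC scheme):

```python
import math
import random

def log_post(x):
    # unnormalised log-density of a bimodal target: Gaussians at -2 and +2
    return math.log(math.exp(-0.5 * (x - 2) ** 2) + math.exp(-0.5 * (x + 2) ** 2))

def metropolis(n, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n):
        prop = x + rng.gauss(0, step)
        # accept with probability min(1, post(prop)/post(x))
        if math.log(rng.random() + 1e-300) < log_post(prop) - log_post(x):
            x = prop
        samples.append(x)
    return samples

samples = metropolis(2000)
```

A sampler like this can get trapped in one mode when modes are well separated, which is exactly the difficulty that motivates the enhanced scheme described in the abstract.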
Abstract:
The feeding behaviour of trained rainbow trout was investigated by the use of demand feeders under different light conditions. The effects of the energy content of the diet, and of the size, colour and texture of feed pellets, on feeding behaviour were studied. An attempt was made to locate the assumed centres for feeding and satiety in the hypothalamus of the brain by intraperitoneal injections of goldthioglucose. Feeding under nine different constant photoperiods at 160 lux, at a temperature of 13.5°C, showed that trout exhibit a rhythmic pattern of feeding behaviour in all photoperiods except continuous darkness. Feeding rhythms of trout, attributable to the degree of gut distension, were formed every eight to ten hours. Further studies varying the level of light intensity revealed an interaction of light intensity and photoperiod. At shorter photoperiods, lower light intensities decreased feeding activity in terms of food intake, but by increasing the photoperiod the same feeding activity was accomplished as by fish subject to a short photoperiod under higher light intensity. Simulated increasing and decreasing daylengths did not affect overall food intake or growth performance. Trout are quite efficient at adjusting their food intake in terms of energy content. The colour, size and texture of feed pellets affect feeding responses and elicit preferential food selection behaviour in trout. Goldthioglucose induced some reversible toxic effects on the general physiology of trout and did not produce any lesions in the assumed areas of the feeding and satiety centres in the brain. It was concluded that the feeding behaviour of trout exhibited selective preferences according to the physical nature of food items, and that those preferences could be further influenced by biotic and abiotic factors, light being one of the most important abiotic factors.
Abstract:
There has been a resurgence of interest in the neural networks field in recent years, provoked in part by the discovery of the properties of multi-layer networks. This interest has in turn raised questions about the possibility of making neural network behaviour more adaptive by automating some of the processes involved. Prior to these particular questions, the process of determining the parameters and network architecture required to solve a given problem had been a time-consuming activity. A number of researchers have attempted to address these issues by automating these processes, concentrating in particular on the dynamic selection of an appropriate network architecture. The work presented here specifically explores the area of automatic architecture selection; it focuses upon the design and implementation of a dynamic algorithm based on the Back-Propagation learning algorithm. The algorithm constructs a single hidden layer as the learning process proceeds, using individual pattern error as the basis of unit insertion. The algorithm is applied to several problems of differing type and complexity and is found to produce near-minimal architectures that are shown to have a high level of generalisation ability.
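The unit-insertion idea can be sketched roughly as follows (the network structure, error threshold and insertion schedule are illustrative assumptions, not the work's exact algorithm): train a single-hidden-layer network with plain back-propagation, and grow the hidden layer when the worst individual pattern error stays high.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GrowingNet:
    def __init__(self, n_in, n_out, n_hidden=1):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))

    def forward(self, X):
        self.H = sigmoid(X @ self.W1)
        return sigmoid(self.H @ self.W2)

    def train_epoch(self, X, Y, lr=0.5):
        P = self.forward(X)
        # deltas for logistic units under squared error
        d2 = (P - Y) * P * (1 - P)
        d1 = (d2 @ self.W2.T) * self.H * (1 - self.H)
        self.W2 -= lr * self.H.T @ d2
        self.W1 -= lr * X.T @ d1
        return ((P - Y) ** 2).sum(axis=1)      # per-pattern squared error

    def add_hidden_unit(self):
        # new unit: small random in-weights, zero out-weights, so the
        # network's current mapping is not disturbed at insertion time
        self.W1 = np.hstack([self.W1, rng.normal(0, 0.5, (self.W1.shape[0], 1))])
        self.W2 = np.vstack([self.W2, np.zeros((1, self.W2.shape[1]))])

# XOR as a toy problem; last input column is a constant bias term
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

net = GrowingNet(3, 1)
for epoch in range(3000):
    errs = net.train_epoch(X, Y)
    # insertion rule (assumed): worst pattern error still high after settling
    if epoch % 500 == 499 and errs.max() > 0.1:
        net.add_hidden_unit()
```

The point of the sketch is the growth mechanics, not convergence on this toy task.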
Abstract:
We consider a variation of the prototype combinatorial optimization problem known as graph colouring. Our optimization goal is to colour the vertices of a graph with a fixed number of colours, in a way to maximize the number of different colours present in the set of nearest neighbours of each given vertex. This problem, which we pictorially call palette-colouring, has been recently addressed as a basic example of a problem arising in the context of distributed data storage. Even though it has not been proved to be NP-complete, random search algorithms find the problem hard to solve. Heuristics based on a naive belief propagation algorithm are observed to work quite well in certain conditions. In this paper, we build upon the mentioned result, working out the correct belief propagation algorithm, which needs to take into account the many-body nature of the constraints present in this problem. This method improves the naive belief propagation approach at the cost of increased computational effort. We also investigate the emergence of a satisfiable-to-unsatisfiable 'phase transition' as a function of the vertex mean degree, for different ensembles of sparse random graphs in the large size ('thermodynamic') limit.
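The palette-colouring objective can be made concrete with a naive greedy local search (an assumed baseline for illustration, not the belief-propagation algorithm developed in the paper): each vertex is rewarded for the number of distinct colours present among its neighbours.

```python
import random

def palette_score(adj, colouring):
    """Sum over vertices of the number of distinct neighbour colours."""
    return sum(len({colouring[u] for u in adj[v]}) for v in adj)

def greedy_palette(adj, q, sweeps=50, seed=0):
    """Greedy local search: recolour each vertex to maximise the objective."""
    rng = random.Random(seed)
    col = {v: rng.randrange(q) for v in adj}
    for _ in range(sweeps):
        for v in adj:
            col[v] = max(range(q),
                         key=lambda c: palette_score(adj, {**col, v: c}))
    return col

# 4-cycle with 2 colours; the optimum objective here is 8
# (every vertex can see both colours in its neighbourhood)
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
col = greedy_palette(adj, q=2)
```

As the abstract notes, naive local moves like these can get stuck, which is what motivates the message-passing treatment of the many-body constraints.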
Abstract:
Link adaptation is a critical component of IEEE 802.11 systems, which adapts transmission rates to dynamic wireless channel conditions. In this paper we investigate a general cross-layer link adaptation algorithm which jointly considers the physical layer link quality and random channel access at the MAC layer. An analytic model is proposed for the link adaptation algorithm. The underlying wireless channel is modeled with a multiple state discrete time Markov chain. Compared with the pure link quality based link adaptation algorithm, the proposed cross-layer algorithm can achieve considerable performance gains of up to 20%.
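The cross-layer rate choice can be sketched as follows (the state space, transition matrix, rate set and success probabilities are all illustrative assumptions, not the paper's model): the channel is a discrete-time Markov chain over quality states, and the adapter picks the rate maximising expected throughput given the current state and a MAC-layer channel-access efficiency factor.

```python
import random

STATES = ["bad", "fair", "good"]
# row-stochastic transition matrix of the channel chain (illustrative)
P = {"bad":  {"bad": 0.7, "fair": 0.3, "good": 0.0},
     "fair": {"bad": 0.2, "fair": 0.6, "good": 0.2},
     "good": {"bad": 0.0, "fair": 0.3, "good": 0.7}}
RATES = [6, 12, 24, 54]                      # Mb/s, an 802.11-style rate set
# per-state frame success probability for each rate (illustrative)
SUCC = {"bad":  {6: 0.9, 12: 0.5, 24: 0.1, 54: 0.0},
        "fair": {6: 1.0, 12: 0.9, 24: 0.6, 54: 0.1},
        "good": {6: 1.0, 12: 1.0, 24: 0.9, 54: 0.7}}

def best_rate(state, mac_efficiency=0.8):
    # cross-layer choice: PHY success probability scaled by a MAC-layer
    # channel-access efficiency factor
    return max(RATES, key=lambda r: r * SUCC[state][r] * mac_efficiency)

def step(state, rng):
    """Advance the channel Markov chain one slot."""
    u, acc = rng.random(), 0.0
    for s, p in P[state].items():
        acc += p
        if u < acc:
            return s
    return state

rng = random.Random(1)
state = "fair"
for _ in range(10):
    rate = best_rate(state)
    state = step(state, rng)
```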
Abstract:
We investigate a digital back-propagation simplification method to enable computationally efficient digital nonlinearity compensation for coherently detected 112 Gb/s polarization-multiplexed quadrature phase-shift keying transmission over a 1,600 km link (20×80 km) with no inline compensation. Through numerical simulation, we report up to an 80% reduction in the back-propagation steps required to perform nonlinear compensation, in comparison with the standard back-propagation algorithm. The method takes into account the correlation between adjacent symbols at a given instant using a weighted-average approach, together with optimization of the position of the nonlinear compensator stage, to enable practical digital back-propagation.
Abstract:
The literature on bond markets and interest rates has focused largely on the term structure of interest rates, specifically, on the so-called expectations hypothesis. At the same time, little is known about the nature of the spread of interest rates in the money market beyond the fact that such spreads are generally unstable. However, with the evolution of complex financial instruments, it has become imperative to identify the time series process that can help one accurately forecast such spreads into the future. This article explores the nature of the time series process underlying the spread between three-month and one-year US rates, and concludes that the movements in this spread over time are best captured by a GARCH(1,1) process. It also suggests the use of a relatively long-term measure of interest rate volatility as an explanatory variable. This exercise has gained added importance in view of the finding that GARCH-based estimates of option prices consistently outperform the corresponding estimates based on the stylized Black-Scholes algorithm.
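The GARCH(1,1) recursion referred to above can be written out directly (the parameter values are illustrative assumptions, not the fitted estimates): the conditional variance of the spread changes e_t follows h_t = ω + α·e²_{t-1} + β·h_{t-1}.

```python
import math
import random

def simulate_garch(omega, alpha, beta, n, seed=0):
    """Simulate a GARCH(1,1) series, starting from the unconditional variance."""
    rng = random.Random(seed)
    h = omega / (1 - alpha - beta)
    es, hs = [], []
    for _ in range(n):
        e = math.sqrt(h) * rng.gauss(0, 1)
        es.append(e)
        hs.append(h)
        h = omega + alpha * e * e + beta * h   # the GARCH(1,1) recursion
    return es, hs

def garch_loglik(es, omega, alpha, beta):
    """Gaussian log-likelihood used when fitting (omega, alpha, beta)."""
    h = omega / (1 - alpha - beta)
    ll = 0.0
    for e in es:
        ll += -0.5 * (math.log(2 * math.pi * h) + e * e / h)
        h = omega + alpha * e * e + beta * h
    return ll

es, hs = simulate_garch(omega=0.1, alpha=0.1, beta=0.8, n=500)
```

In practice the parameters are chosen to maximise `garch_loglik` over the observed spread changes.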
Abstract:
Renewable energy project development is highly complex and success is by no means guaranteed. Decisions are often made with approximate or uncertain information, yet the current methods employed by decision-makers do not necessarily accommodate this. Levelised energy cost (LEC) is one commonly applied measure used within the energy industry to assess the viability of potential projects and inform policy. This research proposes a method for accommodating such uncertainty by enhancing the traditional discounted LEC measure with fuzzy set theory. Furthermore, the research develops the fuzzy LEC (F-LEC) methodology to incorporate the cost of financing a project from debt and equity sources. Applied to an example bioenergy project, the research demonstrates the benefit of incorporating fuzziness for decision-making on project viability, optimal capital structure and key-variable sensitivity analysis. The proposed method contributes by incorporating uncertain and approximate information into the widely used LEC measure, and is applicable to a wide range of energy project viability decisions. © 2013 Elsevier Ltd. All rights reserved.
Abstract:
Patients with Bipolar Disorder (BD) perform poorly on tasks of selective attention and inhibitory control. Although similar behavioural deficits have been noted in their relatives, it remains unclear whether they reflect dysfunction in the same neural circuits. We used functional magnetic resonance imaging and the Stroop Colour Word Task to compare task-related neural activity between 39 euthymic BD patients, 39 of their first-degree relatives (25 with no Axis I disorders and 14 with Major Depressive Disorder) and 48 healthy controls. Compared to controls, all individuals with familial predisposition to BD, irrespective of diagnosis, showed similar reductions in neural responsiveness in regions involved in selective attention within the posterior and inferior parietal lobules. In contrast, hypoactivation within fronto-striatal regions, implicated in inhibitory control, was observed only in BD patients and MDD relatives. Although striatal deficits were comparable between BD patients and their MDD relatives, right ventrolateral prefrontal dysfunction was uniquely associated with BD. Our findings suggest that while reduced parietal engagement relates to genetic risk, fronto-striatal dysfunction reflects processes underpinning disease expression for mood disorders. © 2011 Elsevier Inc.
Abstract:
Linear programming (LP) is the most widely used optimization technique for solving real-life problems because of its simplicity and efficiency. Although conventional LP models require precise data, managers and decision makers dealing with real-world optimization problems often do not have access to exact values. Fuzzy sets have been used in the fuzzy LP (FLP) problems to deal with the imprecise data in the decision variables, objective function and/or the constraints. The imprecisions in the FLP problems could be related to (1) the decision variables; (2) the coefficients of the decision variables in the objective function; (3) the coefficients of the decision variables in the constraints; (4) the right-hand-side of the constraints; or (5) all of these parameters. In this paper, we develop a new stepwise FLP model where fuzzy numbers are considered for the coefficients of the decision variables in the objective function, the coefficients of the decision variables in the constraints and the right-hand-side of the constraints. In the first step, we use the possibility and necessity relations for fuzzy constraints without considering the fuzzy objective function. In the subsequent step, we extend our method to the fuzzy objective function. We use two numerical examples from the FLP literature for comparison purposes and to demonstrate the applicability of the proposed method and the computational efficiency of the procedures and algorithms. © 2013-IOS Press and the authors. All rights reserved.
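The possibility relation for fuzzy constraints can be sketched with triangular fuzzy numbers (a common textbook formulation, assumed here rather than the paper's exact stepwise model): a constraint Σ_j a_j·x_j ≤ b with triangular fuzzy coefficients is accepted when the possibility degree Pos(A ≤ B) reaches a required level.

```python
def tfn_scale(a, x):
    """Multiply a triangular fuzzy number (l, m, r) by a non-negative scalar."""
    l, m, r = a
    return (l * x, m * x, r * x)

def tfn_add(a, b):
    """Add two triangular fuzzy numbers component-wise."""
    return tuple(u + v for u, v in zip(a, b))

def possibility_le(a, b):
    """Pos(A <= B) for triangular A = (a1, a2, a3), B = (b1, b2, b3)."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    if a2 <= b2:                      # the peaks already satisfy A <= B
        return 1.0
    # otherwise the rising leg of A crosses the falling leg of B
    denom = (a2 - a1) + (b3 - b2)
    if denom == 0:
        return 1.0 if a1 <= b3 else 0.0
    return max(0.0, min(1.0, (b3 - a1) / denom))

def constraint_holds(coeffs, x, rhs, level):
    """Check a fuzzy constraint sum_j a_j x_j <= rhs at a possibility level."""
    lhs = (0.0, 0.0, 0.0)
    for a, xi in zip(coeffs, x):
        lhs = tfn_add(lhs, tfn_scale(a, xi))
    return possibility_le(lhs, rhs) >= level

# fuzzy constraint  (1,2,3)*x1 + (2,3,4)*x2 <= (8,10,12)  at possibility 0.8
ok = constraint_holds([(1, 2, 3), (2, 3, 4)], [1.0, 2.0], (8, 10, 12), 0.8)
```

The necessity relation mentioned in the abstract imposes a stricter, dual condition on the same constraint.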
Abstract:
Selecting the best alternative in group decision making is the subject of many recent studies. The most popular method proposed for ranking the alternatives is based on the distance of each alternative to the ideal alternative. The ideal alternative may never exist; hence the ranking results are biased towards the ideal point. The main aim of this study is to calculate a fuzzy ideal point that is more realistic than the crisp ideal point. Recently, Data Envelopment Analysis (DEA) has also been used to find the optimum weights for ranking the alternatives. This paper proposes a four-stage approach based on DEA in the fuzzy environment to aggregate preference rankings. An application of a preferential voting system shows how the new model can be applied to rank a set of alternatives. Two other examples indicate the superiority of the proposed method over some other suggested methods.
Abstract:
Data Envelopment Analysis (DEA) is recognized as a modern approach to assessing the performance of a set of homogeneous Decision Making Units (DMUs) that use similar sources to produce similar outputs. While DEA is commonly used with precise data, several approaches have recently been introduced for evaluating DMUs with uncertain data. In the existing approaches, much of the information on the uncertainties is lost; for example, it is not preserved by the defuzzification, α-level and fuzzy ranking approaches. In the tolerance approach, the inequality or equality signs are fuzzified, but the fuzzy coefficients (inputs and outputs) are not treated directly. The purpose of this paper is to develop a new model to evaluate DMUs under uncertainty using fuzzy DEA and to incorporate the α-level into the model under a fuzzy environment. An example is given to illustrate the method in detail.
Abstract:
The purpose of this paper is to delineate a green supply chain (GSC) performance measurement framework using an intra-organisational collaborative decision-making (CDM) approach. A fuzzy analytic network process (ANP)-based green balanced scorecard (GrBSc) has been used within the CDM approach to assist in arriving at a consistent, accurate and timely data flow across all cross-functional areas of a business. A green causal relationship is established and linked to the fuzzy ANP approach. The causal relationship involves organisational commitment, eco-design, GSC process, social performance and sustainable performance constructs. Sub-constructs and sub-sub-constructs are also identified and linked to the causal relationship to form a network. The fuzzy ANP approach suitably handles the vagueness of the linguistic information in the CDM approach. The CDM approach is implemented in a UK-based carpet-manufacturing firm. The performance measurement approach, in addition to the traditional financial performance and accounting measures, aids firms' decision-making with regard to the overall organisational goals. The implemented approach assists the firm in identifying further requirements for collaborative data across the supply chain and information about customers and markets. Overall, the CDM-based GrBSc approach assists managers in deciding whether suppliers' performances meet the industry and environment standards with effective human resources. © 2013 Taylor & Francis.
Abstract:
In this paper, we present syllable-based duration modelling in the context of a prosody model for Standard Yorùbá (SY) text-to-speech (TTS) synthesis applications. Our prosody model is conceptualised around a modular holistic framework. This framework is implemented using the Relational Tree (R-Tree) technique. An important feature of our R-Tree framework is its flexibility: it facilitates the independent implementation of the different dimensions of prosody, i.e. duration, intonation, and intensity, using different techniques, and their subsequent integration. We applied the Fuzzy Decision Tree (FDT) technique to model the duration dimension. In order to evaluate the effectiveness of FDT in duration modelling, we also developed a Classification And Regression Tree (CART)-based duration model using the same speech data. Each of these models was integrated into our R-Tree based prosody model. We performed both quantitative (i.e. Root Mean Square Error (RMSE) and Correlation (Corr)) and qualitative (i.e. intelligibility and naturalness) evaluations on the two duration models. The results show that CART models the training data more accurately than FDT. The FDT model, however, shows a better ability to extrapolate from the training data, since it achieved a better accuracy on the test data set. Our qualitative evaluation results show that our FDT model produces synthesised speech that is perceived to be more natural than that of our CART model. In addition, we also observed that the expressiveness of FDT is much better than that of CART. This is because the representation in FDT is not restricted to a set of piecewise or discrete constant approximations. We therefore conclude that the FDT approach is a practical approach for duration modelling in SY TTS applications. © 2006 Elsevier Ltd. All rights reserved.
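The quantitative measures named above (RMSE and Pearson correlation between predicted and observed syllable durations) are straightforward to state; the duration values below are illustrative, not the paper's data.

```python
import math

def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def corr(pred, obs):
    """Pearson correlation coefficient between predicted and observed values."""
    n = len(obs)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (sp * so)

observed  = [120.0, 90.0, 150.0, 200.0]   # syllable durations in ms (illustrative)
predicted = [110.0, 95.0, 160.0, 190.0]
```

A lower RMSE indicates a closer fit to the data, while Corr measures how well the model tracks the relative pattern of durations.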