866 results for the Fuzzy Colour Segmentation Algorithm
Abstract:
We present a computational procedure to control an experimental chaotic system by applying the occasional proportional feedback (OPF) method. The implementation uses fuzzy theory to relate the variable correction to the necessary adjustment in the control parameter. As an application, we control the chaotic attractors of the Chua circuit. We present the circuits and algorithms developed to implement this control in real time. To simplify the procedure, we use a low-resolution analog-to-digital converter compensated by a lowpass filter, which facilitates similar applications to control other systems. (C) 2007 Elsevier Ltd. All rights reserved.
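The coupling described in this abstract, in which fuzzy rules map the measured deviation to a small control-parameter adjustment, can be illustrated on a simpler chaotic system than the Chua circuit. The sketch below is an assumption for illustration, not the authors' circuit-level implementation: it applies OPF-style control to the logistic map, with a three-rule Sugeno-type fuzzy block turning the deviation from the unstable fixed point into a bounded parameter kick, applied only when the orbit enters a small window.

```python
R = 3.9                      # chaotic regime of the logistic map
X_STAR = 1.0 - 1.0 / R       # unstable fixed point to be stabilized
WINDOW = 0.01                # OPF window: control acts only near the fixed point
DELTA = 0.1                  # output centres of the fuzzy correction rules

def fuzzy_correction(error):
    """Sugeno-style rules: error NEG -> -DELTA, ZERO -> 0, POS -> +DELTA."""
    if abs(error) > WINDOW:
        return 0.0                        # outside the window: no actuation
    neg = max(0.0, -error / WINDOW)       # triangular memberships over the window
    zero = max(0.0, 1.0 - abs(error) / WINDOW)
    pos = max(0.0, error / WINDOW)
    return (neg * -DELTA + pos * DELTA) / (neg + zero + pos)

x = 0.5
trajectory = []
for _ in range(3000):
    dr = fuzzy_correction(x - X_STAR)     # occasional parameter kick
    x = (R + dr) * x * (1.0 - x)          # perturbed logistic map step
    trajectory.append(x)
```

Inside the window the defuzzified output reduces to a proportional law with gain DELTA/WINDOW, which is what makes the occasional kicks stabilizing in this toy setting.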
Abstract:
A novel methodology to assess the risk of power transformer failures caused by external faults, such as short circuits, taking the paper insulation condition into account, is presented. The risk index is obtained by contrasting the insulation paper condition with the probability that the transformer withstands the short-circuit current flowing along the winding during an external fault. In order to assess the risk, this probability and the value of the degree of polymerization of the insulating paper are regarded as inputs of a type-2 fuzzy logic system (T2-FLS), which computes the fuzzy risk level. A Monte Carlo simulation has been used to find the survival function of the currents flowing through the transformer winding during a single-phase or a three-phase short circuit. The Roy Billinton Test System and a real power system have been used to test the results. (C) 2008 Elsevier B.V. All rights reserved.
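The survival function mentioned above can be obtained by straightforward Monte Carlo sampling. The sketch below is a minimal illustration with an assumed single-line per-unit model (source voltage E, source impedance z_src, fault at a uniformly random position along a 0.25 p.u. line); the paper's network data and fault models are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials = 100_000

E = 1.05                                # assumed per-unit source voltage
z_src = 0.04                            # assumed per-unit source impedance
z_fault = 0.25 * rng.random(n_trials)   # fault at a random point along the line
i_fault = E / (z_src + z_fault)         # per-unit fault current for each trial

def survival(i):
    """Empirical survival function P(I_fault > i)."""
    return float(np.mean(i_fault > i))
```

A risk index in the spirit of the abstract would then combine survival(i) with the paper-insulation condition (e.g. the degree of polymerization) inside a fuzzy system.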
Abstract:
The exact vibration modes and natural frequencies of planar structures and mechanisms, composed of Euler-Bernoulli beams, are obtained by solving a transcendental, nonlinear eigenvalue problem stated by the dynamic stiffness matrix (DSM). The technique most widely employed to solve this kind of problem is the Wittrick-Williams algorithm, developed in the early seventies. By formulating a new type of eigenvalue problem, which preserves the internal degrees of freedom for all members in the model, the present study offers an alternative to the use of this algorithm. The newly proposed eigenvalue problem presents no poles, so the roots of the problem can be found by any suitable iterative numerical method. By avoiding a standard formulation for the DSM, the local mode shapes are directly calculated and any extension to the beam theory can be easily incorporated. It is shown that the method adopted here leads to exact solutions, as confirmed by various examples. Extensions of the formulation are also given, in which rotary inertia, end releases, skewed edges and rigid offsets are all included. (C) 2008 Elsevier Ltd. All rights reserved.
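The key point above is that, once the eigenvalue problem is free of poles, any standard iterative root finder can locate the natural frequencies. As a minimal illustration (not the authors' DSM formulation), the sketch below applies plain bisection to the classical clamped-free Euler-Bernoulli characteristic equation cos(x)cosh(x) + 1 = 0, whose smallest root x = beta*L ~= 1.8751 gives the first bending mode.

```python
import math

def f(x):
    # characteristic equation of a clamped-free Euler-Bernoulli beam
    return math.cos(x) * math.cosh(x) + 1.0

def bisect(a, b, tol=1e-12):
    """Bisection: applicable because f has no poles on [a, b] and changes sign."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m                 # sign change in [a, m]
        else:
            a, fa = m, f(m)       # sign change in [m, b]
    return 0.5 * (a + b)

beta_l = bisect(1.0, 3.0)         # first root, beta*L ~= 1.87510
```

The corresponding natural frequency then follows from omega = (beta*L)^2 * sqrt(E*I / (rho*A*L^4)) for a uniform beam.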
Abstract:
As is well known, Hessian-based adaptive filters (such as the recursive least-squares algorithm (RLS) for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms (such as the least-mean-squares algorithm (LMS) or the constant-modulus algorithm (CMA)). However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments for which each family presents better performance. Given this, we propose the use of a convex combination of algorithms of different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error for convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
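The convex combination itself is simple to state: a mixing parameter lambda in (0, 1), adapted through an auxiliary variable behind a sigmoid, blends the outputs of two component filters, each of which adapts on its own error. The sketch below is an illustrative noiseless system-identification setup combining two LMS filters of different step sizes; it is not the paper's full LMS/RLS/CMA/SWA study, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4                                    # filter length
w_true = rng.standard_normal(M)          # unknown system (stationary here)
N = 4000
x = rng.standard_normal(N + M)

w1 = np.zeros(M)                         # fast component (large step size)
w2 = np.zeros(M)                         # slow component (small step size)
mu1, mu2, mu_a = 0.05, 0.005, 1.0
a = 0.0                                  # auxiliary variable behind the sigmoid

for n in range(N):
    u = x[n:n + M]
    d = w_true @ u                       # noiseless desired signal
    y1, y2 = w1 @ u, w2 @ u
    lam = 1.0 / (1.0 + np.exp(-a))       # mixing parameter in (0, 1)
    y = lam * y1 + (1.0 - lam) * y2
    e, e1, e2 = d - y, d - y1, d - y2
    w1 += mu1 * e1 * u                   # each component adapts independently
    w2 += mu2 * e2 * u
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)   # gradient step on the mixture
    a = np.clip(a, -4.0, 4.0)            # keep lambda away from 0 and 1
```

In a tracking scenario with random-walk parameter variations, the same update lets lambda drift toward whichever component is currently better, which is the mechanism the theoretical model in the abstract analyses.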
Abstract:
A warning system for sooty blotch and flyspeck (SBFS) of apple, developed in the southeastern United States, uses cumulative hours of leaf wetness duration (LWD) to predict the timing of the first appearance of signs. In the Upper Midwest United States, however, this warning system has resulted in sporadic disease control failures. The purpose of the present study was to determine whether the warning system's algorithm could be modified to provide a more reliable assessment of SBFS risk. Hourly LWD, rainfall, relative humidity (RH), and temperature data were collected from orchards in Iowa, North Carolina, and Wisconsin in 2005 and 2006. Timing of the first appearance of SBFS signs was determined by weekly scouting. Preliminary analysis using scatterplots and boxplots suggested that cumulative hours of RH >= 97% could be a useful predictor of SBFS appearance. Receiver operating characteristic curve analysis was used to compare the predictive performance of cumulative LWD and cumulative hours of RH >= 97%. Cumulative hours of RH >= 97% was a more conservative and accurate predictor than cumulative LWD for 15 site-years in the Upper Midwest, but not for four site-years in North Carolina. Performance of the SBFS warning system in the Upper Midwest and climatically similar regions may be improved if cumulative hours of RH >= 97% were substituted for cumulative LWD to predict the first appearance of SBFS.
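The modified predictor is easy to state in code: accumulate the hours with RH >= 97% and raise a warning when the total crosses an action threshold. The sketch below is a minimal illustration; the threshold value is a placeholder, not the calibrated action threshold of the warning system.

```python
def sbfs_risk(rh_hourly, threshold_hours=175):
    """Count cumulative hours with RH >= 97% and flag when the
    (placeholder) action threshold is reached."""
    cum_hours = sum(1 for rh in rh_hourly if rh >= 97.0)
    return cum_hours, cum_hours >= threshold_hours

# toy hourly record: only readings at or above 97% count toward the total
hours, warn = sbfs_risk([95.0, 97.0, 98.5, 96.9, 100.0], threshold_hours=3)
```

In practice the hourly series would come from orchard RH sensors, and the threshold would be fitted with the ROC analysis described in the abstract.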
Abstract:
We present a photometric investigation of the variation in galaxy colour with environment in 11 X-ray-luminous clusters at 0.07 less than or equal to z less than or equal to 0.16 taken from the Las Campanas/AAT Rich Cluster Survey. We study the properties of the galaxy populations in individual clusters, and take advantage of the homogeneity of the sample to combine the clusters together to investigate weaker trends in the composite sample. We find that modal colours of galaxies lying on the colour-magnitude relation in the clusters become bluer by d(B - R)/dr(p) = -0.022 +/- 0.004 from the cluster core out to a projected radius of r(p) = 6 Mpc, further out in radius than any previous study. We also examine the variation in modal galaxy colour with local galaxy density, Sigma, for galaxies lying close to the colour-magnitude relation, and find that the median colour shifts bluewards by d(B - R)/d log(10)(Sigma) = -0.076 +/- 0.009 with decreasing local density across three orders of magnitude. We show that the position of the red envelope of galaxies in the colour-magnitude relation does not vary as a function of projected radius or density within the clusters, suggesting that the change in the modal colour results from an increasing fraction of bluer galaxies within the colour-magnitude relation, rather than a change in the colours of the whole population. We show that this shift in the colour-magnitude relations with projected radius and local density is greater than that expected from the changing morphological mix based on the local morphology-density relation. We therefore conclude that we are seeing a real change in the properties of galaxies on the colour-magnitude relation in the outskirts of clusters.
The simplest interpretation of this result (and similar constraints in local clusters) is that an increasing fraction of galaxies in the lower density regions at large radii within clusters exhibit signatures of star formation in the recent past, signatures which are not seen in the evolved galaxies in the highest density regions.
Abstract:
A new conceptual model for soil pore-solid structure is formalized. Soil pore-solid structure is proposed to comprise spatially abutting elements, each with a value which is its membership to the fuzzy set "pore," termed porosity. These values have a range between zero (all solid) and unity (all pore). Images are used to represent structures in which the elements are pixels and the value of each is a porosity. Two-dimensional random fields are generated by allocating each pixel a porosity by independently sampling a statistical distribution. These random fields are reorganized into other pore-solid structural types by selecting parent points which have a specified local region of influence. Pixels of larger or smaller porosity are aggregated about the parent points and within the region of interest by controlled swapping of pixels in the image. This creates local regions of homogeneity within the random field. This is similar to the process known as simulated annealing. The resulting structures are characterized using one- and two-dimensional variograms and functions describing their connectivity. A variety of examples of structures created by the model is presented and compared. Extension to three dimensions presents no theoretical difficulties and is currently under development.
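The reorganization step can be sketched as a greedy variant of the swapping described above: pick a parent point, define its region of influence, and repeatedly swap a low-porosity pixel inside the region with a higher-porosity pixel outside it. All details below (field size, region radius, number of swap attempts) are illustrative assumptions; note that swapping preserves the overall porosity distribution exactly while building local homogeneity.

```python
import numpy as np

rng = np.random.default_rng(1)
field = rng.random((32, 32))             # random field: i.i.d. porosity per pixel
values_before = np.sort(field.ravel()).copy()

cy, cx, radius = 16, 16, 6               # parent point and region of influence
yy, xx = np.mgrid[0:32, 0:32]
inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
in_pix = np.argwhere(inside)
out_pix = np.argwhere(~inside)

mean_before = field[inside].mean()
for _ in range(5000):
    iy, ix = in_pix[rng.integers(len(in_pix))]
    oy, ox = out_pix[rng.integers(len(out_pix))]
    if field[oy, ox] > field[iy, ix]:    # pull high porosity toward the parent point
        field[iy, ix], field[oy, ox] = field[oy, ox], field[iy, ix]
mean_after = field[inside].mean()
```

Accepting a small fraction of unfavourable swaps with a decreasing probability would turn this greedy loop into the simulated annealing the text alludes to.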
Abstract:
Human leukocyte antigen (HLA) haplotypes are frequently evaluated for population history inferences and association studies. However, the available typing techniques for the main HLA loci usually do not allow the determination of the allele phase and the constitution of a haplotype, which may be obtained by a very time-consuming and expensive family-based segregation study. Without the family-based study, computational inference by probabilistic models is necessary to obtain haplotypes. Several authors have used the expectation-maximization (EM) algorithm to determine HLA haplotypes, but high levels of erroneous inferences are expected because of the genetic distance among the main HLA loci and the presence of several recombination hotspots. In order to evaluate the efficiency of computational inference methods, 763 unrelated individuals stratified into three different datasets had their haplotypes manually defined in a family-based study of HLA-A, -B, -DRB1 and -DQB1 segregation, and these haplotypes were compared with the data obtained by the following three methods: the EM and Excoffier-Laval-Balding (ELB) algorithms using the Arlequin 3.11 software, and the PHASE method. When comparing the methods, we observed that all algorithms showed a poor performance for haplotype reconstruction with distant loci, estimating incorrect haplotypes for 38%-57% of the samples considering all algorithms and datasets. We suggest that computational haplotype inferences involving low-resolution HLA-A, HLA-B, HLA-DRB1 and HLA-DQB1 haplotypes should be considered with caution.
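For two biallelic loci, the EM approach the study evaluates can be written compactly: only the double heterozygote is phase-ambiguous, and the E-step splits it between the cis and trans configurations in proportion to the current haplotype frequencies. The sketch below is a didactic two-locus version (genotypes coded as copies of alleles A and B), far simpler than multi-allelic HLA data and not the Arlequin or PHASE implementation.

```python
from collections import Counter

def em_haplotypes(genotypes, n_iter=50):
    """EM estimate of haplotype frequencies for two biallelic loci.
    Each genotype is (copies of allele A, copies of allele B)."""
    f = {'AB': 0.25, 'Ab': 0.25, 'aB': 0.25, 'ab': 0.25}
    n = len(genotypes)
    for _ in range(n_iter):
        counts = Counter()
        for g1, g2 in genotypes:
            if (g1, g2) == (1, 1):                 # double heterozygote: ambiguous
                p_cis = f['AB'] * f['ab']          # AB/ab resolution
                p_trans = f['Ab'] * f['aB']        # Ab/aB resolution
                total = (p_cis + p_trans) or 1.0
                counts['AB'] += p_cis / total      # expected haplotype counts
                counts['ab'] += p_cis / total
                counts['Ab'] += p_trans / total
                counts['aB'] += p_trans / total
            else:                                  # phase is unambiguous
                h1 = ('A' if g1 >= 1 else 'a') + ('B' if g2 >= 1 else 'b')
                h2 = ('A' if g1 == 2 else 'a') + ('B' if g2 == 2 else 'b')
                counts[h1] += 1
                counts[h2] += 1
        f = {h: counts[h] / (2 * n) for h in f}    # M-step: renormalize
    return f

freqs = em_haplotypes([(2, 2), (0, 0), (1, 1)])
```

In this toy dataset, the unambiguous AABB and aabb individuals pull the ambiguous double heterozygote almost entirely toward the cis (AB/ab) resolution, which is exactly the kind of frequency-driven inference that degrades when loci are distant and recombination is frequent.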
Abstract:
This work aimed to use fuzzy logic to generate management zones in the agricultural and environmental domains. One application used the fuzzy C-means method to generate management zones for papaya cultivation in a commercial plantation located in São Mateus-ES, based on determinations obtained through soil sampling and chemical analyses, considering the attributes P, K, Ca, Mg, and base saturation (V%). Fuzzy logic was also applied to develop and execute a procedure to support the decision-making process, involving multicriteria analysis and generating maps of suitability for public use and for conservation in the Cachoeira da Fumaça State Park, in the municipality of Alegre-ES, considering as factors the location of the waterfall, land use, water resources, trails, access points, infrastructure, and the slope of the area, and using a Geographic Information System approach to analyse and combine the database. From the management zones generated, it was possible to explain the spatial variability of the soil attributes in the papaya study area; the similarity between the zones generated from different attributes varied, with the data influenced mainly by the attributes P and V%. From the zoning of the conservation unit it was possible to select the areas most suitable for ecotourism, which were found near the waterfall and along trails in reforestation and Atlantic Forest zones. The areas requiring conservation measures are located near the waterfall and the park structures, owing to the greater anthropic pressure exerted at these sites. Pasture areas also stood out, as they are at a stage of natural regeneration.
The results indicate areas with the same papaya production potential or, in the environmental application, areas that should receive greater care regarding ecotourism use and preservation, and they serve as a basis for decision-making aimed at better use of the area.
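The fuzzy C-means step used for the management zones can be sketched in a few lines: memberships are normalized inverse powers of the point-to-centre distances, and cluster centres are membership-weighted means. The implementation below is a generic textbook version (fuzzifier m = 2), illustrated on synthetic 2-D points standing in for soil-attribute vectors rather than the actual P, K, Ca, Mg and V% data.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Generic fuzzy C-means: returns cluster centres and the membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))            # u_ik proportional to d_ik^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# two well-separated synthetic groups standing in for soil-attribute vectors
X = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5],
              [10.0, 10.0], [10.5, 10.0], [10.0, 10.5]])
centers, U = fuzzy_cmeans(X)
```

Each row of U gives a sample's graded membership in every zone, which is what lets management zones overlap smoothly instead of imposing hard boundaries.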
Abstract:
Background: Precise needle puncture of renal calyces is a challenging and essential step for successful percutaneous nephrolithotomy. This work tests and evaluates, through a clinical trial, a real-time navigation system to plan and guide percutaneous kidney puncture. Methods: A novel system, entitled i3DPuncture, was developed to aid surgeons in establishing the desired puncture site and the best virtual puncture trajectory, by gathering and processing data from a tracked needle with optical passive markers. In order to navigate and superimpose the needle onto a preoperative volume, the patient, 3D image data and tracker system were registered intraoperatively using seven points that were strategically chosen based on rigid bone structures and the nearby kidney area. In addition, relevant anatomical structures for surgical navigation were automatically segmented using a multi-organ segmentation algorithm that clusters volumes based on statistical properties and the minimum description length criterion. For each cluster, a rendering transfer function enhanced the visualization of different organs and surrounding tissues. Results: One puncture attempt was sufficient to achieve a successful kidney puncture. The puncture took 265 seconds, and 32 seconds were necessary to plan the puncture trajectory. The virtual puncture path was followed correctly until the needle tip reached the desired kidney calyx. Conclusions: This new solution provided spatial information regarding the needle inside the body and the possibility to visualize surrounding organs. It may offer a promising and innovative solution for percutaneous punctures.
Abstract:
The use of iris recognition for human authentication has been spreading in recent years. Daugman has proposed a method for iris recognition composed of four stages: segmentation, normalization, feature extraction, and matching. In this paper we propose some modifications and extensions to Daugman's method to cope with noisy images. These modifications are proposed after a study of images from the CASIA and UBIRIS databases. The major modification is to the computationally demanding segmentation stage, for which we propose a faster and equally accurate template-matching approach. The extensions to the algorithm address the important issue of pre-processing, which depends on the image database and is mandatory when a non-infrared camera, such as a typical webcam, is used. For this scenario, we propose methods for reflection removal and pupil enhancement and isolation. The tests, carried out by our C# application on grayscale CASIA and UBIRIS images, show that the template-matching segmentation method is more accurate and faster than the previous one for noisy images. The proposed algorithms are found to be efficient and necessary when we deal with non-infrared images and non-uniform illumination.
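The template-matching idea behind a segmentation stage of this kind can be illustrated on a synthetic eye image: slide a disc-shaped template over the grayscale image and keep the position where the mean intensity under the disc is lowest, i.e. where a dark pupil-like region best fills the template. The image size, pupil position and radius below are all made up for the illustration; this is not the paper's C# implementation.

```python
import numpy as np

# synthetic grayscale image: bright background with a dark "pupil"
img = np.full((64, 64), 200.0)
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 20) ** 2 + (xx - 30) ** 2 <= 8 ** 2] = 30.0

r = 8                                            # template radius (assumed known)
ty, tx = np.mgrid[0:2 * r + 1, 0:2 * r + 1]
disc = (ty - r) ** 2 + (tx - r) ** 2 <= r ** 2   # boolean disc template

best_score, best_pos = np.inf, None
for y in range(r, 64 - r):
    for x in range(r, 64 - r):
        patch = img[y - r:y + r + 1, x - r:x + r + 1]
        score = patch[disc].mean()               # mean intensity under the disc
        if score < best_score:
            best_score, best_pos = score, (y, x)
```

On real noisy images, refinements such as a coarse-to-fine search over positions and radii make this kind of matching practical and fast.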
Abstract:
Nowadays, Heating, Ventilation and Air Conditioning (HVAC) equipment plays a very important role in the design, development and maintenance of any building, however small. Hence the pressing need to rationalize and optimize energy consumption. The high reliability desired in these systems increasingly forces us to find ways of making their maintenance more efficient, so all faults that could harm the performance of these installations must be prevented proactively. It is therefore necessary to detect these faults/anomalies and, crucially, to anticipate such events by predicting their occurrence within a predefined time horizon, allowing action to be taken as early as possible. It is in this domain that this dissertation seeks solutions so that the maintenance of this equipment can be carried out proactively and as effectively as possible. The structuring idea is to intervene while the problem is still at an incipient stage, automatically altering the behaviour of the monitored equipment with the aid of intelligent fault-diagnosis agents. In the case study, the operation of an Air Handling Unit (AHU) is automatically adapted to the detected deviations/anomalies, with a full system shutdown promoted only as a last resort. The architecture applied is based on artificial intelligence techniques, namely multi-agent systems. The algorithm used and tested was built in Labview®, using an intelligent control toolkit for Labview®. The proposed system is validated through a simulator that reproduces the real operating conditions of an AHU.
Abstract:
An optically addressed large area colour imager is presented. The colour imager consists of a thin wide band gap p-i-n a-SiC:H filtering element deposited on top of a thick large area a-SiC:H(-p)/a-Si:H(-i)/a-SiC:H(-n) image sensor, which reveals itself to be an intrinsic colour filter. In order to tune the external applied voltage for full colour discrimination, the photocurrent generated by a modulated red light is measured under different optical and electrical bias. Results reveal that the integrated device behaves as both an imager and a filter, giving information not only on the position where the optical image is absorbed but also on its wavelength and intensity. The amplitude and sign of the image signals are electrically tuneable. In a wide range of incident fluxes and under reverse bias, the red and blue image signals are opposite in sign and the green signal is suppressed, allowing blue and red colour recognition. The green information is obtained under forward bias, where the blue signal goes down to zero and the red and green signals remain constant. Combining the information obtained at these two applied voltages, an RGB colour image can be acquired without the need for the usual colour filters or pixel architecture. A numerical simulation supports the colour filter analysis.
Abstract:
Value has been defined in different theoretical contexts as need, desire, interest, standards/criteria, beliefs, attitudes, and preferences. The creation of value is key to any business, and any business activity is about exchanging some tangible and/or intangible good or service and having its value accepted and rewarded by customers or clients, either inside the enterprise or collaborative network or outside. “Perhaps surprising then is that firms often do not know how to define value, or how to measure it” (Anderson and Narus, 1998, cited by [1]). Woodruff echoed that we need a “richer customer value theory” for providing an “important tool for locking onto the critical things that managers need to know”. In addition, he emphasized, “we need customer value theory that delves deeply into customer’s world of product use in their situations” [2]. In this sense, we proposed and validated a novel “Conceptual Model for Decomposing the Value for the Customer”. To this end, we were aware that time has a direct impact on customer perceived value, and that the suppliers’ and customers’ perceptions change from the pre-purchase to the post-purchase phase, causing some uncertainty and doubts. We wanted to break down value into all its components, as well as every asset built and used (from both endogenous and exogenous perspectives). This component analysis was then transposed into a mathematical formulation using the Fuzzy Analytic Hierarchy Process (AHP), so that the uncertainty and vagueness of value perceptions could be embedded in a model that relates used and built assets, in the tangible and intangible deliverables exchanged among the involved parties, with their actual value perceptions.
Abstract:
With electricity market liberalization, distribution and retail companies are looking for better market strategies based on adequate information about the consumption patterns of their electricity customers. In this environment all consumers are free to choose their electricity supplier. A fair insight into customer behaviour will permit the definition of specific contract aspects based on the different consumption patterns. In this paper Data Mining (DM) techniques are applied to electricity consumption data from a utility client's database. To form the different customer classes, and to find a set of representative consumption patterns, we have used the Two-Step algorithm, which is a hierarchical clustering algorithm. Each consumer class will be represented by its load profile resulting from the clustering operation. Next, to characterize each consumer class, a classification model will be constructed with the C5.0 classification algorithm.
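The clustering stage can be sketched with a small agglomerative routine: start with every consumer as its own cluster and repeatedly merge the pair with the closest centroids until the desired number of classes remains; the centroid of each final cluster is then its representative load profile. The synthetic evening-peak and midday-peak profiles below, and the centroid-linkage choice, are illustrative assumptions, not the proprietary Two-Step algorithm or real utility data.

```python
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(24)
evening = np.exp(-0.5 * ((hours - 20) / 2.0) ** 2)   # residential-style peak
midday = np.exp(-0.5 * ((hours - 13) / 3.0) ** 2)    # commercial-style peak
profiles = np.vstack(
    [evening + 0.05 * rng.standard_normal(24) for _ in range(10)]
    + [midday + 0.05 * rng.standard_normal(24) for _ in range(10)]
)

def agglomerate(X, k):
    """Centroid-linkage agglomerative clustering down to k clusters."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(X[clusters[a]].mean(0) - X[clusters[b]].mean(0))
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters[b]                   # merge the closest pair
        del clusters[b]
    return clusters

clusters = agglomerate(profiles, 2)
load_profiles = [profiles[c].mean(0) for c in clusters]   # class representatives
```

Cluster labels produced this way would then serve as the target attribute for a decision-tree classifier such as C5.0, trained on the customers' contractual and consumption features.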