866 results for "the Fuzzy Colour Segmentation Algorithm"
Abstract:
A fuzzy-set qualitative comparative analysis (fsQCA) is applied to determine the necessary and sufficient conditions for higher entrepreneurship rates. Based on Global Entrepreneurship Monitor data, it is shown that the most relevant conditions are Media Attention to Entrepreneurship, Perceived Capabilities and Perceived Opportunities. The absence of Fear of Failure is also an important factor in determining higher entrepreneurship rates; when the sample is split, this condition is more important for the most developed countries. This is relevant information for policymakers seeking to better design their policies to promote entrepreneurship, which is key to more sustainable growth.
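As a hedged illustration of the method named above, the following Python sketch computes the standard fsQCA consistency and coverage measures for a single condition; the membership scores are invented for illustration and are not GEM data.

    import numpy as np

    def consistency(condition, outcome):
        # Degree to which the condition is a fuzzy subset of the outcome:
        # sum(min(x_i, y_i)) / sum(x_i)
        return np.minimum(condition, outcome).sum() / condition.sum()

    def coverage(condition, outcome):
        # Degree to which the condition accounts for the outcome:
        # sum(min(x_i, y_i)) / sum(y_i)
        return np.minimum(condition, outcome).sum() / outcome.sum()

    # Hypothetical fuzzy memberships for "Perceived Opportunities" and
    # "high entrepreneurship rate" across five countries.
    perceived_opportunities = np.array([0.9, 0.7, 0.4, 0.8, 0.6])
    high_entrepreneurship = np.array([0.8, 0.9, 0.5, 0.7, 0.7])

    print(consistency(perceived_opportunities, high_entrepreneurship))
    print(coverage(perceived_opportunities, high_entrepreneurship))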
Abstract:
Zonal management in vineyards requires the prior delineation of stable yield zones within the parcel. Among the different methodologies used for zone delineation, cluster analysis of yield data from several years is one of the possibilities cited in the scientific literature. However, reasonable doubts remain concerning which clustering algorithm to use and how many zones to delineate within a field. In this paper two different clustering algorithms, k-means and fuzzy c-means, have been compared using the grape yield data of three successive years (2002, 2003 and 2004) for a ‘Pinot Noir’ vineyard parcel. The final choice of the most suitable algorithm has been linked to obtaining a stable pattern of spatial yield distribution and to allowing the delineation of compact, average-sized areas. The general recommendation is to use reclassified maps of two clusters or yield classes (a low-yield zone and a high-yield zone); consequently, site-specific vineyard management should be based on the prior delineation of just two different zones or sub-parcels. The two tested algorithms are both good options for this purpose. However, the fuzzy c-means algorithm allows a better zoning of the parcel, forming more compact areas with more balanced zonal differences over time.
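For concreteness, here is a minimal fuzzy c-means sketch in Python for delineating two yield zones from multi-year data (rows are grid points, columns are years). The yield values are synthetic stand-ins; the paper's 2002-2004 ‘Pinot Noir’ data is not reproduced here.

    import numpy as np

    def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)      # memberships sum to 1 per point
        for _ in range(n_iter):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U = 1.0 / (d ** (2 / (m - 1)))     # standard FCM membership update
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    # Synthetic example: 200 grid points, 3 years of yield data.
    rng = np.random.default_rng(1)
    low = rng.normal(1.0, 0.15, (100, 3))
    high = rng.normal(2.0, 0.15, (100, 3))
    X = np.vstack([low, high])
    centers, U = fuzzy_cmeans(X, c=2)
    zones = U.argmax(axis=1)                   # hard zone label per grid point
    print(centers)                             # one low- and one high-yield centre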
Abstract:
Several types of equipment and methodologies have been developed to make precision agriculture viable, especially given the high cost of its implementation and of sampling. An interesting possibility is to divide production areas into smaller management zones that can be treated differently, serving as a basis for recommendation and analysis. Thus, this trial used physical and chemical soil properties and yield data to generate management zones and to identify whether they can be used for recommendation and analysis. Management zones were generated by the fuzzy c-means algorithm and evaluated by calculating the reduction of variance and by performing means tests. The division of the area into two management zones was considered appropriate, since it presented distinct averages for most soil properties and yield. The methodology used allowed the generation of management zones that can serve as a basis for recommendation and soil analysis; although the relative efficiency showed a reduced variance for all attributes when the area was divided into three sub-regions, the ANOVA did not show significant differences among the management zones.
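The variance-reduction criterion mentioned above can be sketched in a few lines of Python; the soil-property values and zone labels below are illustrative, not the trial's data.

    import numpy as np

    def variance_reduction(values, zones):
        # 1 - (weighted within-zone variance / whole-field variance);
        # values closer to 1 indicate better zoning.
        field_var = values.var()
        within = 0.0
        for z in np.unique(zones):
            member = values[zones == z]
            within += (len(member) / len(values)) * member.var()
        return 1.0 - within / field_var

    rng = np.random.default_rng(0)
    soil_property = np.concatenate([rng.normal(10, 1, 60), rng.normal(14, 1, 40)])
    zone_labels = np.array([0] * 60 + [1] * 40)
    print(variance_reduction(soil_property, zone_labels))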
Abstract:
Atmospheric particles influence the climate through processes such as scattering, reflection and absorption. In addition, a fraction of the aerosol particles act as cloud condensation nuclei (CCN), which affect the optical properties and the backscattering of clouds and consequently the radiation budget. Whether an aerosol particle exhibits the properties of a cloud condensation nucleus depends mainly on the particle size and the chemical composition. Therefore, the method of single-particle laser ablation mass spectrometry was applied, which allows a size-resolved chemical analysis of individual particles and is intended to contribute to the understanding of the multiphase chemical processes taking place within clouds. In this work, the single-particle mass spectrometer ALABAMA (Aircraft-based Laser Ablation Aerosol Mass Spectrometer) was used to characterise atmospheric aerosol and cloud residual particles. In addition, an optical particle counter was operated to analyse particle size and number concentration. To determine a suitable evaluation method that automatically sorts the single-particle mass spectra into groups of similar-looking spectra, the two algorithms k-means and fuzzy c-means were tested for their correctness. It turned out that neither algorithm delivered error-free results, which depends, among other things, on the starting conditions; the fuzzy c-means, however, delivered more reliable results. In addition, the mass spectra were analysed for characteristic chemical signatures (nitrate, sulfate, metals). In autumn 2010 the field campaign HCCT (Hill Cap Cloud Thuringia) took place in the Thuringian Forest, during which the changes aerosol particles undergo while passing through an orographic cloud, as well as the processes taking place within the cloud, were investigated. A comparison of the chemical composition of background aerosol and cloud residual particles showed that the relative fractions of mass spectra of the particle types soot and amines were increased for cloud residual particles. This can be explained by the good CCN activity of soot particles internally mixed with nitrate and sulfate, and by a favoured transition of amine compounds from the gas phase to the particle phase at high relative humidity and low temperatures. Furthermore, it turned out that more than 99% of the background aerosol particles were already internally mixed with nitrate and/or sulfate. A detailed analysis of the mixing state of the aerosol particles showed that both the nitrate content and the sulfate content of the particles increased while passing through the cloud.
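A sketch of this kind of correctness check: cluster labelled reference spectra under several random starts and score each run against the known labels, here with k-means and the adjusted Rand index from scikit-learn. The spectra are synthetic stand-ins for real reference particle types.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)
    # Three synthetic "particle types", 50 spectra each, 40 m/z channels.
    spectra = np.vstack([rng.normal(mu, 0.5, (50, 40)) for mu in (0.0, 1.5, 3.0)])
    labels = np.repeat([0, 1, 2], 50)

    # The clustering result depends on the starting conditions, so several
    # random initialisations are scored against the reference labels.
    for seed in range(5):
        pred = KMeans(n_clusters=3, n_init=1, random_state=seed).fit_predict(spectra)
        print(seed, adjusted_rand_score(labels, pred))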
Abstract:
The main challenges of multimedia data retrieval lie in the effective mapping between low-level features and high-level concepts, and in individual users' subjective perceptions of multimedia content. The objective of this dissertation is to develop an integrated multimedia indexing and retrieval framework with the aim of bridging the gap between semantic concepts and low-level features. To achieve this goal, a set of core techniques has been developed, including image segmentation, content-based image retrieval, object tracking, video indexing, and video event detection. These core techniques are integrated in a systematic way to enable semantic search for images/videos, and can be tailored to solve problems in other multimedia-related domains. In image retrieval, two new methods of bridging the semantic gap are proposed: (1) for general content-based image retrieval, a stochastic mechanism is utilized to enable the long-term learning of high-level concepts from a set of training data, such as user access frequencies and access patterns of images; (2) in addition to whole-image retrieval, a novel multiple instance learning framework is proposed for object-based image retrieval, by which a user can more effectively search for images that contain multiple objects of interest. An enhanced image segmentation algorithm is developed to extract object information from images. This segmentation algorithm is further used in video indexing and retrieval, where a robust video shot/scene segmentation method is developed based on low-level visual feature comparison, object tracking, and audio analysis. Based on shot boundaries, a novel data mining framework is further proposed to detect events in soccer videos, fully utilizing the multi-modality features and object information obtained through video shot/scene detection. Another contribution of this dissertation is the potential of the above techniques to be tailored and applied to other multimedia applications. This is demonstrated by their utilization in traffic video surveillance applications. The enhanced image segmentation algorithm, coupled with an adaptive background learning algorithm, improves the performance of vehicle identification. A sophisticated object tracking algorithm is proposed to track individual vehicles, while the spatial and temporal relationships of vehicle objects are modeled by an abstract semantic model.
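As a minimal sketch of shot-boundary detection by low-level feature comparison (one ingredient of the method described above), consecutive frames whose colour-histogram distance exceeds a threshold are declared cuts; the frames and the threshold below are illustrative, not the dissertation's implementation.

    import numpy as np

    def histogram(frame, bins=16):
        h, _ = np.histogram(frame, bins=bins, range=(0, 256))
        return h / h.sum()

    def detect_cuts(frames, threshold=0.4):
        cuts = []
        for i in range(1, len(frames)):
            # total-variation distance between consecutive frame histograms
            d = 0.5 * np.abs(histogram(frames[i]) - histogram(frames[i - 1])).sum()
            if d > threshold:        # large histogram change -> shot boundary
                cuts.append(i)
        return cuts

    rng = np.random.default_rng(0)
    shot_a = [rng.integers(0, 100, (48, 64)) for _ in range(10)]    # darker shot
    shot_b = [rng.integers(150, 256, (48, 64)) for _ in range(10)]  # brighter shot
    print(detect_cuts(shot_a + shot_b))    # expect a cut at frame 10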
Abstract:
This thesis focuses on the development of computational models and their applications to demand side management within the smart grid scope. The performance of the smart grid players is studied and a domestic prosumer model is presented. The economic dispatch problem, considering production and consumption forecasts obtained from artificial neural networks, is also presented. The existing demand response models are studied and a computational tool based on the fuzzy subtractive clustering algorithm is developed. Energy consumption profiles and operating modes are analysed, including a brief analysis of the introduction of the electric vehicle and of contingencies in the electrical network. Consumer energy management applications within the scope of the InovGrid pilot project are presented. Automation systems are developed for the acquisition, monitoring, control and supervision of consumption, based on data provided by smart meters, allowing consumer actions to be incorporated into the management of electrical energy consumption.
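A minimal sketch of fuzzy subtractive clustering (Chiu's algorithm, the technique named above) applied to daily load profiles; the radii, the simplified stopping rule, and the synthetic profiles are assumptions for illustration.

    import numpy as np

    def subtractive_clustering(X, ra=1.0, rb=1.5, eps=0.15):
        alpha, beta = 4 / ra**2, 4 / rb**2
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        potential = np.exp(-alpha * d2).sum(axis=1)    # density at each point
        centers, p_first = [], potential.max()
        while potential.max() > eps * p_first:         # simplified acceptance test
            i = potential.argmax()
            centers.append(X[i])
            # suppress the potential of points near the newly accepted centre
            potential -= potential[i] * np.exp(-beta * d2[i])
        return np.array(centers)

    # Two synthetic groups of 24-hour normalised load profiles.
    rng = np.random.default_rng(0)
    profiles = np.vstack([rng.normal(m, 0.05, (40, 24)) for m in (0.3, 0.7)])
    print(len(subtractive_clustering(profiles)))   # number of typical profiles found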
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC(max) algorithm runs in linear time with respect to the variable $M=|C|+|Z|$, where $|C|$ is the image scene size and $|Z|$ is the size of the allowable range, $Z$, of the associated weight/affinity function. For most implementations, $Z$ is identical to the set of allowable image intensity values, and its size can be treated as small with respect to $|C|$, meaning that $O(M)=O(|C|)$. In such a situation, GC(max) runs in linear time with respect to the image size $|C|$. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the $\ell_\infty$ norm $\|F_P\|_\infty$ of the map $F_P$ that associates, with every element $e$ from the boundary of an object $P$, its weight $w(e)$. This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions $\|F_P\|_q$ for $q\in[1,\infty]$. Of these, the best known minimization problem is for the energy $\|F_P\|_1$, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for $\|F_P\|_q$, $q\in[1,\infty)$, is identical to that for $\|F_P\|_1$ when the original weight function $w$ is replaced by $w^q$. Thus, any algorithm GC(sum) solving the $\|F_P\|_1$ minimization problem also solves the one for $\|F_P\|_q$ with $q\in[1,\infty)$, so just two algorithms, GC(sum) and GC(max), are enough to solve all $\|F_P\|_q$-minimization problems. We also show that, for any fixed weight assignment, the solutions of the $\|F_P\|_q$-minimization problems converge to a solution of the $\|F_P\|_\infty$-minimization problem (the fact that $\|F_P\|_\infty=\lim_{q\to\infty}\|F_P\|_q$ is not enough to deduce that). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. This concentrates on comparing the actual (as opposed to provable worst-case) running times, as well as the influence of the choice of the seeds on the output.
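The $w \to w^q$ reduction stated above can be sketched directly: raise every weight to the power $q$ and run an ordinary min-cut/max-flow solver. The toy graph and the use of networkx are illustrative assumptions, not the paper's implementation.

    import networkx as nx

    def min_lq_cut(edges, source, sink, q=2.0):
        # Replace each weight w by w**q, then solve the classic (l1) min-cut.
        G = nx.Graph()
        for u, v, w in edges:
            G.add_edge(u, v, capacity=w ** q)
        cut_value, (obj, background) = nx.minimum_cut(G, source, sink)
        return obj, background

    # Toy image graph: 's' marks the object seed, 't' the background seed.
    edges = [('s', 'a', 3.0), ('a', 'b', 0.5), ('b', 't', 3.0), ('a', 't', 0.4)]
    print(min_lq_cut(edges, 's', 't', q=2.0))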
Abstract:
In this paper a colour texture segmentation method, which unifies region and boundary information, is proposed. The algorithm uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. The colour texture of regions is modelled by the conjunction of non-parametric kernel density estimation techniques (which allow the colour behaviour to be estimated) and classical co-occurrence matrix based texture features. Therefore, region information is defined and accurate boundary information can be extracted to guide the segmentation process. Regions concurrently compete for the image pixels in order to segment the whole image taking both information sources into account. Furthermore, experimental results are shown which demonstrate the performance of the proposed method.
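A sketch of the non-parametric colour-modelling step: a region's colour distribution is estimated with a Gaussian kernel density and candidate pixels are scored by their likelihood under that model. The sample colours and the use of scipy's gaussian_kde are illustrative assumptions.

    import numpy as np
    from scipy.stats import gaussian_kde

    # RGB samples (0-1) drawn from inside one active region.
    rng = np.random.default_rng(0)
    region_colours = rng.normal([0.7, 0.3, 0.2], 0.05, (200, 3)).clip(0, 1)

    kde = gaussian_kde(region_colours.T)     # one colour density per region

    # Score two candidate pixels: one region-like, one very different.
    candidates = np.array([[0.72, 0.31, 0.19], [0.1, 0.8, 0.9]]).T
    print(kde(candidates))   # high density for the first, near zero for the second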
Abstract:
An integrated approach for the multi-spectral segmentation of MR images is presented. The method is based on fuzzy c-means (FCM) and includes bias field correction, contextual constraints on the spatial intensity distribution, and a distance measure that accounts for the non-spherical shape of clusters in the feature space. The bias field is modeled as a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of intensity are added to the FCM cost functions. To reduce the computational complexity, the contextual regularizations are separated from the clustering iterations. Since the feature space is not isotropic, the distance measure adopted in the Gustafson-Kessel (G-K) algorithm is used instead of the Euclidean distance, to account for the non-spherical shape of the clusters in the feature space. These algorithms are quantitatively evaluated on MR brain images using similarity measures.
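A sketch of the Gustafson-Kessel distance referred to above: each cluster gets its own norm-inducing matrix, derived from a fuzzy covariance, so elongated clusters are measured in their own metric. The data and memberships below are illustrative.

    import numpy as np

    def gk_distance_sq(X, center, U_col, m=2.0, rho=1.0):
        # Fuzzy covariance of one cluster, weighted by memberships U_col.
        diff = X - center
        w = U_col ** m
        cov = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(0) / w.sum()
        d = X.shape[1]
        # Norm-inducing matrix: A = (rho * det(cov))**(1/d) * inv(cov)
        A = (rho * np.linalg.det(cov)) ** (1.0 / d) * np.linalg.inv(cov)
        return np.einsum('ni,ij,nj->n', diff, A, diff)   # squared G-K distances

    rng = np.random.default_rng(0)
    X = rng.normal(0, 1, (300, 2)) * np.array([3.0, 0.3])   # elongated cluster
    center = X.mean(axis=0)
    U_col = np.ones(len(X))              # memberships of one cluster (illustrative)
    print(gk_distance_sq(X, center, U_col)[:5])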
Abstract:
Recently, cross-layer design for wireless sensor network communication protocols has become increasingly important and popular. Considering the disadvantages of traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the dispersion of the cross-layer parameters as the input to the fuzzy logic inference system. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion: to obtain a balanced solution, a parameter whose dispersion is large receives a small weight, and vice versa. To compare it with traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle multiple constraints without increasing the complexity of the algorithm and achieves the most balanced performance in selecting the next-hop relay node. Moreover, the algorithm adapts effectively to dynamic changes in network conditions and topology.
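The dispersion-based weighting can be sketched as follows; the metric names, the values, and the use of the coefficient of variation as the dispersion measure are illustrative assumptions, not the BCFL definition.

    import numpy as np

    def balanced_weights(metrics):
        # metrics: rows = candidate next-hop nodes, columns = cross-layer metrics.
        # Dispersion per metric (coefficient of variation across candidates);
        # metrics with large dispersion receive small weights, and vice versa.
        dispersion = metrics.std(axis=0) / (metrics.mean(axis=0) + 1e-12)
        inv = 1.0 / (dispersion + 1e-12)
        return inv / inv.sum()

    # Columns: residual energy, link quality, buffer headroom (all normalised).
    candidates = np.array([[0.9, 0.70, 0.2],
                           [0.5, 0.60, 0.8],
                           [0.7, 0.65, 0.5]])
    w = balanced_weights(candidates)
    scores = candidates @ w                 # weighted score per candidate node
    print(w, scores.argmax())               # index of the chosen relay node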
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are being increasingly used in a wide range of problems in pattern recognition, such as image segmentation. However, the EM algorithm requires considerable computational time when applied to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be sped up further by adopting a multiresolution kd-tree structure for the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain.
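For reference, a compact EM sketch for a two-component Gaussian mixture over voxel intensities, without the kd-tree acceleration proposed above; the intensities are synthetic.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(60, 10, 500), rng.normal(120, 15, 500)])

    pi, mu, sigma = np.array([0.5, 0.5]), np.array([50.0, 150.0]), np.array([20.0, 20.0])
    for _ in range(50):
        # E-step: responsibilities of each component for each intensity
        # (the constant 1/sqrt(2*pi) cancels in the normalisation).
        p = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture parameters from the responsibilities.
        n = r.sum(axis=0)
        pi, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    print(mu, sigma)   # should approach the true means and spreads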
Abstract:
In this paper, a framework for the detection of human skin in digital images is proposed. This framework comprises a training phase and a detection phase. A skin class model is learned during the training phase by processing several training images in a hybrid and incremental fuzzy learning scheme. This scheme combines unsupervised and supervised learning: unsupervised, by fuzzy clustering, to obtain clusters of color groups from training images; and supervised, to select the groups that represent skin color. At the end of the training phase, aggregation operators are used to combine the selected groups into a skin model. In the detection phase, the learned skin model is used to detect human skin efficiently. Experimental results show robust and accurate human skin detection performed by the proposed framework.
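A sketch of such a detection phase: colour clusters learned in training are tagged skin or non-skin, and a pixel is labelled skin if its aggregated (maximum) membership over the skin clusters dominates. The cluster centres and the max-aggregation operator are illustrative assumptions, not the paper's learned model.

    import numpy as np

    skin_centres = np.array([[0.85, 0.6, 0.5], [0.7, 0.45, 0.35]])   # RGB, 0-1
    non_skin_centres = np.array([[0.2, 0.5, 0.8], [0.1, 0.1, 0.1]])

    def membership(pixel, centres, eps=1e-12):
        # Fuzzy membership of a pixel over a centre set (inverse-distance based).
        d = np.linalg.norm(centres - pixel, axis=1) + eps
        inv = 1.0 / d
        return inv / inv.sum()

    def is_skin(pixel):
        all_c = np.vstack([skin_centres, non_skin_centres])
        u = membership(pixel, all_c)
        k = len(skin_centres)
        return u[:k].max() > u[k:].max()   # max as the aggregation operator

    print(is_skin(np.array([0.8, 0.55, 0.45])), is_skin(np.array([0.15, 0.4, 0.75])))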
Abstract:
This paper presents the design and implementation of an embedded soft sensor, i.e., a generic and autonomous hardware module that can be applied to many complex plants in which a certain variable cannot be directly measured. It is implemented based on a fuzzy identification algorithm called "Limited Rules", employed to model continuous nonlinear processes. The fuzzy model has a Takagi-Sugeno-Kang structure and the premise parameters are defined based on the fuzzy c-means (FCM) clustering algorithm. The firmware contains the soft sensor and runs online, estimating the target variable from the other available variables. Tests have been performed using a simulated pH neutralization plant, and the results of the embedded soft sensor have been considered satisfactory. A complete embedded inferential control system is also presented, including a soft sensor and a PID controller.
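A sketch of Takagi-Sugeno-Kang inference of the kind such a soft sensor runs: Gaussian premises (with centres as would come from FCM clustering) fire linear consequents, and the estimate is their membership-weighted average. All rule parameters below are illustrative.

    import numpy as np

    centres = np.array([[0.2, 0.3], [0.7, 0.8]])   # premise centres (e.g. from FCM)
    sigma = 0.25                                   # premise width (assumed)
    A = np.array([[1.0, 0.5], [0.2, 1.5]])         # consequent slopes per rule
    b = np.array([0.1, -0.2])                      # consequent offsets per rule

    def tsk_estimate(x):
        # Firing strength of each rule from its Gaussian premise.
        w = np.exp(-((x - centres) ** 2).sum(axis=1) / (2 * sigma ** 2))
        y = A @ x + b                              # linear consequent of each rule
        return (w * y).sum() / w.sum()             # weighted-average defuzzification

    print(tsk_estimate(np.array([0.25, 0.35])))    # dominated by the first rule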
Abstract:
While fluoroscopy is still the most widely used imaging modality to guide cardiac interventions, the fusion of pre-operative Magnetic Resonance Imaging (MRI) with real-time intra-operative ultrasound (US) is rapidly gaining clinical acceptance as a viable, radiation-free alternative. In order to improve the detection of the left ventricular (LV) surface in 4D ultrasound, we propose to take advantage of pre-operative MRI scans to extract a realistic geometrical model representing the patient's cardiac anatomy. This can serve as prior information in the interventional setting, increasing the accuracy of the anatomy extraction step in US data. We make use of a real-time 3D segmentation framework recently used to solve the LV segmentation problem in MR and US data independently, and we take advantage of this common link to introduce the prior information as a soft penalty term in the ultrasound segmentation algorithm. We tested the proposed algorithm on a clinical dataset of 38 patients undergoing both MR and US scans. The introduction of the personalized shape prior improves the accuracy and robustness of the LV segmentation, as supported by the error reduction when compared to core-lab manual segmentation of the same US sequences.
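A toy sketch of a soft shape-prior penalty of this general kind: the segmentation energy gains a term lam * (S - S_prior)^2 that pulls the estimated surface towards the MRI-derived prior. The 1-D surface parameterisation, the data term, and all values are illustrative assumptions, not the authors' formulation.

    import numpy as np

    def segment_with_prior(image_term, prior_surface, lam=0.5, n_iter=200, lr=0.1):
        surface = prior_surface.copy()     # initialise at the personalised prior
        for _ in range(n_iter):
            # Gradient of (data energy) + lam * ||surface - prior||^2.
            grad = image_term(surface) + 2 * lam * (surface - prior_surface)
            surface -= lr * grad           # gradient descent on the penalised energy
        return surface

    # Hypothetical data term pulling the surface towards radii seen in the US image.
    observed = np.array([2.1, 2.4, 2.2, 2.0])
    image_term = lambda s: 2 * (s - observed)
    prior = np.array([2.0, 2.3, 2.3, 2.1])   # radii from the MRI-derived model

    # The result lands between the US evidence and the MRI prior.
    print(segment_with_prior(image_term, prior, lam=0.5))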