909 results for k-Means algorithm


Relevance: 100.00%

Abstract:

Courvisanos J., Jain A. and Mardaneh K. Economic resilience of regions under crises: a study of the Australian economy, Regional Studies. Identifying patterns of economic resilience in regions by industry category is the focus of this paper. Patterns emerge from adaptive capacity in four distinct functional groups of local government regions in Australia, with respect to their resilience to shocks affecting specific industries. A model of regional adaptive cycles around four sequential phases – reorganization, exploitation, conservation and release – is adopted as the framework for recognizing such patterns. A data-mining method utilizes a k-means algorithm to evaluate the impact of two major shocks – a 13-year drought and the Global Financial Crisis – on four functional groups of regions, using census data from 2001, 2006 and 2011.
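The clustering step in studies like this reduces to plain k-means over per-region feature vectors. A minimal stdlib-only sketch, in which the feature vectors and the choice of k are illustrative stand-ins, not the paper's census variables or its four functional groups:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means: returns (centroids, labels). Points are tuples of floats."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        # (squared Euclidean distance).
        labels = [min(range(k), key=lambda c: sum((p[d] - centroids[c][d]) ** 2
                                                  for d in range(len(p))))
                  for p in points]
        # Update step: each centroid moves to the mean of its cluster
        # (an empty cluster keeps its old centroid).
        new = []
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                new.append(tuple(sum(x) / len(members) for x in zip(*members)))
            else:
                new.append(centroids[c])
        if new == centroids:  # converged
            break
        centroids = new
    return centroids, labels

# Hypothetical per-region features: (share of agriculture, share of services)
regions = [(0.8, 0.1), (0.75, 0.15), (0.1, 0.85), (0.05, 0.9)]
cents, labs = kmeans(regions, k=2)
```

With well-separated inputs like these, the agricultural regions and the service-oriented regions land in different clusters regardless of the random seeding.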

Relevance: 100.00%

Abstract:

The use of maps obtained from remote-sensing orbital images submitted to digital processing has become fundamental to optimizing conservation and monitoring actions for coral reefs. However, the accuracy reached in mapping submerged areas is limited by variation in the water column, which degrades the signal received by the orbital sensor and introduces errors in the final classification result. The limited capacity of traditional methods based on conventional statistical techniques to solve the problems related to inter-class confusion motivated the search for alternative strategies in the area of Computational Intelligence. In this work, an ensemble of classifiers was built based on the combination of Support Vector Machines and a Minimum Distance Classifier, with the objective of classifying remotely sensed images of a coral reef ecosystem. The system is composed of three stages, through which the progressive refinement of the classification process happens. Patterns that received an ambiguous classification at a certain stage of the process were re-evaluated in the subsequent stage. Unambiguous prediction for all the data was achieved through the reduction or elimination of false positives. The images were classified into five bottom types: deep water, underwater corals, intertidal corals, algal bottom, and sandy bottom. The highest overall accuracy (89%) was obtained from SVM with a polynomial kernel. The accuracy of the classified image was compared, using an error matrix, to the results obtained by other classification methods based on a single classifier (a neural network and the k-means algorithm). In the end, the comparison of the results achieved demonstrated the potential of ensemble classifiers as a tool for classifying images of submerged areas subject to the noise caused by atmospheric effects and the water column.
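The stage-wise refinement with ambiguity deferral can be illustrated with a minimum-distance classifier that carries a reject option: a pattern whose two nearest class centroids are too close to call falls through to the next stage. This is a sketch of the cascade idea only; the paper's stages use SVMs, and the class centroids and margin below are made up:

```python
def min_distance_stage(x, centroids, margin=0.5):
    """Minimum-distance classifier with a reject option: returns a class
    label, or None when the two nearest class centroids are too close
    to call (ambiguous pattern)."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5, label)
                   for label, c in centroids.items())
    (d1, best), (d2, _) = dists[0], dists[1]
    return best if d2 - d1 >= margin else None  # ambiguous -> defer

def cascade(x, stages):
    """Run the stages in order; the first non-ambiguous prediction wins.
    The final stage must always decide (margin 0)."""
    for stage in stages:
        label = stage(x)
        if label is not None:
            return label

# Invented bottom-type centroids in a 2-D feature space, for illustration.
cents_ = {'deep_water': (0.0, 0.0), 'sandy': (2.0, 0.0)}
stages = [lambda x: min_distance_stage(x, cents_, 0.5),   # strict stage
          lambda x: min_distance_stage(x, cents_, 0.0)]   # always decides
```

A pattern near a centroid is labeled at the first stage; one midway between centroids is rejected there and only resolved by the final, margin-free stage.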

Relevance: 100.00%

Abstract:

Image segmentation is one of the image processing problems that deserves special attention from the scientific community. This work studies unsupervised clustering and pattern recognition methods applicable to medical image segmentation. Methods based on Natural Computing have proven very attractive in such tasks and are studied here to verify their applicability to medical image segmentation. This work implements the following methods: GKA (Genetic K-means Algorithm), GFCMA (Genetic FCM Algorithm), PSOKA (PSO and K-means based Clustering Algorithm), and PSOFCM (PSO and FCM based Clustering Algorithm). In addition, as a way to evaluate the results given by the algorithms, clustering validity indexes are used as quantitative measures. Visual and qualitative evaluations are also performed, mainly using data from the BrainWeb brain simulator as ground truth.
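One common clustering validity index usable as the quantitative measure mentioned above is the silhouette coefficient. A stdlib-only sketch on toy points (not BrainWeb data, and not necessarily the index the work uses):

```python
def silhouette(points, labels):
    """Mean silhouette coefficient, a quantitative cluster-validity measure.
    For each point: a = mean intra-cluster distance, b = mean distance to the
    nearest other cluster; silhouette = (b - a) / max(a, b), in [-1, 1]."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    scores = []
    for i, p in enumerate(points):
        same = [dist(p, q) for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        a = sum(same) / len(same) if same else 0.0
        others = {}
        for j, q in enumerate(points):
            if labels[j] != labels[i]:
                others.setdefault(labels[j], []).append(dist(p, q))
        b = min(sum(d) / len(d) for d in others.values())
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

pts = [(0, 0), (0, 1), (5, 0), (5, 1)]
good = silhouette(pts, [0, 0, 1, 1])  # compact, well-separated clusters
bad = silhouette(pts, [0, 1, 0, 1])   # clusters straddle the true groups
```

A labeling that respects the two obvious groups scores near 1; scrambling the labels drives the index negative, which is exactly what makes such indexes useful for ranking competing segmentations.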

Relevance: 100.00%

Abstract:

This work proposes a kinematic control scheme using visual feedback for a robot arm with five degrees of freedom. Using computer vision techniques, a method was developed to determine the 3D Cartesian position and orientation (pose) of the robot arm from a robot image obtained through a camera. A colored triangular label is placed on the robot manipulator's tool, and efficient heuristic rules are used to obtain the vertices of that label in the image. The tool pose is obtained from those vertices through numerical methods. A color calibration scheme based on the K-means algorithm was implemented to guarantee the robustness of the vision system in the presence of lighting variations. The extrinsic camera parameters are computed from the image of four coplanar points whose 3D Cartesian coordinates, relative to a fixed frame, are known. Two distinct tool poses, initial and final, obtained from the image, are interpolated to generate a desired trajectory in Cartesian space. The error signal in the proposed control scheme is the difference between the desired tool pose and the actual tool pose. Gains are applied to the error signal, and the resulting signal is mapped into joint increments using the pseudoinverse of the manipulator's Jacobian matrix. These increments are applied to the manipulator joints, moving the tool to the desired pose.
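The inner loop — gain times pose error, mapped through the (pseudo)inverse Jacobian into joint increments — can be sketched for a planar 2-link arm. This is an illustrative reduction: the paper's arm has five degrees of freedom and a full pose error, which require the pseudoinverse, whereas the square 2×2 Jacobian here admits a plain inverse playing the same role:

```python
import math

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    """Analytic Jacobian of a planar 2-link arm's end-effector position."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def control_step(q, target, gain=0.5, l1=1.0, l2=1.0):
    """One kinematic control step: dq = J^-1 * K * (x_desired - x_actual)."""
    q1, q2 = q
    # Forward kinematics: current end-effector position.
    x = (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
         l1 * math.sin(q1) + l2 * math.sin(q1 + q2))
    # Gain applied to the Cartesian error signal.
    e = (gain * (target[0] - x[0]), gain * (target[1] - x[1]))
    # Invert the 2x2 Jacobian and map the error into joint increments.
    (a, b), (c, d) = jacobian_2link(q1, q2, l1, l2)
    det = a * d - b * c
    dq = ((d * e[0] - b * e[1]) / det, (-c * e[0] + a * e[1]) / det)
    return (q1 + dq[0], q2 + dq[1])
```

Iterating `control_step` from an initial configuration drives the tool to any reachable target away from singularities (where `det`, i.e. sin(q2), vanishes).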

Relevance: 100.00%

Abstract:

Navigation based on visual feedback for robots working in a closed environment can be obtained by installing a camera on each robot (a local vision system). However, this solution requires a camera and local processing capacity for each robot. When possible, a global vision system is a cheaper solution to this problem. In this case, one camera, or a small number of cameras covering the whole workspace, can be shared by the entire team of robots, saving the cost of a large number of cameras and the associated processing hardware needed in a local vision system. This work presents the implementation and experimental results of a global vision system for mobile mini-robots, using robot soccer as the test platform. The proposed vision system consists of a camera, a frame grabber, and a computer (PC) for image processing. The PC is responsible for the team's motion control, based on the visual feedback, sending commands to the robots through a radio link. In order for the system to unequivocally recognize each robot, each one has a label on its top consisting of two colored circles. Image processing algorithms were developed for the efficient computation, in real time, of the position of all objects (robots and ball) and the orientation of each robot. A major problem found was labeling the color, in real time, of each colored point of the image under time-varying illumination conditions. To overcome this problem, an automatic camera calibration based on the K-means clustering algorithm was implemented. This method guarantees that similar pixels will be clustered around a unique color class. The experimental results showed that the position and orientation of each robot can be obtained with a precision of a few millimeters. The position and orientation were updated in real time, analyzing 30 frames per second.
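The calibrated color classes behave like k-means centroids in RGB space: each pixel is labeled by its nearest class, and a centroid can be nudged over time to follow illumination drift. A sketch only — the class names, values, and update rate are invented, and the running-mean update is one plausible way to keep a K-means calibration current, not necessarily the paper's:

```python
def nearest_class(pixel, centroids):
    """Label an RGB pixel with the nearest calibrated color class."""
    return min(centroids, key=lambda name: sum(
        (p - c) ** 2 for p, c in zip(pixel, centroids[name])))

def adapt(centroids, pixel, label, rate=0.05):
    """Nudge a class centroid toward a newly labeled pixel (running mean),
    so the calibration tracks slow illumination changes."""
    old = centroids[label]
    centroids[label] = tuple(c + rate * (p - c) for c, p in zip(old, pixel))

# Hypothetical calibrated classes for two team labels.
cal = {'team_blue': (0.0, 0.0, 200.0), 'team_yellow': (200.0, 200.0, 0.0)}
```

At 30 frames per second this lookup must be cheap, which is why the heavy clustering runs offline during calibration and the per-pixel work is just a nearest-centroid test.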

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Abstract:

Advances in wireless communication and microelectronics have enabled the development of micro-sensor devices capable of monitoring large regions. Composed of thousands of sensor nodes working collaboratively, Wireless Sensor Networks face severe energy constraints due to the limited battery capacity of the nodes that make up the network. Energy consumption can be minimized by allowing only a few special nodes, called Cluster Heads, to be responsible for receiving the data from the nodes in their cluster and forwarding these data to a collection point called the Base Station. Choosing the ideal Cluster Head extends the network's stability period, maximizing its useful lifetime. The proposal presented in this dissertation uses Fuzzy Logic and the k-means algorithm, based on information centralized at the Base Station, to elect the ideal Cluster Head in heterogeneous Wireless Sensor Networks. The criteria used for Cluster Head selection are based on node centrality, energy level, and proximity to the Base Station. This dissertation presents the disadvantages of using local information to elect the cluster leader and the importance of discriminating treatment of the energy disparities among the nodes that form the network. The proposal is compared with the Low Energy Adaptive Clustering Hierarchy (LEACH) and Distributed energy-efficient clustering algorithm for heterogeneous Wireless sensor networks (DEEC) algorithms. This comparison uses the end of the stability period as well as the network's useful lifetime.
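The election criteria — energy level, centrality, and proximity to the Base Station — can be combined in a crisp weighted score as a simple stand-in for the dissertation's fuzzy inference. The weights, positions, and energy values below are illustrative, not taken from the source:

```python
def elect_cluster_head(nodes, base_station,
                       w_energy=0.5, w_central=0.3, w_prox=0.2):
    """Pick the candidate with the best weighted score: higher residual
    energy, higher centrality (low mean distance to the other nodes), and
    proximity to the Base Station are all favored."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def score(n):
        others = [m for m in nodes if m is not n]
        # Mean distance to the other nodes: lower means more central.
        centrality = sum(dist(n['pos'], m['pos']) for m in others) / len(others)
        return (w_energy * n['energy']
                - w_central * centrality
                - w_prox * dist(n['pos'], base_station))
    return max(nodes, key=score)['id']

# Hypothetical heterogeneous cluster: node 'a' is central and energy-rich.
nodes = [{'id': 'a', 'pos': (0.0, 0.0), 'energy': 1.0},
         {'id': 'b', 'pos': (5.0, 5.0), 'energy': 0.2},
         {'id': 'c', 'pos': (1.0, 0.0), 'energy': 0.9}]
head = elect_cluster_head(nodes, base_station=(0.0, 0.0))
```

Running this election at the Base Station, as the dissertation advocates, lets it use global knowledge of the energy disparities that purely local schemes like LEACH cannot see.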

Relevance: 100.00%

Abstract:

The objective of this work was to typify, through physicochemical parameters, honey from the Campos do Jordão microregion, and to verify how samples are grouped according to the climatic seasonality of production (summer and winter). Thirty samples of honey from beekeepers located in the cities of Monteiro Lobato, Campos do Jordão, Santo Antônio do Pinhal, and São Bento do Sapucaí-SP were assessed, covering both honey production periods (November to February; July to September, during 2007 and 2008; n = 30). Samples were submitted to physicochemical analyses of total acidity, pH, humidity, water activity, density, amino acids, ashes, color, and electrical conductivity, identifying physicochemical standards of the honey samples from both production periods. Next, we carried out a cluster analysis of the data using the k-means algorithm, which grouped the samples into two classes (summer and winter). An Artificial Neural Network (ANN) was then trained in supervised mode using the backpropagation algorithm. According to the analysis, the knowledge gained through the ANN classified the samples with 80% accuracy. The ANNs proved an effective tool for grouping honey samples from the Campos do Jordão region according to their physicochemical characteristics across the different production periods.

Relevance: 100.00%

Abstract:

Recently there has been considerable interest in dynamic textures due to the explosive growth of multimedia databases. In addition, dynamic texture appears in a wide range of videos, which makes it very important in applications concerned with modeling physical phenomena. Thus, dynamic textures have emerged as a new field of investigation that extends static or spatial textures to the spatio-temporal domain. In this paper, we propose a novel approach for dynamic texture segmentation based on automata theory and the k-means algorithm. In this approach, a feature vector is extracted for each pixel by applying deterministic partially self-avoiding walks on three orthogonal planes of the video. Then, these feature vectors are clustered by the well-known k-means algorithm. Although the k-means algorithm has shown interesting results, it only ensures convergence to a local minimum, which affects the final segmentation result. In order to overcome this drawback, we compare six methods of initialization of the k-means. The experimental results have demonstrated the effectiveness of our proposed approach compared to the state-of-the-art segmentation methods.
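Of the initialization schemes one might compare, deterministic farthest-first seeding is among the simplest to state: spread the initial centers out so that k-means is less likely to fall into a poor local minimum. A generic sketch, not necessarily one of the six methods the paper evaluates:

```python
def farthest_first_init(points, k):
    """Deterministic farthest-first seeding for k-means: start from the
    first point, then repeatedly add the point farthest from all centers
    chosen so far. Good spread reduces the risk of a poor local minimum."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    centers = [points[0]]
    while len(centers) < k:
        # The next seed maximizes its distance to the nearest chosen center.
        centers.append(max(points,
                           key=lambda p: min(d2(p, c) for c in centers)))
    return centers

pts = [(0.0, 0.0), (0.1, 0.0), (10.0, 0.0), (10.1, 0.0)]
seeds = farthest_first_init(pts, 2)
```

On these toy feature vectors the two seeds land in opposite groups, whereas naive random seeding can draw both from the same group and converge to a visibly wrong segmentation.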

Relevance: 100.00%

Abstract:

A non-hierarchical K-means algorithm is used to cluster 47 years (1960–2006) of 10-day HYSPLIT backward trajectories to the Pico Mountain (PM) observatory on a seasonal basis. The resulting cluster centers identify the major transport pathways and collectively comprise a long-term climatology of transport to the observatory. The transport climatology improves our ability to interpret the observations made there and our understanding of pollution source regions to the station and the central North Atlantic region. I determine which pathways dominate transport to the observatory and examine the impacts of these transport patterns on the O3, NOy, NOx, and CO measurements made there during 2001–2006. Transport from the U.S., Canada, and the Atlantic most frequently reaches the station, but Europe, east Africa, and the Pacific can also contribute significantly depending on the season. Transport from Canada was correlated with the North Atlantic Oscillation (NAO) in spring and winter, and transport from the Pacific was uncorrelated with the NAO. The highest CO and O3 are observed during spring. Summer is also characterized by high CO and O3 and the highest NOy and NOx of any season. Previous studies at the station attributed the summer time high CO and O3 to transport of boreal wildfire emissions (for 2002–2004), and boreal fires continued to affect the station during 2005 and 2006. The particle dispersion model FLEXPART was used to calculate anthropogenic and biomass-burning CO tracer values at the station in an attempt to identify the regions responsible for the high CO and O3 observations during spring and biomass-burning impacts in summer.

Relevance: 100.00%

Abstract:

Magnetic resonance temperature imaging (MRTI) is recognized as a noninvasive means to provide temperature imaging for guidance in thermal therapies. The most common method of estimating temperature changes in the body using MR is by measuring the water proton resonant frequency (PRF) shift. Calculation of the complex phase difference (CPD) is the method of choice for measuring the PRF indirectly since it facilitates temperature mapping with high spatiotemporal resolution. Chemical shift imaging (CSI) techniques can provide the PRF directly with high sensitivity to temperature changes while minimizing artifacts commonly seen in CPD techniques. However, CSI techniques are currently limited by poor spatiotemporal resolution. This research intends to develop and validate a CSI-based MRTI technique with intentional spectral undersampling which allows relaxed parameters to improve spatiotemporal resolution. An algorithm based on autoregressive moving average (ARMA) modeling is developed and validated to help overcome limitations of Fourier-based analysis allowing highly accurate and precise PRF estimates. From the determined acquisition parameters and ARMA modeling, robust maps of temperature using the k-means algorithm are generated and validated in laser treatments in ex vivo tissue. The use of non-PRF based measurements provided by the technique is also investigated to aid in the validation of thermal damage predicted by an Arrhenius rate dose model.

Relevance: 100.00%

Abstract:

We present a test for identifying clusters in high dimensional data based on the k-means algorithm when the null hypothesis is spherical normal. We show that projection techniques used for evaluating validity of clusters may be misleading for such data. In particular, we demonstrate that increasingly well-separated clusters are identified as the dimensionality increases, when no such clusters exist. Furthermore, in a case of true bimodality, increasing the dimensionality makes identifying the correct clusters more difficult. In addition to the original conservative test, we propose a practical test with the same asymptotic behavior that performs well for a moderate number of points and moderate dimensionality. ACM Computing Classification System (1998): I.5.3.

Relevance: 100.00%

Abstract:

The primary aim of this dissertation is to develop data mining tools for knowledge discovery in biomedical data when multiple (homogeneous or heterogeneous) sources of data are available. The central hypothesis is that, when information from multiple sources of data is used appropriately and effectively, knowledge discovery can be better achieved than is possible from only a single source. Recent advances in high-throughput technology have enabled biomedical researchers to generate large volumes of diverse types of data on a genome-wide scale. These data include DNA sequences, gene expression measurements, and much more; they provide the motivation for building analysis tools to elucidate the modular organization of the cell. The challenges include efficiently and accurately extracting information from the multiple data sources; representing the information effectively; developing analytical tools; and interpreting the results in the context of the domain. The first part considers the application of feature-level integration to design classifiers that discriminate between soil types. The machine learning tools SVM and KNN were used to successfully distinguish between several soil samples. The second part considers clustering using multiple heterogeneous data sources. The resulting Multi-Source Clustering (MSC) algorithm was shown to perform better than clustering methods that use only a single data source or a simple feature-level integration of heterogeneous data sources. The third part proposes a new approach to effectively incorporate incomplete data into clustering analysis. Adapted from the K-means algorithm, the Generalized Constrained Clustering (GCC) algorithm makes use of incomplete data in the form of constraints to perform exploratory analysis. Novel approaches for extracting constraints were proposed. For sufficiently large constraint sets, the GCC algorithm outperformed the MSC algorithm.
The last part considers the problem of providing a theme-specific environment for mining multi-source biomedical data. The database, called PlasmoTFBM and focusing on gene regulation of Plasmodium falciparum, contains diverse information and has a simple interface that allows biologists to explore the data. It provided a framework for comparing different analytical tools for predicting regulatory elements and for designing useful data mining tools. The conclusion is that the experiments reported in this dissertation strongly support the central hypothesis.
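The constraint idea behind GCC — incomplete information expressed as must-link pairs steering the assignment step — can be sketched as a group-wise k-means assignment. This illustrates the general constrained-clustering idea only, not the published GCC algorithm:

```python
def constrained_assign(points, centroids, must_link):
    """One assignment step of a constrained k-means: points joined by a
    must-link constraint are assigned as a group, to the centroid with the
    lowest total distance for the whole group."""
    def d2(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c))
    # Build groups from the must-link pairs (tiny union-find).
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in must_link:
        parent[find(i)] = find(j)
    labels = [0] * len(points)
    for root in set(find(i) for i in range(len(points))):
        members = [i for i in range(len(points)) if find(i) == root]
        # Whole group goes to the centroid minimizing its total distance.
        best = min(range(len(centroids)),
                   key=lambda c: sum(d2(points[i], centroids[c])
                                     for i in members))
        for i in members:
            labels[i] = best
    return labels

pts = [(0.0, 0.0), (1.0, 0.0), (8.0, 0.0)]
cents = [(0.0, 0.0), (10.0, 0.0)]
free = constrained_assign(pts, cents, [])          # unconstrained assignment
linked = constrained_assign(pts, cents, [(1, 2)])  # must-link drags (8,0) back
```

Without constraints the outlying point joins the far centroid; a single must-link pair pulls it into the cluster of its partner, which is how partial knowledge reshapes an otherwise purely distance-driven clustering.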

Relevance: 100.00%

Abstract:

Ground Delay Programs (GDP) are sometimes cancelled before their initially planned duration, and for this reason aircraft are delayed when it is no longer needed. Recovering this delay usually leads to extra fuel consumption, since the aircraft will typically depart after having absorbed their assigned delay on the ground and, therefore, will need to cruise at more fuel-consuming speeds. Past research has proposed a speed-reduction strategy aimed at splitting the GDP-assigned delay between ground and airborne delay, while using the same fuel as in nominal conditions. Being airborne earlier, an aircraft can speed up to its nominal cruise speed and recover part of the GDP delay without incurring extra fuel consumption if the GDP is cancelled earlier than planned. In this paper, all GDP initiatives that occurred at San Francisco International Airport during 2006 are studied and characterised by a K-means algorithm into three different clusters. The centroids of these three clusters have been used to simulate three different GDPs at the airport, using a realistic set of inbound traffic and the Future Air Traffic Management Concepts Evaluation Tool (FACET). The amount of delay that can be recovered using this cruise speed reduction technique, as a function of the GDP cancellation time, has been computed and compared with the delay recovered under the current concept of operations. Simulations were conducted in a calm-wind situation and without considering a radius of exemption. Results indicate that when aircraft depart early and fly at the slower speed, they can recover additional delay in the event the GDP cancels early, compared to current operations where all delay is absorbed prior to take-off. The amount of extra delay recovered varies, and is more significant, in relative terms, for those GDPs where demand exceeds the airport capacity by a relatively small amount.
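A back-of-the-envelope model shows why splitting the delay helps. This is purely illustrative arithmetic under stated assumptions, not the paper's FACET simulation, and the function, parameter names, and numbers are invented:

```python
def recovered_delay(assigned_delay, ground_share, cancel_fraction):
    """Illustrative model of delay recovery under the speed-reduction split.
    assigned_delay: total GDP delay assigned to the flight (minutes);
    ground_share: fraction of that delay absorbed on the ground before
        departure (1.0 reproduces current operations);
    cancel_fraction: fraction of the airborne-absorption window still ahead
        when the GDP is cancelled. An early cancellation lets the aircraft
        speed back up and recover the not-yet-absorbed airborne delay."""
    airborne_delay = assigned_delay * (1.0 - ground_share)
    return airborne_delay * cancel_fraction

# Current operations absorb everything on the ground (ground_share = 1.0),
# so nothing is recoverable no matter how early the GDP cancels:
baseline = recovered_delay(30.0, 1.0, 0.8)  # 0.0 minutes
# Splitting the delay 50/50 leaves airborne delay to claw back:
split = recovered_delay(30.0, 0.5, 0.8)     # 12.0 minutes
```

The same fuel-neutrality claimed in the paper is what makes the split attractive: the airborne share costs nothing extra if the GDP runs its full course, but becomes recoverable the moment it cancels early.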

Relevance: 100.00%

Abstract:

We introduce K-tree in an information retrieval context. It is an efficient approximation of the k-means clustering algorithm. Unlike k-means it forms a hierarchy of clusters. It has been extended to address issues with sparse representations. We compare performance and quality to CLUTO using document collections. The K-tree has a low time complexity that is suitable for large document collections. This tree structure allows for efficient disk based implementations where space requirements exceed that of main memory.