891 results for Fuzzy C-Means clustering


Relevance: 100.00%

Abstract:

Radial basis functions can be combined into a network structure that has several advantages over conventional neural network solutions. However, to operate effectively, the number and positions of the basis function centres must be carefully selected. Although no rigorous algorithm exists for this purpose, several heuristic methods have been suggested. In this paper a new method is proposed in which radial basis function centres are selected by the mean-tracking clustering algorithm. The mean-tracking algorithm is compared with k-means clustering and is shown to achieve significantly better results in terms of radial basis function performance. As well as being computationally simpler, the mean-tracking algorithm in general selects better centre positions, thus providing the radial basis functions with better modelling accuracy.
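The pipeline this abstract describes — pick centres by clustering, then fit an RBF output layer — can be sketched in a few lines. The mean-tracking algorithm itself is not specified here, so this minimal NumPy sketch uses plain k-means (the baseline the paper compares against) as the centre selector; the Gaussian basis, the `width` parameter, and all function names are illustrative assumptions, not from the paper.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means, standing in for the centre-selection step."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centres[j] = pts.mean(axis=0)
    return centres

def rbf_fit(X, y, centres, width):
    """Fit the output-layer weights by least squares over Gaussian basis functions."""
    d = np.linalg.norm(X[:, None] - centres[None], axis=2)
    Phi = np.exp(-(d / width) ** 2)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centres, width, w):
    d = np.linalg.norm(X[:, None] - centres[None], axis=2)
    return np.exp(-(d / width) ** 2) @ w
```

Once the centres are fixed, the fit is a linear least-squares problem, which is why centre placement dominates the modelling accuracy of the network.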

Relevance: 100.00%

Abstract:

Boreal winter wind storm situations over Central Europe are investigated by means of an objective cluster analysis. Surface data from the NCEP reanalysis and an ECHAM4/OPYC3 climate-change GHG simulation (IS92a) are considered. To achieve an optimum separation of clusters of extreme storm conditions, 55 clusters of weather patterns are differentiated. To reduce the computational effort, a PCA is initially performed, leading to a data reduction of about 98%. The clustering itself was computed on 3-day periods constructed from the first six PCs using the k-means clustering algorithm. The applied method enables an evaluation of the time evolution of the synoptic developments. The climate change signal is constructed by projecting the GCM simulation onto the EOFs obtained from the NCEP reanalysis. Consequently, the same clusters are obtained and frequency distributions can be compared. For Central Europe, four primary storm clusters are identified. These clusters cover almost 72% of the historical extreme storm events yet account for only 5% of the total relative frequency. Moreover, they show a statistically significant signature in the associated wind fields over Europe. An increased frequency of Central European storm clusters is detected under enhanced GHG conditions, associated with a strengthening of the pressure gradient over Central Europe. Consequently, more intense wind events over Central Europe are expected. The presented algorithm will be highly valuable for the analysis of the huge data volumes required for, e.g., multi-model ensemble analysis, particularly because of the enormous data reduction.
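The two-step procedure above — compress with PCA, cluster the leading PC scores, and project a second dataset onto the same EOFs so the cluster definitions carry over — can be sketched as follows. This is a generic NumPy illustration, not the authors' code; the blob data, dimensions, and function names are assumptions.

```python
import numpy as np

def pca(X, n_pc):
    """Centre the data and return the mean, the leading EOFs, and PC scores."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eofs = Vt[:n_pc]                      # principal directions (EOFs)
    return mean, eofs, (X - mean) @ eofs.T

def kmeans(X, k, iters=100, seed=0):
    """Lloyd's algorithm, applied to the PC scores."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centres[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels, centres

def project(X_other, mean, eofs):
    """Project a second dataset (e.g. a GCM run) onto the same EOFs,
    so the same cluster centres remain comparable across datasets."""
    return (X_other - mean) @ eofs.T
```

The projection step is what makes the frequency distributions of the reanalysis and the GCM simulation directly comparable: both are expressed in the same reduced EOF space before cluster membership is evaluated.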

Relevance: 100.00%

Abstract:

Extratropical transition (ET) has eluded objective identification since the realisation of its existence in the 1970s. Recent advances in numerical, computational models have provided data of higher resolution than previously available. In conjunction with this, an objective characterisation of the structure of a storm has now become widely accepted in the literature. Here we present a method of combining these two advances to provide an objective method for defining ET. The approach involves applying K-means clustering to isolate different life-cycle stages of cyclones and then analysing the progression through these stages. This methodology is then tested by applying it to five recent years of the European Centre for Medium-Range Weather Forecasts operational analyses. It is found that this method is able to determine the general characteristics of ET in the Northern Hemisphere. Between 2008 and 2012, 54% (±7%, 32 of 59) of Northern Hemisphere tropical storms are estimated to undergo ET. There is great variability across basins and time of year. To fully capture all instances of ET, it is necessary to introduce and characterise multiple pathways through transition. Only one of the three transition types needed has been previously well studied. A brief description of the alternative types of transition is given, along with illustrative storms, to assist with further study.

Relevance: 100.00%

Abstract:

Solar-powered vehicle activated signs (VAS) are speed warning signs powered by batteries that are recharged by solar panels. These signs are more desirable than other active warning signs due to the low cost of installation and the minimal maintenance requirements. However, one problem that can affect a solar-powered VAS is the limited power capacity available to keep the sign operational. In order to operate the sign more efficiently, it is proposed that the sign be appropriately triggered by taking into account the prevalent conditions. Triggering the sign depends on many factors such as the prevailing speed limit, road geometry, traffic behaviour, the weather and the number of hours of daylight. The main goal of this paper is therefore to develop an intelligent algorithm that would help optimize the trigger point to achieve the best compromise between speed reduction and power consumption. Data have been systematically collected whereby vehicle speed data were gathered whilst varying the value of the trigger speed threshold. A two-stage algorithm is then utilized to extract the trigger speed value. Initially the algorithm employs a Self-Organising Map (SOM) to effectively visualize and explore the properties of the data, which are then clustered in the second stage using the K-means clustering method. Preliminary results achieved in the study indicate that using a SOM in conjunction with the K-means method performs better than direct clustering of the data by K-means alone. Using a SOM in the current case helped the algorithm determine the number of clusters in the data set, which is a frequent problem in data clustering.

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

Objective: To characterize the P1 component of long-latency auditory evoked potentials (LLAEPs) in cochlear implant users with auditory neuropathy spectrum disorder (ANSD) and to determine, firstly, whether it correlates with speech perception performance and, secondly, whether it correlates with other variables related to cochlear implant use. Methods: This study was conducted at the Center for Audiological Research at the University of Sao Paulo. The sample included 14 pediatric (4-11 years of age) cochlear implant users with ANSD, of both sexes, with profound prelingual hearing loss. Patients with hypoplasia or agenesis of the auditory nerve were excluded from the study. LLAEPs produced in response to speech stimuli were recorded using a Smart EP USB Jr. system. The subjects' speech perception was evaluated using tests 5 and 6 of the Glendonald Auditory Screening Procedure (GASP). Results: The P1 component was detected in 12/14 (85.7%) children with ANSD. Latency of the P1 component correlated with the duration of sensory hearing deprivation (p = 0.007, r = 0.7278), but not with the duration of cochlear implant use. An analysis of groups assigned according to GASP performance (k-means clustering) revealed that aspects of prior central auditory system development reflected in the P1 component are related to behavioral auditory skills. Conclusions: In children with ANSD using cochlear implants, the P1 component can serve as a marker of central auditory cortical development and a predictor of the implanted child's speech perception performance. (c) 2012 Elsevier Ireland Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

The clustering problem consists of finding patterns in a data set in order to divide it into clusters with high within-cluster similarity. This paper presents the study of a problem, here called the MMD problem, which aims at finding a clustering with a predefined number of clusters that minimizes the largest within-cluster distance (diameter) among all clusters. There are two main objectives in this paper: to propose heuristics for the MMD problem and to evaluate the suitability of the best proposed heuristic's results against the real classification of some data sets. Regarding the first objective, the results obtained in the experiments indicate a good performance of the best proposed heuristic, which outperformed the Complete Linkage algorithm (the most widely used method from the literature for this problem). Regarding the suitability of the results according to the real classification of the data sets, the proposed heuristic achieved better-quality results than the C-Means algorithm, but worse than Complete Linkage.
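The MMD objective — minimize the largest within-cluster diameter — can be made concrete with a short sketch. The paper's own heuristics are not described in this abstract, so the sketch pairs an evaluator for the objective with Gonzalez's classical farthest-point heuristic, a well-known baseline (and 2-approximation) for this min-max objective; it stands in for, and is not, the proposed method.

```python
import numpy as np

def max_diameter(X, labels):
    """MMD objective: the largest within-cluster pairwise distance."""
    worst = 0.0
    for c in np.unique(labels):
        P = X[labels == c]
        if len(P) > 1:
            d = np.linalg.norm(P[:, None] - P[None], axis=2)
            worst = max(worst, float(d.max()))
    return worst

def gonzalez(X, k, seed=0):
    """Farthest-point heuristic: greedily pick mutually distant seeds,
    then assign every point to its nearest seed."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]
    d = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(d.argmax())                       # farthest from current seeds
        centers.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    D = np.stack([np.linalg.norm(X - X[c], axis=1) for c in centers])
    return D.argmin(axis=0)
```

Note the contrast with k-means: the objective is the worst pair within any cluster, not a sum of squared distances, which is why diameter-oriented methods such as Complete Linkage are the natural competitors here.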

Relevance: 100.00%

Abstract:

In young, first-episode, productive, medication-naive patients with schizophrenia, EEG microstates (building blocks of mentation) tend to be shortened. Koenig et al. [Koenig, T., Lehmann, D., Merlo, M., Kochi, K., Hell, D., Koukkou, M., 1999. A deviant EEG brain microstate in acute, neuroleptic-naïve schizophrenics at rest. European Archives of Psychiatry and Clinical Neuroscience 249, 205–211] suggested that shortening concerned specific microstate classes. Sequence rules (microstate concatenations, syntax) conceivably might also be affected. In 27 patients of the above type and 27 controls, from three centers, multichannel resting EEG was analyzed into microstates using k-means clustering of momentary potential topographies into four microstate classes (A–D). In patients, microstates were shortened in classes B and D (from 80 to 70 ms and from 94 to 82 ms, respectively), occurred more frequently in classes A and C, and covered more time in A and less in B. Topography differed only in class B, where LORETA tomography predominantly showed stronger left and anterior activity in patients. Microstate concatenations (syntax) were generally disturbed in patients; specifically, the class sequence A→C→D→A predominated in controls but was reversed in patients (A→D→C→A). In schizophrenia, information processing in certain classes of mental operations might deviate because of precocious termination. The intermittent occurrence might account for Bleuler's "double bookkeeping." The disturbed microstate syntax opens a novel physiological comparison of mental operations between patients and controls.

Relevance: 100.00%

Abstract:

We have performed quantitative X-ray diffraction (qXRD) analysis of 157 grab or core-top samples from the western Nordic Seas (WNS) between ~57°-75°N and 5°-45°W. The RockJock v6 analysis includes non-clay (20) and clay (10) mineral species in the <2 mm size fraction that sum to 100 weight %. The data matrix was reduced to 9 and 6 variables, respectively, by excluding minerals with low weight % and by grouping into larger groups, such as the alkali and plagioclase feldspars. Because of its potential dual origins, calcite was placed outside of the sum. We initially hypothesized that a combination of regional bedrock outcrops and transport associated with drift ice, meltwater plumes, and bottom currents would result in 6 clusters defined by "similar" mineral compositions. The hypothesis was tested by use of a fuzzy k-means clustering algorithm, and key minerals were identified by stepwise Discriminant Function Analysis. Key minerals in defining the clusters include quartz, pyroxene, muscovite, and amphibole. With 5 clusters, 87.5% of the observations are correctly classified. The geographic distribution of the five k-means clusters compares reasonably well with the original hypothesis. The close spatial relationship between bedrock geology and discrete cluster membership stresses the importance of this variable both at the WNS scale and at a more local scale in NE Greenland.
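Fuzzy k-means (more commonly called fuzzy c-means) differs from plain k-means in that each sample gets a soft membership in every cluster rather than a hard assignment — useful for compositional data like mineral weight percentages, where samples can mix sources. Below is a minimal textbook implementation with the standard fuzzifier m = 2; the parameters and toy data are illustrative, not tied to the mineralogical dataset.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: returns memberships U (rows sum to 1) and cluster centres."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), c, replace=False)].astype(float)
    for _ in range(iters):
        # membership update: inversely related to distance, per Bezdek's formula
        d = np.linalg.norm(X[:, None] - centres[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
        # centre update: membership-weighted means
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centres
```

A hard classification (used when reporting "correctly classified" percentages) is recovered by taking each row's largest membership, while the memberships themselves show which samples sit between compositional end-members.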

Relevance: 100.00%

Abstract:

Coarse-resolution thematic maps derived from remotely sensed data and implemented in GIS play an important role in coastal and marine conservation, research and management. Here, we describe an approach for fine-resolution mapping of land-cover types using aerial photography and ancillary GIS and ground data in a large (100 x 35 km) subtropical estuarine system (Moreton Bay, Queensland, Australia). We have developed and implemented a classification scheme representing 24 coastal (subtidal, intertidal, mangrove, supratidal and terrestrial) cover types relevant to the ecology of estuarine animals, nekton and shorebirds. The accuracy of classifications of the intertidal and subtidal cover types, as indicated by the agreement between the mapped (predicted) and reference (ground) data, was 77-88%, depending on the zone and level of generalization required. The variability and spatial distribution of habitat mosaics (landscape types) across the mapped environment were assessed using K-means clustering and validated with Classification and Regression Tree models. Seven broad landscape types could be distinguished, and ways of incorporating the information on landscape composition into site-specific conservation and field research are discussed. This research illustrates the importance and potential applications of fine-resolution mapping for conservation and management of estuarine habitats and their terrestrial and aquatic wildlife. (c) 2005 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

In this paper we present an efficient k-means clustering algorithm for two-dimensional data. The proposed algorithm re-organizes the dataset into the form of a nested binary tree. Data items are compared at each node with only the two nearest means with respect to each dimension and assigned to the one with the closer mean. The main intuition of our research is as follows: we build the nested binary tree, then scan the data in raster order by in-order traversal of the tree, and lastly compare the data item at each node to only the two nearest means to assign it to the intended cluster. In this way we are able to reduce the computational cost significantly by reducing the number of comparisons with means and by minimal use of the Euclidean distance formula. Our results showed that our method can perform the clustering operation much faster than the classical ones. © Springer-Verlag Berlin Heidelberg 2005

Relevance: 100.00%

Abstract:

Understanding the impact of atmospheric black carbon (BC) containing particles on human health and radiative forcing requires knowledge of the mixing state of BC, including the characteristics of the materials with which it is internally mixed. In this study, we demonstrate for the first time the capabilities of the Aerodyne Soot-Particle Aerosol Mass Spectrometer equipped with a light scattering module (LS-SP-AMS) to examine the mixing state of refractory BC (rBC) and other aerosol components in an urban environment (downtown Toronto). K-means clustering analysis was used to classify single-particle mass spectra into chemically distinct groups. One resultant cluster is dominated by rBC mass spectral signals (C1+ to C5+), while the organic signals fall into a few major clusters, identified as hydrocarbon-like organic aerosol (HOA), oxygenated organic aerosol (OOA), and cooking-emission organic aerosol (COA). Nearly external mixing is observed, with small BC particles only thinly coated by HOA (~28% by mass on average), while over 90% of the HOA-rich particles did not contain detectable amounts of rBC. Most of the particles classified into other inorganic and organic clusters were not significantly associated with BC. The single-particle results also suggest that HOA and COA emitted from anthropogenic sources were likely major contributors to organic-rich particles with low to mid-range vacuum aerodynamic diameter (dva). The similar temporal profiles and mass spectral features of the organic clusters and the factors from a positive matrix factorization (PMF) analysis of the ensemble aerosol dataset validate the conventional interpretation of the PMF results.

Relevance: 100.00%

Abstract:

Although the debate over what data science is has a long history and has not yet reached complete consensus, data science can be summarized as the process of learning from data. Guided by this vision, this thesis presents two independent data science projects developed in the scope of multidisciplinary applied research. The first part analyzes fluorescence microscopy images typically produced in life science experiments, where the objective is to count how many marked neuronal cells are present in each image. Aiming to automate the task to support research in the area, we propose a neural network architecture tuned specifically for this use case, cell ResUnet (c-ResUnet), and discuss the impact of alternative training strategies in overcoming particular challenges of our data. The approach provides good results in terms of both detection and counting, showing performance comparable to the interpretation of human operators. As a meaningful addition, we release the pre-trained model and the Fluorescent Neuronal Cells dataset, which collects pixel-level annotations of where neuronal cells are located. In this way, we hope to help future research in the area and foster innovative methodologies for tackling similar problems. The second part deals with the problem of distributed data management in the context of LHC experiments, with a focus on supporting ATLAS operations concerning data transfer failures. In particular, we analyze error messages produced by failed transfers and propose a Machine Learning pipeline that leverages the word2vec language model and K-means clustering. This provides groups of similar errors that are presented to human operators as suggestions of potential issues to investigate. The approach is demonstrated on one full day of data, showing promising ability in understanding the message content and providing meaningful groupings, in line with incidents previously reported by human operators.
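The embed-then-cluster pipeline for error messages can be sketched as follows. To keep the example self-contained, the word2vec model is replaced by a toy hashed-token embedding (each token deterministically maps to a pseudo-random vector, and a message is the mean of its token vectors); this preserves token overlap, which is all the toy needs, but it is an assumption standing in for the thesis's trained word2vec model. Function names and the deterministic farthest-point seeding are likewise illustrative.

```python
import hashlib

import numpy as np

def embed(message, dim=32):
    """Toy stand-in for word2vec: hash each token to a fixed pseudo-random
    vector; the message embedding is the mean of its token vectors."""
    vecs = []
    for tok in message.lower().split():
        h = hashlib.md5(tok.encode()).digest()
        tok_rng = np.random.default_rng(int.from_bytes(h[:8], "little"))
        vecs.append(tok_rng.normal(size=dim))
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def group_errors(messages, k, iters=50):
    """Embed messages, then k-means with deterministic farthest-point seeding."""
    X = np.stack([embed(m) for m in messages])
    centres = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centres], axis=0)
        centres.append(X[int(d.argmax())])
    centres = np.stack(centres)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centres[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels
```

On real transfer-failure logs one would instead train word2vec on the tokenized messages and embed each message from the learned vectors; the clustering stage and the "one representative group per suggested issue" workflow stay the same.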

Relevance: 100.00%

Abstract:

The ATLAS experiment, like the other experiments operating at the Large Hadron Collider, produces petabytes of data every year, which must then be archived and processed. The experiments have also committed to making these data accessible worldwide. In response to these needs, the Worldwide LHC Computing Grid (WLCG) was designed, combining the computing power and storage capacity of more than 170 sites spread across the world. At most WLCG sites, storage management systems have been developed that also handle user requests and data transfers. These systems record their activity in log files, which are rich in information that helps operators pinpoint a problem when the system malfunctions. In view of the larger data flow expected in the coming years, work is under way to make these sites even more reliable, and one possible way to do so is to develop a system able to analyse log files autonomously and detect the anomalies that precede a malfunction. To build such a system, the most suitable method for log-file analysis must first be identified. This thesis studies an approach to the problem that uses artificial intelligence to analyse the log files; more specifically, it studies an approach based on the K-means clustering algorithm.