22 results for Constrained clustering


Relevance: 20.00%

Abstract:

Objectives: A recently introduced pragmatic scheme promises to be a useful catalog of interneuron names. We sought to automatically classify digitally reconstructed interneuronal morphologies according to this scheme. Simultaneously, we sought to discover possible subtypes of these types that might emerge during automatic classification (clustering). We also investigated which morphometric properties were most relevant for this classification.

Materials and methods: We used a set of 118 digitally reconstructed interneuronal morphologies classified into the common basket (CB), horse-tail (HT), large basket (LB), and Martinotti (MA) interneuron types by 42 of the world's leading neuroscientists, quantified by five simple morphometric properties of the axon and four of the dendrites. We labeled each neuron with the type most commonly assigned to it by the experts. We then removed this class information for each type separately and applied semi-supervised clustering to those cells (keeping the others' cluster membership fixed), to assess separation from other types and look for the formation of new groups (subtypes). We performed this same experiment unlabeling the cells of two types at a time, and of half the cells of a single type at a time. The clustering model is a finite mixture of Gaussians which we adapted for the estimation of local (per-cluster) feature relevance. We performed the described experiments on three different subsets of the data, formed according to how many experts agreed on type membership: at least 18 experts (the full data set), at least 21 (73 neurons), and at least 26 (47 neurons).

Results: Interneurons with more reliable type labels were classified more accurately. We classified HT cells with 100% accuracy, MA cells with 73% accuracy, and CB and LB cells with 56% and 58% accuracy, respectively. We identified three subtypes of the MA type, one subtype each of the CB and LB types, and no subtypes of HT (it was a single, homogeneous type). We obtained maximum (adapted) Silhouette width and ARI values of 1, 0.83, 0.79, and 0.42 when unlabeling the HT, CB, LB, and MA types, respectively, confirming the quality of the formed cluster solutions. The subtypes identified when unlabeling a single type also emerged when unlabeling two types at a time, confirming their validity. Axonal morphometric properties were more relevant than dendritic ones, with the axonal polar histogram length in the [π, 2π) angle interval being particularly useful.

Conclusions: The applied semi-supervised clustering method can accurately discriminate among the CB, HT, LB, and MA interneuron types while discovering potential subtypes, and is therefore useful for neuronal classification. The discovery of potential subtypes suggests that some of these types are more heterogeneous than previously thought. Finally, axonal variables seem to be more relevant than dendritic ones for distinguishing among the CB, HT, LB, and MA interneuron types.
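As a rough illustration of the approach described above, the sketch below runs a semi-supervised Gaussian-mixture EM in which neurons with a fixed label keep a clamped, one-hot responsibility while the unlabeled cells are free to regroup into possible subtypes. It is a plain diagonal-covariance mixture, not the authors' adapted model with per-cluster feature relevance, and the data and cluster counts are placeholders.

```python
# Minimal sketch of semi-supervised Gaussian mixture clustering: cells with a
# fixed label keep a hard, clamped responsibility during EM, while "unlabeled"
# cells (the type under study) are assigned softly and may split into subtypes.
# This is a plain diagonal-covariance EM, not the authors' adapted model with
# per-cluster feature relevance; all names and sizes are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

def semisupervised_gmm(X, fixed_labels, n_clusters, n_iter=100, seed=0):
    """fixed_labels[i] = cluster index for clamped cells, -1 for free cells."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialise responsibilities: clamped rows are one-hot, free rows random.
    R = rng.dirichlet(np.ones(n_clusters), size=n)
    for i, lab in enumerate(fixed_labels):
        if lab >= 0:
            R[i] = 0.0
            R[i, lab] = 1.0
    for _ in range(n_iter):
        # M-step: weighted means, diagonal covariances and mixing weights.
        Nk = R.sum(axis=0) + 1e-9
        mus = (R.T @ X) / Nk[:, None]
        covs = np.stack([
            np.diag((R[:, k, None] * (X - mus[k]) ** 2).sum(0) / Nk[k] + 1e-6)
            for k in range(n_clusters)
        ])
        pis = Nk / n
        # E-step: update responsibilities of the free cells only.
        logp = np.stack([
            np.log(pis[k]) + multivariate_normal(mus[k], covs[k]).logpdf(X)
            for k in range(n_clusters)
        ], axis=1)
        R_new = np.exp(logp - logp.max(1, keepdims=True))
        R_new /= R_new.sum(1, keepdims=True)
        free = fixed_labels < 0
        R[free] = R_new[free]
    return R, mus

# Toy usage: 118 neurons x 9 morphometric features; cells marked -1 are free
# to regroup while the rest keep their expert-assigned cluster (random data).
X = np.random.default_rng(1).normal(size=(118, 9))
fixed = np.random.default_rng(2).choice([0, 1, 2, -1], size=118)
R, centers = semisupervised_gmm(X, fixed, n_clusters=6)
print(R[fixed < 0].argmax(axis=1)[:10])
```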

Relevance: 20.00%

Abstract:

Recent advances in non-destructive imaging techniques, such as X-ray computed tomography (CT), make it possible to analyse pore space features from the direct visualisation of soil structures. A quantitative characterisation of the three-dimensional solid-pore architecture is important for understanding soil mechanics as it relates to the control of biological, chemical, and physical processes across scales. This analysis technique therefore offers an opportunity to better interpret soil strata, as new and relevant information can be obtained. In this work, we propose an approach to automatically identify the pore structure of a set of 200 2D images that represent slices of an original 3D CT image of a soil sample, which is accomplished through non-linear enhancement of the pixel grey levels and an image segmentation based on a PFCM (Possibilistic Fuzzy C-Means) algorithm. Once the solids and pore spaces have been identified, the set of 200 2D images is used to reconstruct an approximation of the soil sample by projecting only the pore spaces. This reconstruction shows the structure of the soil and its pores, which become more bounded, less bounded, or unbounded with changes in depth. If the soil sample image quality is sufficiently favourable in terms of contrast, noise and sharpness, the pore identification is less complicated and the PFCM clustering algorithm can be used without additional processing; otherwise, the images require pre-processing before using this algorithm. Promising results were obtained with four soil samples: the first was used to show the validity of the algorithm and the other three to demonstrate the robustness of our proposal. The methodology presented here can better detect the solid soil and pore spaces in CT images, enabling the generation of better 2D-3D representations of pore structures from segmented 2D images.
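The sketch below illustrates the slice-wise segmentation step with standard fuzzy c-means from scikit-fuzzy; the possibilistic PFCM variant used in the paper is not provided by that library, so this is only an approximation of the pipeline (non-linear grey-level enhancement followed by a two-class clustering of pixel intensities), and the file names and gamma value are illustrative.

```python
# Approximate sketch of pore/solid segmentation of one CT slice with standard
# fuzzy c-means (not the PFCM variant of the paper). Assumed inputs: 200 PNG
# slices named slice_000.png ... slice_199.png.
import numpy as np
import skfuzzy as fuzz
from skimage import io, exposure

def segment_slice(path, gamma=0.7):
    img = io.imread(path, as_gray=True).astype(float)
    img = exposure.adjust_gamma(img, gamma)            # non-linear enhancement
    data = img.reshape(1, -1)                          # FCM expects (features, N)
    cntr, u, *_ = fuzz.cluster.cmeans(data, c=2, m=2.0,
                                      error=1e-5, maxiter=200, seed=0)
    labels = np.argmax(u, axis=0).reshape(img.shape)
    pore_cluster = int(np.argmin(cntr[:, 0]))          # darker centre = pores
    return labels == pore_cluster                      # boolean pore mask

# Stack the segmented slices to approximate the 3D pore structure.
masks = [segment_slice(f"slice_{i:03d}.png") for i in range(200)]
pore_volume = np.stack(masks)                          # shape (200, H, W)
print("porosity:", pore_volume.mean())
```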

Relevance: 20.00%

Abstract:

Automatic 2D-to-3D conversion is an important application for filling the gap between the increasing number of 3D displays and the still scant 3D content. However, existing approaches have an excessive computational cost that complicates their practical application. In this paper, a fast automatic 2D-to-3D conversion technique is proposed, which uses a machine learning framework to infer the 3D structure of a query color image from a training database of color and depth images. Assuming that photometrically similar images have analogous 3D structures, a depth map is estimated by searching for the most similar color images in the database and fusing the corresponding depth maps. Large databases are desirable to achieve better results, but the computational cost also increases. A clustering-based hierarchical search using compact SURF descriptors to characterize the images is proposed to drastically reduce search times. A significant improvement in computational time has been obtained with respect to other state-of-the-art approaches, while maintaining the quality of the results.
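A minimal sketch of the clustering-based hierarchical search is given below: each training image is summarised by a compact global descriptor, training images are grouped with k-means, and a query is compared only against the images of its nearest cluster before the retrieved depth maps are fused. ORB is used here as a freely available stand-in for the compact SURF descriptors of the paper, and the database layout (parallel lists of colour and depth images) is assumed.

```python
# Sketch of a coarse-to-fine retrieval for depth estimation: cluster compact
# image descriptors with k-means, search only within the query's cluster, and
# fuse the depth maps of the nearest matches with a pixel-wise median.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def global_descriptor(img, n_kp=200):
    orb = cv2.ORB_create(nfeatures=n_kp)
    _, des = orb.detectAndCompute(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), None)
    if des is None:
        return np.zeros(32)
    return des.mean(axis=0)                            # one compact vector per image

def build_index(train_images, n_clusters=16):
    descs = np.array([global_descriptor(im) for im in train_images])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(descs)
    return km, descs

def estimate_depth(query, train_depths, km, descs, k=5):
    q = global_descriptor(query)
    cluster = km.predict(q[None])[0]                   # coarse step: pick a cluster
    idx = np.where(km.labels_ == cluster)[0]           # fine step: search one cluster
    order = idx[np.argsort(np.linalg.norm(descs[idx] - q, axis=1))[:k]]
    return np.median(np.stack([train_depths[i] for i in order]), axis=0)
```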

Relevance: 20.00%

Abstract:

Finding the degree-constrained minimum spanning tree (DCMST) of a graph is a widely studied NP-hard problem. One of its most important applications is network design. Here we deal with a new variant of the DCMST problem, which consists of finding not only the degree- but also the role-constrained minimum spanning tree (DRCMST), i.e., we add constraints that restrict the role of each node in the tree to root, intermediate or leaf node. Furthermore, we do not limit the number of root nodes to one, thereby generally building a forest of DRCMSTs. The modeling of network design problems can benefit from the possibility of generating more than one tree and of determining the role of the nodes in the network. We propose a novel permutation-based representation to encode these forests, in which one permutation simultaneously encodes all the trees to be built. We simulate a wide variety of DRCMST problems and optimize them using eight different evolutionary computation algorithms whose individuals are encoded with the proposed representation: an estimation of distribution algorithm, a generational genetic algorithm, a steady-state genetic algorithm, covariance matrix adaptation evolution strategy, differential evolution, an elitist evolution strategy, a non-elitist evolution strategy and particle swarm optimization. The best results are obtained by the estimation of distribution algorithm and both genetic algorithms, although the genetic algorithms are significantly faster.
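The following sketch shows one plausible way to decode a node permutation into a degree- and role-constrained spanning forest, only to illustrate the idea of a permutation-based representation; it does not reproduce the paper's exact encoding, and the roles, degree limits and cost matrix are toy values.

```python
# One possible decoder from a node permutation to a degree- and role-
# constrained spanning forest. Roles: 'root' and 'intermediate' nodes may
# accept children (up to their degree limit), 'leaf' nodes may not. An
# evolutionary algorithm would search over permutations and keep the one
# whose decoded forest has the lowest total cost.
import numpy as np

def decode_forest(perm, roles, max_degree, W):
    """perm: node order; roles: dict node -> role; W: symmetric cost matrix."""
    placed, degree, edges, cost = [], {v: 0 for v in perm}, [], 0.0
    for v in perm:
        if roles[v] == 'root':
            placed.append(v)                 # roots start new trees
            continue
        # Attach v to the cheapest placed node that can still take a child.
        parents = [u for u in placed
                   if roles[u] != 'leaf' and degree[u] < max_degree[u]]
        if not parents:
            raise ValueError("permutation cannot be decoded under constraints")
        u = min(parents, key=lambda p: W[p, v])
        edges.append((u, v))
        cost += W[u, v]
        degree[u] += 1
        degree[v] += 1
        placed.append(v)
    return edges, cost

# Toy instance: 6 nodes, 2 roots, symmetric random costs.
rng = np.random.default_rng(0)
W = rng.uniform(1, 10, size=(6, 6)); W = (W + W.T) / 2
roles = {0: 'root', 1: 'root', 2: 'intermediate', 3: 'intermediate',
         4: 'leaf', 5: 'leaf'}
max_degree = {v: 3 for v in range(6)}
print(decode_forest([0, 1, 2, 3, 4, 5], roles, max_degree, W))
```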

Relevance: 20.00%

Abstract:

This paper addresses an uplink power control dynamic game in which we assume that each user's battery represents the system state, which changes with time following a discrete-time version of a differential game. To overcome the complexity of analysing the dynamic game directly, we focus on the concept of Dynamic Potential Games and show that the game can be solved as an equivalent Multivariate Optimum Control Problem. The solution of this problem is quite interesting because the different users split their activity in time, avoiding higher interference and providing long-term fairness.
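The toy sketch below illustrates the reduction described above: instead of solving the dynamic game directly, a single multivariate optimisation over all users' power schedules is solved, with each user's battery modelled as a simple per-user energy budget. The objective is an illustrative sum-rate utility, not the paper's potential function, and every parameter value is made up.

```python
# Toy illustration of solving the power-control problem as one multivariate
# optimisation over all users' schedules, with batteries as energy budgets.
import numpy as np
from scipy.optimize import minimize

N, T, noise = 3, 8, 0.1                 # users, time slots, noise power (assumed)
battery = np.array([2.0, 1.5, 1.0])     # per-user energy budget (assumed)

def objective(x):
    p = x.reshape(N, T)
    sinr = p / (noise + p.sum(axis=0) - p)        # interference from the other users
    return -np.sum(np.log1p(sinr))                # maximise total utility

constraints = [
    {"type": "ineq", "fun": (lambda x, i=i: battery[i] - x.reshape(N, T)[i].sum())}
    for i in range(N)
]
res = minimize(objective, x0=np.full(N * T, 0.05),
               bounds=[(0.0, None)] * (N * T), constraints=constraints,
               method="SLSQP")
schedule = res.x.reshape(N, T)
# Inspect whether users split their activity across time slots, as in the abstract.
print(np.round(schedule, 3))
```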

Relevance: 20.00%

Abstract:

Among the design goals for future networks, including next-generation networks, are the control of energy consumption and of network connectivity. These goals are especially relevant for constrained networks, such as Wireless Sensor Networks (WSN). These networks consist of devices with low or very low processing capabilities that depend on batteries for their operation, so optimising the energy consumed becomes very important. Several proposals have been made for optimising energy consumption in this kind of network. Perhaps the best known are those based on the coordinated planning of active and sleep intervals, which is indeed one of the most effective ways to extend battery lifetime. The proposal presented in this work instead controls connectivity through a probabilistic approach. The underlying idea is that a network can be expected to remain connected if every node has at least a given number of neighbours. By using a mechanism that maintains that number, we expect to preserve connectivity with a lower energy consumption than would be required if a fixed transmission power guaranteeing similar connectivity were used. To be efficient, the mechanism must have the smallest possible footprint on the devices on which it runs, so a self-adaptive system based on fuzzy-logic control is proposed. This work includes the design and implementation of the described system, which has been validated in a real deployment, confirming that configurations exist that preserve connectivity while saving energy compared with the use of a fixed transmission power for a similar connectivity.
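As a rough sketch of the fuzzy, self-adaptive power control idea, the fragment below maps the difference between the observed number of neighbours and a target onto a transmission-power adjustment using scikit-fuzzy's control API; the membership functions, universes and rule base are illustrative choices, not those of the deployed system.

```python
# Illustrative fuzzy controller: input is (observed neighbours - target),
# output is a step applied to the radio transmission power.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

error = ctrl.Antecedent(np.arange(-5, 6, 1), 'neighbour_error')
adjust = ctrl.Consequent(np.arange(-3, 4, 1), 'power_step')       # dBm steps

error['too_few'] = fuzz.trimf(error.universe, [-5, -5, 0])
error['ok'] = fuzz.trimf(error.universe, [-2, 0, 2])
error['too_many'] = fuzz.trimf(error.universe, [0, 5, 5])
adjust['decrease'] = fuzz.trimf(adjust.universe, [-3, -3, 0])
adjust['hold'] = fuzz.trimf(adjust.universe, [-1, 0, 1])
adjust['increase'] = fuzz.trimf(adjust.universe, [0, 3, 3])

rules = [
    ctrl.Rule(error['too_few'], adjust['increase']),   # isolated node: raise power
    ctrl.Rule(error['ok'], adjust['hold']),
    ctrl.Rule(error['too_many'], adjust['decrease']),  # dense area: save energy
]
controller = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))

def next_tx_power(current_dbm, neighbours, target=4):
    controller.input['neighbour_error'] = neighbours - target
    controller.compute()
    return current_dbm + controller.output['power_step']

print(next_tx_power(0, neighbours=2))   # too few neighbours -> power goes up
```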

Relevance: 20.00%

Abstract:

On-line partial discharge (PD) measurements have become a common technique for assessing the insulation condition of installed high voltage (HV) insulated cables. When on-line tests are performed in noisy environments, or when more than one source of pulse-shaped signals is present in a cable system, it is difficult to perform accurate diagnoses. In these cases, an adequate selection of the non-conventional measuring technique and the implementation of effective signal processing tools are essential for a correct evaluation of the insulation degradation. Once a specific noise rejection filter is applied, many signals can be identified as potential PD pulses; therefore, a classification tool is required to discriminate among the PD sources involved. This paper proposes an efficient method for the classification of PD signals and pulse-type noise interferences measured in power cables with HFCT sensors. By using a signal feature generation algorithm, representative parameters associated with the waveform of each acquired pulse are calculated so that the pulses can be separated into different clusters. The efficiency of the proposed clustering technique is demonstrated through an example with three different PD sources and several pulse-shaped interferences measured simultaneously in a cable system with a high frequency current transformer (HFCT).
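The sketch below illustrates the separation step in a generic way: each acquired pulse is reduced to a few waveform descriptors and the resulting feature vectors are clustered so that pulses from different PD sources (or pulse-type interference) fall into separate groups. The descriptors and the DBSCAN parameters are generic choices, not the paper's feature-generation algorithm.

```python
# Generic pulse-feature extraction and clustering for PD source separation.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

def pulse_features(pulse, fs):
    """pulse: 1-D sampled waveform from the HFCT sensor; fs: sampling rate."""
    x = np.abs(pulse)
    peak = x.max()
    width = np.count_nonzero(x > 0.1 * peak) / fs       # time above 10% of peak
    rise = np.argmax(x) / fs                            # time to the peak
    spectrum = np.abs(np.fft.rfft(pulse))
    f_dom = np.fft.rfftfreq(len(pulse), 1 / fs)[np.argmax(spectrum)]
    energy = np.sum(pulse ** 2)
    return [peak, width, rise, f_dom, energy]

def cluster_pulses(pulses, fs):
    F = np.array([pulse_features(p, fs) for p in pulses])
    F = StandardScaler().fit_transform(F)               # comparable scales
    return DBSCAN(eps=0.8, min_samples=5).fit_predict(F)

# Toy usage: a list of acquired pulses sampled at 100 MS/s (random data here).
fs = 100e6
pulses = [np.random.default_rng(i).normal(size=512) for i in range(50)]
labels = cluster_pulses(pulses, fs)
print(np.unique(labels, return_counts=True))
```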