756 results for Grid-based clustering approach
Abstract:
Many previous studies have shown that unforced climate model simulations exhibit decadal-scale fluctuations in the Atlantic meridional overturning circulation (AMOC), and that this variability can have impacts on surface climate fields. However, the robustness of these surface fingerprints across different models is less clear. Furthermore, with the potential for coupled feedbacks that may amplify or damp the response, it is not known whether the associated climate signals are linearly related to the strength of the AMOC changes, or if the fluctuation events exhibit nonlinear behaviour with respect to their strength or polarity. To explore these questions, we introduce an objective and flexible method for identifying the largest natural AMOC fluctuation events in multicentennial/multimillennial simulations of a variety of coupled climate models. The characteristics of the events are explored, including their magnitude, meridional coherence and spatial structure, as well as links with ocean heat transport and the horizontal circulation. The surface fingerprints in ocean temperature and salinity are examined, and compared with the results of linear regression analysis. It is found that the regressions generally provide a good indication of the surface changes associated with the largest AMOC events. However, there are some exceptions, including a nonlinear change in the atmospheric pressure signal, particularly at high latitudes, in HadCM3. Some asymmetries are also found between the changes associated with positive and negative AMOC events in the same model. Composite analysis suggests that there are signals that are robust across the largest AMOC events in each model, which provides reassurance that the surface changes associated with one particular event will be similar to those expected from regression analysis. However, large differences are found between the AMOC fingerprints in different models, which may hinder the prediction and attribution of such events in reality.
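The event-identification method itself is not spelled out in the abstract; the sketch below illustrates one plausible reading of the idea, assuming a hypothetical `largest_amoc_events` helper applied to a synthetic annual-mean AMOC series: low-pass filter the series and rank the largest well-separated excursions from the long-term mean.

```python
import numpy as np

def largest_amoc_events(amoc, window=11, n_events=5):
    """Rank the largest low-frequency excursions in an annual-mean AMOC
    strength series (Sv). A crude stand-in for an objective
    event-identification method, not the paper's actual algorithm."""
    # Decadal-scale smoothing via a running mean of the anomaly.
    kernel = np.ones(window) / window
    smooth = np.convolve(amoc - amoc.mean(), kernel, mode="same")
    # Largest-magnitude smoothed anomalies, kept at least `window` apart.
    idx = np.argsort(-np.abs(smooth))
    events, taken = [], np.zeros(len(amoc), bool)
    for i in idx:
        if not taken[max(0, i - window):i + window].any():
            events.append((i, smooth[i]))   # (year index, anomaly in Sv)
            taken[i] = True
        if len(events) == n_events:
            break
    return events

# Usage on synthetic data: a slow oscillation plus white noise.
rng = np.random.default_rng(0)
years = np.arange(1000)
series = 17 + np.sin(2 * np.pi * years / 60) + rng.normal(0, 0.5, 1000)
print(largest_amoc_events(series))
```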
Abstract:
Smart grid research has tended to be compartmentalised, with notable contributions from economics, electrical engineering and science and technology studies. However, there is an acknowledged and growing need for an integrated systems approach to the evaluation of smart grid initiatives. The capacity to simulate and explore smart grid possibilities on various scales is key to such an integrated approach but existing models – even if multidisciplinary – tend to have a limited focus. This paper describes an innovative and flexible framework that has been developed to facilitate the simulation of various smart grid scenarios and the interconnected social, technical and economic networks from a complex systems perspective. The architecture is described and related to realised examples of its use, both to model the electricity system as it is today and to model futures that have been envisioned in the literature. Potential future applications of the framework are explored, along with its utility as an analytic and decision support tool for smart grid stakeholders.
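The framework is not reproduced here; as a minimal sketch of the complex-systems ingredient it builds on, the toy model below steps heterogeneous actors (consumers and a generator; all classes and figures hypothetical) on a shared grid in discrete time.

```python
import random

class Actor:
    """Hypothetical grid actor: positive power = demand, negative =
    generation (kW)."""
    def __init__(self, base):
        self.base = base

    def power(self, hour):
        peak = 1.5 if 17 <= hour < 21 else 1.0   # crude evening peak
        return self.base * peak + random.gauss(0, 0.05)

class Generator(Actor):
    def power(self, hour):
        return self.base                          # constant dispatch

def step(actors, hour):
    """One simulation tick: net load the grid must balance."""
    return sum(a.power(hour) for a in actors)

actors = [Actor(0.8) for _ in range(100)] + [Generator(-30.0)]
for hour in range(24):
    print(hour, round(step(actors, hour), 1))
```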
Abstract:
Subspace clustering groups a set of samples from a union of several linear subspaces into clusters, so that the samples in the same cluster are drawn from the same linear subspace. In the majority of existing work on subspace clustering, clusters are built from feature information alone, while sample correlations in the original spatial structure are simply ignored. Moreover, the original high-dimensional feature vectors contain noisy/redundant information, and the time complexity grows exponentially with the number of dimensions. To address these issues, we propose a tensor low-rank representation (TLRR) and sparse coding-based subspace clustering method (TLRRSC) that simultaneously considers feature information and spatial structure. TLRR seeks the lowest-rank representation over the original spatial structure along all spatial directions. Sparse coding learns a dictionary along the feature space, so that each sample can be represented by a few atoms of the learned dictionary. The affinity matrix used for spectral clustering is built from the joint similarities in both the spatial and feature spaces. TLRRSC captures the global structure and inherent feature information of the data well, and provides robust subspace segmentation from corrupted data. Experimental results on both synthetic and real-world data sets show that TLRRSC outperforms several established state-of-the-art methods.
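The TLRR and sparse-coding optimizations are beyond an abstract; the sketch below illustrates only the final fusion stage under stated assumptions: given a placeholder spatial representation matrix and placeholder sparse codes, combine the two similarities elementwise and feed the joint affinity to spectral clustering (using scikit-learn; `fused_affinity` is a hypothetical helper).

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

def fused_affinity(Z_spatial, C_feature):
    """Joint affinity from a spatial representation Z (n x n, e.g. from a
    low-rank representation) and per-sample sparse codes C (n x k).
    A simplified stand-in for TLRRSC's similarity fusion."""
    S_spatial = np.abs(Z_spatial) + np.abs(Z_spatial).T   # symmetrise
    S_feature = np.clip(cosine_similarity(C_feature), 0, 1)
    return S_spatial / S_spatial.max() * S_feature        # elementwise fusion

rng = np.random.default_rng(1)
n = 60
Z = rng.random((n, n))       # placeholder for the learned representation
C = rng.random((n, 12))      # placeholder for learned sparse codes
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(fused_affinity(Z, C))
print(labels[:10])
```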
Abstract:
Tensor clustering is an important tool that exploits the intrinsically rich structure of real-world multiarray, or tensor, datasets. In dealing with such datasets, standard practice is to use subspace clustering based on vectorizing the multiarray data. However, vectorization of tensorial data does not exploit the complete structural information. In this paper, we propose a subspace clustering algorithm that avoids any vectorization step. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates; updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that the proposed algorithm competes effectively with state-of-the-art clustering algorithms based on tensor factorization.
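A hedged simplification of the idea: the sketch below substitutes plain higher-order orthogonal iteration (HOOI) plus k-means on the sample-mode factor for the paper's joint model with its Riemannian trust-region membership update; it conveys the no-vectorization point without the manifold machinery (all function names are hypothetical).

```python
import numpy as np
from sklearn.cluster import KMeans

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Mode-n product T x_n M, with M of shape (J, I_n)."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=(1, 0)),
                       0, mode)

def hooi(T, ranks, n_iter=10):
    """Plain HOOI: alternately refit each mode's factor with the others
    fixed (closed form via SVD of the projected unfolding)."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    for _ in range(n_iter):
        for m in range(T.ndim):
            Y = T
            for n in range(T.ndim):
                if n != m:
                    Y = mode_product(Y, U[n].T, n)
            U[m] = np.linalg.svd(unfold(Y, m),
                                 full_matrices=False)[0][:, :ranks[m]]
    return U

# 30 samples live on the last mode; cluster the rows of its factor.
T = np.random.default_rng(2).random((8, 9, 30))
U = hooi(T, ranks=(4, 4, 5))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(U[-1])
print(labels)
```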
Abstract:
Acestrorhynchus is the sole genus of the family Acestrorhynchidae, which includes 14 species currently recognized as valid. Species of Acestrorhynchus comprise small- to medium-sized piscivorous fishes and have traditionally been grouped on the basis of well-defined color patterns. A recent phylogeny, based on morphological characters, could not resolve the phylogenetic affinities of A. heterolepis or the relationships among the species of the clade formed by A. abbreviatus, A. altus, A. falcatus, A. lacustris, and A. pantaneiro. The simultaneous analysis of two mitochondrial genes (16S and ATP synthase subunits 6 and 8) and one nuclear intron (S7) was able to resolve the latter clade, but the position of A. heterolepis remained unresolved. The combination of the molecular and morphological data sets in a total evidence analysis resulted in a well-resolved hypothesis regarding the phylogenetic relationships of Acestrorhynchus species.
Abstract:
Clustering quality or validation indices allow the quality of a clustering to be evaluated in order to support the selection of a specific partition or clustering structure in its natural unsupervised setting, where the true solution is unknown or unavailable. In this paper, we investigate the use of quality indices, mostly based on the concepts of cluster compactness and separation, for the evaluation of clustering results (partitions in particular). This work offers a general perspective on the appropriate use of quality indices for clustering evaluation. After presenting some commonly used indices, as well as indices recently proposed in the literature, key issues regarding the practical use of quality indices are addressed. A general methodological approach is presented that considers the identification of appropriate index thresholds. This general approach is compared with the simple use of quality indices for evaluating a clustering solution.
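As an illustration of the compactness/separation family the paper surveys, the sketch below computes a toy index (mean within-cluster spread over minimum centroid separation; not any one published index) alongside two standard indices shipped with scikit-learn, scanning candidate partition sizes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score, silhouette_score

def compactness_separation(X, labels):
    """Toy validity index: mean within-cluster spread divided by the
    minimum distance between cluster centroids (lower = better)."""
    ks = np.unique(labels)
    centroids = np.array([X[labels == k].mean(axis=0) for k in ks])
    compact = np.mean([np.linalg.norm(X[labels == k] - c, axis=1).mean()
                       for k, c in zip(ks, centroids)])
    d = np.linalg.norm(centroids[:, None] - centroids[None], axis=-1)
    return compact / d[d > 0].min()

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
for k in (2, 3, 4, 5, 6):                 # scan candidate partitions
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(compactness_separation(X, labels), 3),
          round(davies_bouldin_score(X, labels), 3),
          round(silhouette_score(X, labels), 3))
```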
Abstract:
Case-Based Reasoning is a methodology for problem solving based on past experiences. It tries to solve a new problem by retrieving and adapting previously known solutions of similar problems. However, retrieved solutions generally require adaptation before they can be applied in a new context. One of the major challenges in Case-Based Reasoning is the development of an efficient methodology for case adaptation. The most widely used form of adaptation employs hand-coded adaptation rules, which demand a significant knowledge acquisition and engineering effort. An alternative that overcomes the difficulties associated with acquiring adaptation knowledge is the use of hybrid approaches and automatic learning algorithms. We investigate hybrid approaches for case adaptation that employ Machine Learning algorithms: the investigated approaches automatically learn adaptation knowledge from a case base and apply it to adapt retrieved solutions. To verify their potential, the proposed approaches are experimentally compared with individual Machine Learning techniques. The results indicate that these hybrids acquire case adaptation knowledge efficiently, and show that combining the Instance-Based Learning and Inductive Learning paradigms with a data set of adaptation patterns yields adaptations of the retrieved solutions with high predictive accuracy.
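A minimal sketch of the hybrid idea, under stated assumptions: retrieval via an instance-based learner (nearest neighbour), and adaptation knowledge induced from a data set of adaptation patterns mined from pairs of similar cases (the `solve` helper and the synthetic case base are hypothetical).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.random((200, 4))                    # case problem descriptions
w = np.array([2.0, -1.0, 0.5, 0.0])
y = X @ w                                   # case solutions

# Adaptation patterns: (problem difference) -> (solution difference),
# mined from each case and its nearest neighbour.
nn = NearestNeighbors(n_neighbors=2).fit(X)
_, idx = nn.kneighbors(X)
adapter = DecisionTreeRegressor(max_depth=5).fit(X - X[idx[:, 1]],
                                                 y - y[idx[:, 1]])

def solve(query):
    """Retrieve the most similar case, then adapt its solution with the
    learned adaptation knowledge (sketch of the IBL + induction hybrid)."""
    _, i = nn.kneighbors(query[None], n_neighbors=1)
    base = int(i[0, 0])
    return y[base] + adapter.predict((query - X[base])[None])[0]

q = rng.random(4)
print(solve(q), q @ w)                      # adapted vs. true solution
```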
Abstract:
Aspect-oriented programming (AOP) is a promising technology that supports the separation of crosscutting concerns (i.e., functionality that tends to be tangled with, and scattered through, the rest of the system). In AOP, a method-like construct named advice is applied to join points in the system through a special construct named pointcut. This mechanism supports the modularization of crosscutting behavior; however, since the added interactions are not explicit in the source code, it is hard to ensure their correctness. To tackle this problem, this paper presents a rigorous coverage analysis approach to ensure that the logic of each advice (statements, branches, and def-use pairs) is exercised at each affected join point. To make this analysis possible, a structural model based on Java bytecode, called the PointCut-based Def-Use Graph (PCDU), is proposed, along with three integration testing criteria. Theoretical, empirical, and exploratory studies involving 12 aspect-oriented programs and several fault examples present evidence of the feasibility and effectiveness of the proposed approach.
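The PCDU model and the AspectJ-level tooling are not reproduced here; the language-agnostic sketch below illustrates just the def-use pair criterion: given an execution trace of definitions and uses recorded while one advice runs at one join point, compute which pairs were exercised (a simplification; the trace format is hypothetical).

```python
def def_use_pairs(trace):
    """Covered def-use pairs from a linear trace of ('def'|'use', var, line)
    events recorded during one advice execution at one join point. A pair
    is covered when a use reads the value of the reaching definition with
    no intervening redefinition (a simplification of the PCDU criteria)."""
    live_def = {}           # var -> line of the reaching definition
    covered = set()
    for kind, var, line in trace:
        if kind == "def":
            live_def[var] = line
        elif kind == "use" and var in live_def:
            covered.add((var, live_def[var], line))
    return covered

# Hypothetical trace of an advice body: x defined at 10, used at 12,
# redefined at 14, used again at 15.
trace = [("def", "x", 10), ("use", "x", 12),
         ("def", "x", 14), ("use", "x", 15)]
print(sorted(def_use_pairs(trace)))
# [('x', 10, 12), ('x', 14, 15)] -> both def-use pairs exercised
```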
Abstract:
Component-based software engineering has recently emerged as a promising solution to the development of system-level software. Unfortunately, current approaches are limited to specific platforms and domains. This lack of generality is particularly problematic, as it prevents knowledge sharing and generally drives development costs up. In the past, we have developed a generic approach to component-based software engineering for system-level software called OpenCom. In this paper, we present OpenComL, an instantiation of OpenCom for Linux environments, and show how it can be profiled to meet the needs of a range of system-level software in Linux environments. To demonstrate this, we apply it to constructing a programmable router platform and a middleware for parallel environments.
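OpenCom and OpenComL themselves are not shown in the abstract; as a toy sketch of the component-model concepts they rest on (provided interfaces, required receptacles, third-party binding), with all names hypothetical:

```python
class Component:
    """Minimal component with provided interfaces and receptacles
    (required interfaces), in the spirit of OpenCom-style models."""
    def __init__(self, name):
        self.name = name
        self.provides = {}      # interface name -> implementation
        self.receptacles = {}   # interface name -> bound implementation

def connect(client, interface, server):
    """Third-party binding: the framework, not the components, wires them."""
    assert interface in server.provides, "server must provide the interface"
    client.receptacles[interface] = server.provides[interface]

router = Component("forwarder")
router.provides["IForward"] = lambda pkt: f"forwarded {pkt}"

scheduler = Component("scheduler")
connect(scheduler, "IForward", router)
print(scheduler.receptacles["IForward"]("pkt-42"))
```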
Abstract:
Over the useful life of a LAN, network downtimes have a negative impact on organizational productivity that is not accounted for in current Network Topological Design (NTD) formulations. We propose a new approach to LAN topological design that incorporates the impact of these productivity losses into the network design, minimizing not only the CAPEX but also the expected cost of unproductiveness attributable to network downtimes over a given period of network operation.
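A minimal sketch of the proposed objective, with illustrative numbers: total cost is CAPEX plus the expected unproductiveness cost, i.e. unavailability times affected users times cost per user-hour times the planning horizon, and the design with the lowest total wins (the candidate topologies and figures are hypothetical).

```python
def expected_downtime_cost(availability, users, hourly_loss, hours):
    """Expected unproductiveness cost over the planning horizon:
    unavailability x affected users x cost per user-hour x hours."""
    return (1 - availability) * users * hourly_loss * hours

# Hypothetical candidates: (name, CAPEX in $, steady-state availability).
candidates = [("star",      40_000, 0.995),
              ("ring",      55_000, 0.998),
              ("dual-star", 75_000, 0.9995)]

USERS, LOSS_PER_USER_HOUR = 200, 30          # $/user-hour, illustrative
HORIZON_HOURS = 5 * 2000                     # roughly 5 working years

def total_cost(capex, availability):
    return capex + expected_downtime_cost(availability, USERS,
                                          LOSS_PER_USER_HOUR, HORIZON_HOURS)

for name, capex, avail in candidates:
    print(name, round(total_cost(capex, avail)))
print("chosen:", min(candidates, key=lambda c: total_cost(c[1], c[2]))[0])
```

Note how the cheapest topology by CAPEX alone (star) loses once expected downtime costs are included, which is exactly the trade-off the proposed formulation captures.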
Abstract:
This paper presents an automatic method to detect and classify weathered aggregates by assessing changes in color and texture. The method allows the extraction of aggregate features from images and their automatic classification based on surface characteristics. The concept of entropy is used to extract features from digital images. An analysis of the use of this concept is presented, and two classification approaches based on neural network architectures are proposed. The classification performance of the proposed approaches is compared with the results obtained by other algorithms commonly used for classification. The results confirm that the presented method strongly supports the detection of weathered aggregates.
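As an illustration of the entropy feature such a method builds on, the sketch below computes the Shannon entropy of a grayscale patch's intensity histogram; a high-entropy, richly textured surface is the kind of cue the neural classifiers would pick up (synthetic patches, hypothetical function name).

```python
import numpy as np

def shannon_entropy(patch, bins=256):
    """Shannon entropy (bits) of a grayscale patch's intensity histogram,
    a texture feature of the kind extracted from aggregate images."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(4)
smooth = np.full((32, 32), 128) + rng.integers(-2, 3, (32, 32))  # low texture
weathered = rng.integers(0, 256, (32, 32))                       # high texture
print(shannon_entropy(smooth), shannon_entropy(weathered))
# A higher-entropy surface suggests richer (e.g. weathered) texture.
```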
Abstract:
This paper introduces a novel methodology for shape boundary characterization in which a shape is modeled as a small-world complex network. It uses degree and joint-degree measurements in a dynamically evolving network to compose a set of shape descriptors. The proposed method is an efficient shape characterization tool: it is robust, noise tolerant, scale invariant and rotation invariant. A plant leaf classification experiment is presented on three image databases in order to evaluate the method and compare it with other descriptors from the literature (Fourier descriptors, curvature, Zernike moments and the multiscale fractal dimension).
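A sketch of the degree-based descriptor family this method belongs to, under stated assumptions: contour points become nodes, edges connect pairs closer than a growing distance threshold, and the mean and maximum degree at each evolution stage form the feature vector (the exact thresholds and normalizations here are illustrative).

```python
import numpy as np

def shape_network_descriptor(contour, thresholds=(0.05, 0.1, 0.15, 0.2)):
    """Degree-based descriptor of a shape boundary: contour points are
    nodes, edges link pairs closer than a growing distance threshold, and
    the normalized mean/max degree at each stage describes the shape."""
    d = np.linalg.norm(contour[:, None] - contour[None], axis=-1)
    d /= d.max()                                  # scale invariance
    feats = []
    for t in thresholds:                          # network evolution
        degree = (d <= t).sum(axis=1) - 1         # drop self-loops
        n = len(contour) - 1
        feats += [degree.mean() / n, degree.max() / n]
    return np.array(feats)

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
print(shape_network_descriptor(circle).round(3))
```

Because the descriptor is built from pairwise distances normalized by the shape's diameter, rotation and scale invariance fall out for free, matching the properties the abstract claims.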
Abstract:
An entropy-based image segmentation approach is introduced and applied to color images obtained from Google Earth. Segmentation refers to the process of partitioning a digital image in order to locate different objects and regions of interest. The application to satellite images paves the way for automated monitoring of ecological catastrophes, urban growth, agricultural activity, maritime pollution, climate change and general surveillance. Regions representing aquatic, rural and urban areas are identified, and the accuracy of the proposed segmentation methodology is evaluated. A comparison with gray-level images revealed that color information is fundamental for obtaining an accurate segmentation.
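A hedged sketch of one entropy-based segmentation scheme (not necessarily this paper's exact pipeline): compute the local histogram entropy in a sliding window and threshold it, so that visually busy regions (urban) separate from uniform ones (water).

```python
import numpy as np

def local_entropy(channel, win=9, bins=32):
    """Shannon entropy of the intensity histogram in each win x win block
    (stride = win, for brevity)."""
    h, w = channel.shape
    out = np.zeros((h // win, w // win))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = channel[i*win:(i+1)*win, j*win:(j+1)*win]
            counts, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = counts[counts > 0] / patch.size
            out[i, j] = -(p * np.log2(p)).sum()
    return out

rng = np.random.default_rng(5)
img = np.concatenate([rng.integers(100, 110, (90, 90)),   # calm "water"
                      rng.integers(0, 256, (90, 90))], 1) # busy "urban"
ent = local_entropy(img.astype(float))
print((ent > ent.mean()).astype(int))  # crude two-class segmentation map
```

Running the same idea per color channel and combining the maps is one simple way the color information could be exploited.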
Abstract:
We take a broad view that ultimately Grid- or Web-services must be located via personalised, semantic-rich discovery processes. We argue that such processes must rely on the storage of arbitrary metadata about services that originates from both service providers and service users. Examples of such metadata are reliability metrics, quality of service data, or semantic service description markup. This paper presents UDDI-MT, an extension to the standard UDDI service directory approach that supports the storage of such metadata via a tunnelling technique that ties the metadata store to the original UDDI directory. We also discuss the use of a rich, graph-based RDF query language for syntactic queries on this data. Finally, we analyse the performance of each of these contributions in our implementation.
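UDDI-MT itself is not available here; the sketch below conveys the underlying idea using rdflib, which is an assumption of this example rather than the paper's implementation: attach arbitrary metadata triples to a service key and run a graph-based query over them (the namespace and service URI are hypothetical).

```python
from rdflib import Graph, Literal, Namespace, URIRef

# Hypothetical namespace for service metadata; the real UDDI-MT store
# ties such triples back to entries in a standard UDDI directory.
MT = Namespace("http://example.org/uddi-mt#")
svc = URIRef("uddi:example:service-42")

g = Graph()
g.add((svc, MT.reliability, Literal(0.97)))   # provider-supplied metric
g.add((svc, MT.userRating, Literal(4)))       # user-supplied metric

# Graph-based query: find services meeting a reliability threshold.
q = """
PREFIX mt: <http://example.org/uddi-mt#>
SELECT ?s ?r WHERE { ?s mt:reliability ?r . FILTER(?r > 0.9) }
"""
for row in g.query(q):
    print(row.s, row.r)
```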