821 results for Grid-based clustering approach


Relevance: 100.00%

Abstract:

The behaviour of laterally loaded piles is considerably influenced by the uncertainties in soil properties. Hence, probabilistic models for the assessment of allowable lateral load are necessary. Cone penetration test (CPT) data are often used to determine soil strength parameters, whereby the allowable lateral load of the pile is computed. In the present study, the maximum lateral displacement and moment of the pile are obtained based on the coefficient of subgrade reaction approach, considering the nonlinear soil behaviour in undrained clay. The coefficient of subgrade reaction is related to the undrained shear strength of soil, which can be obtained from CPT data. The soil medium is modelled as a one-dimensional random field along the depth, described by the standard deviation and scale of fluctuation of the undrained shear strength of the soil. Inherent soil variability, measurement uncertainty and transformation uncertainty are taken into consideration. The statistics of maximum lateral deflection and moment are obtained using the first-order second-moment (FOSM) technique. Hasofer-Lind reliability indices for component and system failure criteria, based on the allowable lateral displacement and moment capacity of the pile section, are evaluated. The geotechnical database from the Konaseema site in India is used as a case example. It is shown that the reliability-based design approach for pile foundations, considering the spatial variability of soil, permits a rational choice of allowable lateral loads.
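The statistical step can be illustrated with a minimal mean-value FOSM sketch in Python. The response function, soil-strength statistics and allowable displacement below are hypothetical stand-ins, not the paper's model; a full Hasofer-Lind index would additionally iterate to the design point.

```python
import numpy as np

def fosm_reliability(response, mean_su, std_su, allowable):
    """Mean-value FOSM estimate of a reliability index.

    response: deterministic model mapping undrained shear strength
    (kPa) to pile-head deflection (mm); a stand-in for the subgrade
    reaction analysis described in the abstract.
    """
    mu = response(mean_su)
    h = 1e-3 * mean_su
    grad = (response(mean_su + h) - response(mean_su - h)) / (2 * h)
    sigma = abs(grad) * std_su        # first-order variance propagation
    return (allowable - mu) / sigma   # beta for the displacement criterion

# Hypothetical response: deflection decreasing with soil strength.
beta = fosm_reliability(lambda su: 2.5e3 / su, mean_su=50.0, std_su=10.0,
                        allowable=75.0)
print(f"beta = {beta:.2f}")
```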

Relevance: 100.00%

Abstract:

Determination of the environmental factors controlling earth surface processes and landform patterns is one of the central themes in physical geography. However, the identification of the main drivers of geomorphological phenomena is often challenging. Novel spatial analysis and modelling methods could provide new insights into process-environment relationships. The objective of this research was to map and quantitatively analyse the occurrence of cryogenic phenomena in subarctic Finland. More precisely, utilising a grid-based approach, the distribution and abundance of periglacial landforms were modelled to identify important landscape-scale environmental factors. The study was performed using a comprehensive empirical data set of periglacial landforms from an area of 600 km² at a 25-ha resolution. The statistical methods utilised were generalized linear modelling (GLM) and hierarchical partitioning (HP). GLMs were used to produce distribution and abundance models, and HP to reveal independently the most likely causal variables. The GLM models were assessed utilising statistical evaluation measures, prediction maps, field observations and the results of the HP analyses. A total of 40 different landform types and subtypes were identified. Topographical, soil property and vegetation variables were the primary correlates of the occurrence and cover of active periglacial landforms at the landscape scale. In the model evaluation, most of the GLMs were shown to be robust, although the explanatory power, prediction ability and selected explanatory variables varied between the models. This study demonstrated the great potential of combining a spatial grid system, terrain data and novel statistical techniques to map the occurrence of periglacial landforms. GLM proved to be a useful modelling framework for testing the shapes of the response functions and the significance of the environmental variables, and the HP method helped to draw firmer conclusions about the important factors behind earth surface processes. Hence, the numerical approach presented in this study can be a useful addition to the current range of techniques available to researchers to map and monitor different geographical phenomena.
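As a sketch of the GLM step, the snippet below fits a binomial GLM for landform presence/absence on synthetic grid-square predictors using statsmodels. The predictors, coefficients and sample size are invented for illustration; the study also modelled abundance and ran hierarchical partitioning, which is not shown here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400  # hypothetical number of 25-ha grid squares

# Synthetic landscape-scale predictors (stand-ins for the terrain data).
X = np.column_stack([rng.uniform(0, 30, n),     # slope (degrees)
                     rng.uniform(300, 900, n)])  # elevation (m)
logit = -4 + 0.15 * X[:, 0] + 0.004 * X[:, 1]
presence = rng.random(n) < 1 / (1 + np.exp(-logit))

# Binomial GLM for landform presence/absence, as in the abstract.
model = sm.GLM(presence.astype(int), sm.add_constant(X),
               family=sm.families.Binomial()).fit()
print(model.summary())
```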

Relevance: 100.00%

Abstract:

Background: The members of the cupin superfamily exhibit large variations in their sequences, functions, organization of domains, quaternary associations and the nature of the bound metal ion, despite having a conserved beta-barrel structural scaffold. Here, an attempt has been made to understand structure-function relationships among the members of this diverse superfamily and identify the principles governing functional diversity. The cupin superfamily also contains proteins whose structures are available through world-wide structural genomics initiatives but which are characterized as "hypothetical". We have explored the feasibility of obtaining clues to the functions of such proteins by means of comparative analysis with cupins of known structure and function. Methodology/Principal Findings: A 3-D structure-based phylogenetic approach was undertaken. Interestingly, a dendrogram generated solely on the basis of a structural dissimilarity measure at the level of domain folds was found to cluster functionally similar members. This clustering also reflects an independent evolution of the two domains in bicupins. Close examination of the structural superposition of members across various functional clusters reveals structural variations in regions that not only form the active site pocket but are also involved in interaction with another domain in the same polypeptide or in the oligomer. Conclusions/Significance: Structure-based phylogeny of cupins can guide the identification of functions for cupin-fold proteins whose functions are as yet unknown. This approach can be extended to other proteins with a common fold that show high evolutionary divergence, and is expected to influence function annotation in structural genomics initiatives.
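A minimal sketch of how such a dendrogram can be built from a pairwise structural dissimilarity matrix, using SciPy's hierarchical clustering; the domain names and dissimilarity values below are hypothetical placeholders, not the paper's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# Hypothetical pairwise structural dissimilarities (e.g. derived from
# rigid-body superposition scores) for five cupin domains.
names = ["domain_A", "domain_B", "domain_C", "domain_D", "hypothetical_1"]
D = np.array([[0, 12, 25, 60, 22],
              [12, 0, 27, 58, 24],
              [25, 27, 0, 55, 18],
              [60, 58, 55, 0, 57],
              [22, 24, 18, 57, 0]], dtype=float)

# Average-linkage tree; functionally similar members should co-cluster.
tree = linkage(squareform(D), method="average")
dendrogram(tree, labels=names, no_plot=True)  # set no_plot=False to draw
```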

Relevance: 100.00%

Abstract:

Support Vector Machines (SVMs) are hyperplane classifiers defined in a kernel-induced feature space. The data-size-dependent training time complexity of SVMs usually prohibits their use in applications involving more than a few thousand data points. In this paper we propose a novel kernel-based incremental data clustering approach and its use for scaling non-linear Support Vector Machines to handle large data sets. The clustering method introduced can find cluster abstractions of the training data in a kernel-induced feature space. These cluster abstractions are then used for selective-sampling-based training of Support Vector Machines to reduce the training time without compromising the generalization performance. Experiments with real-world datasets show that this approach gives good generalization performance at reasonable computational expense.
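The paper's incremental kernel clustering is not available in standard libraries, so the sketch below substitutes scikit-learn's Nystroem map and MiniBatchKMeans for the cluster-abstraction step, then trains an SVM on one representative per cluster. It illustrates the selective-sampling idea, not the authors' exact algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import SVC

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)

# Cluster abstractions in an (approximate) kernel-induced feature space.
phi = Nystroem(gamma=0.1, n_components=100, random_state=0).fit_transform(X)
km = MiniBatchKMeans(n_clusters=200, random_state=0).fit(phi)

# Selective sampling: keep the point nearest each cluster centre and
# train the SVM on that much smaller subset.
d = km.transform(phi)                       # (n_samples, n_clusters)
reps = np.unique(np.argmin(d, axis=0))
clf = SVC(kernel="rbf", gamma=0.1).fit(X[reps], y[reps])
print(clf.score(X, y))
```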

Relevance: 100.00%

Abstract:

Lipocalins constitute a superfamily of extracellular proteins found in all three kingdoms of life. Although very divergent in their sequences and functions, they show remarkable similarity in their 3-D structures. Lipocalins bind and transport small hydrophobic molecules. Earlier sequence-based phylogenetic studies of lipocalins highlighted their long evolutionary history. However, the molecular and structural basis of their functional diversity is not completely understood. The main objective of the present study is to understand the functional diversity of the lipocalins using a structure-based phylogenetic approach. The present study, with 39 protein domains from the lipocalin superfamily, suggests that the clusters of lipocalins obtained by structure-based phylogeny correspond well with their functional diversity. The detailed analysis of each of the clusters and sub-clusters reveals that the 39 lipocalin domains cluster according to their mode of ligand binding, even though the clustering was performed on the basis of gross domain structure. The outliers in the phylogenetic tree are often from single-member families. The structure-based phylogenetic approach has also provided pointers for assigning putative functions to domains of unknown function in the lipocalin family. The approach employed in the present study can be used in the future for the functional identification of new lipocalin proteins and may be extended to other protein families whose members show poor sequence similarity but high structural similarity.

Relevance: 100.00%

Abstract:

We propose a computational method for the coupled simulation of a compressible flow interacting with a thin-shell structure undergoing large deformations. An Eulerian finite volume formulation is adopted for the fluid, and a Lagrangian formulation based on subdivision finite elements is adopted for the shell response. The coupling between the fluid and the solid response is achieved via a novel approach based on level sets. The basic approach furnishes a general algorithm for coupling Lagrangian shell solvers with Cartesian-grid-based Eulerian fluid solvers. The efficiency and robustness of the proposed approach are demonstrated with an airbag deployment simulation. It bears emphasis that in the proposed approach the solid and fluid components, as well as their coupled interaction, are considered in full detail and modeled with an equivalent level of fidelity, without any oversimplifying assumptions or bias towards a particular physical aspect of the problem.
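To make the level-set coupling idea concrete, here is a minimal sketch that tags fluid, solid-interior and interface cells of a Cartesian grid from a signed distance function. The circular "shell" and grid sizes are invented for illustration, and the actual solver exchange is only indicated in comments.

```python
import numpy as np

# Cartesian fluid grid; the shell is represented implicitly by a signed
# distance (level set), here a hypothetical circle standing in for the
# airbag surface.
x = np.linspace(-1, 1, 101)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.hypot(X, Y) - 0.5               # phi < 0 inside the shell

fluid = phi > 0                          # cells where the Eulerian solver runs
interface = np.abs(phi) < (x[1] - x[0])  # narrow band receiving the coupling
# In a full solver, fluid pressure in the narrow band would be
# interpolated onto the Lagrangian shell nodes, and the shell velocity
# imposed back on the fluid as a boundary condition.
print(fluid.sum(), interface.sum())
```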

Relevance: 100.00%

Abstract:

A system is described that tracks moving objects in a video dataset so as to extract a representation of the objects' 3D trajectories. The system then finds hierarchical clusters of similar trajectories in the video dataset. Objects' motion trajectories are extracted via an EKF formulation that provides each object's 3D trajectory up to a constant factor. To increase accuracy when occlusions occur, multiple tracking hypotheses are followed. For trajectory-based clustering and retrieval, a modified version of edit distance, called the longest common subsequence (LCSS), is employed. Similarities are computed between projections of trajectories onto the coordinate axes. Trajectories are then grouped using an agglomerative clustering algorithm. To check the validity of the approach, experiments using real data were performed.
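A minimal LCSS sketch for one coordinate-axis projection, assuming the common threshold-based matching variant (the paper's exact matching rule and any time-window constraint may differ):

```python
import numpy as np

def lcss(a, b, eps):
    """Longest common subsequence length for two 1-D trajectory
    projections; samples match when they differ by less than eps."""
    m, n = len(a), len(b)
    L = np.zeros((m + 1, n + 1), dtype=int)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if abs(a[i - 1] - b[j - 1]) < eps:
                L[i, j] = L[i - 1, j - 1] + 1
            else:
                L[i, j] = max(L[i - 1, j], L[i, j - 1])
    return L[m, n]

def lcss_similarity(a, b, eps):
    return lcss(a, b, eps) / min(len(a), len(b))  # 1.0 = identical

t = np.linspace(0, 1, 50)
print(lcss_similarity(t, t + 0.02, eps=0.05))  # near-duplicate tracks
```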

Relevance: 100.00%

Abstract:

The need for the ability to cluster unknown data, to better understand its relationship to known data, is prevalent throughout science. Besides a better understanding of the data itself or learning about a new unknown object, cluster analysis can help with processing data, data standardization, and outlier detection. Most clustering algorithms are based on known features or expectations, such as the popular partition-based, hierarchical, density-based, grid-based, and model-based algorithms. While the choice of algorithm depends on many factors, including the type of data and the reason for clustering, nearly all rely on some known properties of the data being analyzed. Recently, Li et al. proposed a new universal similarity metric that needs no prior knowledge about the objects. Their similarity metric is based on the Kolmogorov Complexity of objects, i.e., an object's minimal description. While the Kolmogorov Complexity of an object is not computable, in "Clustering by Compression," Cilibrasi and Vitanyi use common compression algorithms to approximate the universal similarity metric and cluster objects with high success. Unfortunately, clustering using compression does not trivially extend to higher dimensions. Here we outline a method to adapt their procedure to images. We test these techniques on images of letters of the alphabet.
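The compression-based metric itself is easy to sketch: below is the normalized compression distance of Cilibrasi and Vitanyi, approximated with zlib on byte strings. This is the one-dimensional case; the adaptation to 2-D image data described in the abstract is not shown.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, approximating the universal
    similarity metric with zlib as the compressor."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

s1 = b"the quick brown fox jumps over the lazy dog " * 20
s2 = b"the quick brown fox leaps over the lazy cat " * 20
s3 = bytes(range(256)) * 4
print(ncd(s1, s2), ncd(s1, s3))  # the similar pair scores lower
```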

Relevance: 100.00%

Abstract:

In this paper, support for legacy applications, which is one of the important advantages of Grid computing, is presented. The ability to reuse existing codes/applications in combination with other Web/Internet technologies, such as Java, makes Grid computing a good choice for developers who wish to wrap existing applications for use over an intranet or the Internet. The approach developed can be used for migrating legacy applications into Grid Services, which speeds up the popularization of Grid technology. The approach is illustrated using a case study, with a detailed step-by-step description of its implementation. The Globus Toolkit is utilized to develop the system.

Relevance: 100.00%

Abstract:

The aim of this paper is to report the preliminary development of an automatic collision avoidance technique for unmanned marine craft based on the standardised rules (COLREGs) defined by the International Maritime Organisation. It is noted that all marine surface vessels are required to adhere to the COLREGs at all times in order to minimise or eliminate the risk of collisions. The approach presented is essentially a reactive path planning algorithm which provides feedback to the autopilot of an unmanned vessel, or the human captain of a manned ship, for steering the craft safely. The proposed strategy consists of waypoint guidance by line-of-sight coupled with a manual biasing scheme. This is applied to the dynamic model of an unmanned surface vehicle. A simple PID autopilot is incorporated to ensure that the vessel adheres to the generated seaway. It is shown through simulations that the resulting scheme is able to generate viable trajectories in the presence of both stationary and dynamic obstacles. Rules 8 and 14 of the COLREGs, which apply to the amount of manoeuvre and to a head-on scenario respectively, are simulated. A comparison is also made with an offline or deliberative grid-based path planning algorithm which has been modified to generate COLREGs-compliant routes.
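The line-of-sight guidance component can be sketched in a few lines. This uses a standard lookahead-based LOS formulation, which may differ in detail from the paper's; the waypoints, lookahead distance and bias comment are illustrative only.

```python
import math

def los_heading(pos, wpt_prev, wpt_next, lookahead=50.0):
    """Line-of-sight guidance: desired heading toward a point on the
    leg between two waypoints, a lookahead distance ahead of the
    vessel's along-track position."""
    dx, dy = wpt_next[0] - wpt_prev[0], wpt_next[1] - wpt_prev[1]
    leg = math.hypot(dx, dy)
    tx, ty = dx / leg, dy / leg                       # unit leg direction
    s = (pos[0] - wpt_prev[0]) * tx + (pos[1] - wpt_prev[1]) * ty
    aim = (wpt_prev[0] + (s + lookahead) * tx,
           wpt_prev[1] + (s + lookahead) * ty)
    return math.atan2(aim[1] - pos[1], aim[0] - pos[0])

# A COLREGs bias (e.g. a starboard offset in a Rule 14 head-on
# encounter) would shift the aim point before computing the heading.
print(math.degrees(los_heading((10.0, 5.0), (0.0, 0.0), (200.0, 0.0))))
```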

Relevance: 100.00%

Abstract:

Resin bonded bridgework (RBB) is a technique often overlooked by practitioners, despite a large body of evidence supporting it. In Cork University Dental School, an evidence-based, standardised approach to the delivery of RBB by undergraduate students has been developed over the past 10 years. The aim of this study was to evaluate the success of this standardised approach to the delivery of RBB by students. A total of 222 bridges delivered over a six-year period between 2002 and 2007 were reviewed. A success rate of 84.1% was achieved, with a mean survival time of 41 months. This study illustrates that predictable and highly successful RBB can be delivered by inexperienced clinicians using an evidence-based, standardised approach.

Relevance: 100.00%

Abstract:

The large increase of distributed energy resources, including distributed generation, storage systems and demand response, especially in distribution networks, makes the management of the available resources a more complex and crucial process. With wind-based generation gaining relevance in the generation mix, and because wind forecasting accuracy drops rapidly as the forecast horizon increases, short-term and very short-term re-scheduling is required so that the final implemented solution achieves the lowest possible operation costs. This paper proposes a methodology for energy resource scheduling in smart grids, considering day-ahead, hour-ahead and five-minutes-ahead scheduling. The short-term scheduling, undertaken five minutes ahead, takes advantage of the high accuracy of very short-term wind forecasting, providing the user with more efficient scheduling solutions. The proposed method uses a Genetic Algorithm based approach for optimization that is able to cope with the hard execution time constraint of short-term scheduling. Realistic power system simulation, based on PSCAD, is used to validate the obtained solutions. The paper includes a case study with a 33-bus distribution network with high penetration of distributed energy resources, implemented in PSCAD.
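A toy sketch of the GA step: real-coded dispatch vectors evolved under a demand-balance penalty. The resources, costs and mutation-only elitist scheme are illustrative assumptions; the paper's GA, with network constraints and the five-minute deadline, is necessarily more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
cost = np.array([30.0, 45.0, 80.0])  # EUR/MWh per hypothetical resource
cap = np.array([2.0, 1.5, 3.0])      # MW capacity limits
demand = 4.0                         # MW to schedule in this period

def fitness(p):  # generation cost plus a penalty for unmet demand
    return p @ cost + 1e4 * abs(p.sum() - demand)

# Minimal GA: real-coded population evolved by mutation and elitism.
pop = rng.uniform(0, cap, size=(60, 3))
for _ in range(200):
    kids = np.clip(pop + rng.normal(0, 0.05, pop.shape), 0, cap)
    both = np.vstack([pop, kids])
    pop = both[np.argsort([fitness(p) for p in both])[:60]]
print(pop[0], fitness(pop[0]))  # cheapest feasible dispatch found
```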

Relevance: 100.00%

Abstract:

This paper presents the characterization of medium voltage (MV) electric power consumers based on a data clustering approach. It is intended to identify typical load profiles by selecting the best partition of a power consumption database from a pool of data partitions produced by several clustering algorithms. The best partition is selected using several cluster validity indices. These methods are intended to be used in a smart grid environment to extract useful knowledge about customers' behavior. The data-mining-based methodology presented throughout the paper consists of several steps, namely the data pre-processing phase, the application of clustering algorithms and the evaluation of the quality of the partitions. To validate our approach, a case study with a real database of 1,022 MV consumers was used.
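A minimal sketch of the partition-selection idea: cluster synthetic daily load profiles with k-means and pick the partition that maximises one validity index (the silhouette; the paper evaluates several indices over several algorithms). All data below are fabricated for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Hypothetical daily load profiles (96 quarter-hour readings) for
# 1,022 consumers, mimicking the normalised, pre-processed data.
t = np.linspace(0, 2 * np.pi, 96)
profiles = np.vstack(
    [np.sin(t) + rng.normal(0, 0.1, 96) for _ in range(500)] +
    [np.cos(t) + rng.normal(0, 0.1, 96) for _ in range(522)])

# Pick the partition that maximises the validity index.
best = max(range(2, 8), key=lambda k: silhouette_score(
    profiles, KMeans(n_clusters=k, n_init=10, random_state=0)
    .fit_predict(profiles)))
print("best number of typical load profiles:", best)
```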

Relevance: 100.00%

Abstract:

A spectral-angle-based feature extraction method, Spectral Clustering Independent Component Analysis (SC-ICA), is proposed in this work to improve brain tissue classification from Magnetic Resonance Images (MRI). SC-ICA gives equal priority to global and local features, thereby addressing the inefficiency of conventional approaches in abnormal tissue extraction. First, the input multispectral MRI is divided into different clusters by spectral-distance-based clustering. Then, Independent Component Analysis (ICA) is applied to the clustered data, in conjunction with Support Vector Machines (SVM), for brain tissue analysis. Normal and abnormal datasets, consisting of real and synthetic T1-weighted, T2-weighted and proton density/fluid-attenuated inversion recovery images, were used to evaluate the performance of the new method. Comparative analysis with ICA-based SVM and other conventional classifiers established the stability and efficiency of SC-ICA-based classification, especially in the reproduction of small abnormalities. Analysis of clinical abnormal cases demonstrated this through the highest Tanimoto Index/accuracy values, 0.75/98.8%, observed for reproduced lesions, against 0.17/96.1% for ICA-based SVM. Experimental results recommend the proposed method as a promising approach in clinical and pathological studies of brain diseases.
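The spectral-angle distance underlying the clustering step is straightforward to sketch. The intensity vectors below are hypothetical (T1, T2, PD/FLAIR) triples; the actual SC-ICA pipeline adds ICA and SVM stages on top of this distance.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two multispectral voxels,
    e.g. (T1, T2, PD/FLAIR) intensity vectors; insensitive to overall
    intensity scaling, which is why it suits multispectral clustering."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

voxel = np.array([0.2, 0.9, 0.8])
print(spectral_angle(voxel, 2.5 * voxel))             # ~0: same tissue
print(spectral_angle(voxel, np.array([0.9, 0.3, 0.4])))
```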

Relevance: 100.00%

Abstract:

The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on a core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
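The interpolation step lends itself to a short sketch: measured timings for the two work types are combined and linearly interpolated to predict a new deployment scenario. All numbers below are fabricated placeholders, not the paper's benchmark data.

```python
import numpy as np

# Hypothetical benchmark timings for the two work types in shallow.
cells = np.array([1e4, 1e5, 1e6, 1e7])        # local subdomain size
t_compute = np.array([0.8, 8.5, 92.0, 950.0])  # ms per step (measured)
halo_bytes = np.array([1e3, 1e4, 1e5, 1e6])
t_halo = np.array([0.05, 0.12, 0.9, 8.0])      # ms per exchange (measured)

def predict(n_cells, n_halo_bytes):
    """Predicted time per step for a deployment scenario, interpolating
    between benchmarked sizes as the abstract describes."""
    return (np.interp(n_cells, cells, t_compute) +
            np.interp(n_halo_bytes, halo_bytes, t_halo))

# E.g. a 512x512 subdomain exchanging four 512-value rows of 8 bytes each.
print(predict(512 * 512, 4 * 512 * 8), "ms")
```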