39 results for Incremental Clustering


Relevance: 20.00%

Abstract:

The spatial distribution patterns of the diffuse, primitive, and classic beta-amyloid (Abeta) deposits were studied in areas of the medial temporal lobe in 12 cases of Down's syndrome (DS) aged 35 to 67 years. Large clusters of diffuse deposits were present in the youngest patients; cluster size then declined with patient age but increased again in the oldest patients. By contrast, the cluster sizes of the primitive and classic deposits increased with age to a maximum at 45 to 55 and at 60 years of age, respectively, and declined in the oldest patients. In the parahippocampal gyrus (PHG), the primitive deposits were most strongly clustered in cases of intermediate age. The data suggest a developmental sequence in DS in which Abeta is deposited initially in the form of large clusters of diffuse deposits that are then gradually replaced by clusters of primitive and classic deposits. The oldest patients were an exception to this sequence in that their pattern of clustering resembled that of the youngest patients.

Relevance: 20.00%

Abstract:

Clustering of cellular neurofibrillary tangles (NFT) was studied in the cerebral cortex and hippocampus in cases of Alzheimer's disease (AD) using a regression method. The objective of the study was to test the hypothesis that clustering of NFT reflects the degeneration of the cortico-cortical pathways. In 25/38 (66%) of analyses of individual brain areas, significant peak-to-trough and peak-to-peak distances were obtained, suggesting that the clusters of NFT were regularly distributed in bands parallel to the tissue boundary. In analyses of cortical tissues with regularly distributed clusters, the peak-to-peak distance was between 1000 and 1600 microns in 13/24 (54%) of analyses, >1600 microns in 10/24 (42%), and <1000 microns in 1/24 (4%). A regular distribution of NFT clusters was less evident in the CA sectors of the hippocampus than in the cortex. Hence, in a significant proportion of brain areas, the spacing of NFT clusters along the cerebral cortex was consistent with the predicted distribution of the cells of origin of specific cortico-cortical projections. However, in many brain regions the NFT clusters were larger than predicted, which may be attributable to the spread of NFT to adjacent groups of cells as the disease progresses.
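
The regression method itself is not spelled out in the abstract, but its central quantity, the peak-to-peak distance between regularly spaced clusters, can be illustrated with a simple autocorrelation of NFT counts taken in contiguous bins along a cortical strip. The bin width, function name, and synthetic counts below are illustrative assumptions, not the paper's data or exact method.

```python
import numpy as np

def peak_to_peak_spacing(counts, bin_width_um=200):
    """Estimate the dominant spacing of clusters along a cortical strip
    from counts in contiguous bins, via the first autocorrelation peak."""
    x = counts - counts.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    ac /= ac[0]
    # The first local maximum after lag 0 approximates the peak-to-peak distance.
    for lag in range(1, len(ac) - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] > ac[lag + 1]:
            return lag * bin_width_um
    return None

# Synthetic strip: clusters repeating roughly every 1200 microns (6 bins).
rng = np.random.default_rng(0)
counts = np.tile([9, 4, 1, 0, 1, 4], 10) + rng.poisson(1, 60)
print(peak_to_peak_spacing(counts.astype(float)))  # ~1200
```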

Relevance: 20.00%

Abstract:

Analyzing geographical patterns by collocating events, objects, or their attributes has a long history in surveillance and monitoring, and is particularly common in environmental contexts such as ecology or epidemiology. The identification of patterns or structures at some scales can be addressed using spatial statistics, particularly marked point process methodologies. Classification and regression trees are also related to this goal of finding "patterns" by deducing the hierarchy of influence of variables on a dependent outcome. Such variable selection methods have been applied to spatial data, but often without explicitly acknowledging the spatial dependence. Many methods routinely used in exploratory point pattern analysis are second-order statistics, used in a univariate context, though there is also a wide literature on modelling methods for multivariate point pattern processes. This paper proposes an exploratory approach for multivariate spatial data using higher-order statistics built from co-occurrences of events or marks given by the point processes. A spatial entropy measure, derived from these multinomial distributions of co-occurrences at a given order, constitutes the basis of the proposed exploratory methods. © 2010 Elsevier Ltd.
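
As a rough sketch of the co-occurrence idea (under the assumption that order-2 co-occurrences pair each event with its nearest neighbour, which is one plausible reading of the abstract), the entropy of the multinomial distribution of mark combinations can be computed as follows; all names and data are illustrative.

```python
import numpy as np
from collections import Counter
from scipy.spatial import cKDTree

def cooccurrence_entropy(points, marks, order=2):
    """Shannon entropy of the distribution of mark co-occurrences:
    each event is grouped with its (order - 1) nearest neighbours, and
    the unordered mark combinations form a multinomial sample."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=order)  # k=order includes the point itself
    combos = Counter(tuple(sorted(marks[i] for i in row)) for row in idx)
    p = np.array(list(combos.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(1)
pts = rng.random((200, 2))          # event locations
mks = rng.integers(0, 3, 200)       # three mark types
print(cooccurrence_entropy(pts, mks, order=2))
```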

Relevance: 20.00%

Abstract:

We investigate the sensitivity of a Markov model whose states and transition probabilities are obtained by clustering a molecular dynamics trajectory. We examined a 500 ns molecular dynamics trajectory of the peptide valine-proline-alanine-leucine in explicit water. The sensitivity is quantified by varying the boundaries of the clusters and investigating the resulting variation in transition probabilities and the average transition time between states. In this way, we mimic the effect of using different clustering algorithms. It is found that, in terms of the investigated quantities, the peptide dynamics described by the Markov model is sensitive to the clustering; in particular, the average transition times are found to vary by up to 46%. Moreover, inclusion of non-physical, sparsely populated clusters can lead to serious errors of up to 814%. In the investigation, the time step used in the transition matrix is determined by the minimum time scale on which the system behaves approximately Markovian; this time step is found to be about 100 ps. It is concluded that the description of peptide dynamics with transition matrices should be performed with care, and that using standard clustering algorithms to obtain states and transition probabilities may not always produce reliable results.
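
The abstract's pipeline (discretise the trajectory into cluster states, estimate a transition matrix at a lag of about 100 ps, then perturb the state definitions) can be sketched as follows. The perturbation used here, randomly relabelling a small fraction of frames, is a stand-in assumption for the paper's shifting of cluster boundaries.

```python
import numpy as np

def transition_matrix(state_traj, lag):
    """Row-stochastic transition matrix estimated at a given lag
    (in frames) from a discretised trajectory."""
    n = state_traj.max() + 1
    counts = np.zeros((n, n))
    for a, b in zip(state_traj[:-lag], state_traj[lag:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Illustrative 4-state trajectory; in the paper the states come from
# clustering MD frames and the lag corresponds to roughly 100 ps.
rng = np.random.default_rng(2)
traj = rng.integers(0, 4, 10_000)
T = transition_matrix(traj, lag=10)

# Sensitivity check in the spirit of the paper: perturb the state
# assignment (relabel 5% of frames) and compare transition probabilities.
noisy = traj.copy()
flip = rng.random(traj.size) < 0.05
noisy[flip] = rng.integers(0, 4, flip.sum())
print(np.abs(T - transition_matrix(noisy, lag=10)).max())
```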

Relevance: 20.00%

Abstract:

In the quest to secure the much-vaunted benefits of North Sea oil, highly non-incremental technologies have been adopted. Nowhere is this more the case than in the early fields of the central and northern North Sea. By focusing on the inflexible nature of North Sea hardware in such fields, this thesis examines the problems that this sort of technology might pose for policy making. More particularly, the following issues are raised. First, the implications of non-incremental technical change for the successful conduct of oil policy are considered. Here, the focus is on the micro-economic performance of the first generation of North Sea oil fields and the manner in which this relates to government policy. Secondly, the question is posed as to whether there were more flexible, perhaps more incremental, policy alternatives open to the decision makers. The conclusions drawn relate to the degree to which non-incremental shifts in policy permit decision makers to achieve their objectives at relatively low cost. To discover cases where non-incremental policy making has led to success in this way would be to falsify the thesis that decision makers are best served by employing incremental politics as an approach to complex problem solving.

Relevance: 20.00%

Abstract:

Background - Delivery of high-quality, evidence-based health care to deprived sectors of the community is a major goal for society. We investigated the effectiveness of a culturally sensitive, enhanced care package in UK general practices for improvement of cardiovascular risk factors in patients of south Asian origin with type 2 diabetes. Methods - In this cluster randomised controlled trial, 21 inner-city practices in the UK were assigned by simple randomisation to intervention (enhanced care including additional time with practice nurse and support from a link worker and diabetes-specialist nurse [nine practices; n=868]) or control (standard care [12 practices; n=618]) groups. All adult patients of south Asian origin with type 2 diabetes were eligible. Prescribing algorithms with clearly defined targets were provided for all practices. Primary outcomes were changes in blood pressure, total cholesterol, and glycaemic control (haemoglobin A1c) after 2 years. Analysis was by intention to treat. This trial is registered, number ISRCTN38297969. Findings - We recorded significant differences between treatment groups in diastolic blood pressure (-1·91 [95% CI -2·88 to -0·94] mm Hg, p=0·0001) and mean arterial pressure (-1·36 [-2·49 to -0·23] mm Hg, p=0·0180), after adjustment for confounders and clustering. We noted no significant differences between groups for total cholesterol (0·03 [-0·04 to 0·11] mmol/L), systolic blood pressure (-0·33 [-2·41 to 1·75] mm Hg), or HbA1c (-0·15% [-0·33 to 0·03]). Economic analysis suggests that the nurse-led intervention was not cost effective (incremental cost-effectiveness ratio £28,933 per QALY gained). Across the whole study population over the 2 years of the trial, systolic blood pressure, diastolic blood pressure, and cholesterol decreased significantly by 4·9 (95% CI 4·0–5·9) mm Hg, 3·8 (3·2–4·4) mm Hg, and 0·45 (0·40–0·51) mmol/L, respectively, and we recorded a small and non-significant increase in haemoglobin A1c (0·04% [-0·04 to 0·13], p=0·290). Interpretation - We recorded additional, although small, benefits from our culturally tailored care package that were greater than the secular changes achieved in the UK in recent years. Stricter targets in general practice and further measures to motivate patients are needed to achieve best possible health-care outcomes in south Asian patients with diabetes. Funding - Pfizer, Sanofi-Aventis, Servier Laboratories UK, Merck Sharp & Dohme/Schering-Plough, Takeda UK, Roche, Merck Pharma, Daiichi-Sankyo UK, Boehringer Ingelheim, Eli Lilly, Novo Nordisk, Bristol-Myers Squibb, Solvay Health Care, and Assurance Medical Society UK.
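
For readers unfamiliar with the cost-effectiveness figure, the incremental cost-effectiveness ratio (ICER) quoted above is simply the between-group cost difference divided by the between-group QALY difference. A minimal sketch, with made-up cost and QALY figures (the abstract does not report the underlying values):

```python
def icer(cost_intervention, cost_control, qaly_intervention, qaly_control):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_intervention - cost_control) / (qaly_intervention - qaly_control)

# Illustrative numbers only: £300 extra cost per patient for 0.01 extra
# QALYs gives about £30,000 per QALY, in the region of the trial's £28,933.
print(round(icer(1200.0, 900.0, 5.21, 5.20)))
```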

Relevance: 20.00%

Abstract:

Magnetoencephalography (MEG), a non-invasive technique for characterizing brain electrical activity, is gaining popularity as a tool for assessing group-level differences between experimental conditions. One method for assessing task-condition effects involves beamforming, where a weighted sum of field measurements is used to estimate activity on a voxel-by-voxel basis. However, this method has been shown to produce inhomogeneous smoothness differences as a function of signal-to-noise ratio across a volumetric image, which can then produce false positives at the group level. Here we describe a novel method for group-level analysis with MEG beamformer images that utilizes the peak locations within each participant's volumetric image to assess group-level effects. We compared our peak-clustering algorithm with SnPM using simulated data. We found that our method was immune to artefactual group effects that can arise as a result of inhomogeneous smoothness differences across a volumetric image. We also applied our peak-clustering algorithm to experimental data and found that the regions identified corresponded with task-related regions reported in the literature. These findings suggest that our technique is a robust method for group-level analysis with MEG beamformer images.
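
The abstract does not give the peak-clustering algorithm itself, but one plausible minimal reading, pooling each participant's peak coordinates and grouping peaks that fall close together, can be sketched as below; the linkage method and distance threshold are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def group_peak_clusters(peak_xyz, max_dist_mm=15.0):
    """Pool per-participant peak coordinates and group peaks lying within
    max_dist_mm of each other (single-linkage clustering)."""
    Z = linkage(peak_xyz, method="single")
    return fcluster(Z, t=max_dist_mm, criterion="distance")

# One peak per participant (mm coordinates); two simulated foci.
rng = np.random.default_rng(3)
peaks = np.vstack([
    rng.normal([40, -20, 50], 3, (10, 3)),   # focus A
    rng.normal([-35, 25, 30], 3, (10, 3)),   # focus B
])
labels = group_peak_clusters(peaks)
# Clusters supported by many participants are candidate group-level effects.
print(np.bincount(labels)[1:])
```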

Relevance: 20.00%

Abstract:

Web document cluster analysis plays an important role in information retrieval by organizing large amounts of documents into a small number of meaningful clusters. Traditional web document clustering is based on the Vector Space Model (VSM), which takes into account only two levels of knowledge granularity (document and term) but ignores the bridging paragraph granularity. However, this two-level granularity may lead to unsatisfactory clustering results with "false correlation". In order to deal with this problem, a Hierarchical Representation Model with Multi-granularity (HRMM), which consists of a five-layer representation of data and a two-phase clustering process, is proposed based on granular computing and article structure theory. To deal with the zero-valued similarity problem resulting from the sparse term-paragraph matrix, an ontology-based strategy and a tolerance-rough-set-based strategy are introduced into HRMM. By using granular computing, structural knowledge hidden in documents can be captured more efficiently and effectively in HRMM, and thus web document clusters of higher quality can be generated. Extensive experiments show that HRMM, HRMM with the tolerance-rough-set strategy, and HRMM with ontology all significantly outperform VSM and a representative non-VSM-based algorithm, WFP, in terms of F-score.
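
The zero-valued similarity problem mentioned above is easy to reproduce: with a sparse term-paragraph matrix, two paragraphs that share no terms have a cosine similarity of exactly zero even when their vocabularies are semantically related. A toy illustration (the matrix is invented):

```python
import numpy as np

def cosine(u, v):
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return 0.0 if nu == 0 or nv == 0 else float(u @ v / (nu * nv))

# Rows: paragraphs; columns: terms. Paragraphs 0 and 1 share no terms,
# so their raw similarity is 0, the gap that HRMM's ontology and
# tolerance-rough-set strategies are designed to bridge.
P = np.array([
    [2, 1, 0, 0],
    [0, 0, 3, 1],
    [1, 0, 1, 0],
])
print(cosine(P[0], P[1]))  # 0.0
print(cosine(P[0], P[2]))  # > 0
```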

Relevance: 20.00%

Abstract:

Emerging vehicular comfort applications pose a completely new set of requirements, such as maintaining end-to-end connectivity, packet routing, and reliable communication for internet access while on the move. One of the biggest challenges is to provide good quality of service (QoS), such as low packet delay, while coping with fast topological changes. In this paper, we propose a clustering algorithm based on the minimal path loss ratio (MPLR), which should improve spectrum efficiency and reduce data congestion in the network. The vehicular nodes that experience minimal path loss are selected as the cluster heads. The performance of the MPLR clustering algorithm is evaluated by the rate of change of cluster heads, the average number of clusters, and the average cluster size. Vehicular traffic models derived from the Traffic Wales data are fed as input to the motorway simulator. A mathematical analysis for the rate of change of cluster heads is derived, which validates the MPLR algorithm and is compared with the simulated results. The mathematical and simulated results are in good agreement, indicating the stability of the algorithm and the accuracy of the simulator. The MPLR system is also compared with a V2R system, with the MPLR system performing better. © 2013 IEEE.
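
The abstract does not specify the protocol details, but the core selection rule, choosing nodes with the lowest path loss as cluster heads, can be sketched as a greedy procedure; the communication range, path-loss values, and greedy assignment are illustrative assumptions rather than the published MPLR scheme.

```python
import numpy as np

def select_cluster_heads(positions, path_loss, comm_range=300.0):
    """Greedy MPLR-style selection: repeatedly pick the unassigned node
    with the lowest path loss as a cluster head and assign every node
    within comm_range of it to that cluster."""
    unassigned = set(range(len(positions)))
    heads, membership = [], {}
    while unassigned:
        head = min(unassigned, key=lambda i: path_loss[i])
        heads.append(head)
        for i in list(unassigned):
            if np.linalg.norm(positions[i] - positions[head]) <= comm_range:
                membership[i] = head
                unassigned.discard(i)
    return heads, membership

# Vehicles on a 2 km motorway stretch with illustrative path-loss values (dB).
rng = np.random.default_rng(4)
pos = np.column_stack([rng.uniform(0, 2000, 50), rng.uniform(0, 10, 50)])
pl = rng.uniform(60, 120, 50)
heads, member = select_cluster_heads(pos, pl)
print(len(heads), "clusters; average size", len(member) / len(heads))
```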

Relevance: 20.00%

Abstract:

This paper clarifies the role of alternative optimal solutions in the clustering of multidimensional observations using data envelopment analysis (DEA). The paper shows that alternative optimal solutions corresponding to several units produce different groups with different sizes and different decision-making units (DMUs) in each class. This implies that a specific DMU may be grouped into different clusters when the corresponding DEA model has multiple optimal solutions. © 2011 Elsevier B.V. All rights reserved.
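
To make the source of the ambiguity concrete: DEA efficiency scores come from a linear programme, and the optimal intensity weights (the lambdas that determine which efficient peers a DMU is grouped with) need not be unique even when the score is. A minimal sketch of the input-oriented CCR envelopment model with scipy; the data are invented, and the solver returns just one of the possibly many optimal lambda vectors.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR envelopment model for DMU j0:
    minimise theta subject to X @ lam <= theta * x0, Y @ lam >= y0, lam >= 0.
    X is (inputs x DMUs), Y is (outputs x DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # decision vector [theta, lam]
    A_ub = np.vstack([
        np.hstack([-X[:, [j0]], X]),               # X lam - theta x0 <= 0
        np.hstack([np.zeros((s, 1)), -Y]),         # -Y lam <= -y0
    ])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun, res.x[1:]   # efficiency score and ONE optimal lambda

# Three DMUs, one input, one output; alternative optima in lambda are
# exactly the ambiguity the paper is concerned with.
X = np.array([[2.0, 4.0, 5.0]])
Y = np.array([[1.0, 2.0, 2.0]])
for j in range(3):
    theta, lam = ccr_efficiency(X, Y, j)
    print(j, round(theta, 3), lam.round(3))
```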

Relevance: 20.00%

Abstract:

The Multiple Pheromone Ant Clustering Algorithm (MPACA) models the collective behaviour of ants to find clusters in data and to assign objects to the most appropriate class. It is an ant colony optimisation approach that uses pheromones to mark paths linking objects that are similar and potentially members of the same cluster or class. Its novelty is in the way it uses separate pheromones for each descriptive attribute of the object rather than a single pheromone representing the whole object. Ants that encounter other ants frequently enough can combine the attribute values they are detecting, which enables the MPACA to learn influential variable interactions. This paper applies the model to real-world data from two domains. One is logistics, focusing on resource allocation rather than the more traditional vehicle-routing problem. The other is mental-health risk assessment. The task for the MPACA in each domain was to predict class membership where the classes for the logistics domain were the levels of demand on haulage company resources and the mental-health classes were levels of suicide risk. Results on these noisy real-world data were promising, demonstrating the ability of the MPACA to find patterns in the data with accuracy comparable to more traditional linear regression models. © 2013 Polish Information Processing Society.

Relevance: 20.00%

Abstract:

Ant colony optimisation algorithms model the way ants use pheromones for marking paths to important locations in their environment. Pheromone traces are picked up, followed, and reinforced by other ants but also evaporate over time. Optimal paths attract more pheromone and less useful paths fade away. The main innovation of the proposed Multiple Pheromone Ant Clustering Algorithm (MPACA) is to mark objects using many pheromones, one for each value of each attribute describing the objects in multidimensional space. Every object has one or more ants assigned to each attribute value and the ants then try to find other objects with matching values, depositing pheromone traces that link them. Encounters between ants are used to determine when ants should combine their features to look for conjunctions and whether they should belong to the same colony. This paper explains the algorithm and explores its potential effectiveness for cluster analysis. © 2014 Springer International Publishing Switzerland.
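
The published MPACA involves colony membership, movement rules, and encounter statistics that the abstract only outlines, so the sketch below is a deliberately simplified reading of one idea only: a separate pheromone matrix per attribute, reinforced between objects with matching values, evaporating each iteration, with clusters read off as connected components of the strong links. All parameters and names are illustrative assumptions.

```python
import numpy as np

def pheromone_clusters(data, n_iter=500, deposit=1.0, evaporation=0.05,
                       threshold=2.0, rng=None):
    """Simplified per-attribute pheromone clustering: each iteration an
    'ant' picks a random object and attribute, finds another object with
    the same attribute value, and deposits pheromone on the link between
    them; all pheromone then evaporates slightly. Objects joined by
    strong combined pheromone end up in the same cluster."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = data.shape
    tau = np.zeros((d, n, n))          # one pheromone matrix per attribute
    for _ in range(n_iter):
        i, a = rng.integers(n), rng.integers(d)
        matches = np.flatnonzero(data[:, a] == data[i, a])
        matches = matches[matches != i]
        if matches.size:
            j = rng.choice(matches)
            tau[a, i, j] += deposit
            tau[a, j, i] += deposit
        tau *= 1.0 - evaporation       # evaporation on every trail
    strong = tau.sum(axis=0) >= threshold   # combine attribute pheromones
    # Connected components over strong links -> cluster labels (union-find).
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if strong[i, j]:
                parent[find(i)] = find(j)
    return np.array([find(i) for i in range(n)])

# Objects sharing values on both attributes tend to end up linked.
data = np.array([[0, 0], [0, 0], [0, 1], [1, 2], [1, 2], [1, 3]])
print(pheromone_clusters(data, rng=np.random.default_rng(5)))
```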

Relevance: 20.00%

Abstract:

Due to copyright restrictions, this item is only available for consultation at Aston University Library and Information Services with prior arrangement.

Relevance: 20.00%

Abstract:

Biological experiments often produce enormous amounts of data, which are usually analyzed by data clustering. Cluster analysis refers to statistical methods that are used to assign data with similar properties into several smaller, more meaningful groups. Two commonly used clustering techniques are introduced in the following section: principal component analysis (PCA) and hierarchical clustering. PCA calculates the covariance between variables and groups them into a few uncorrelated groups, or principal components (PCs), that are orthogonal to each other. Hierarchical clustering is carried out by initially separating the data into many clusters and successively merging similar clusters together. Here, we use an example of human leukocyte antigen (HLA) supertype classification to demonstrate the usage of the two methods. Two programs, Generating Optimal Linear Partial Least Square Estimations (GOLPE) and Sybyl, are used for PCA and hierarchical clustering, respectively. However, the reader should bear in mind that the methods have been incorporated into other software as well, such as SIMCA, statistiXL, and R.
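
Neither GOLPE nor Sybyl is needed to try the two techniques; as noted, they are available in many packages. A minimal sketch using numpy and scipy on invented data standing in for, say, HLA binding-specificity descriptors:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Toy data: two groups of 10 observations with 5 descriptors each.
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(4, 1, (10, 5))])

# --- PCA via the covariance eigendecomposition ---
Xc = X - X.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigval)[::-1]            # largest variance first
scores = Xc @ eigvec[:, order[:2]]          # project onto first two PCs
print("variance explained:", (eigval[order[:2]] / eigval.sum()).round(2))

# --- Agglomerative hierarchical clustering ---
Z = linkage(X, method="average")            # merge similar clusters
print(fcluster(Z, t=2, criterion="maxclust"))  # cut tree into two groups
```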

Relevance: 20.00%

Abstract:

Aim: To use previously validated image analysis techniques to determine the incremental nature of printed subjective anterior eye grading scales. Methods: A purpose-designed computer program was written to detect edges using a 3 × 3 kernel and to extract colour planes in the selected area of an image. Annunziato and Efron pictorial, and CCLRU and Vistakon-Synoptik photographic, grades of bulbar hyperaemia, palpebral hyperaemia, palpebral roughness, and corneal staining were analysed. Results: The increments of the grading scales were best described by a quadratic rather than a linear function. Edge detection and colour extraction image analysis for bulbar hyperaemia (r² = 0.35-0.99), palpebral hyperaemia (r² = 0.71-0.99), palpebral roughness (r² = 0.30-0.94), and corneal staining (r² = 0.57-0.99) correlated well with scale grades, although the increments varied in magnitude and direction between different scales. Repeated image analysis measures had a 95% confidence interval of between 0.02 (colour extraction) and 0.10 (edge detection) scale units (on a 0-4 scale). Conclusion: The printed grading scales were more sensitive for grading features of low severity, but grades were not comparable between scales. Grading of palpebral hyperaemia and staining is complicated by the variable presentations possible. Image analysis techniques are 6-35 times more repeatable than subjective grading, with a sensitivity of 1.2-2.8% of the scale.
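
The abstract's two ingredients, a 3 × 3 edge-detection kernel and a quadratic (rather than linear) fit of measure against grade, can both be sketched briefly; the Sobel kernel and the made-up grade/measure values are assumptions, not the paper's purpose-designed program or data.

```python
import numpy as np
from scipy.ndimage import convolve

def edge_strength(image):
    """Mean gradient magnitude from 3x3 Sobel kernels, standing in for
    the paper's purpose-designed edge-detection measure."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx, gy = convolve(image, kx), convolve(image, kx.T)
    return np.hypot(gx, gy).mean()

img = np.zeros((32, 32))
img[:, 16:] = 1.0                       # a step edge
print("edge strength:", round(edge_strength(img), 3))

# Compare linear vs quadratic fits of an image measure against grade.
grades = np.array([0, 1, 2, 3, 4], float)
measure = np.array([0.10, 0.18, 0.35, 0.62, 1.00])   # made-up values
for deg in (1, 2):
    coeffs, resid, *_ = np.polyfit(grades, measure, deg, full=True)
    print(deg, "sum-of-squares residual:", float(resid[0]))
```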