961 results for Cluster-model


Relevance: 30.00%

Abstract:

Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
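The two-step procedure just described (fit a g-component mixture, then assign each observation to its highest-posterior component) can be sketched as follows; scikit-learn and the synthetic data are illustrative assumptions, not part of the original work:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two well-separated synthetic clusters of continuous bivariate data
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),
])

# Step 1: fit a normal mixture with g components (here g = 2)
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Step 2: outright clustering by the maximum estimated posterior
# probability of component membership (equivalent to gmm.predict)
posteriors = gmm.predict_proba(X)
labels = posteriors.argmax(axis=1)
```

With well-separated components the posterior assignment is near-deterministic; near cluster boundaries the posteriors quantify the uncertainty of the partition.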

Relevance: 30.00%

Abstract:

We propose an approach to optical quantum computation in which a deterministic entangling quantum gate may be performed using, on average, a few hundred coherently interacting optical elements (beam splitters, phase shifters, single photon sources, and photodetectors with feedforward). This scheme combines ideas from the optical quantum computing proposal of Knill, Laflamme, and Milburn [Nature (London) 409, 46 (2001)], and the abstract cluster-state model of quantum computation proposed by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)].

Relevance: 30.00%

Abstract:

Normal mixture models are often used to cluster continuous data. However, conventional approaches for fitting these models will have problems in producing nonsingular estimates of the component-covariance matrices when the dimension of the observations is large relative to the number of observations. In this case, methods such as principal components analysis (PCA) and the mixture of factor analyzers model can be adopted to avoid these estimation problems. We examine these approaches applied to the Cabernet wine data set of Ashenfelter (1999), considering the clustering of both the wines and the judges, and comparing our results with another analysis. The mixture of factor analyzers model proves particularly effective in clustering the wines, accurately classifying many of the wines by location.
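Of the two remedies named above, the PCA route is the simpler to sketch: project onto a few principal components first, then fit the normal mixture in the reduced space, where the component-covariance estimates are nonsingular. scikit-learn and the toy data are assumptions here; the mixture of factor analyzers instead estimates the low-dimensional structure within each component jointly with the clustering:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# High-dimensional toy data: p = 40 variables but only n = 30 observations,
# so full component-covariance estimates in the original space are singular
rng = np.random.default_rng(1)
n_per, p = 15, 40
shift = np.zeros(p)
shift[:5] = 4.0
X = np.vstack([
    rng.normal(0.0, 1.0, (n_per, p)),
    rng.normal(0.0, 1.0, (n_per, p)) + shift,
])

# Project onto a few principal components, then fit the mixture there
Z = PCA(n_components=3).fit_transform(X)
gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0).fit(Z)
labels = gmm.predict(Z)
```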

Relevance: 30.00%

Abstract:

The demand for palliative care is increasing, yet there are few data on the best models of care and few well-validated interventions that translate current evidence into clinical practice. Supporting multidisciplinary patient-centered palliative care while successfully conducting a large clinical trial is a challenge. The Palliative Care Trial (PCT) is a pragmatic 2 x 2 x 2 factorial cluster randomized controlled trial that tests the ability of educational outreach visiting and case conferencing to improve patient-based outcomes such as performance status and pain intensity. Four hundred sixty-one consenting patients and their general practitioners (GPs) were randomized to the following: (1) GP educational outreach visiting versus usual care, (2) structured patient and caregiver educational outreach visiting versus usual care, and (3) a coordinated palliative care model of case conferencing versus the standard model of palliative care in Adelaide, South Australia (3:1 randomization). Main outcome measures included patient functional status over time, pain intensity, and resource utilization. Participants were followed longitudinally until death or November 30, 2004. The interventions are aimed at translating current evidence into clinical practice, and particular attention was paid in the trial's design to addressing common pitfalls for clinical studies in palliative care. Given the need for evidence about optimal interventions and service delivery models that improve the care of people with life-limiting illness, the results of this rigorous, high-quality clinical trial will inform practice. Initial results are expected in mid 2005. (c) 2005 Elsevier Inc. All rights reserved.

Relevance: 30.00%

Abstract:

This article is a short introduction to and review of the cluster-state model of quantum computation, in which coherent quantum information processing is accomplished via a sequence of single-qubit measurements applied to a fixed quantum state known as a cluster state. We also discuss a few novel properties of the model, including a proof that the cluster state cannot occur as the exact ground state of any naturally occurring physical system, and a proof that measurements on any quantum state which is linearly prepared in one dimension can be efficiently simulated on a classical computer, so that such states are not candidates for use as a substrate for quantum computation.
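The measurement-driven flow of information can be illustrated with a small numpy sketch (not from the article) of the elementary two-qubit building block: entangle an input qubit with a |+> qubit via a controlled-Z gate, then measure the first qubit in the X basis; the second qubit is left carrying H|psi>, up to a known Pauli correction determined by the outcome:

```python
import numpy as np

# Single-qubit objects
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

def measure_qubit1_in_X(state):
    """Project qubit 1 of a two-qubit state onto the X basis, returning the
    outcome bit and the normalized post-measurement state of qubit 2."""
    for m, b in enumerate((plus, minus)):
        amp = b.conj() @ state.reshape(2, 2)  # partial inner product on qubit 1
        prob = np.linalg.norm(amp) ** 2
        if prob > 1e-12:                      # take the first outcome with support
            return m, amp / np.sqrt(prob)

psi = np.array([0.6, 0.8])       # arbitrary input qubit
state = CZ @ np.kron(psi, plus)  # a two-node cluster carrying the input
m, out = measure_qubit1_in_X(state)

# The output qubit carries H|psi>, up to the known Pauli correction X^m
corrected = np.linalg.matrix_power(X, m) @ out
```

Chaining such steps, with each measurement basis possibly adapted to earlier outcomes, is what drives an entire computation through a larger cluster state.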

Relevance: 30.00%

Abstract:

Changes in residential accommodation models for adults with intellectual disability (ID) over the last 20 years in Australia, the United Kingdom and the United States have involved relocation from institutions primarily into dispersed homes in the community, but an evolving alternative service style is the cluster centre. This paper reports on a matched group of 30 pairs of adults with moderate and severe IDs and challenging behaviour who were relocated from an institution into either dispersed housing in the community or cluster centres, but under the same residential service philosophy. Adaptive and maladaptive behaviour, choice-making and objective life quality were assessed prior to leaving the institution and then after 12 and 24 months of living in the new residential model. Adaptive behaviour, choice-making and life quality increased for both groups, and there was no change in the level of maladaptive behaviour compared with levels exhibited in the institution. However, there were some significant differences between the community and cluster centre groups, as the community group increased some adaptive skills, choice-making and objective life quality to a greater extent than the cluster centre group. Both cluster centre and dispersed community living offer lifestyle and skill development advantages compared with opportunities available in large residential institutions. Dispersed community houses, however, offer increased opportunities for choice-making, acquisition of adaptive behaviours and improved life quality for long-term institutionalized adults with IDs.

Relevance: 30.00%

Abstract:

We have discovered nine ultracompact dwarf galaxies (UCDs) in the Virgo Cluster, extending samples of these objects outside the Fornax Cluster. Using the Two Degree Field (2dF) multifiber spectrograph on the Anglo-Australian Telescope, the new Virgo members were found among 1500 color-selected, starlike targets with 16.0 < b(j) < 20.2 in a 2 degree diameter field centered on M87 (NGC 4486). The newly found UCDs are comparable to the UCDs in the Fornax Cluster, with sizes less than or similar to 100 pc, -12.9 < M-B < -10.7, and exhibiting red absorption-line spectra, indicative of an older stellar population. The properties of these objects remain consistent with the tidal threshing model for the origin of UCDs from the surviving nuclei of nucleated dwarf elliptical galaxies disrupted in the cluster core, but can also be explained as objects that were formed by mergers of star clusters created in galaxy interactions. The discovery that UCDs exist in Virgo shows that this galaxy type is probably a ubiquitous phenomenon in clusters of galaxies; coupled with their possible origin by tidal threshing, the UCD population is a potential indicator and probe of the formation history of a given cluster. We also describe one additional bright UCD with M-B = -12.0 in the core of the Fornax Cluster. We find no further UCDs in our Fornax Cluster Spectroscopic Survey down to b(j) = 19.5 in two additional 2dF fields extending as far as 3 degrees from the center of the cluster. All six bright Fornax UCDs identified with 2dF lie within 0.5 degrees (a projected distance of 170 kpc) of the central elliptical galaxy NGC 1399.

Relevance: 30.00%

Abstract:

In this paper we carry out a detailed numerical investigation of the fault-tolerant threshold for optical cluster-state quantum computation. Our noise model allows both photon loss and depolarizing noise, the latter as a general proxy for all types of local noise other than photon loss. We obtain a threshold region of allowed pairs of values for the two types of noise. Roughly speaking, our results show that scalable optical quantum computing is possible in the combined presence of both noise types, provided that the loss probability is less than 3 x 10^-3 and the depolarization probability is less than 10^-4. Our fault-tolerant protocol involves a number of innovations, including a method for syndrome extraction known as telecorrection, whereby repeated syndrome measurements are guaranteed to agree. This paper is an extended version of Dawson.

Relevance: 30.00%

Abstract:

We present the analysis of the spectroscopic and photometric catalogues of 11 X-ray luminous clusters at 0.07 < z < 0.16 from the Las Campanas/Anglo-Australian Telescope Rich Cluster Survey. Our spectroscopic data set consists of over 1600 galaxy cluster members, of which two-thirds are outside r(200). These spectra allow us to assign cluster membership using a detailed mass model and expand on our previous work on the cluster colour-magnitude relation (CMR), where membership was inferred statistically. We confirm that the modal colours of galaxies on the CMR become progressively bluer with increasing radius, d(B - R)/dr(p) = -0.011 +/- 0.003, and with decreasing local galaxy density, d(B - R)/dlog(Sigma) = -0.062 +/- 0.009. Interpreted as an age effect, we hypothesize that these trends in galaxy colour should be reflected in mean H delta equivalent width. We confirm that passive galaxies in the cluster increase in H delta line strength as dH delta/dr(p) = 0.35 +/- 0.06. Therefore, those galaxies in the cluster outskirts may have younger luminosity-weighted stellar populations, up to 3 Gyr younger than those in the cluster centre assuming d(B - R)/dt = 0.03 mag per Gyr. A variation of star formation rate, as measured by [O II] lambda 3727 Angstrom, with increasing local density of the environment is discernible and is shown to be in broad agreement with previous studies from the 2dF Galaxy Redshift Survey and the Sloan Digital Sky Survey. We divide our spectra into a variety of types based upon the MORPHs classification scheme. We find that clusters at z ~ 0.1 are less active than their higher-redshift analogues: about 60 per cent of the cluster galaxy population is non-star forming, with a further 20 per cent in the post-starburst class and 20 per cent in the currently active class, demonstrating that evolution is visible within the past 2-3 Gyr.
We also investigate unusual populations of blue and very red non-star forming galaxies, and we suggest that the former are likely to be the progenitors of galaxies which will lie on the CMR, while the colours of the latter possibly reflect dust reddening. We show that the cluster galaxies at large radii consist of both backsplash galaxies and those that are infalling to the cluster for the first time. We make a comparison to the field population at z ~ 0.1 and examine the broad differences between the two populations. Individually, the clusters show significant variation in their galaxy populations, which we suggest reflects their recent infall histories.

Relevance: 30.00%

Abstract:

We describe a generalization of the cluster-state model of quantum computation to continuous-variable systems, along with a proposal for an optical implementation using squeezed-light sources, linear optics, and homodyne detection. For universal quantum computation, a nonlinear element is required. This can be satisfied by adding to the toolbox any single-mode non-Gaussian measurement, while the initial cluster state itself remains Gaussian. Homodyne detection alone suffices to perform an arbitrary multimode Gaussian transformation via the cluster state. We also propose an experiment to demonstrate cluster-based error reduction when implementing Gaussian operations.

Relevance: 30.00%

Abstract:

Despite the insight gained from 2-D particle models, and given that the dynamics of crustal faults occur in 3-D space, the question remains, how do the 3-D fault gouge dynamics differ from those in 2-D? Traditionally, 2-D modeling has been preferred over 3-D simulations because of the computational cost of solving 3-D problems. However, modern high performance computing architectures, combined with a parallel implementation of the Lattice Solid Model (LSM), provide the opportunity to explore 3-D fault micro-mechanics and to advance understanding of effective constitutive relations of fault gouge layers. In this paper, macroscopic friction values from 2-D and 3-D LSM simulations, performed on an SGI Altix 3700 super-cluster, are compared. Two rectangular elastic blocks of bonded particles, with a rough fault plane and separated by a region of randomly sized non-bonded gouge particles, are sheared in opposite directions by normally-loaded driving plates. The results demonstrate that the gouge particles in the 3-D models undergo significant out-of-plane motion during shear. The 3-D models also exhibit a higher mean macroscopic friction than the 2-D models for varying values of interparticle friction. 2-D LSM gouge models have previously been shown to exhibit accelerating energy release in simulated earthquake cycles, supporting the Critical Point hypothesis. The 3-D models are shown to also display accelerating energy release, and good fits of power law time-to-failure functions to the cumulative energy release are obtained.
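The power-law time-to-failure fit mentioned at the end can be illustrated on synthetic data. The functional form E(t) = A + B(t_f - t)^m, with m < 1, is the standard accelerating-release model; the parameter values and the noise level below are invented for the sketch:

```python
import numpy as np
from scipy.optimize import curve_fit

# Power-law time-to-failure form for cumulative energy release:
# E(t) = A + B * (t_f - t)^m, accelerating as t approaches failure time t_f
def ttf(t, A, B, tf, m):
    return A + B * np.abs(tf - t) ** m

# Synthetic cumulative-energy record accelerating toward failure at t_f = 10
t = np.linspace(0.0, 9.5, 200)
E = ttf(t, 100.0, -30.0, 10.0, 0.3) + np.random.default_rng(2).normal(0, 0.2, t.size)

# Fit the four parameters from the observed portion of the record
popt, _ = curve_fit(ttf, t, E, p0=[90.0, -20.0, 10.5, 0.5])
A_fit, B_fit, tf_fit, m_fit = popt
```

A good fit of this form to the simulated cumulative energy release is what supports the Critical Point interpretation in the gouge models.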

Relevance: 30.00%

Abstract:

This paper considers a model-based approach to the clustering of tissue samples of a very large number of genes from microarray experiments. It is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. Frequently in practice, there are also clinical data available on those cases on which the tissue samples have been obtained. Here we investigate how to use the clinical data in conjunction with the microarray gene expression data to cluster the tissue samples. We propose two mixture model-based approaches in which the number of components in the mixture model corresponds to the number of clusters to be imposed on the tissue samples. One approach specifies the components of the mixture model to be the conditional distributions of the microarray data given the clinical data with the mixing proportions also conditioned on the latter data. Another takes the components of the mixture model to represent the joint distributions of the clinical and microarray data. The approaches are demonstrated on some breast cancer data, as studied recently in van't Veer et al. (2002).
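The second ("joint distribution") formulation can be caricatured as fitting a single mixture to the concatenated clinical and expression features. The sketch below uses scikit-learn, invented data, and diagonal covariances as simplifications; it is not the authors' estimation procedure:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical data: a couple of clinical variables alongside a
# (dimension-reduced) block of gene expressions, two tissue classes
rng = np.random.default_rng(3)
clin_a, genes_a = rng.normal(0.0, 1.0, (40, 2)), rng.normal(0.0, 1.0, (40, 10))
clin_b, genes_b = rng.normal(3.0, 1.0, (40, 2)), rng.normal(2.0, 1.0, (40, 10))
joint = np.vstack([
    np.hstack([clin_a, genes_a]),
    np.hstack([clin_b, genes_b]),
])

# One mixture over the concatenated features models their joint distribution;
# diagonal covariances keep the parameter count manageable
gmm = GaussianMixture(n_components=2, covariance_type='diag', random_state=0).fit(joint)
labels = gmm.predict(joint)
```

The first formulation differs in that the expression components are conditioned on the clinical variables, with the mixing proportions also depending on them.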

Relevance: 30.00%

Abstract:

There has been increasing demand for characterizing user access patterns using web mining techniques, since the informative knowledge extracted from web server log files can offer benefits not only for web site structure improvement but also for better understanding of user navigational behavior. In this paper, we present a web usage mining method which utilizes web usage and page linkage information to capture user access patterns, based on the Probabilistic Latent Semantic Analysis (PLSA) model. A specific probabilistic analysis algorithm, the EM algorithm, is applied to the integrated usage data to infer the latent semantic factors as well as to generate user session clusters that reveal user access patterns. Experiments have been conducted on a real-world data set to validate the effectiveness of the proposed approach. The results show that the presented method is capable of characterizing the latent semantic factors and generating user profiles in terms of weighted page vectors, which may reflect the common access interests exhibited by users within the same session cluster.
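The EM recursion for PLSA can be sketched compactly on a session-by-page count matrix; the variable names and the toy data below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def plsa(N, n_factors, n_iter=50, seed=0):
    """EM for Probabilistic Latent Semantic Analysis on a session-by-page
    count matrix N, factorizing P(s, p) = sum_z P(z) P(s|z) P(p|z)."""
    rng = np.random.default_rng(seed)
    n_s, n_p = N.shape
    Pz = np.full(n_factors, 1.0 / n_factors)  # P(z)
    Ps_z = rng.random((n_factors, n_s))       # P(session|z)
    Ps_z /= Ps_z.sum(1, keepdims=True)
    Pp_z = rng.random((n_factors, n_p))       # P(page|z)
    Pp_z /= Pp_z.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: posterior P(z|s,p) over the latent factors
        joint = Pz[:, None, None] * Ps_z[:, :, None] * Pp_z[:, None, :]
        post = joint / joint.sum(0, keepdims=True).clip(1e-12)
        # M-step: re-estimate all distributions from expected counts
        C = N[None, :, :] * post
        Pz, Ps_z, Pp_z = C.sum((1, 2)), C.sum(2), C.sum(1)
        Ps_z /= Ps_z.sum(1, keepdims=True).clip(1e-12)
        Pp_z /= Pp_z.sum(1, keepdims=True).clip(1e-12)
        Pz /= Pz.sum()
    return Pz, Ps_z, Pp_z

# Toy usage: two groups of sessions visiting disjoint page sets
N = np.array([[5, 5, 0, 0]] * 3 + [[0, 0, 5, 5]] * 3, dtype=float)
Pz, Ps_z, Pp_z = plsa(N, n_factors=2)
# Session clusters via the dominant latent factor: P(z|s) prop. to P(z) P(s|z)
session_factor = (Pz[:, None] * Ps_z).argmax(axis=0)
```

The rows of Pp_z play the role of the weighted page vectors: each latent factor is characterized by the pages it assigns high probability to.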

Relevance: 30.00%

Abstract:

Finite mixture models are being increasingly used to model the distributions of a wide variety of random phenomena. While normal mixture models are often used to cluster data sets of continuous multivariate data, a more robust clustering can be obtained by considering the t mixture model-based approach. Mixtures of factor analyzers enable model-based density estimation to be undertaken for high-dimensional data where the number of observations n is not very large relative to their dimension p. As the approach using the multivariate normal family of distributions is sensitive to outliers, it is more robust to adopt the multivariate t family for the component error and factor distributions. The computational aspects associated with robustness and high dimensionality in these approaches to cluster analysis are discussed and illustrated.
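The robustness mechanism is visible in the EM updates for a t mixture: each observation receives a weight u = (nu + p)/(nu + delta) that shrinks with its Mahalanobis distance delta, so outliers contribute little to the means and covariances. A self-contained numpy sketch with the degrees of freedom nu held fixed (a simplification: the full approach also estimates nu, and the data are invented):

```python
import numpy as np

def t_mixture_em(X, g=2, nu=4.0, n_iter=100, seed=0):
    """EM for a g-component multivariate t mixture with fixed
    degrees of freedom nu."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pi = np.full(g, 1.0 / g)
    mu = X[rng.choice(n, size=g, replace=False)].astype(float)
    Sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(p) for _ in range(g)])
    for _ in range(n_iter):
        dens = np.empty((n, g))
        delta = np.empty((n, g))
        for j in range(g):
            diff = X - mu[j]
            inv = np.linalg.inv(Sigma[j])
            delta[:, j] = np.einsum('ij,jk,ik->i', diff, inv, diff)
            # multivariate t density up to constants shared by all components
            dens[:, j] = pi[j] * np.linalg.det(Sigma[j]) ** -0.5 \
                * (1.0 + delta[:, j] / nu) ** (-(nu + p) / 2.0)
        tau = dens / dens.sum(1, keepdims=True)  # component posteriors
        u = (nu + p) / (nu + delta)              # downweights outlying points
        w = tau * u
        pi = tau.mean(0)
        for j in range(g):
            mu[j] = (w[:, j, None] * X).sum(0) / w[:, j].sum()
            diff = X - mu[j]
            Sigma[j] = (w[:, j, None, None]
                        * (diff[:, :, None] * diff[:, None, :])).sum(0) \
                / tau[:, j].sum() + 1e-6 * np.eye(p)
    return pi, mu, Sigma, tau

# Two clusters plus gross outliers that would distort a normal-mixture fit
rng = np.random.default_rng(7)
Y = np.vstack([
    rng.normal(0.0, 1.0, (100, 2)),
    rng.normal(6.0, 1.0, (100, 2)),
    rng.uniform(25.0, 35.0, (10, 2)),  # outliers
])
pi, mu, Sigma, tau = t_mixture_em(Y, g=2)
```

Setting nu large recovers essentially the normal mixture; small nu gives heavier tails and hence stronger downweighting of outliers.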