9 results for Fuzzy K Nearest Neighbor

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo


Relevance: 100.00%

Abstract:

We consider general d-dimensional lattice ferromagnetic spin systems with nearest-neighbor interactions in the high-temperature region (β ≪ 1). Each model is characterized by a single-site a priori spin distribution, taken to be even. We also take the parameter α = ⟨S⁴⟩ − 3⟨S²⟩² > 0, i.e. in the region which we call Gaussian subjugation, where ⟨Sᵏ⟩ denotes the kth moment of the a priori distribution. Associated with the model is a lattice quantum field theory known to contain a particle of asymptotic mass −ln β and a bound state below the two-particle threshold. We develop a β-analytic perturbation theory for the binding energy of this bound state. As a key ingredient in obtaining our result, we show that the Fourier transform of the two-point function is a meromorphic function, with a simple pole, in a suitable complex spectral parameter, and that the coefficients of its Laurent expansion are analytic in β.
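The Gaussian-subjugation condition α = ⟨S⁴⟩ − 3⟨S²⟩² > 0 is concrete enough to check for specific even a priori distributions. A minimal sketch (plain Python; the function name and the example distributions are our illustrative choices, not from the paper): the Ising ±1 distribution gives α = −2, outside the region, a Gaussian sits exactly on the boundary α = 0, and a Laplace distribution lies inside it.

```python
# Sketch: evaluate alpha = <S^4> - 3<S^2>^2 for some even single-site
# a priori spin distributions (illustrative example; names are ours).

def alpha(m2, m4):
    """Gaussian-subjugation parameter from the 2nd and 4th moments."""
    return m4 - 3.0 * m2 ** 2

# Ising: S = +/-1 with equal weight -> <S^2> = <S^4> = 1
alpha_ising = alpha(1.0, 1.0)            # -2.0: not Gaussian-subjugated

# Centered Gaussian, variance s2: <S^2> = s2, <S^4> = 3 s2^2
alpha_gauss = alpha(0.5, 3 * 0.5 ** 2)   # 0.0: boundary case

# Laplace, scale b: <S^2> = 2 b^2, <S^4> = 24 b^4 -> alpha = 12 b^4 > 0
b = 1.0
alpha_laplace = alpha(2 * b ** 2, 24 * b ** 4)  # 12.0: inside the region
```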

Relevance: 100.00%

Abstract:

It is a well-established fact that statistical properties of energy-level spectra are the most efficient tool to characterize nonintegrable quantum systems. The statistical behavior of different systems such as complex atoms, atomic nuclei, two-dimensional Hamiltonians, quantum billiards, and noninteracting many-boson systems has been studied. This has motivated interest in the statistical properties and spectral fluctuations of interacting many-boson systems. We are especially interested in weakly interacting trapped bosons in the context of Bose-Einstein condensation (BEC), as the energy spectrum shows a transition from a collective nature to a single-particle nature with an increase in the number of levels. However, this has received less attention, as it is believed that the system may exhibit Poisson-like fluctuations due to the existence of an external harmonic trap. Here we compute numerically the energy levels of zero-temperature many-boson systems which interact weakly through the van der Waals potential and are confined in a three-dimensional harmonic potential. We study the nearest-neighbor spacing distribution P(s) and the spectral rigidity by unfolding the spectrum. It is found that an increase in the number of energy levels for repulsive BEC induces a transition of P(s) from a Wigner-like form displaying level repulsion to the Poisson distribution; it does not follow the Gaussian orthogonal ensemble prediction. For repulsive interaction, the lower levels are correlated and manifest level repulsion. For intermediate levels P(s) shows mixed statistics, which clearly signifies the existence of two energy scales, the external trap and the interatomic interaction, whereas for very high levels the trapping potential dominates, generating a Poisson distribution. A comparison with mean-field results for the lower levels is also presented.
For attractive BEC near the critical point we observe a Shnirelman-like peak near s = 0, which signifies the presence of a large number of quasidegenerate states.
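The two reference distributions contrasted in this abstract have simple closed forms: the GOE Wigner surmise P(s) = (π/2) s exp(−πs²/4), which vanishes at s = 0 (level repulsion), and the Poisson form P(s) = exp(−s), which peaks there. A small stdlib-Python sketch of these, plus the crude "rescale spacings to unit mean" step of unfolding (function names are ours; real unfolding uses a smoothed level staircase):

```python
import math
import random

def wigner_surmise(s):
    """GOE Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4)."""
    return (math.pi / 2.0) * s * math.exp(-math.pi * s ** 2 / 4.0)

def poisson_spacing(s):
    """Poisson spacing distribution P(s) = exp(-s)."""
    return math.exp(-s)

def normalized_spacings(levels):
    """Nearest-neighbor spacings rescaled to unit mean (a crude stand-in
    for unfolding; real unfolding uses a smoothed level staircase)."""
    e = sorted(levels)
    gaps = [b - a for a, b in zip(e, e[1:])]
    mean = sum(gaps) / len(gaps)
    return [g / mean for g in gaps]

random.seed(0)
# An uncorrelated ("Poisson-like") spectrum: i.i.d. uniform levels
spacings = normalized_spacings([random.random() for _ in range(2000)])
```

Note how wigner_surmise(s) < poisson_spacing(s) for small s: the suppression of small spacings is exactly the level repulsion the abstract refers to.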

Relevance: 100.00%

Abstract:

The present paper has two goals. The first is to present a natural example of a new class of random fields, the variable-neighborhood random fields. The example we consider is a partially observed nearest-neighbor binary Markov random field. The second goal is to establish sufficient conditions ensuring that the variable neighborhoods are almost surely finite. We discuss the relationship between the almost sure finiteness of the interaction neighborhoods and the presence or absence of phase transition in the underlying Markov random field. In the case where the underlying random field has no phase transition, we show that the finiteness of neighborhoods depends on a specific relation between the noise level and the minimum values of the one-point specification of the Markov random field. The case in which there is a phase transition is addressed in the framework of the ferromagnetic Ising model. We prove that the existence of infinite interaction neighborhoods depends on the phase.

Relevance: 100.00%

Abstract:

Two versions of the threshold contact process, ordinary and conservative, are studied on a square lattice. In the first, particles are created on active sites, those having at least two nearest-neighbor sites occupied, and are annihilated spontaneously. In the conservative version, a particle jumps from its site to an active site. Mean-field analysis suggests the existence of a first-order phase transition, which is confirmed by Monte Carlo simulations. In the thermodynamic limit, the two versions are found to give the same results. (C) 2012 Elsevier B.V. All rights reserved.
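The ordinary version described above lends itself to a short random-sequential-update Monte Carlo sketch: a site is active when at least two of its four nearest neighbors are occupied, creation happens only on active sites, and annihilation is spontaneous. The rates, update rule, and lattice size below are our simplifying assumptions for illustration, not the paper's simulation protocol.

```python
import random

def step(grid, L, create_rate=1.0, annihilate_rate=0.5):
    """One random-sequential update of the ordinary threshold contact process
    on an L x L periodic lattice (0 = empty, 1 = occupied). Illustrative only;
    rates and update scheme are our simplifying assumptions."""
    i, j = random.randrange(L), random.randrange(L)
    occ_nn = (grid[(i - 1) % L][j] + grid[(i + 1) % L][j]
              + grid[i][(j - 1) % L] + grid[i][(j + 1) % L])
    total = create_rate + annihilate_rate
    if grid[i][j] == 1:
        if random.random() < annihilate_rate / total:
            grid[i][j] = 0          # spontaneous annihilation
    elif occ_nn >= 2:               # active site: >= 2 occupied neighbors
        if random.random() < create_rate / total:
            grid[i][j] = 1          # creation on an active site

random.seed(1)
L = 16
grid = [[1] * L for _ in range(L)]  # start fully occupied
for _ in range(20000):
    step(grid, L)
density = sum(map(sum, grid)) / (L * L)
```

Sweeping the annihilation rate in such a sketch is the usual way to look for the discontinuous jump in the stationary density that signals a first-order transition.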

Relevance: 100.00%

Abstract:

We have performed an ab initio theoretical investigation of substitutional Mn(Zn) atoms in planar structures of ZnO, viz., monolayer [(ZnO)₁] and bilayer [(ZnO)₂] systems. Due to 2-D quantum confinement effects, in these Mn-doped (ZnO)₁ and (ZnO)₂ structures the antiferromagnetic (AFM) coupling between (nearest-neighbor) Mn(Zn) impurities is strengthened compared with that in bulk ZnO. On the other hand, we find that the magnetic state of these systems can be tuned from AFM to ferromagnetic (FM) by adding holes, which can be supplied by p-type doping or even photoionization processes. In contrast, upon addition of electrons (n-type doping), the system keeps its AFM configuration.

Relevance: 100.00%

Abstract:

We consider an interacting particle system representing the spread of a rumor by agents on the d-dimensional integer lattice. Each agent may be in one of the three states belonging to the set {0, 1, 2}: here 0 stands for ignorants, 1 for spreaders, and 2 for stiflers. A spreader tells the rumor to any of its (nearest) ignorant neighbors at rate λ. At rate α a spreader becomes a stifler due to the action of other (nearest-neighbor) spreaders. Finally, spreaders and stiflers forget the rumor at rate one. We study sufficient conditions under which the rumor either becomes extinct or survives with positive probability.
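The three transition rules above can be sketched as a crude discrete-time lattice simulation. Everything below is our illustrative reading of the abstract, not the authors' construction: in particular, we interpret "forgetting" as returning an agent to the ignorant state 0, and we approximate the continuous-time rates with small-probability discrete updates.

```python
import random

def rumor_step(state, L, lam=2.0, alpha=1.0, forget=1.0, dt=0.05):
    """One crude discrete-time sweep of the lattice rumor model on an L x L
    torus: 0 = ignorant, 1 = spreader, 2 = stifler. Rates follow the abstract;
    'forgetting' sending agents back to ignorant is our reading. Sketch only."""
    new = [row[:] for row in state]
    for i in range(L):
        for j in range(L):
            nbrs = [state[(i - 1) % L][j], state[(i + 1) % L][j],
                    state[i][(j - 1) % L], state[i][(j + 1) % L]]
            s = state[i][j]
            if s == 0:
                # each neighboring spreader transmits at rate lam
                k = nbrs.count(1)
                if k and random.random() < 1.0 - (1.0 - lam * dt) ** k:
                    new[i][j] = 1
            elif s == 1:
                k = nbrs.count(1)
                if random.random() < alpha * k * dt:
                    new[i][j] = 2       # stifled by neighboring spreaders
                elif random.random() < forget * dt:
                    new[i][j] = 0       # forgets the rumor
            else:                       # stifler
                if random.random() < forget * dt:
                    new[i][j] = 0
    return new

random.seed(2)
L = 12
state = [[0] * L for _ in range(L)]
state[L // 2][L // 2] = 1               # a single initial spreader
for _ in range(50):
    state = rumor_step(state, L)
```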

Relevance: 100.00%

Abstract:

We present and describe a catalog of galaxy photometric redshifts (photo-z) for the Sloan Digital Sky Survey (SDSS) Co-add Data. We use the artificial neural network (ANN) technique to calculate the photo-z and the nearest-neighbor error method to estimate photo-z errors for ~13 million objects classified as galaxies in the co-add with r < 24.5. The photo-z and photo-z error estimators are trained and validated on a sample of ~83,000 galaxies that have SDSS photometry and spectroscopic redshifts measured by the SDSS Data Release 7 (DR7), the Canadian Network for Observational Cosmology Field Galaxy Survey, the Deep Extragalactic Evolutionary Probe Data Release 3, the VIsible imaging Multi-Object Spectrograph-Very Large Telescope Deep Survey, and the WiggleZ Dark Energy Survey. For the best ANN methods we have tried, we find that 68% of the galaxies in the validation set have a photo-z error smaller than σ₆₈ = 0.031. After presenting our results and quality tests, we provide a short guide for users accessing the public data.
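One common reading of a nearest-neighbor error method is: for each target object, find its k nearest neighbors in magnitude space among the spectroscopic training set and report a percentile of their photo-z residuals as the error estimate. A stdlib-Python sketch under that reading; the value of k, the Euclidean metric, and the tiny synthetic data are our choices, not the catalog's actual procedure.

```python
def nn_photoz_error(target_mags, train_mags, train_dz, k=5):
    """Estimate a photo-z error for one object as the 68th percentile of
    |z_phot - z_spec| over its k nearest training-set neighbors in magnitude
    space (Euclidean). A sketch of the idea; k and the metric are our choices."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    order = sorted(range(len(train_mags)),
                   key=lambda i: dist2(target_mags, train_mags[i]))
    errs = sorted(abs(train_dz[i]) for i in order[:k])
    idx = min(int(0.68 * k), k - 1)   # crude 68th-percentile index
    return errs[idx]

# Tiny synthetic example: 3 "bands" per galaxy, known residuals dz
train_mags = [(20.0, 19.5, 19.0), (20.1, 19.6, 19.1),
              (22.0, 21.5, 21.0), (22.1, 21.6, 21.1), (20.2, 19.7, 19.2)]
train_dz = [0.01, 0.02, 0.20, 0.25, 0.03]
sigma68 = nn_photoz_error((20.05, 19.55, 19.05), train_mags, train_dz, k=3)
```

The appeal of this scheme is that the error estimate adapts to the local density and quality of the training set in color-magnitude space, rather than being a single global number.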

Relevance: 100.00%

Abstract:

The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While dozens of classification algorithms have been applied to time series, recent empirical evidence strongly suggests that simple nearest-neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest-neighbor algorithm is important, and depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping, and cardiology data requires invariance to the baseline (the mean value). Similarly, recent work suggests that for time series clustering, the choice of clustering algorithm is much less important than the choice of distance measure used.

In this work we make a somewhat surprising claim: there is an invariance that the community seems to have missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest-neighbor classification, where some complex objects may be incorrectly assigned to a simpler class. Similarly, for clustering this effect can introduce errors by "suggesting" to the clustering algorithm that subjectively similar but complex objects belong in a sparser and larger-diameter cluster than is truly warranted.

We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification and clustering accuracy. We further show that this improvement does not compromise efficiency, since we can lower-bound the measure and use a modification of the triangular inequality, thus making use of most existing indexing and data mining algorithms.
We evaluate our ideas with the largest and most comprehensive set of time series mining experiments ever attempted in a single work, and show that complexity-invariant distance measures can produce improvements in classification and clustering in the vast majority of cases.
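The idea of a complexity-invariant distance can be sketched compactly: scale the ordinary Euclidean distance by the ratio of the two series' complexity estimates, where complexity is measured as the length of the series' difference curve. The structure below follows that description, but the exact implementation details (zero-complexity guard, function names) are ours.

```python
import math

def complexity_estimate(t):
    """CE(T): a length-like complexity, the sqrt of summed squared
    successive differences. A jagged series gets a larger CE."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(t, t[1:])))

def euclidean(q, c):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(q, c)))

def cid(q, c):
    """Complexity-invariant distance: Euclidean distance times the ratio of
    the larger to the smaller complexity estimate (a factor >= 1, so pairs
    of unequal complexity are pushed apart less unfairly)."""
    ce_q, ce_c = complexity_estimate(q), complexity_estimate(c)
    hi, lo = max(ce_q, ce_c), min(ce_q, ce_c)
    factor = hi / lo if lo > 0 else 1.0   # our guard for flat series
    return euclidean(q, c) * factor

smooth = [0.0, 1.0, 2.0, 3.0]      # CE = sqrt(3)
jagged = [0.0, 2.0, 0.0, 2.0]      # CE = 2*sqrt(3), twice as "complex"
d = cid(smooth, jagged)            # Euclidean sqrt(6), scaled by factor 2
```

Because the correction factor is always at least 1, CID never shrinks a distance below the Euclidean one; it only inflates distances between series of mismatched complexity.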

Relevance: 30.00%

Abstract:

A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC(max) algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ_∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best-known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that a minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q.
Thus, any algorithm GC(sum) solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q alone is not enough to deduce that). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. This concentrates on comparing the actual (as opposed to provable worst-scenario) running times of the algorithms, as well as the influence of the choice of seeds on the output.
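The ‖F_P‖_∞ energy penalizes only the single heaviest boundary edge, which is the "bottleneck" flavor shared by the fuzzy-connectedness family. As an illustration of that flavor only, and emphatically not the authors' GC(max) algorithm, here is a Dijkstra-style sketch computing, for every node, the smallest achievable maximum edge weight over paths from a seed (the graph and weights are our toy example):

```python
import heapq

def minimax_costs(n, edges, seed):
    """For each node, the minimal possible maximum edge weight over paths
    from seed: a Dijkstra variant whose path cost is the max edge on the
    path. Illustrates the bottleneck character of the ||F_P||_inf energy;
    this is NOT the linear-time GC(max) algorithm of the paper."""
    adj = {u: [] for u in range(n)}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    best = {seed: 0}
    heap = [(0, seed)]
    while heap:
        cost, u = heapq.heappop(heap)
        if cost > best.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj[u]:
            new = max(cost, w)            # path cost = heaviest edge so far
            if new < best.get(v, float("inf")):
                best[v] = new
                heapq.heappush(heap, (new, v))
    return best

# Toy graph: the direct 0-2 edge (weight 9) is beaten by the path 0-1-2,
# whose heaviest edge is only 5.
edges = [(0, 1, 5), (1, 2, 3), (0, 2, 9), (2, 3, 2)]
best = minimax_costs(4, edges, seed=0)
```

The same sketch also makes the w → w^q remark above tangible: raising all weights to a power q does not change which edge on a path is heaviest, so the minimax structure is preserved while the summed (‖·‖_1) energies change.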