895 results for discriminant analysis and cluster analysis
Abstract:
Most existing color-based tracking algorithms use the statistical color information of the object as the tracking cue, without preserving the spatial structure within a single chromatic image. Recently, research on multilinear algebra has made it possible to retain the spatial structural relationships in a representation of image ensembles. In this paper, a third-order color tensor is constructed to represent the object to be tracked. Considering the influence of environmental changes on tracking, biased discriminant analysis (BDA) is extended to tensor biased discriminant analysis (TBDA) for distinguishing the object from the background. At the same time, an incremental scheme for TBDA is developed for online learning of the tensor biased discriminant subspace, which can adapt to appearance variations of both the object and the background. The experimental results show that the proposed method can precisely track objects undergoing large pose, scale, and lighting changes, as well as partial occlusion. © 2009 Elsevier B.V.
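Incremental subspace schemes of this kind rest on running updates of class statistics. As an illustrative sketch only (a generic Welford-style online mean/scatter update, not the paper's TBDA update rule), the statistical building block looks like this:

```python
# Minimal sketch of online mean/scatter updates (Welford's algorithm),
# the statistical core of incremental subspace-learning schemes such as
# incremental LDA/BDA. Illustrative only; not the TBDA update itself.

class OnlineStats:
    """Running mean and (unnormalized) scatter of d-dimensional samples."""
    def __init__(self, dim):
        self.n = 0
        self.mean = [0.0] * dim
        # Unnormalized scatter matrix, stored as a list of rows.
        self.scatter = [[0.0] * dim for _ in range(dim)]

    def update(self, x):
        self.n += 1
        delta = [xi - mi for xi, mi in zip(x, self.mean)]        # x - old mean
        self.mean = [mi + di / self.n for mi, di in zip(self.mean, delta)]
        delta2 = [xi - mi for xi, mi in zip(x, self.mean)]       # x - new mean
        for i in range(len(x)):
            for j in range(len(x)):
                self.scatter[i][j] += delta[i] * delta2[j]

stats = OnlineStats(2)
for x in [[1.0, 2.0], [3.0, 0.0], [5.0, 4.0]]:
    stats.update(x)
print(stats.mean)     # [3.0, 2.0]
print(stats.scatter)  # matches the batch scatter of the same samples
```

Each new frame updates the statistics in O(d^2) without revisiting past samples, which is what makes online adaptation to appearance changes feasible.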
Abstract:
2000 Mathematics Subject Classification: 62H30, 62P99
Abstract:
2000 Mathematics Subject Classification: 62-04, 62H30, 62J20
Abstract:
2000 Mathematics Subject Classification: 62H30, 62J20, 62P12, 68T99
Abstract:
We discuss the properties of homogeneous and isotropic flat cosmologies in which the present accelerating stage is powered only by the gravitationally induced creation of cold dark matter (CCDM) particles (Ω_m = 1). For some matter creation rates proposed in the literature, we show that the main cosmological functions, such as the scale factor of the universe, the Hubble expansion rate, the growth factor, and the cluster formation rate, are analytically defined. The best CCDM scenario has only one free parameter, and our joint analysis involving baryonic acoustic oscillations + cosmic microwave background (CMB) + SNe Ia data yields Ω̃_m = 0.28 ± 0.01 (1σ), where Ω̃_m is the observed matter density parameter. In particular, this implies that the model has no dark energy, but the part of the matter that is effectively clustering is in good agreement with the latest determinations from the large-scale structure. The growth of perturbations and the formation of galaxy clusters in such scenarios are also investigated. Despite the fact that both scenarios may share the same Hubble expansion, we find that matter creation cosmologies predict stronger small-scale dynamics, which implies a faster growth rate of perturbations with respect to the usual ΛCDM cosmology. Such results point to the possibility of a crucial observational test confronting CCDM with ΛCDM scenarios through a more detailed analysis involving CMB, weak lensing, as well as the large-scale structure.
Abstract:
We present a theory which permits for the first time a detailed analysis of the dependence of the absorption spectrum on atomic structure and cluster size. Thus, we determine the development of the collective excitations in small clusters and show that their broadening depends sensitively on the atomic structure, in particular at the surface. Results for Hg_n^+ clusters show that the plasmon energy is close to its jellium value in the case of spherical-like structures, but is in general between ω_p/√3 and ω_p/√2 for compact clusters. A particular success of our theory is the identification of the excitations contributing to the absorption peaks.
Abstract:
On 14 January 2001, the four Cluster spacecraft passed through the northern magnetospheric mantle in close conjunction to the EISCAT Svalbard Radar (ESR) and approached the post-noon dayside magnetopause over Greenland between 13:00 and 14:00 UT. During that interval, a sudden reorganisation of the high-latitude dayside convection pattern occurred after 13:20 UT, most likely caused by a direction change of the solar wind magnetic field. The result was an eastward and poleward directed flow-channel, as monitored by the SuperDARN radar network and also by arrays of ground-based magnetometers in Canada, Greenland and Scandinavia. After an initial eastward and later poleward expansion of the flow-channel between 13:20 and 13:40 UT, the four Cluster spacecraft, and the field line footprints covered by the eastward looking scan cycle of the Sondre Stromfjord incoherent scatter radar, were engulfed by cusp-like precipitation with transient magnetic and electric field signatures. In addition, the EISCAT Svalbard Radar detected strong transient effects of the convection reorganisation, a poleward moving precipitation, and a fast ion flow-channel in association with the auroral structures that suddenly formed to the west and north of the radar. From a detailed analysis of the coordinated Cluster and ground-based data, it was found that this extraordinary transient convection pattern had indeed moved the cusp precipitation from its former pre-noon position into the late post-noon sector, allowing for the first and quite unexpected encounter of the cusp by the Cluster spacecraft. Our findings illustrate the large amplitude of cusp dynamics even in response to moderate solar wind forcing. The global ground-based data prove to be an invaluable tool to monitor the dynamics and width of the affected magnetospheric regions.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This thesis addresses the classification of high-dimensional data, developing an algorithm based on discriminant analysis. It classifies samples using variables taken in pairs, building a network from the pairs whose classification performance is sufficiently high. The algorithm then exploits topological properties of networks (in particular the search for subnetworks and centrality measures of individual nodes) to obtain several signatures (subsets of the initial variables) with optimal classification performance and low dimensionality (of order 10^1, at least a factor of 10^3 smaller than the number of starting variables in the problems considered). To this end, the algorithm comprises a network-definition stage and a signature selection and reduction stage, recomputing the classification performance at each step via cross-validation tests (k-fold or leave-one-out). Given the large number of variables involved in the problems considered, of order 10^4, the algorithm was necessarily implemented on a high-performance computer, with the most expensive parts of the C++ code parallelized, namely the computation of the discriminant itself and the final sorting of the results. The application studied here concerns high-throughput genetic data on gene expression at the cellular level, a field in which databases frequently consist of a large number of variables (10^4–10^5) against a small number of samples (10^1–10^2). In the medical and clinical field, the determination of low-dimensional signatures for the discrimination and classification of samples (e.g. healthy/diseased, responder/non-responder, etc.)
is a problem of fundamental importance, for example for the development of personalized therapeutic strategies for specific subgroups of patients through diagnostic kits for expression-profile analysis applicable on a large scale. The analysis carried out in this thesis on several kinds of real data shows that the proposed method, also in comparison with other existing methods, whether network-based or not, delivers excellent performance: it produces signatures with high classification performance while keeping the number of variables used for this purpose very small.
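The core step the abstract describes, scoring a pair of variables by cross-validated discriminant analysis before adding an edge to the network, can be sketched as follows. This is an illustrative stand-in (a plain two-class Fisher discriminant with leave-one-out cross-validation on synthetic data), not the thesis' C++ implementation:

```python
# Sketch: score one variable pair by leave-one-out CV of a two-class
# Fisher discriminant. Pairs scoring above a threshold would become
# edges of the network. Illustrative only; not the thesis code.

def fisher_rule(xs, ys):
    """Fisher discriminant (direction w, threshold b) for two 2-D classes."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s

    m0, m1 = mean(xs), mean(ys)
    sw = [[a + b for a, b in zip(r0, r1)]
          for r0, r1 in zip(scatter(xs, m0), scatter(ys, m1))]
    sw[0][0] += 1e-9; sw[1][1] += 1e-9          # regularize against singularity
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],  # w = Sw^-1 (m1 - m0)
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    # Decision threshold: midpoint of the projected class means.
    b = 0.5 * (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1]))
    return w, b

def loo_accuracy(class0, class1):
    """Leave-one-out CV accuracy of the Fisher rule on this variable pair."""
    data = [(p, 0) for p in class0] + [(p, 1) for p in class1]
    hits = 0
    for k, (p, label) in enumerate(data):
        train = [d for i, d in enumerate(data) if i != k]
        xs = [q for q, lab in train if lab == 0]
        ys = [q for q, lab in train if lab == 1]
        w, b = fisher_rule(xs, ys)
        pred = 1 if w[0] * p[0] + w[1] * p[1] > b else 0
        hits += pred == label
    return hits / len(data)

# Two well-separated synthetic classes: this variable pair scores 1.0.
c0 = [(0.0, 0.1), (0.2, -0.1), (-0.1, 0.0), (0.1, 0.2)]
c1 = [(3.0, 3.1), (3.2, 2.9), (2.9, 3.0), (3.1, 3.2)]
print(loo_accuracy(c0, c1))  # 1.0
```

In the thesis' setting this score would be computed for each of the ~10^8 pairs of ~10^4 variables, which is why the computation was parallelized on an HPC system.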
Abstract:
This study takes on the issue of political and socio-economic conditions for the hydrogen economy as part of a future low carbon society in Europe. It is subdivided into two parts. The first part reviews the current EU policy framework in view of its impact on hydrogen and fuel cell development. In the second part, an analysis of the regional dynamics and possible hydrogen and fuel cell clusters is carried out. The current EU policy framework does not hinder hydrogen development, yet it does not constitute a strong push factor either. EU energy policies have the strongest impact on hydrogen and fuel cell development, even though their potential is still underexploited. Regulatory policies have a weak but positive impact on hydrogen. EU spending policies show some inconsistencies. Regions with a high activity level in HFC are also generally innovative regions. Moreover, the article points out certain industrial clusters that favour some regions' conditions for taking part in HFC development. However, existing hydrogen infrastructure seems to play a minor role in regions' engagement. An overall well-functioning regional innovation system is important in the formative phase of an HFC innovation system, but further research is needed before qualified policy implications can be drawn. Looking ahead, the current policy framework at the EU level does not set clear long-term signals and lacks incentives strong enough to facilitate high investment in, and deployment of, sustainable energy technologies. The likely overall effect thus seems too weak to enable the EU hydrogen and fuel cell deployment strategy.
According to our analysis, an enhanced EU policy framework pushing for sustainability in general, and the development of hydrogen and fuel cells in particular, requires the following: 1) a strong EU energy policy with credible long-term targets; 2) better coordination of EU policies: Europe needs a common understanding of key taxation concepts (green taxation, internalisation of externalities) and a common approach to the market introduction of new energy technologies; 3) an EU cluster policy that better coordinates and supports European regions in their efforts to further develop HFC and to set up the respective infrastructure.
Abstract:
Data without labels are commonly analysed with unsupervised machine learning techniques. Such techniques provide more meaningful representations, useful for a better understanding of the problem at hand, than inspection of the data alone. Although abundant expert knowledge exists in many areas where unlabelled data are examined, such knowledge is rarely incorporated into automatic analysis. Incorporating expert knowledge is frequently a matter of combining multiple data sources from disparate hypothetical spaces. In cases where such spaces belong to different data types, this task becomes even more challenging. In this paper we present a novel immune-inspired method that enables the fusion of such disparate types of data for a specific set of problems. We show that our method provides a better visual understanding of one hypothetical space with the help of data from another hypothetical space. We believe that our model has implications for the field of exploratory data analysis and knowledge discovery.
Abstract:
Context. The cosmic time around the z ~ 1 redshift range appears crucial in cluster and galaxy evolution, since it is probably the epoch of the first mature galaxy clusters. Our knowledge of the properties of the galaxy populations in these clusters is limited because only a handful of z ~ 1 clusters are presently known. Aims. In this framework, we report the discovery of a z ~ 0.87 cluster and study its properties at various wavelengths. Methods. We gathered X-ray and optical data (imaging and spectroscopy), and near- and far-infrared data (imaging) in order to confirm the cluster nature of our candidate, to determine its dynamical state, and to give insight into the evolution of its galaxy population. Results. Our candidate structure appears to be a massive z ~ 0.87 dynamically young cluster with an atypically high X-ray temperature as compared to its X-ray luminosity. It exhibits a significant percentage (~90%) of galaxies that are also detected in the 24 μm band. Conclusions. The cluster RXJ1257.2+4738 appears to be still in the process of collapsing. Its relatively high temperature is probably the consequence of significant energy input into the intracluster medium besides the regular gravitational infall contribution. A significant part of its galaxies are red objects that are probably dusty with on-going star formation.
Abstract:
We calculate the equilibrium thermodynamic properties, percolation threshold, and cluster distribution functions for a model of associating colloids, which consists of hard spherical particles having on their surfaces three short-ranged attractive sites (sticky spots) of two different types, A and B. The thermodynamic properties are calculated using Wertheim's perturbation theory of associating fluids. This also allows us to find the onset of self-assembly, which can be quantified by the maxima of the specific heat at constant volume. The percolation threshold is derived, under the no-loop assumption, for the correlated bond model: in all cases there are two percolated phases that become identical at a critical point, when one exists. Finally, the cluster size distributions are calculated by mapping the model onto an effective model, characterized by a state-dependent functionality f̄ and a unique bonding probability p̄. The mapping is based on the asymptotic limit of the cluster distribution functions of the generic model, and the effective parameters are defined through the requirement that the equilibrium cluster distributions of the true and effective models have the same number-averaged and weight-averaged sizes at all densities and temperatures. We also study the model numerically in the case where BB interactions are missing. In this limit, AB bonds either provide branching between A-chains (Y-junctions) if epsilon(AB)/epsilon(AA) is small, or drive the formation of a hyperbranched polymer if epsilon(AB)/epsilon(AA) is large. We find that the theoretical predictions describe the numerical data quite accurately, especially in the region where Y-junctions are present. There is fairly good agreement between theoretical and numerical results both for the thermodynamic (number of bonds and phase coexistence) and the connectivity properties of the model (cluster size distributions and percolation locus).
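The effective description in terms of a functionality f̄ and a bonding probability p̄ echoes classical Flory-Stockmayer (loopless) theory. As a hedged illustration using the standard textbook formulas for f-functional units bonding independently with probability p (not the two-site AB model of this paper), the percolation threshold and average cluster sizes are:

```python
# Hedged sketch: classical Flory-Stockmayer (loopless) results for
# f-functional units bonding independently with probability p. Textbook
# formulas illustrating the effective (f, p) description; not the AB model.

def percolation_threshold(f):
    """Bond probability at which an infinite (percolating) cluster appears."""
    return 1.0 / (f - 1)

def number_average_size(f, p):
    """Mean cluster size by number: each bond merges two tree clusters."""
    return 1.0 / (1.0 - f * p / 2.0)

def weight_average_size(f, p):
    """Mean cluster size by weight; diverges at the percolation threshold."""
    return (1.0 + p) / (1.0 - (f - 1) * p)

f = 3                             # three sticky spots per particle
pc = percolation_threshold(f)     # 0.5 for f = 3
p = 0.4                           # below threshold: only finite clusters
print(pc, number_average_size(f, p), weight_average_size(f, p))
```

The weight-averaged size diverging at p_c = 1/(f - 1) while the number-averaged size stays finite is exactly the pair of moments used to pin down the effective f̄ and p̄ in the mapping described above.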
Abstract:
Long Term Evolution (LTE) is one of the latest standards in the mobile communications market. To achieve its performance, LTE networks use several techniques, such as multi-carrier transmission, multiple-input-multiple-output and cooperative communications. Within cooperative communications, this paper focuses on the fixed relaying technique, presenting a way to determine the best position to deploy the relay station (RS), from a set of empirically good solutions, and also to quantify the associated performance gain using different cluster size configurations. The best RS position was obtained through realistic simulations, which place it at the middle of the cell's circumference arc. Additionally, the simulations also confirmed that the network's performance improves when the number of RSs is increased. It was possible to conclude that, for each deployed RS, the percentage of area served by an RS increases by about 10%. Furthermore, the mean data rate in the cell increased by approximately 60% through the use of RSs. Finally, a given scenario with a larger number of RSs can experience the same performance as an equivalent scenario without RSs but with a higher reuse distance. This leads to a compromise between RS installation and cluster size, in order to maximize capacity as well as performance.