951 results for vector quantization based Gaussian modeling
On-line Gaussian mixture density estimator for adaptive minimum bit-error-rate beamforming receivers
Abstract:
We develop an on-line Gaussian mixture density estimator (OGMDE) in the complex-valued domain to facilitate adaptive minimum bit-error-rate (MBER) beamforming receivers for multiple-antenna-based space-division multiple access systems. Specifically, the novel OGMDE is proposed to adaptively model the probability density function of the beamformer's output by tracking the incoming data sample by sample. With the aid of the proposed OGMDE, our adaptive beamformer is capable of updating the beamformer's weights sample by sample to directly minimize the achievable bit error rate (BER). We show that this OGMDE-based MBER beamformer outperforms the existing on-line MBER beamformer, known as the least-BER beamformer, in terms of both convergence speed and achievable BER.
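As a rough illustration of the sample-by-sample density tracking this abstract describes, the sketch below grows an equal-weight Gaussian mixture over a real-valued output, one sample at a time. The bandwidth, the scalar (rather than complex-valued) domain, and the class name are all illustrative choices, not the paper's OGMDE.

```python
# Minimal sketch (not the paper's OGMDE): an on-line estimator that
# refines a Gaussian-mixture estimate of a scalar output's pdf one
# sample at a time. Bandwidth h is an illustrative choice.
import math

class OnlineGaussianKDE:
    def __init__(self, bandwidth=0.3):
        self.h = bandwidth
        self.centres = []          # one Gaussian kernel per sample seen

    def update(self, x):
        """Track the incoming data sample by sample."""
        self.centres.append(x)

    def pdf(self, x):
        """Mixture density: equal-weight Gaussian kernels."""
        if not self.centres:
            return 0.0
        norm = 1.0 / (len(self.centres) * self.h * math.sqrt(2 * math.pi))
        return norm * sum(math.exp(-0.5 * ((x - c) / self.h) ** 2)
                          for c in self.centres)
```

After feeding a few samples near zero, the estimated density is high near the data and vanishes far from it, which is the property an MBER update rule would exploit.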
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising way to reduce the model's dimensionality to a manageable level, leading to efficient estimation. Most existing tensor-based methods independently estimate each individual regression problem based on a tensor decomposition that allows simultaneous projections of an input tensor onto more than one direction along each mode. In practice, however, multi-dimensional data are collected under the same or very similar conditions, so that the data share some common latent components while also having their own independent parameters for each regression task. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies the common components of the parameters across all regression tasks and the independent factors contributing to each particular task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modeling further reduce the total number of parameters, giving a lower memory cost than tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
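The dimensionality reduction that motivates Tucker-based tensor regression can be illustrated with a quick parameter count: a full coefficient tensor needs one parameter per entry, while a Tucker factorization needs only a small core plus one factor matrix per mode. The tensor shape and ranks below are invented for illustration.

```python
# Back-of-envelope sketch of why a Tucker decomposition tames the
# parameter count of a tensor regression coefficient. The shapes and
# ranks are made-up examples, not taken from the paper.
from math import prod

def full_params(dims):
    # one parameter per entry of the coefficient tensor
    return prod(dims)

def tucker_params(dims, ranks):
    # core tensor + one factor matrix per mode
    return prod(ranks) + sum(d * r for d, r in zip(dims, ranks))

dims, ranks = (64, 64, 30), (5, 5, 3)
print(full_params(dims))           # 122880
print(tucker_params(dims, ranks))  # 75 + 320 + 320 + 90 = 805
```

Even at these modest sizes the factored form is over two orders of magnitude smaller, which is the "manageable level" the abstract refers to.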
An LDA and probability-based classifier for the diagnosis of Alzheimer's Disease from structural MRI
Abstract:
In this paper, a custom classification algorithm based on linear discriminant analysis and probability-based weights is implemented and applied to hippocampus measurements from structural magnetic resonance images of healthy subjects and Alzheimer's Disease sufferers, with the aim of diagnosing them as accurately as possible. The classifier works by classifying each hippocampal volume measurement as healthy-control-sized or Alzheimer's-Disease-sized; these new features are then weighted and used to classify the subject as a healthy control or as suffering from Alzheimer's Disease. The preliminary results reach an accuracy of 85.8%, similar to state-of-the-art methods such as a Naive Bayes classifier and a Support Vector Machine. An advantage of the method proposed in this paper over the aforementioned state-of-the-art classifiers is the descriptive ability of the classifications it produces. The descriptive model can be of great help to a doctor in the diagnosis of Alzheimer's Disease, or even further the understanding of how Alzheimer's Disease affects the hippocampus.
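The two-stage idea (label each measurement, then combine the labels with weights) can be sketched as follows. The thresholds, weights, and decision rule are toy values invented for illustration; the paper derives them from LDA and training-set statistics.

```python
# Hedged sketch of the two-stage classifier: each hippocampal
# measurement is first labelled AD-sized or control-sized via a
# per-feature threshold, then the labels are combined with
# probability-based weights. All numbers here are illustrative.
def classify_subject(features, thresholds, weights):
    """features[i] below thresholds[i] counts as weighted AD-like evidence."""
    score = sum(w for f, t, w in zip(features, thresholds, weights)
                if f < t)                     # sum of AD-like votes
    return "AD" if score > 0.5 * sum(weights) else "control"
```

For example, `classify_subject([1.0, 0.8], [1.5, 1.2], [0.7, 0.3])` flags both measurements as AD-sized and returns `"AD"`, while measurements above both thresholds return `"control"`.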
Abstract:
Currently, multi-attribute auctions are becoming widespread awarding mechanisms for contracts in construction; in these auctions, criteria other than price are taken into account when ranking bidder proposals. Being the lowest-price bidder is therefore no longer a guarantee of being awarded the contract, which increases the importance of measuring a bidder's performance when not only the first position (lowest price) matters. Modeling position performance allows a tender manager to calculate the probability curves for the positions most likely to be occupied by any bidder who enters a competitive auction, irrespective of the actual number of future participating bidders. This paper details a practical methodology based on simple statistical calculations for modeling the performance of a single bidder or a group of bidders, constituting a useful resource for analyzing one's own success while benchmarking potential bidding competitors.
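A minimal version of position-performance modelling is an empirical probability distribution over a bidder's historical finishing positions. The history below is invented for illustration; the paper's methodology goes further, but the "simple statistical calculations" start from counts like these.

```python
# Minimal sketch of position-performance modelling: estimate, from a
# bidder's past auction ranks, the probability of finishing in each
# position. The history list is invented for illustration.
from collections import Counter

def position_probabilities(ranks):
    counts = Counter(ranks)
    n = len(ranks)
    return {pos: counts[pos] / n for pos in sorted(counts)}

history = [1, 3, 2, 1, 4, 2, 2, 5, 1, 2]   # past finishing positions
print(position_probabilities(history))
# position 2 occurs 4 times out of 10 -> probability 0.4
```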
Abstract:
The personalised conditioning system (PCS) is widely studied. Potentially, it is able to reduce energy consumption while securing occupants’ thermal comfort requirements. It has been suggested that automatic optimised operation schemes for PCS should be introduced to avoid energy wastage and discomfort caused by inappropriate operation. In certain automatic operation schemes, personalised thermal sensation models are applied as key components to help in setting targets for PCS operation. In this research, a novel personal thermal sensation modelling method based on the C-Support Vector Classification (C-SVC) algorithm has been developed for PCS control. The personal thermal sensation modelling has been regarded as a classification problem. During the modelling process, the method ‘learns’ an occupant’s thermal preferences from his/her feedback, environmental parameters and personal physiological and behavioural factors. The modelling method has been verified by comparing the actual thermal sensation vote (TSV) with the modelled one based on 20 individual cases. Furthermore, the accuracy of each individual thermal sensation model has been compared with the outcomes of the PMV model. The results indicate that the modelling method presented in this paper is an effective tool to model personal thermal sensations and could be integrated within the PCS for optimised system operation and control.
Abstract:
Subspace clustering groups a set of samples drawn from a union of several linear subspaces into clusters, so that the samples in the same cluster come from the same linear subspace. In the majority of existing work on subspace clustering, clusters are built from feature information alone, while sample correlations in their original spatial structure are simply ignored. Moreover, the original high-dimensional feature vectors contain noisy/redundant information, and the time complexity grows exponentially with the number of dimensions. To address these issues, we propose a tensor low-rank representation (TLRR) and sparse coding-based subspace clustering method (TLRRSC) that simultaneously considers feature information and spatial structure. TLRR seeks the lowest-rank representation over the original spatial structure along all spatial directions. Sparse coding learns a dictionary along the feature space, so that each sample can be represented by a few atoms of the learned dictionary. The affinity matrix used for spectral clustering is built from the joint similarities in both the spatial and feature spaces. TLRRSC can well capture the global structure and inherent feature information of data, and provides robust subspace segmentation from corrupted data. Experimental results on both synthetic and real-world data sets show that TLRRSC outperforms several established state-of-the-art methods.
Abstract:
This paper describes a novel on-line learning approach for radial basis function (RBF) neural networks. Based on an RBF network with individually tunable nodes and a fixed small model size, the weight vector is adjusted on-line using the multi-innovation recursive least squares algorithm. When the residual error of the RBF network becomes large despite the weight adaptation, an insignificant node with little contribution to the overall system is replaced by a new node. The structural parameters of the new node are optimized by the proposed fast algorithms in order to significantly improve the modeling performance. The proposed scheme offers a novel, flexible, and fast approach to on-line system identification problems. Simulation results show that the proposed approach can significantly outperform existing ones, for nonstationary systems in particular.
Abstract:
Phylogenetic relationships among 21 species of mosquitoes in subgenus Nyssorhynchus were inferred from the nuclear white and mitochondrial NADH dehydrogenase subunit 6 (ND6) genes. Bayesian phylogenetic methods found that none of the three Sections within Nyssorhynchus (Albimanus, Argyritarsis, Myzorhynchella) were supported in all analyses, although Myzorhynchella was found to be monophyletic for the combined genes. Within the Albimanus Section the monophyly of the Strodei Subgroup was strongly supported, and within the Myzorhynchella Section Anopheles antunesi and An. lutzii formed a strongly supported monophyletic group. The epidemiologically significant Albitarsis Complex showed evidence of paraphyly (relative to An. lanei, Myzorhynchella) and discordance across gene trees, and the previously synonymized species An. dunhami and An. goeldii were recovered as sister species. Finally, there was evidence of complexes in several species, including An. antunesi, An. deaneorum, and An. strodei. (c) 2010 Elsevier B.V. All rights reserved.
Abstract:
Measurements of down-welling microwave radiation from raining clouds performed with the Advanced Microwave Radiometer for Rain Identification (ADMIRARI) at 10.7, 21.0, and 36.5 GHz during the Global Precipitation Measurement Ground Validation "Cloud processes of the main precipitation systems in Brazil: A contribution to cloud resolving modeling and to the Global Precipitation Measurement" (CHUVA) campaign, held in Brazil in March 2010, represent a unique test bed for understanding three-dimensional (3D) effects in microwave radiative transfer processes. While the necessity of accounting for geometric effects is trivial given the slant observation geometry (ADMIRARI was pointing at a fixed 30° elevation angle), the polarization signal (i.e., the difference between the vertical and horizontal brightness temperatures) shows ubiquitous positive values at both 21.0 and 36.5 GHz in coincidence with high brightness temperatures. This signature is a genuine and unique microwave signature of radiation side leakage, which cannot be explained in a 1D radiative transfer framework but requires the inclusion of three-dimensional scattering effects. We demonstrate these effects and interdependencies by analyzing two campaign case studies and by exploiting a sophisticated 3D radiative transfer model suited for dichroic media such as precipitating clouds.
Abstract:
FS CMa type stars are a recently described group of objects showing the B[e] phenomenon, which exhibit strong emission-line spectra and strong IR excesses. In this paper, we report the first attempt at a detailed modeling of IRAS 00470+6429, for which we have the best set of observations. Our modeling is based on two key assumptions: the star has a main-sequence luminosity for its spectral type (B2), and the circumstellar (CS) envelope is bimodal, composed of a slowly outflowing disklike wind and a fast polar wind. Both outflows are assumed to be purely radial. We adopt a novel approach to describe the dust formation site in the wind that employs timescale arguments for grain condensation and a self-consistent solution for the dust destruction surface. With the above assumptions we were able to satisfactorily reproduce many observational properties of IRAS 00470+6429, including the H I line profiles and the overall shape of the spectral energy distribution. Our adopted recipe for dust formation proved successful in reproducing the correct amount of dust formed in the CS envelope. Possible shortcomings of our model, as well as suggestions for future improvements, are discussed.
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained with the traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to determine quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized).
As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
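The cross-entropy idea behind this fitting technique can be shown on a toy problem: recover the peak position and amplitude of a single 1-D Gaussian "source" by repeatedly sampling candidate parameters, keeping an elite fraction with the smallest squared residual, and refitting the sampling distribution. All numbers, and the reduction to one dimension and two parameters, are illustrative simplifications of the six-parameter elliptical sources in the paper.

```python
# Toy cross-entropy fit: find (mu, amp) of a 1-D Gaussian source by
# minimizing the squared difference between model and "observed" map.
# Population size, elite fraction, and iteration count are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 101)
true_mu, true_amp = 1.3, 2.0
image = true_amp * np.exp(-0.5 * (x - true_mu) ** 2)   # noiseless "map"

def cost(params):
    mu, amp = params
    model = amp * np.exp(-0.5 * (x - mu) ** 2)
    return np.sum((model - image) ** 2)      # squared-difference merit

mean, std = np.array([0.0, 1.0]), np.array([2.0, 1.0])
for _ in range(30):                          # cross-entropy iterations
    samples = rng.normal(mean, std, size=(200, 2))
    elite = samples[np.argsort([cost(s) for s in samples])[:20]]
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
print(mean)   # close to (1.3, 2.0)
```

The sampling distribution collapses onto the best candidates, which is why the method scales (slowly) to many sources: the same loop runs over a larger parameter vector.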
Abstract:
We describe the epidemiology of malaria in a frontier agricultural settlement in Brazilian Amazonia. We analysed the incidence of slide-confirmed symptomatic infections diagnosed between 2001 and 2006 in a cohort of 531 individuals (2281.53 person-years of follow-up) and parasite prevalence data derived from four cross-sectional surveys. Overall, the incidence rates of Plasmodium vivax and P. falciparum were 20.6/100 and 6.8/100 person-years at risk, respectively, with a marked decline in the incidence of both species (81.4% and 56.8%, respectively) observed between 2001 and 2006. PCR revealed 5.4-fold more infections than conventional microscopy in population-wide cross-sectional surveys carried out between 2004 and 2006 (average prevalence, 11.3% vs. 2.0%). Only 27.2% of PCR-positive (but 73.3% of slide-positive) individuals had symptoms when enrolled, indicating that asymptomatic carriage of low-grade parasitaemias is a common phenomenon in frontier settlements. A circular cluster comprising 22.3% of the households, all situated in the area of most recent occupation, accounted for 69.1% of all malaria infections diagnosed during the follow-up, with malaria incidence decreasing exponentially with distance from the cluster centre. By targeting one-quarter of the households with selective indoor spraying or other house-protection measures, malaria incidence could be reduced by more than two-thirds in this community. (C) 2010 Royal Society of Tropical Medicine and Hygiene. Published by Elsevier Ltd. All rights reserved.
Abstract:
Here we introduce a new adenoviral vector in which transgene expression is driven by p53. We first developed a synthetic promoter, referred to as PGTxβ, containing a p53-responsive element, a minimal promoter, and the first intron of the rabbit β-globin gene. Initial assays using plasmid-based vectors indicated that expression was tightly controlled by p53 and was 5-fold stronger than the constitutive CMV immediate early promoter/enhancer. The adenoviral vector, AdPG, was also shown to offer p53-responsive expression in the prostate carcinoma cell lines LNCaP (wt p53), DU-145 (temperature-sensitive mutant of p53) and PC3 (p53-null, but engineered to express temperature-sensitive p53 mutants). AdPG served as a sensor of p53 activity in LNCaP cells treated with chemotherapeutic agents. Since p53 can be induced by radiotherapy and chemotherapy, this new vector could be further developed for use in combination with conventional therapies to bring about cooperation between the genetic and pharmacologic treatment modalities. (c) 2007 Elsevier Inc. All rights reserved.
Abstract:
Various popular machine learning techniques, like support vector machines, are originally conceived for the solution of two-class (binary) classification problems. However, a large number of real problems present more than two classes. A common approach to generalize binary learning techniques to solve problems with more than two classes, also known as multiclass classification problems, consists of hierarchically decomposing the multiclass problem into multiple binary sub-problems, whose outputs are combined to define the predicted class. This strategy results in a tree of binary classifiers, where each internal node corresponds to a binary classifier distinguishing two groups of classes and the leaf nodes correspond to the problem classes. This paper investigates how measures of the separability between classes can be employed in the construction of binary-tree-based multiclass classifiers, adapting the decompositions performed to each particular multiclass problem. (C) 2010 Elsevier B.V. All rights reserved.
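The tree-construction step can be sketched with a simple separability measure: at each node, the two most distant class centroids seed the split, and every remaining class joins the nearer seed. The 1-D centroids and the use of centroid distance as the separability measure are illustrative assumptions; the paper investigates several such measures, and the per-node binary classifier training is omitted here.

```python
# Sketch (invented data) of one node split in a binary-tree multiclass
# decomposition driven by class separability. Training the binary
# classifier at the node is not shown.
def split_classes(centroids):
    """Partition class labels into two groups by centroid distance."""
    labels = list(centroids)
    if len(labels) < 2:
        return labels, []
    # the most separable pair of classes seeds the two branches
    a, b = max(((p, q) for p in labels for q in labels if p < q),
               key=lambda pq: abs(centroids[pq[0]] - centroids[pq[1]]))
    left = [c for c in labels
            if abs(centroids[c] - centroids[a]) <= abs(centroids[c] - centroids[b])]
    right = [c for c in labels if c not in left]
    return left, right

centroids = {"A": 0.0, "B": 0.4, "C": 5.0, "D": 5.5}   # toy 1-D class means
print(split_classes(centroids))   # (['A', 'B'], ['C', 'D'])
```

Applying the same split recursively to each group yields the tree whose internal nodes separate groups of classes and whose leaves are single classes.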
Abstract:
This paper proposes a filter-based algorithm for feature selection. The filter is based on partitioning the set of features into clusters. The number of clusters, and consequently the cardinality of the subset of selected features, is automatically estimated from the data. The computational complexity of the proposed algorithm is also investigated. A variant of this filter that considers feature-class correlations is also proposed for classification problems. Empirical results involving ten datasets illustrate the performance of the developed algorithm, which in general has obtained competitive results in terms of classification accuracy when compared to state-of-the-art algorithms that find clusters of features. We show that, if computational efficiency is an important issue, then the proposed filter may be preferred over its counterparts, thus becoming eligible to join a pool of feature selection algorithms to be used in practice. As an additional contribution of this work, a theoretical framework is used to formally analyze some properties of feature selection methods that rely on finding clusters of features. (C) 2011 Elsevier Inc. All rights reserved.
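A minimal instance of the feature-clustering filter idea is to group features by correlation and keep one representative per group. The greedy rule and the 0.9 threshold below are arbitrary illustrative choices, not the paper's algorithm, which estimates the number of clusters from the data.

```python
# Minimal sketch (not the paper's algorithm) of a filter that clusters
# correlated features and keeps one representative per cluster,
# shrinking the feature set before classification.
import numpy as np

def select_by_clustering(X, threshold=0.9):
    """Greedy clustering on |correlation|; returns kept column indices."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        # keep feature j only if it is not redundant with a kept feature
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept
```

Given a matrix whose second column is an exact multiple of the first, the filter keeps the first column and drops the duplicate while retaining an uncorrelated third column.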