906 results for Monocyte subsets
Abstract:
In the document classification community, support vector machines and the naïve Bayes classifier are known for their simple yet excellent performance. The feature subsets used by these two approaches usually complement each other, yet little has been done to combine them. The core of this paper is a linear classifier closely related to both. We propose a novel way of combining the two approaches that synthesizes the best of each into a hybrid model. We evaluate the proposed approach on the 20NG (20 Newsgroups) dataset and compare it with its counterparts. The results strongly corroborate the effectiveness of our approach.
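The abstract does not spell out the hybrid's exact form, so the following is only a minimal sketch in the same spirit (NB-weighted linear classification): term counts are rescaled by naive Bayes log-count ratios before fitting a linear SVM. The toy documents and labels are placeholders.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def nb_log_count_ratio(X, y, alpha=1.0):
    """Naive-Bayes log-count ratio r for a binary label vector y in {0, 1}.
    X is a sparse (documents x terms) count matrix."""
    p = np.asarray(X[y == 1].sum(axis=0)).ravel() + alpha
    q = np.asarray(X[y == 0].sum(axis=0)).ravel() + alpha
    return np.log((p / p.sum()) / (q / q.sum()))

# toy usage: rescale term counts by the NB ratio, then fit a linear SVM
docs = ["good fast classifier", "excellent results", "poor slow model", "bad results"]
labels = np.array([1, 1, 0, 0])
X = CountVectorizer().fit_transform(docs)
r = nb_log_count_ratio(X, labels)
clf = LinearSVC().fit(X.multiply(r).tocsr(), labels)
```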
Abstract:
When document corpus is very large, we often need to reduce the number of features. But it is not possible to apply conventional Non-negative Matrix Factorization(NMF) on billion by million matrix as the matrix may not fit in memory. Here we present novel Online NMF algorithm. Using Online NMF, we reduced original high-dimensional space to low-dimensional space. Then we cluster all the documents in reduced dimension using k-means algorithm. We experimentally show that by processing small subsets of documents we will be able to achieve good performance. The method proposed outperforms existing algorithms.
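This is not the paper's algorithm; it is only a minimal mini-batch sketch of the idea, using standard multiplicative NMF updates applied one batch of documents at a time so the full matrix never has to be held in memory.

```python
import numpy as np

def fit_batch_H(X, W, n_inner=30, eps=1e-9):
    """Fit coefficients H (docs x k) for a fixed basis W (terms x k),
    using standard multiplicative updates for X ~= H @ W.T."""
    H = np.random.rand(X.shape[0], W.shape[1]) + eps
    for _ in range(n_inner):
        H *= (X @ W) / (H @ (W.T @ W) + eps)
    return H

def online_nmf(batches, n_terms, k, eps=1e-9):
    """Toy mini-batch NMF: the basis W is refined from one small batch of
    documents at a time, so the full matrix never has to sit in memory."""
    W = np.random.rand(n_terms, k) + eps
    for X in batches:                      # X: (batch_docs x n_terms), non-negative
        H = fit_batch_H(X, W)
        W *= (X.T @ H) / (W @ (H.T @ H) + eps)
    return W

# After fitting, each batch is projected to k dimensions with fit_batch_H(X, W)
# and the stacked projections are clustered, e.g. with sklearn.cluster.KMeans.
```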
Abstract:
The goal of this study was to examine the effect of maternal iron deficiency on the developing hippocampus, to define a developmental window for this effect, and to determine whether iron deficiency causes changes in glucocorticoid levels. The study was carried out using pre-natal, post-natal, and pre + post-natal iron deficiency paradigms. Iron-deficient pregnant dams and their pups displayed elevated corticosterone, which in turn differentially affected glucocorticoid receptor (GR) expression in the CA1 and the dentate gyrus. Brain-derived neurotrophic factor (BDNF) was reduced in the hippocampi of pups following elevated corticosterone levels. Reduced neurogenesis at P7 was seen in pups born to iron-deficient mothers, and these pups had reduced numbers of hippocampal pyramidal and granule cells as adults. Hippocampal subdivision volumes were also altered. The structural and molecular defects in the pups were correlated with radial arm maze performance; reference memory function was especially affected. Pups from dams that were iron deficient throughout pregnancy and lactation displayed the complete spectrum of defects, while pups from dams that were iron deficient only during pregnancy or only during lactation displayed subsets of defects. These findings show that maternal iron deficiency is associated with altered levels of corticosterone and GR expression, and with spatial memory deficits, in the pups.
Abstract:
Data clustering is a common technique for statistical data analysis that is used in many fields, including machine learning and data mining. Clustering is the grouping of a data set, or more precisely the partitioning of a data set into subsets (clusters), so that the data in each subset (ideally) share some common trait according to a defined distance measure. In this paper we present a genetically improved version of the particle swarm optimization (PSO) algorithm, a population-based heuristic search technique derived from particle swarm intelligence and the concepts of genetic algorithms (GA). The algorithm combines PSO concepts such as the velocity and position update rules with GA concepts such as selection, crossover, and mutation. The performance of the proposed algorithm is evaluated on benchmark datasets from the Machine Learning Repository, where it outperforms both k-means and the standard PSO algorithm.
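The abstract names the ingredients (PSO velocity/position updates plus GA selection, crossover, and mutation) but not the exact scheme, so the sketch below is only one illustrative arrangement: each particle encodes k candidate centroids, and a GA child produced by tournament selection, one-point crossover, and Gaussian mutation replaces the worst particle each iteration. All parameter values and the replacement policy are assumptions.

```python
import numpy as np

def sse(centroids, X):
    """Within-cluster sum of squared distances (lower is better)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return np.sum(np.min(d, axis=1) ** 2)

def pso_ga_cluster(X, k, n_particles=20, iters=100, w=0.72, c1=1.49, c2=1.49,
                   mut_rate=0.05, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    # each particle encodes k candidate centroids
    pos = rng.uniform(lo, hi, size=(n_particles, k, X.shape[1]))
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([sse(p, X) for p in pos])
    gbest = pbest[np.argmin(pbest_fit)].copy()

    for _ in range(iters):
        # PSO step: standard velocity and position updates
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel

        # GA step: tournament selection, one-point crossover, Gaussian mutation;
        # the child replaces the currently worst particle
        fit = np.array([sse(p, X) for p in pos])
        def tournament():
            i, j = rng.choice(n_particles, 2, replace=False)
            return pos[i] if fit[i] < fit[j] else pos[j]
        cut = int(rng.integers(1, k)) if k > 1 else 1
        child = np.vstack([tournament()[:cut], tournament()[cut:]])
        child += (rng.random(child.shape) < mut_rate) * rng.normal(0.0, 0.1, child.shape)
        pos[int(np.argmax(fit))] = child

        # bookkeeping of personal and global bests
        fit = np.array([sse(p, X) for p in pos])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[np.argmin(pbest_fit)].copy()

    labels = np.argmin(np.linalg.norm(X[:, None, :] - gbest[None, :, :], axis=2), axis=1)
    return gbest, labels
```

For example, `pso_ga_cluster(np.random.default_rng(1).normal(size=(300, 2)), k=3)` returns the best centroid set found and a label per point.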
Abstract:
Many meteorological phenomena occur at different locations simultaneously, and they vary both temporally and spatially. Tracking these multiple phenomena is essential for accurate weather prediction. Efficient analysis requires high-resolution simulations, which can be conducted by introducing finer-resolution nested simulations (nests) at the locations of these phenomena. Tracking multiple weather phenomena simultaneously requires simultaneous execution of the nests on different subsets of the processors allocated to the main weather simulation. Dynamic variation in the number of these nests requires efficient processor reallocation strategies. In this paper, we develop strategies for efficient partitioning and repartitioning of the nests among the processors. As a case study, we consider an application that tracks multiple organized cloud clusters in tropical weather systems. We first present a parallel data analysis algorithm to detect such clouds. We then develop a tree-based hierarchical diffusion method that reallocates processors for the nests so as to reduce the redistribution cost, achieved through a novel tree reorganization approach. We show that our approach exhibits up to 25% lower redistribution cost and 53% fewer hop-bytes than a processor reallocation strategy that does not consider the existing processor allocation.
Abstract:
Granger causality is increasingly being applied to multi-electrode neurophysiological and functional imaging data to characterize directional interactions between neurons and brain regions. For a multivariate dataset, one might be interested in different subsets of the recorded neurons or brain regions. According to the current estimation framework, for each subset, one conducts a separate autoregressive model fitting process, introducing the potential for unwanted variability and uncertainty. In this paper, we propose a multivariate framework for estimating Granger causality. It is based on spectral density matrix factorization and offers the advantage that the estimation of such a matrix needs to be done only once for the entire multivariate dataset. For any subset of recorded data, Granger causality can be calculated through factorizing the appropriate submatrix of the overall spectral density matrix.
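For orientation, here is the pairwise quantity typically computed in this setting, stated in standard Geweke-style form; the notation is ours, not taken from the paper. After factorizing the spectral density matrix of a pair (X, Y) as S(f) = H(f) Σ H*(f), where H is the transfer function and Σ the residual noise covariance, the frequency-domain causality from Y to X is

```latex
\[
  I_{Y \to X}(f) \;=\; \ln
  \frac{S_{xx}(f)}
       {S_{xx}(f) \;-\; \bigl(\Sigma_{yy} - \tfrac{\Sigma_{xy}^{2}}{\Sigma_{xx}}\bigr)\,
        \lvert H_{xy}(f)\rvert^{2}}
\]
```

so once S(f) has been estimated and factorized for the full multivariate dataset, the causality for any subset of channels only requires factorizing the corresponding submatrix of S(f).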
Abstract:
Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis, in combination with various distance metrics, to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods misclassified different subsets of calls, and we achieved a maximum accuracy of 95 per cent only when we combined the results of both methods. This study is the first to use Mel-frequency cepstral coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that, in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential.
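The study's full feature set and distance metrics cannot be reconstructed from the abstract alone; the sketch below illustrates just one ingredient, an MFCC-based spectral distance between two recordings computed with librosa. The file paths are placeholders.

```python
import numpy as np
import librosa

def mfcc_distance(path_a, path_b, sr=22050, n_mfcc=13):
    """Crude spectral-similarity score between two calls: mean MFCC vectors
    of each recording compared with a cosine distance."""
    ya, _ = librosa.load(path_a, sr=sr)
    yb, _ = librosa.load(path_b, sr=sr)
    ma = librosa.feature.mfcc(y=ya, sr=sr, n_mfcc=n_mfcc).mean(axis=1)
    mb = librosa.feature.mfcc(y=yb, sr=sr, n_mfcc=n_mfcc).mean(axis=1)
    return 1.0 - np.dot(ma, mb) / (np.linalg.norm(ma) * np.linalg.norm(mb))

# a mimicked call would be matched to the putative model call with the
# smallest distance, e.g.
# min(model_files, key=lambda m: mfcc_distance("drongo_call.wav", m))
```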
Abstract:
Global change is impacting forests worldwide, threatening biodiversity and ecosystem services, including climate regulation. Understanding how forests respond is critical to forest conservation and climate protection. This review describes an international network of 59 long-term forest dynamics research sites (CTFS-ForestGEO) useful for characterizing forest responses to global change. Within very large plots (median size 25 ha), all stems ≥1 cm in diameter are identified to species, mapped, and regularly recensused according to standardized protocols. CTFS-ForestGEO spans 25°S to 61°N latitude, is generally representative of the range of bioclimatic, edaphic, and topographic conditions experienced by forests worldwide, and is the only forest monitoring network that applies a standardized protocol to each of the world's major forest biomes. Supplementary standardized measurements at subsets of the sites provide additional information on plants, animals, and ecosystem and environmental variables. CTFS-ForestGEO sites are experiencing multifaceted anthropogenic global change pressures, including warming (average 0.61 °C), changes in precipitation (up to ±30% change), atmospheric deposition of nitrogen and sulfur compounds (up to 3.8 g N m(-2) yr(-1) and 3.1 g S m(-2) yr(-1)), and forest fragmentation in the surrounding landscape (up to 88% reduced tree cover within 5 km). The broad suite of measurements made at CTFS-ForestGEO sites makes it possible to investigate the complex ways in which global change is impacting forest dynamics. Ongoing research across the CTFS-ForestGEO network is yielding insights into how and why the forests are changing, and continued monitoring will provide vital contributions to understanding worldwide forest diversity and dynamics in an era of global change.
Abstract:
We consider a server serving a time-slotted queued system of multiple packet-based flows, where at most one flow can be serviced in a single time slot. The flows have exogenous packet arrivals and time-varying service rates. At each time, the server can observe instantaneous service rates for only a subset of flows (selected from a fixed collection of observable subsets) before scheduling a flow in that subset for service. We are interested in queue-length-aware scheduling to keep the queues short. The limited availability of instantaneous service rate information requires the scheduler to make a careful choice of which subset of service rates to sample. We develop scheduling algorithms that use only partial service rate information from subsets of channels and that minimize the likelihood of queue overflow in the system. Specifically, we present a new joint subset-sampling and scheduling algorithm called Max-Exp that uses only the current queue lengths to pick a subset of flows, and subsequently schedules a flow using the Exponential rule. When the collection of observable subsets is disjoint, we show that Max-Exp achieves the best exponential decay rate of the tail of the longest queue among all scheduling algorithms that base their decision on the current (or any finite past history of) system state. To accomplish this, we employ novel analytical techniques for studying the performance of scheduling algorithms using partial state, which may be of independent interest. These include new sample-path large deviations results for processes obtained by non-random, predictable sampling of sequences of independent and identically distributed random variables. A consequence of these results is that scheduling with partial state information yields a rate function significantly different from scheduling with full channel information. In the special case when the observable subsets are singleton flows, i.e., when there is effectively no a priori channel state information, Max-Exp reduces to simply serving the flow with the longest queue; thus, our results show that always serving the longest queue is large-deviations optimal in the absence of any channel state information.
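Max-Exp's subset-sampling step is not specified in enough detail here to reproduce; the sketch below only illustrates the Exponential scheduling rule it invokes once a subset has been sampled, in its commonly cited Shakkottai-Stolyar form (stated as an assumption, not taken from the paper).

```python
import numpy as np

def exp_rule_pick(queues, rates, a=None, gamma=None):
    """Exponential rule: serve the flow maximising
        gamma_i * r_i * exp((a_i*q_i - mean_j a_j*q_j) / (1 + sqrt(mean_j a_j*q_j)))
    given the current queue lengths and the observed service rates of the
    sampled subset. Weights a_i and gamma_i default to 1."""
    q = np.asarray(queues, dtype=float)
    r = np.asarray(rates, dtype=float)
    a = np.ones_like(q) if a is None else np.asarray(a, dtype=float)
    gamma = np.ones_like(q) if gamma is None else np.asarray(gamma, dtype=float)
    aq_bar = np.mean(a * q)
    index = gamma * r * np.exp((a * q - aq_bar) / (1.0 + np.sqrt(aq_bar)))
    return int(np.argmax(index))

# e.g. exp_rule_pick(queues=[12, 3, 7], rates=[1.0, 2.5, 1.5]) -> index of flow to serve
```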
Abstract:
In this paper, we propose a new state transition based embedding (STBE) technique for audio watermarking with high fidelity. We also propose a new correlation based encoding (CBE) scheme for the binary logo image in order to enhance the payload capacity; the result of CBE is compared with standard run-length encoding (RLE) and Huffman compression schemes. Most watermarking algorithms embed a given watermark bit by modulating a selected transform-domain feature of an audio segment. In the proposed STBE method, instead of modulating the feature of each and every segment, our aim is to retain the default value of this feature for most segments, so that high watermarked-audio quality is maintained. Here, the difference between the mean values (Mdiff) of insignificant complex cepstrum transform (CCT) coefficients of down-sampled subsets is selected as a robust feature for embedding. Mdiff values of the frames are changed only when certain conditions are met; hence, almost 50% of the time segments are left unchanged, yet STBE can still convey the watermark information to the receiver side. STBE also exhibits a partial restoration property by which the watermarked audio can be partially restored after extraction of the watermark at the detector side. Psychoacoustic model analysis showed that the noise-masking ratio (NMR) of our system is less than -10 dB. As amplitude scaling in the time domain does not affect the selected insignificant CCT coefficients, strong invariance to amplitude scaling attacks is also proved theoretically. Experimental results reveal that the proposed watermarking scheme maintains high audio quality and is simultaneously robust to common attacks such as MP3 compression, amplitude scaling, additive noise, and re-quantization.
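The abstract does not give the exact definition of the Mdiff feature, so the following is only an illustrative guess at its shape: a frame is split into two down-sampled subsets, the complex cepstrum of each is computed, and the means of the high-quefrency ("insignificant") coefficients are differenced. The split, the coefficient range, and the tail fraction are all assumptions.

```python
import numpy as np

def cct(x, eps=1e-12):
    """Complex cepstrum of a frame: inverse FFT of log magnitude plus unwrapped phase."""
    spec = np.fft.fft(x)
    log_spec = np.log(np.abs(spec) + eps) + 1j * np.unwrap(np.angle(spec))
    return np.real(np.fft.ifft(log_spec))

def mdiff_feature(frame, tail=0.25):
    """Illustrative Mdiff-style feature (assumed form, not the paper's exact
    definition): difference between the means of the high-quefrency CCT
    coefficients of the even-sample and odd-sample down-sampled subsets."""
    a, b = frame[0::2], frame[1::2]
    ca, cb = cct(a), cct(b)
    start_a, start_b = int(len(ca) * (1 - tail)), int(len(cb) * (1 - tail))
    return ca[start_a:].mean() - cb[start_b:].mean()
```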
Abstract:
The 3-Hitting Set problem involves a family F of subsets, each of size at most three, over a universe U. The goal is to find a subset of U of the smallest possible size that intersects every set in F. The version of the problem with parity constraints asks for a subset S of size at most k that, in addition to being a hitting set, also satisfies certain parity constraints on the sizes of the intersections of S with each set in the family F. In particular, an odd (even) set is a hitting set that hits every set in either one or three (exactly two) elements, and a perfect code is a hitting set that intersects every set in exactly one element. These questions are of fundamental interest in many contexts for general set systems. Just as for Hitting Set, we find these questions to be interesting for the case of families consisting of sets of size at most three. In this work, we initiate an algorithmic study of these problems in this special case, focusing on a parameterized analysis. For each problem, we give efficient fixed-parameter tractable algorithms based on search trees tailor-made to the constraints in question, as well as polynomial kernels using sunflower-like arguments adapted to account for equivalence under the additional parity constraints.
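The parity-constrained algorithms and kernels are more delicate, but the basic bounded search tree for plain 3-Hitting Set, which they refine, fits in a few lines. This is an illustration of the standard branching, not the paper's tailored algorithms.

```python
def hitting_set_3(family, k, chosen=frozenset()):
    """Depth-bounded branching for 3-Hitting Set: return a hitting set of size
    <= k, or None if none exists. Each set in `family` has at most three
    elements, so the search tree has branching factor <= 3 and depth <= k."""
    unhit = next((s for s in family if not (s & chosen)), None)
    if unhit is None:
        return chosen                      # every set is already hit
    if k == 0:
        return None                        # budget exhausted, backtrack
    for e in unhit:                        # branch on the <= 3 ways to hit this set
        result = hitting_set_3(family, k - 1, chosen | {e})
        if result is not None:
            return result
    return None

# usage: sets given as frozensets over any hashable universe
# hitting_set_3([frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({5})], k=2)
# -> e.g. frozenset({3, 5})
```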
Abstract:
The Exact Cover problem takes a universe U of n elements, a family F of m subsets of U, and a positive integer k, and decides whether there exists a subfamily (set cover) F' of size at most k such that each element is covered by exactly one set. The Unique Cover problem takes the same input and decides whether there is a subfamily F' of F such that at least k of the elements covered by F' are covered uniquely (by exactly one set). Both problems are known to be NP-complete. In the parameterized setting, when parameterized by k, Exact Cover is W[1]-hard, while Unique Cover is FPT under the same parameter but is known not to admit a polynomial kernel under standard complexity-theoretic assumptions. In this paper, we investigate these two problems under the assumption that every set satisfies a given geometric property Π. Specifically, we consider the universe to be a set of n points in a real space R^d, d being a positive integer. When d = 2 we consider the problem when Π requires all sets to be unit squares or lines, and when d > 2 we consider the problem where Π requires all sets to be hyperplanes in R^d. These special versions of the problems are also known to be NP-complete. When parameterized by k, the Unique Cover problem has a polynomial-size kernel for all of the above geometric versions. The Exact Cover problem turns out to be W[1]-hard for squares, but FPT for lines and hyperplanes. Further, we also consider the Unique Set Cover problem, which takes the same input and decides whether there is a set cover that covers at least k elements uniquely. To the best of our knowledge, this is a new problem, and we show that it is NP-complete (even for the case of lines). In fact, the problem turns out to be W[1]-hard in the abstract setting when parameterized by k. However, when we restrict ourselves to the lines and hyperplanes versions, we obtain FPT algorithms.
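For reference, the Exact Cover decision problem itself can be stated as a naive enumeration. This is purely illustrative; the point of the paper is precisely to do better than this in the parameterized, geometric settings.

```python
from itertools import combinations

def exact_cover(universe, family, k):
    """Naive illustration of Exact Cover: is there a subfamily of size <= k
    that covers every element of the universe exactly once?"""
    universe = set(universe)
    for size in range(1, k + 1):
        for sub in combinations(family, size):
            counts = {}
            for s in sub:
                for e in s:
                    counts[e] = counts.get(e, 0) + 1
            if set(counts) == universe and all(c == 1 for c in counts.values()):
                return True
    return False

# exact_cover({1, 2, 3, 4}, [{1, 2}, {3, 4}, {2, 3}], k=2) -> True ({1,2} and {3,4})
```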
Abstract:
We previously reported that Rv1860 protein from Mycobacterium tuberculosis stimulated CD4(+) and CD8(+) T cells secreting gamma interferon (IFN-gamma) in healthy purified protein derivative (PPD)-positive individuals and protected guinea pigs immunized with a DNA vaccine and a recombinant poxvirus expressing Rv1860 from a challenge with virulent M. tuberculosis. We now show Rv1860-specific polyfunctional T (PFT) cell responses in the blood of healthy latently M. tuberculosis-infected individuals dominated by CD8(+) T cells, using a panel of 32 overlapping peptides spanning the length of Rv1860. Multiple subsets of CD8(+) PFT cells were significantly more numerous in healthy latently infected volunteers (HV) than in tuberculosis (TB) patients (PAT). The responses of peripheral blood mononuclear cells (PBMC) from PAT to the peptides of Rv1860 were dominated by tumor necrosis factor alpha (TNF-alpha) and interleukin-10 (IL-10) secretions, the former coming predominantly from non-T cell sources. Notably, the pattern of the T cell response to Rv1860 was distinctly different from those of the widely studied M. tuberculosis antigens ESAT-6, CFP-10, Ag85A, and Ag85B, which elicited CD4(+) T cell-dominated responses as previously reported in other cohorts. We further identified a peptide spanning amino acids 21 to 39 of the Rv1860 protein with the potential to distinguish latent TB infection from disease due to its ability to stimulate differential cytokine signatures in HV and PAT. We suggest that a TB vaccine carrying these and other CD8(+) T-cell-stimulating antigens has the potential to prevent progression of latent M. tuberculosis infection to TB disease.
Abstract:
BACKGROUND: GABA(A) receptors are members of the Cys-loop family of neurotransmitter receptors, proteins that are responsible for fast synaptic transmission and are the site of action of a wide range of drugs. Recent work has shown that Cys-loop receptors are present on immune cells, but their physiological roles and the effects of drugs that modify their function in the innate immune system are currently unclear. We are interested in how and why anaesthetics increase infections in intensive care patients, a serious problem as more than 50% of patients with severe sepsis will die. As many anaesthetics act via GABA(A) receptors, the aim of this study was to determine whether these receptors are present on immune cells and could play a role in immunocompromising patients. PRINCIPAL FINDINGS: We demonstrate, using RT-PCR, that monocytes express GABA(A) receptors constructed of α1, α4, β2, γ1 and/or δ subunits. Whole-cell patch clamp electrophysiological studies show that GABA can activate these receptors, resulting in the opening of a chloride-selective channel; activation is inhibited by the GABA(A) receptor antagonists bicuculline and picrotoxin, but not enhanced by the positive modulator diazepam. The anaesthetic drugs propofol and thiopental, which can act via GABA(A) receptors, impaired monocyte function in classic immunological chemotaxis and phagocytosis assays, an effect reversed by bicuculline and picrotoxin. SIGNIFICANCE: Our results show that functional GABA(A) receptors are present on monocytes, with properties similar to CNS GABA(A) receptors. The functional data provide a possible explanation for why chronic propofol and thiopental administration can increase the risk of infection in critically ill patients: their action on GABA(A) receptors inhibits normal monocyte behaviour. The data also suggest a potential solution: monocyte GABA(A) receptors are insensitive to diazepam, so the use of benzodiazepines as an alternative anaesthetising agent may be advantageous where infection is a life-threatening problem.
Abstract:
Background: Glutamate excitotoxicity contributes to oligodendrocyte and tissue damage in multiple sclerosis (MS). Intriguingly, glutamate levels in the plasma and cerebrospinal fluid of MS patients are elevated, a feature that may be related to the pathophysiology of this disease. In addition to glutamate transporters, levels of extracellular glutamate are controlled by the cystine/glutamate antiporter x(c)(-), an exchanger that provides intracellular cystine for production of glutathione, the major cellular antioxidant. The objective of this study was to analyze the role of the system x(c)(-) in the glutamate homeostasis alterations seen in MS pathology. -- Methods: Primary cultures of human monocytes and the cell line U-937 were used to investigate the mechanism of glutamate release. Expression of the cystine/glutamate exchanger (xCT) was quantified by quantitative PCR, Western blot, flow cytometry and immunohistochemistry in monocytes in vitro, in animals with experimental autoimmune encephalomyelitis (EAE), the animal model of MS, and in samples from MS patients. -- Results and discussion: We show here that human activated monocytes release glutamate through the cystine/glutamate antiporter x(c)(-) and that expression of the catalytic subunit xCT is upregulated as a consequence of monocyte activation. In addition, xCT expression is also increased in EAE and in the disease proper. In the latter, high expression of xCT occurs both in the central nervous system (CNS) and in peripheral blood cells. In particular, cells of the monocyte-macrophage-microglia lineage have higher xCT expression in MS and in EAE, indicating that immune activation upregulates xCT levels, which may result in higher glutamate release and contribute to excitotoxic damage to oligodendrocytes. -- Conclusions: Together, these results reveal that increased expression of the cystine/glutamate antiporter system x(c)(-) in MS provides a link between inflammation and excitotoxicity in demyelinating diseases.