850 results for High-dimensional data visualization
Abstract:
Thanks to advances in sensor technology, there are today many applications (space-borne imaging, medical imaging, etc.) in which images of very large size are generated. Straightforward application of wavelet techniques to such images involves certain difficulties. Embedded coders such as EZW and SPIHT require that the wavelet transform of the full image be buffered for coding. Since the transform coefficients must also be stored in high precision, the buffering requirements for large images become prohibitively high. In this paper, we first devise a technique for embedded coding of large images using zerotrees with reduced memory requirements. A 'strip buffer' capable of holding a few lines of wavelet coefficients from all the subbands belonging to the same spatial location is employed. A pipeline architecture for a line-based implementation of the above technique is then proposed. Further, an efficient algorithm to extract an encoded bitstream corresponding to a region of interest in the image has also been developed. Finally, the paper describes a strip-based non-embedded coder which uses a single-pass algorithm to handle high input data rates. (C) 2002 Elsevier Science B.V. All rights reserved.
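To illustrate the memory argument (a conceptual sketch, not the paper's coder), the following processes a large image in horizontal strips so that only a few rows of wavelet coefficients are buffered at a time. It assumes PyWavelets; with the length-2 Haar filters and an even strip height the per-strip transform needs no overlap handling, which a general strip-based transform would require.

```python
# Conceptual sketch: strip-based one-level 2D DWT with bounded memory.
import numpy as np
import pywt

def strip_dwt(image, strip_rows=16, wavelet="haar"):
    """Yield one-level 2D DWT subbands (LL, (LH, HL, HH)) per strip."""
    n_rows = image.shape[0]
    for r0 in range(0, n_rows, strip_rows):
        strip = image[r0:r0 + strip_rows, :]
        # Only this strip's coefficients are in memory at any time.
        yield pywt.dwt2(strip, wavelet)

if __name__ == "__main__":
    img = np.random.rand(512, 512)  # stand-in for a large image
    for ll, (lh, hl, hh) in strip_dwt(img):
        pass  # an encoder would code these subband rows and discard them
```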
Abstract:
Two solid state galvanic cells, Pt, Ni + Ni2SiO4 + SiO2 / (Y2O3)ZrO2 / Ni + NiO, Pt (I) and Pt, Ni + Ni2SiO4 + SiO2 / CaF2 / Ni + NiO, Pt (II), have been employed for the determination of the Gibbs energy of formation of nickel orthosilicate (Ni2SiO4) from nickel oxide and quartz. The emf of cell (I) was reversible and reproducible in the temperature range 925 to 1375 K, whereas the emf of cell (II) drifted with time and changed polarity. From the results of cell (I), the Gibbs energy of the formation reaction 2NiO (r.s.) + SiO2 (quartz) → Ni2SiO4 (olivine) is obtained. The Gibbs energy of formation of the spinel form of Ni2SiO4 is obtained by combining the data for olivine obtained in this study with high-pressure data on the olivine-to-spinel transition reported in the literature. The complex time dependence of the emf of cell (II) can be rationalised on the basis of the formation of calcium silicates from silica and calcium oxide, the latter generally present as an impurity in the calcium fluoride electrolyte. The emf of cell (II) is shown to be a function of the activity of calcium oxide at the electrolyte/electrode interface. The results provide strong evidence against the recent suggestion of mixed anionic conduction in calcium fluoride.
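Since both cells convert a measured emf directly into a reaction Gibbs energy, a minimal sketch of that conversion may help. It uses the standard relation ΔrG = -nFE, taking n = 4 as an assumption for the four electrons per mole of the reaction 2NiO + SiO2 → Ni2SiO4 in an oxygen-concentration cell; the emf value below is an illustrative placeholder, not data from the paper.

```python
# Sketch: emf of a solid-state galvanic cell -> reaction Gibbs energy.
F = 96485.0  # Faraday constant, C/mol

def reaction_gibbs_energy(emf_volts, n_electrons=4):
    """Gibbs energy of the cell reaction in J/mol via DrG = -nFE.

    n_electrons = 4 is an assumption for 2NiO + SiO2 -> Ni2SiO4."""
    return -n_electrons * F * emf_volts

# Hypothetical emf of 0.050 V at some temperature within 925-1375 K:
print(reaction_gibbs_energy(0.050))  # ~ -19297 J/mol
```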
Abstract:
An isothermal section of the phase diagram for (silver + rhodium + oxygen) at T = 1173 K has been established by equilibration of samples representing twelve different compositions, and phase identification after quenching by optical and scanning electron microscopy (s.e.m.), X-ray diffraction (x.r.d.), and energy dispersive analysis of X-rays (e.d.x.). Only one ternary oxide, AgRhO2, was found to be stable, and a three-phase region involving Ag, AgRhO2 and Rh2O3 was identified. The thermodynamic properties of AgRhO2 were measured using a galvanic cell in the temperature range 980 K to 1320 K. Yttria-stabilized zirconia was used as the solid electrolyte and pure oxygen gas at a pressure of 0.1 MPa was used as the reference electrode. The Gibbs free energy of formation of the ternary oxide from the elements, ΔfG°(AgRhO2), can be represented by two linear equations that join at the melting temperature of silver. In the temperature range 980 K to 1235 K, ΔfG°(AgRhO2)/(J·mol⁻¹) = -249080 + 179.08 (T/K) (±120). Above the melting temperature of silver, in the temperature range 1235 K to 1320 K, ΔfG°(AgRhO2)/(J·mol⁻¹) = -260400 + 188.24 (T/K) (±95). The thermodynamic properties of AgRhO2 at T = 298.15 K were evaluated from the high-temperature data. The chemical potential diagram for (silver + rhodium + oxygen) at T = 1200 K was also computed on the basis of the results of this study.
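The two linear fits quoted above can be evaluated directly; the sketch below simply encodes them, switching branches at the melting temperature of silver (1235 K).

```python
# Gibbs energy of formation of AgRhO2 (J/mol) from the abstract's linear fits.
def delta_f_G_AgRhO2(T_kelvin):
    if 980.0 <= T_kelvin <= 1235.0:
        return -249080.0 + 179.08 * T_kelvin   # +/- 120 J/mol
    if 1235.0 < T_kelvin <= 1320.0:
        return -260400.0 + 188.24 * T_kelvin   # +/- 95 J/mol
    raise ValueError("fit valid only for 980 K <= T <= 1320 K")

print(delta_f_G_AgRhO2(1200.0))  # ~ -34184 J/mol
```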
Abstract:
We present a heterogeneous finite element method for the solution of a high-dimensional population balance equation, which depends on both the physical and the internal property coordinates. The proposed scheme tackles the two main difficulties in the finite element solution of population balance equations: (i) spatial discretization with standard finite elements when the dimension of the equation is more than three, and (ii) spurious oscillations in the solution induced by the standard Galerkin approximation due to pure advection in the internal property coordinates. The key idea is to split the high-dimensional population balance equation into two low-dimensional equations and to discretize the low-dimensional equations separately. In the proposed splitting scheme, the shape of the physical domain can be arbitrary, and different discretizations can be applied to the low-dimensional equations. In particular, we discretize the physical and internal spaces with standard Galerkin and Streamline Upwind Petrov Galerkin (SUPG) finite elements, respectively. The stability and error estimates of the Galerkin/SUPG finite element discretization of the population balance equation are derived. It is shown that slightly more regularity, i.e. that the mixed partial derivatives of the solution be bounded, is necessary for the optimal order of convergence. Numerical results are presented to support the analysis.
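To make the splitting idea concrete, here is a minimal finite-difference analogue (not the paper's Galerkin/SUPG finite element scheme): a two-dimensional advection equation over one physical and one internal coordinate is advanced by alternating two one-dimensional upwind updates. Periodic boundaries via np.roll keep the sketch short.

```python
# Lie splitting for u_t + a u_x + b u_y = 0 on (physical x, internal y).
import numpy as np

def upwind_step(u, vel, dt, dx, axis):
    """One explicit upwind step along the given axis (vel > 0 assumed)."""
    return u - vel * dt / dx * (u - np.roll(u, 1, axis=axis))

nx = 100
dx = 1.0 / nx
a, b = 1.0, 0.5            # advection speeds in x (physical) and y (internal)
dt = 0.4 * dx / max(a, b)  # CFL-limited time step

x = np.linspace(0, 1, nx)
u = np.exp(-200 * (x[:, None] - 0.3) ** 2 - 200 * (x[None, :] - 0.3) ** 2)

for _ in range(50):
    u = upwind_step(u, a, dt, dx, axis=0)  # low-dimensional solve in x
    u = upwind_step(u, b, dt, dx, axis=1)  # low-dimensional solve in y
```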
Abstract:
The velocity scale inside an acoustically levitated droplet depends on the levitator and liquid properties. Using Particle Imaging Velocimetry (PIV), detailed velocity measurements have been made in levitated droplets of different diameters and viscosities. The maximum velocity and rotation are normalized using the frequency and amplitude of the acoustic levitator and the droplet viscosity. The non-dimensional data are fitted for micrometer- and millimeter-sized droplets levitated in different levitators for fluids of different viscosity. It is also shown that the rotational speed of nanosilica droplets at an advanced stage of vaporization compares well with that predicted by the exponentially fitted parameters. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
When a document corpus is very large, we often need to reduce the number of features. But it is not possible to apply conventional Non-negative Matrix Factorization (NMF) to a billion-by-million matrix, as the matrix may not fit in memory. Here we present a novel Online NMF algorithm. Using Online NMF, we reduce the original high-dimensional space to a low-dimensional space. We then cluster all the documents in the reduced dimension using the k-means algorithm. We show experimentally that by processing small subsets of documents we are able to achieve good performance. The proposed method outperforms existing algorithms.
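As a conceptual stand-in for this pipeline (the authors' Online NMF algorithm itself is not reproduced here), one can factorize document vectors in mini-batches and then cluster the reduced representations with k-means. The sketch below assumes scikit-learn >= 1.1 for MiniBatchNMF and uses a toy random matrix in place of a real term-document matrix.

```python
# Mini-batch NMF dimensionality reduction followed by k-means clustering.
import numpy as np
from sklearn.decomposition import MiniBatchNMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((2_000, 1_000))           # toy document-term-style matrix

nmf = MiniBatchNMF(n_components=50, batch_size=500, random_state=0)
W = nmf.fit_transform(X)                 # documents in the reduced space

labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(W)
```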
Abstract:
The maximum entropy approach to classification is very well studied in applied statistics and machine learning, and almost all the methods that exist in the literature are discriminative in nature. In this paper, we introduce a maximum entropy classification method with feature selection for high-dimensional data, such as text datasets, that is generative in nature. To tackle the curse of dimensionality of large data sets, we employ a conditional independence assumption (Naive Bayes) and perform feature selection simultaneously by enforcing 'maximum discrimination' between the estimated class conditional densities. For two-class problems, in the proposed method, we use the Jeffreys (J) divergence to discriminate the class conditional densities. To extend our method to the multi-class case, we propose a completely new approach by considering a multi-distribution divergence: we replace the Jeffreys divergence by the Jensen-Shannon (JS) divergence to discriminate conditional densities of multiple classes. In order to reduce computational complexity, we employ a modified Jensen-Shannon divergence (JS(GM)), based on the AM-GM inequality. We show that the resulting divergence is a natural generalization of the Jeffreys divergence to the multiple-distribution case. As far as theoretical justification is concerned, we show that when one intends to select the best features in a generative maximum entropy approach, maximum discrimination using the J-divergence emerges naturally in binary classification. The performance of the proposed algorithms is demonstrated, with comparative studies, on high-dimensional text and gene expression datasets, showing that our methods scale up very well with high-dimensional datasets.
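A minimal sketch of the divergence-based ranking idea follows. It uses simple per-feature histograms as the class-conditional density estimates (an assumption for illustration, not the paper's maximum entropy estimator) and ranks features for a two-class problem by the Jeffreys divergence.

```python
# Rank features by Jeffreys divergence between class-conditional histograms.
import numpy as np

def jeffreys(p, q, eps=1e-12):
    """J(p, q) = KL(p||q) + KL(q||p) for discrete distributions."""
    p, q = p + eps, q + eps
    return float(np.sum((p - q) * np.log(p / q)))

def rank_features(X, y, bins=10, top_k=100):
    scores = []
    for j in range(X.shape[1]):
        edges = np.histogram_bin_edges(X[:, j], bins=bins)
        p, _ = np.histogram(X[y == 0, j], bins=edges)
        q, _ = np.histogram(X[y == 1, j], bins=edges)
        scores.append(jeffreys(p / p.sum(), q / q.sum()))
    # Most discriminative (largest J) features first.
    return np.argsort(scores)[::-1][:top_k]
```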
Abstract:
Motivated by the discrepancies noted recently between the theoretical calculations of the electromagnetic omega pi form factor and certain experimental data, we investigate this form factor using analyticity and unitarity in a framework known as the method of unitarity bounds. We use a QCD correlator computed on the spacelike axis by operator product expansion and perturbative QCD as input, and exploit unitarity and the positivity of its spectral function, including the two-pion contribution that can be reliably calculated using high-precision data on the pion form factor. From this information, we derive upper and lower bounds on the modulus of the omega pi form factor in the elastic region. The results provide a significant check on those obtained with standard dispersion relations, confirming the existence of a disagreement with experimental data in the region around 0.6 GeV.
Abstract:
Surface electrodes in Electrical Impedance Tomography (EIT) phantoms usually reduce the SNR of the boundary potential data due to their design and fabrication errors. A novel gold sensor array with high geometric precision is developed for EIT phantoms to improve resistivity image quality. Gold thin films are deposited on a flexible FR4 sheet using an electro-deposition process to make a sixteen-electrode array with electrodes of identical geometry. A real-tissue gold-electrode phantom is developed with chicken tissue paste and fat cylinders as the inhomogeneity. Boundary data are collected using a USB-based high-speed data acquisition system in a LabVIEW platform for different inhomogeneity positions. Resistivity images are reconstructed using EIDORS and compared with an identical stainless steel electrode system. Image contrast parameters are calculated from the resistivity matrix, and the reconstructed images are evaluated for both phantoms. Image contrast and resolution of the resistivity images are improved with the gold electrode array.
Abstract:
Scalable stream processing and continuous dataflow systems are gaining traction with the rise of big data, due to the need for processing high-velocity data in near real time. Unlike in batch processing systems such as MapReduce and workflows, static scheduling strategies fall short for continuous dataflows due to variations in the input data rates and the need for sustained throughput. The elastic resource provisioning of cloud infrastructure is valuable for meeting the changing resource needs of such continuous applications. However, multi-tenant cloud resources introduce yet another dimension of performance variability that impacts the application's throughput. In this paper we propose PLAStiCC, an adaptive scheduling algorithm that balances resource cost and application throughput using a prediction-based lookahead approach. It addresses not only variations in the input data rates but also those in the underlying cloud infrastructure. In addition, we propose several simpler static scheduling heuristics that operate in the absence of an accurate performance prediction model. These static and adaptive heuristics are evaluated through extensive simulations using performance traces obtained from the Amazon AWS IaaS public cloud. Our results show an improvement of up to 20% in overall profit as compared to the reactive adaptation algorithm.
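As a rough illustration of prediction-based lookahead scheduling (a generic greedy heuristic, not the PLAStiCC algorithm defined in the paper), one can pick, for each lookahead interval, the cheapest VM count whose predicted throughput covers the predicted input rate; plan_vms, vm_throughput and vm_cost_per_interval are hypothetical names.

```python
# Greedy lookahead provisioning sketch: predictors are assumed given.
import math

def plan_vms(predicted_rates, vm_throughput, vm_cost_per_interval, max_vms=64):
    """predicted_rates: input rate predicted for each lookahead interval."""
    plan = []
    for rate in predicted_rates:
        needed = min(max_vms, math.ceil(rate / vm_throughput))
        plan.append((needed, needed * vm_cost_per_interval))
    return plan

# e.g. plan_vms([900, 1500, 700], vm_throughput=400, vm_cost_per_interval=0.10)
# -> [(3, 0.30...), (4, 0.40...), (2, 0.20...)]
```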
Abstract:
Selection of relevant features is an open problem in brain-computer interfacing (BCI) research. Features extracted from brain signals are sometimes high dimensional, which in turn affects the accuracy of the classifier. Selecting the most relevant features improves classifier performance and reduces the computational cost of the system. In this study, we use a combination of Bacterial Foraging Optimization and Learning Automata to determine the best subset of features from a given motor imagery electroencephalography (EEG) based BCI dataset. Here, we employ the Discrete Wavelet Transform to obtain a high-dimensional feature set and classify it with the Distance Likelihood Ratio Test. Our proposed feature selector produced an accuracy of 80.291% in 216 seconds.
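A minimal sketch of the DWT feature-extraction step follows, assuming PyWavelets with a Daubechies-4 wavelet and 4 decomposition levels (the abstract does not specify the exact wavelet settings).

```python
# DWT feature extraction from one EEG channel epoch.
import numpy as np
import pywt

def dwt_features(epoch, wavelet="db4", level=4):
    """Concatenate all subband coefficients of one epoch into a vector."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.concatenate(coeffs)

epoch = np.random.randn(512)       # toy single-channel EEG epoch
features = dwt_features(epoch)     # high-dimensional feature vector
print(features.shape)
```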
Abstract:
In this paper we introduce a weighted complex networks model to investigate and recognize the structures of patterns. The usual treatment in pattern recognition models is to describe each pattern as a high-dimensional vector, which, however, is insufficient to express structural information. Thus, a number of methods have been developed to extract structural information, such as the feature extraction algorithms used in pre-processing steps, or the local receptive fields in convolutional networks. In our model, each pattern is mapped to a weighted complex network whose topology represents the structure of that pattern. Based upon the training samples, we obtain several prototypal complex networks which stand for the general structural characteristics of patterns in different categories. We use these prototypal networks to recognize unknown patterns. This is an attempt to use complex networks in pattern recognition, and our results show the potential for real-world pattern recognition. A spatial parameter is introduced to obtain the optimal recognition accuracy, and it remains insensitive to the number of training samples. We discuss the interesting properties of the prototypal networks. An approximate linear relation is found between the strength and color of vertices, from which we can compare the structural difference between categories. We visualize these prototypal networks to show that their topology indeed represents the common characteristics of patterns. We also show that the asymmetric strength distribution in these prototypal networks brings high robustness to recognition. Our study may also shed light on understanding the mechanism of biological neuronal systems in object recognition.
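A conceptual sketch of the pattern-to-network mapping (our reading for illustration, not the paper's exact construction): vertices are pixels, and edges within a spatial cutoff radius r, playing the role of the spatial parameter above, are weighted by intensity similarity.

```python
# Map a grayscale pattern to a weighted network (dict of edge weights).
import numpy as np

def pattern_to_network(img, r=2, sigma=0.1):
    h, w = img.shape
    weights = {}  # (vertex_a, vertex_b) -> edge weight
    for i in range(h):
        for j in range(w):
            a = i * w + j
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ni, nj = i + di, j + dj
                    if (di, dj) != (0, 0) and 0 <= ni < h and 0 <= nj < w:
                        b = ni * w + nj
                        if a < b:  # undirected: store each edge once
                            diff = img[i, j] - img[ni, nj]
                            weights[(a, b)] = np.exp(-diff**2 / sigma**2)
    return weights

net = pattern_to_network(np.random.rand(8, 8))
```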
Abstract:
The Alliance for Coastal Technologies (ACT) Workshop "Making Oxygen Measurements Routine Like Temperature" was convened in St. Petersburg, Florida, January 4th - 6th, 2006. This event was sponsored by the University of South Florida (USF) College of Marine Science, an ACT partner institution, and co-hosted by the Ocean Research Interactive Observatory Networks (ORION). Participants from research/academia, resource management, industry, and engineering sectors collaborated with the aim of fostering ideas and information on how to make measuring dissolved oxygen a routine part of a coastal or open ocean observing system. Plans are in motion to develop large-scale ocean observing systems as part of the US Integrated Ocean Observing System (IOOS; see http://ocean.us) and the NSF Ocean Observatory Initiative (OOI; see http://www.orionprogram.org/OOI/default.html). These systems will require biological and chemical sensors that can be deployed in large numbers, with high reliability, and for extended periods of time (years). It is also likely that the development cycle for new sensors is sufficiently long that completely new instruments, which operate on novel principles, cannot be developed before these complex observing systems are deployed. The most likely path to development of robust, reliable, high-endurance sensors in the near future is to move the current generation of sensors to a much greater degree of readiness. The ACT Oxygen Sensor Technology Evaluation demonstrated two important facts related to the need for sensors. There is a suite of commercially available sensors that can, in some circumstances, generate high quality data; however, the evaluation also showed that none of the sensors were able to generate high quality data in all circumstances for even one-month periods, due to biofouling issues. Many groups are attempting to use oxygen sensors in large observing programs; however, there often seems to be limited communication between these groups, and they often do not have access to sophisticated engineering resources. Instrument manufacturers also do not have sufficient resources to bring sensors that are marketable, but of limited endurance or reliability, to a higher state of readiness. The goal of this ACT/ORION Oxygen Sensor Workshop was to bring together a group of experienced oceanographers who are now deploying oxygen sensors in extended arrays, along with a core of experienced and interested academic and industrial engineers, and manufacturers. The intent of this workshop was for this group to exchange information accumulated through a variety of sensor deployments, examine failure mechanisms, and explore a variety of potential solutions to these problems. One anticipated outcome was focused recommendations to funding agencies on development needs and potential solutions for O2 sensors. (pdf contains 19 pages)
Abstract:
This paper presents a hybrid heuristic, triangle evolution (TE), for global optimization. It is a real-coded evolutionary algorithm. As in differential evolution (DE), TE targets each individual in the current population and attempts to replace it by a new, better individual. However, the way of generating new individuals is different. TE generates new individuals in a Nelder-Mead way, while the simplices used in TE are 1- or 2-dimensional. The proposed algorithm is very easy to use and efficient for global optimization problems with continuous variables. Moreover, it requires only one (explicit) control parameter. Numerical results show that the new algorithm is comparable with DE for low-dimensional problems but outperforms DE for high-dimensional problems.
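As a rough illustration of the "Nelder-Mead way" of generating individuals (the authors' exact TE operator is not specified in the abstract), the sketch below forms a two-point simplex from two other population members and reflects the worse point through the better one to propose a trial individual, with DE-style greedy replacement.

```python
# Simplex-reflection trial generation inside a DE-like loop.
import numpy as np

def te_trial(pop, fitness, i, rng):
    """Propose a replacement for individual i by simplex reflection."""
    a, b = rng.choice([k for k in range(len(pop)) if k != i], 2, replace=False)
    better, worse = (a, b) if fitness[a] < fitness[b] else (b, a)
    return pop[better] + (pop[better] - pop[worse])  # reflect worse over better

rng = np.random.default_rng(0)
pop = rng.standard_normal((20, 10))             # 20 individuals, 10-D
fit = np.array([np.sum(x**2) for x in pop])     # sphere test function
trial = te_trial(pop, fit, 0, rng)
if np.sum(trial**2) < fit[0]:                   # greedy replacement as in DE
    pop[0] = trial
```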
Abstract:
In response to infection or tissue dysfunction, immune cells develop into highly heterogeneous repertoires with diverse functions. Capturing the full spectrum of these functions requires analysis of large numbers of effector molecules from single cells. However, currently only 3-5 functional proteins can be measured from single cells. We developed a single cell functional proteomics approach that integrates a microchip platform with multiplex cell purification. This approach can quantitate 20 proteins from >5,000 phenotypically pure single cells simultaneously. With a one-million-fold miniaturization, the system can detect down to ~100 molecules and requires only ~10^4 cells. Single cell functional proteomic analysis finds broad applications in basic, translational and clinical studies. In the three studies conducted, it yielded critical insights for understanding clinical cancer immunotherapy, the inflammatory bowel disease (IBD) mechanism and hematopoietic stem cell (HSC) biology.
To study phenotypically defined cell populations, single cell barcode microchips were coupled with upstream multiplex cell purification based on up to 11 parameters. Statistical algorithms were developed to process and model the high dimensional readouts. This analysis evaluates rare cells and is versatile for various cells and proteins. (1) We conducted an immune monitoring study of a phase 2 cancer cellular immunotherapy clinical trial that used T-cell receptor (TCR) transgenic T cells as the major therapeutic to treat metastatic melanoma. We evaluated the functional proteome of 4 antigen-specific, phenotypically defined T cell populations from the peripheral blood of 3 patients across 8 time points. (2) Natural killer (NK) cells can play a protective role in chronic inflammation, and their surface receptor, the killer immunoglobulin-like receptor (KIR), has been identified as a risk factor for IBD. We compared the functional behavior of NK cells with differential KIR expression. These NK cells were retrieved from the blood of 12 patients with different genetic backgrounds. (3) HSCs are the progenitors of immune cells and were thought to have no immediate functional capacity against pathogens. However, recent studies identified the expression of Toll-like receptors (TLRs) on HSCs. We studied the functional capacity of HSCs upon TLR activation. The comparison of HSCs from wild-type mice against those from genetic knockout mouse models elucidates the responding signaling pathway.
In all three cases, we observed profound functional heterogeneity within phenotypically defined cells. Polyfunctional cells that conduct multiple functions also produce those proteins in large amounts; they dominate the immune response. In the cancer immunotherapy study, the strong cytotoxic and antitumor functions of the transgenic TCR T cells contributed to a ~30% tumor reduction immediately after the therapy. However, this infused immune response disappeared within 2-3 weeks. Later on, some patients gained a second antitumor response, consisting of the emergence of endogenous antitumor cytotoxic T cells and their production of multiple antitumor functions. These patients showed more effective long-term tumor control. In the IBD mechanism study, we noticed that, compared with others, NK cells expressing the KIR2DL3 receptor secreted a large array of effector proteins, such as TNF-α, CCLs and CXCLs. The functions of these cells regulated disease-contributing cells and protected host tissues. Their existence correlated with IBD disease susceptibility. In the HSC study, the HSCs exhibited functional capacity by producing TNF-α, IL-6 and GM-CSF. TLR stimulation activated NF-κB signaling in HSCs. The single cell functional proteome contains rich information that is independent of the genome and transcriptome. In all three cases, functional proteomic evaluation uncovered critical biological insights that would not have been resolved otherwise. The integrated single cell functional proteomic analysis constructed a detailed kinetic picture of the immune response that took place during the clinical cancer immunotherapy. It revealed concrete functional evidence connecting genetics to IBD disease susceptibility. Further, it provided predictors that correlated with clinical responses and pathogenic outcomes.