993 results for Spectral approach
Abstract:
Array technologies have made it possible to record simultaneously the expression patterns of thousands of genes. A fundamental problem in the analysis of gene expression data is the identification of highly relevant genes that either discriminate between phenotypic labels or are important with respect to the cellular process studied in the experiment: for example, cell cycle or heat shock in yeast experiments, chemical or genetic perturbations of mammalian cell lines, and genes involved in class discovery for human tumors. In this paper we focus on the task of unsupervised gene selection. The problem of selecting a small subset of genes is particularly challenging, as the datasets involved are typically characterized by a very small sample size (on the order of a few tens of tissue samples) and by a very large feature space, as the number of genes tends to be in the high thousands. We propose a model-independent approach which scores candidate gene selections using spectral properties of the candidate affinity matrix. The algorithm is very straightforward to implement, yet has a number of remarkable properties which guarantee consistent sparse selections. To illustrate the value of our approach we applied our algorithm to five different datasets. The first consists of time course data from four well-studied hematopoietic cell lines (HL-60, Jurkat, NB4, and U937). The other four datasets include three well-studied treatment outcomes (large cell lymphoma, childhood medulloblastomas, breast tumors) and one unpublished dataset (lymph status). We compared our approach both with other unsupervised methods (SOM, PCA, GS) and with supervised methods (SNR, RMB, RFE). The results clearly show that our approach considerably outperforms all the other unsupervised approaches in our study, is competitive with supervised methods, and in some cases even outperforms supervised approaches.
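The core idea, scoring a candidate gene subset by spectral properties of the sample-sample affinity matrix it induces, can be sketched as follows. The Gaussian affinity, the median-heuristic bandwidth, and the top-k eigenvalue-mass score are illustrative stand-ins, not the paper's exact criterion:

```python
import numpy as np

def affinity_spectral_score(X, genes, k=2):
    """Score a candidate gene subset by how much of the spectral mass of the
    sample-sample affinity matrix sits in its top-k eigenvalues.
    X: (n_samples, n_genes) expression matrix; genes: list of column indices."""
    Xs = X[:, genes]
    # pairwise squared distances between samples in the reduced gene space
    sq = ((Xs[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
    s2 = np.median(sq[sq > 0])          # median-heuristic bandwidth (assumption)
    A = np.exp(-sq / (2 * s2))          # Gaussian affinity between samples
    w = np.sort(np.linalg.eigvalsh(A))[::-1]
    return w[:k].sum() / w.sum()        # fraction of spectral mass in top k

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))           # 20 tissue samples, 50 genes (toy data)
X[:10, :5] += 3.0                       # genes 0-4 separate two phenotypes
print(affinity_spectral_score(X, list(range(5))))       # informative subset
print(affinity_spectral_score(X, list(range(40, 45))))  # pure-noise subset
```

A subset whose genes induce clear cluster structure concentrates the affinity matrix's spectral mass in a few leading eigenvalues, so the informative subset scores higher than the noise subset.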
Abstract:
A weighted Bethe graph $B$ is obtained from a weighted generalized Bethe tree by identifying each set of children with the vertices of a graph belonging to a family $F$ of graphs. The operation of identifying the root vertex of each of $r$ weighted Bethe graphs to the vertices of a connected graph $\mathcal{R}$ of order $r$ is introduced as the $\mathcal{R}$-concatenation of a family of $r$ weighted Bethe graphs. It is shown that the Laplacian eigenvalues (when $F$ has arbitrary graphs) as well as the signless Laplacian and adjacency eigenvalues (when the graphs in $F$ are all regular) of the $\mathcal{R}$-concatenation of a family of weighted Bethe graphs can be computed (in a unified way) using the stable and low computational cost methods available for the determination of the eigenvalues of symmetric tridiagonal matrices. Unlike the previous results already obtained on this topic, the more general context of families of distinct weighted Bethe graphs is herein considered.
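The computational backbone here is the stable, low-cost eigensolvers for symmetric tridiagonal matrices. As a minimal illustration (a path graph rather than a Bethe graph), the Laplacian of the path P_n is already symmetric tridiagonal, and its eigenvalues match the known closed form 4 sin²(kπ/2n):

```python
import numpy as np

def path_laplacian(n):
    # Laplacian of the path graph P_n: a symmetric tridiagonal matrix
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += 1
        L[i + 1, i + 1] += 1
        L[i, i + 1] = L[i + 1, i] = -1
    return L

n = 8
ev = np.sort(np.linalg.eigvalsh(path_laplacian(n)))
# closed form for P_n: 4 sin^2(k*pi/(2n)), k = 0..n-1
closed = np.sort(4 * np.sin(np.arange(n) * np.pi / (2 * n)) ** 2)
print(np.allclose(ev, closed))  # True
```

For large matrices, dedicated tridiagonal routines (e.g. `scipy.linalg.eigvalsh_tridiagonal`) exploit this structure directly, which is the kind of method the abstract refers to.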
Abstract:
This work deals with the numerical simulation of the air stripping process for the pre-treatment of groundwater used for human consumption. The model established in the steady state presents an exponential solution that is used, together with the Tau Method, to obtain a spectral approximation of the solution of the system of partial differential equations associated with the model in the transient state.
Abstract:
In principle the global mean geostrophic surface circulation of the ocean can be diagnosed by subtracting a geoid from a mean sea surface (MSS). However, because the resulting mean dynamic topography (MDT) is approximately two orders of magnitude smaller than either of the constituent surfaces, and because the geoid is most naturally expressed as a spectral model while the MSS is a gridded product, in practice complications arise. Two algorithms for combining MSS and satellite-derived geoid data to determine the ocean's MDT are considered in this paper: a pointwise approach, whereby the gridded geoid height field is subtracted from the gridded MSS; and a spectral approach, whereby the spherical harmonic coefficients of the geoid are subtracted from an equivalent set of coefficients representing the MSS, from which the gridded MDT is then obtained. The essential difference is that with the latter approach the MSS is truncated, a form of filtering, just as with the geoid. This ensures that errors of omission resulting from the truncation of the geoid, which are small in comparison to the geoid but large in comparison to the MDT, are matched, and therefore negated, by similar errors of omission in the MSS. The MDTs produced by both methods require additional filtering. However, the spectral MDT requires less filtering to remove noise, and therefore it retains more oceanographic information than its pointwise equivalent. The spectral method also results in a more realistic MDT at coastlines.

1. Introduction

An important challenge in oceanography is the accurate determination of the ocean's time-mean dynamic topography (MDT). If this can be achieved with sufficient accuracy for combination with the time-dependent component of the dynamic topography, obtainable from altimetric data, then the resulting sum (i.e., the absolute dynamic topography) will give an accurate picture of surface geostrophic currents and ocean transports.
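The truncation-matching argument can be reproduced in a one-dimensional Fourier toy model (spherical harmonics replaced by Fourier modes; all fields below are synthetic): subtracting a truncated "geoid" from the full "MSS" leaves an omission error far larger than the MDT, while truncating both surfaces to the same degree cancels it.

```python
import numpy as np

# 1-D Fourier analogue of the pointwise vs spectral MDT computation.
x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
mdt = 0.01 * np.sin(3 * x)                         # small-amplitude "MDT"
geoid_full = np.cos(x) + 0.5 * np.sin(7 * x) + 0.2 * np.cos(40 * x)
mss_full = geoid_full + mdt                        # MSS = geoid + MDT

def truncate(f, kmax):
    F = np.fft.rfft(f)
    F[kmax + 1:] = 0                               # drop degrees above kmax
    return np.fft.irfft(F, n=f.size)

kmax = 20                                          # "satellite geoid" resolution
geoid_model = truncate(geoid_full, kmax)           # geoid model is truncated

pointwise = mss_full - geoid_model                 # omission error leaks in
spectral = truncate(mss_full, kmax) - geoid_model  # omission errors cancel

print(np.abs(pointwise - mdt).max())  # ~0.2: 20x the MDT amplitude
print(np.abs(spectral - mdt).max())   # ~0: truncation matched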
Abstract:
QUAGMIRE is a quasi-geostrophic numerical model for performing fast, high-resolution simulations of multi-layer rotating annulus laboratory experiments on a desktop personal computer. The model uses a hybrid finite-difference/spectral approach to numerically integrate the coupled nonlinear partial differential equations of motion in cylindrical geometry in each layer. Version 1.3 implements the special case of two fluid layers of equal resting depths. The flow is forced either by a differentially rotating lid, or by relaxation to specified streamfunction or potential vorticity fields, or both. Dissipation is achieved through Ekman layer pumping and suction at the horizontal boundaries, including the internal interface. The effects of weak interfacial tension are included, as well as the linear topographic beta-effect and the quadratic centripetal beta-effect. Stochastic forcing may optionally be activated, to represent approximately the effects of random unresolved features. A leapfrog time stepping scheme is used, with a Robert filter. Flows simulated by the model agree well with those observed in the corresponding laboratory experiments.
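The time stepping described, leapfrog with a Robert filter damping the leapfrog computational mode, is easy to sketch on a toy ODE; QUAGMIRE's actual PDE integration is of course far more involved. This is a minimal sketch, with the first step, filter coefficient, and test problem chosen for illustration:

```python
import numpy as np

def leapfrog_robert(f, y0, dt, nsteps, nu=0.01):
    """Leapfrog time stepping with a Robert (Asselin) filter applied to the
    centre time level. f: RHS function, y0: initial state vector."""
    y_prev = np.asarray(y0, dtype=float)
    y_curr = y_prev + dt * f(y_prev)          # first step: forward Euler
    for _ in range(nsteps - 1):
        y_next = y_prev + 2 * dt * f(y_curr)  # leapfrog step
        # Robert filter: damp the undamped computational mode of leapfrog
        y_prev = y_curr + nu * (y_prev - 2 * y_curr + y_next)
        y_curr = y_next
    return y_curr

# harmonic oscillator: y = (u, v), u' = v, v' = -u; period 2*pi
f = lambda y: np.array([y[1], -y[0]])
y = leapfrog_robert(f, [1.0, 0.0], dt=0.01, nsteps=628)  # ~one period
print(y)  # close to the initial state (1, 0)
```

Without the filter, the leapfrog scheme's spurious computational mode can grow in nonlinear problems; the filter trades a small amount of artificial damping for stability.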
Abstract:
The energy consumption of ICT (Information and Communication Technology) equipment is rapidly increasing, which causes a significant economic and environmental problem. At present, the network infrastructure is becoming a large portion of the energy footprint of ICT, and so the concept of energy-efficient or green networking has been introduced. One of the main concerns of the network industry is now to minimize the energy consumption of network infrastructure, because of the potential economic benefits, ethical responsibility, and environmental impact. In this paper, energy management strategies to reduce the energy consumed by network switches in a LAN (Local Area Network) are developed. According to the lifecycle assessment of network switches, the highest amount of energy is consumed during the usage phase. The study therefore considers as inputs bandwidth, link load, and traffic matrices, the parameters with the highest contribution to the energy footprint of network switches during the usage phase, and energy consumption as output. With the objective of reducing the energy usage of the network infrastructure, the feasibility of putting Ethernet switches into hibernate or sleep mode was investigated. The network topology was then reorganized using a clustering method based on the spectral approach, putting network switches into hibernate or switched-off mode while taking into account time and the communications among them. Experimental results show the interest of this approach in terms of energy consumption.
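The spectral clustering step can be illustrated with a Fiedler-vector bipartition of a switch traffic graph; the topology and traffic weights below are invented for illustration, not taken from the paper:

```python
import numpy as np

def fiedler_partition(W):
    """Bipartition a weighted traffic graph by the sign of the Fiedler vector
    (eigenvector of the second-smallest graph Laplacian eigenvalue)."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                 # graph Laplacian
    vals, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    return vecs[:, 1] >= 0             # sign pattern of the Fiedler vector

# toy traffic matrix: 6 switches, heavy intra-group links, one light bridge
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 10.0           # heavy intra-cluster traffic
W[2, 3] = W[3, 2] = 0.5                # light inter-cluster link
part = fiedler_partition(W)
print(part)  # {0,1,2} and {3,4,5} land on opposite sides (sign is arbitrary)
```

Switches in the same cluster exchange most of their traffic internally, so a lightly loaded cluster can be put to sleep as a unit with little impact on cross-cluster communication.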
Abstract:
"Mixed thinking" (« pensée mixte ») is an approach to composition characterized by the interaction of three modes of thought: instrumental thinking, electroacoustic thinking, and computer-based thinking. It takes the form of a network in which the composer moves back and forth between the three modes and establishes parametric equivalences. Instrumental thinking is rooted in the Western written tradition; electroacoustic thinking alludes to the practices of the analogue studio and of acousmatic music; and computer-based thinking refers to the digital practices of visual programming and spectral analysis. Common ground exists where the three modes interact: Ivo Malec's notion of the instrumental studio, Helmut Lachenmann's notion of musique concrète instrumentale, computer-assisted composition, spectral music, the montage-based instrumental approach, acousmatic music inspired by the written musical tradition, and mixed music. These domains constitute the influences around which I composed a corpus of two cycles of works: Les Larmes du Scaphandre and Nano-Cosmos. The analysis of the works highlights the notion of "mixed thinking" by addressing electroacoustic thinking in my instrumental practice, computer-based thinking in my musical practice, and instrumental thinking in my electroacoustic practice.
Abstract:
This thesis is entitled "Spectral theory of bounded self-adjoint operators: a linear algebraic approach". The main results of the thesis can be classified as three different approaches to spectral approximation problems. The truncation method and its perturbed versions are part of the classical linear algebraic approach to the subject. The use of block Toeplitz-Laurent operators and matrix-valued symbols is considered as a particular example where linear algebraic techniques are effective in simplifying problems in inverse spectral theory. The abstract approach to spectral approximation problems via preconditioners and Korovkin-type theorems is an attempt to make the computations involved well conditioned. In all these approaches, however, linear algebra is the central object. The objective of this study is to discuss linear algebraic techniques in the spectral theory of bounded self-adjoint operators on a separable Hilbert space. The use of the truncation method in approximating the bounds of the essential spectrum and the discrete spectral values outside these bounds is well known. Spectral gap prediction and related results are proved in the second chapter. The discrete versions of Borg-type theorems, proved in the third chapter, partly overlap with some known results in operator theory. The purely linear algebraic approach is the main novelty of the results proved here.
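The truncation method referred to above can be illustrated on the free Jacobi operator (the discrete Laplacian on ℓ²(ℕ), with diagonal 2 and off-diagonals -1), whose essential spectrum is the interval [0, 4]: the eigenvalues of its n × n truncations fill that interval as n grows.

```python
import numpy as np

def truncation_eigs(n):
    """Eigenvalues of the n x n truncation of the Jacobi operator with
    diagonal 2 and off-diagonals -1 (discrete Laplacian on l2(N))."""
    T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return np.linalg.eigvalsh(T)

# the truncation eigenvalues are 2 - 2 cos(k*pi/(n+1)), k = 1..n,
# which densely fill the essential spectrum [0, 4] as n grows
ev = truncation_eigs(50)
print(ev.min(), ev.max())  # approach 0 and 4 from inside the interval
```

This is the simplest instance of the approximation problem: here the truncations have no spurious eigenvalues, whereas in general (the situation the thesis addresses) spectral pollution must be controlled.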
Abstract:
The HIRDLS instrument contains 21 spectral channels spanning a wavelength range from 6 to 18 µm. For each of these channels the spectral bandwidth and position are isolated by an interference bandpass filter at 301 K placed at an intermediate focal plane of the instrument. A second filter, cooled to 65 K, positioned at the same wavelength but designed with a wider bandwidth, is placed directly in front of each cooled detector element to reduce stray radiation from internally reflected in-band signals, and to improve the out-of-band blocking. This paper describes the process of determining the spectral requirements for the two bandpass filters and the antireflection coatings used on the lenses and dewar window of the instrument. This process uses a system throughput performance approach taking the instrument spectral specification as a target. It takes into account the spectral characteristics of the transmissive optical materials, the relative spectral response of the detectors, thermal emission from the instrument, and the predicted atmospheric signal to determine the radiance profile for each channel. Using this design approach an optimal design for the filters can be achieved, minimising the number of layers to improve the in-band transmission and to aid manufacture. The use of this design method also permits the instrument spectral performance to be verified using the measured response from manufactured components. The spectral calculations for an example channel are discussed, together with the spreadsheet calculation method. All the contributions made by the spectrally active components to the resulting instrument channel throughput are identified and presented.
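The throughput-budget idea, multiplying the spectral curves of every component in the optical chain to get the channel response, can be sketched with entirely synthetic component curves; none of the shapes or numbers below are HIRDLS values:

```python
import numpy as np

# toy system-throughput budget for one channel (wavelengths in micrometres)
wl = np.linspace(6.0, 18.0, 600)

def bandpass(wl, centre, width, edge=0.05):
    # smooth top-hat transmission profile (hypothetical component curve)
    return 1 / (1 + np.exp((np.abs(wl - centre) - width / 2) / edge))

warm_filter = 0.8 * bandpass(wl, 11.0, 0.5)   # narrow filter at focal plane
cold_filter = 0.9 * bandpass(wl, 11.0, 0.9)   # wider cooled blocking filter
optics = 0.85 * np.ones_like(wl)              # lenses + dewar window (flat)
detector = np.clip(wl / 18.0, 0, 1)           # relative detector response

# channel throughput = product of all spectrally active components
throughput = warm_filter * cold_filter * optics * detector
print(wl[np.argmax(throughput)])              # peaks inside the passband
```

Folding the predicted atmospheric radiance through this curve, as the paper's spreadsheet method does, then gives the in-band signal for the channel.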
Abstract:
Soil degradation is a major problem in the agriculturally dominated country of Tajikistan, which makes it necessary to determine and monitor the state of soils. For this purpose a soil spectral library was established, as it enables the determination of soil properties with relatively low costs and effort. A total of 1465 soil samples were collected from three 10x10 km test sites in western Tajikistan. The diffuse reflectance of the samples was measured in the laboratory with a FieldSpec PRO FR from ASD in the spectral range from 380 to 2500 nm. 166 samples were finally selected based on their spectral information and analysed for total C and N, organic C, pH, CaCO₃, extractable P, exchangeable Ca, Mg and K, and the fractions clay, silt and sand. Multiple linear regression was used to set up the models. Two thirds of the chemically analysed samples were used to calibrate the models; one third was used for hold-out validation. Very good prediction accuracy was obtained for total C (R² = 0.76, RMSEP = 4.36 g kg⁻¹), total N (R² = 0.83, RMSEP = 0.30 g kg⁻¹) and organic C (R² = 0.81, RMSEP = 3.30 g kg⁻¹), and good accuracy for pH (R² = 0.61, RMSEP = 0.157) and CaCO₃ (R² = 0.72, RMSEP = 4.63 %). No models could be developed for extractable P, exchangeable Ca, Mg and K, or the fractions clay, silt and sand. It can be concluded that the spectral library approach has a high potential to substitute standard laboratory methods where rapid and inexpensive analysis is required.
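The calibration/validation protocol described (multiple linear regression, two-thirds calibration, one-third hold-out, reporting R² and RMSEP) looks like this on synthetic data; the features below merely stand in for selected reflectance bands, and the noise level is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 166, 6                      # samples, selected spectral predictors
X = rng.normal(size=(n, k))        # stand-ins for reflectance features
beta = rng.normal(size=k)
y = X @ beta + rng.normal(scale=0.5, size=n)   # e.g. a total C proxy

# two-thirds calibration, one-third hold-out validation
idx = rng.permutation(n)
cal, val = idx[: 2 * n // 3], idx[2 * n // 3:]
A = np.column_stack([np.ones(cal.size), X[cal]])
coef, *_ = np.linalg.lstsq(A, y[cal], rcond=None)  # MLR with intercept

pred = np.column_stack([np.ones(val.size), X[val]]) @ coef
resid = y[val] - pred
rmsep = np.sqrt(np.mean(resid ** 2))          # RMSE of prediction
r2 = 1 - resid.var() / y[val].var()           # validation R^2
print(round(r2, 2), round(rmsep, 2))
```

Keeping the validation third entirely out of the fit is what makes the reported R² and RMSEP honest estimates of predictive accuracy rather than of in-sample fit.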
Abstract:
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements on the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T ≈ 2.33 T_C.
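The key structural difference between the two priors can be checked directly: for a fixed default model m, the Shannon-Jaynes integrand flattens to a finite value as the spectrum ρ → 0 (the "asymptotically flat direction"), while the BR-type integrand diverges there. The densities below follow the standard published forms of the two functionals:

```python
import numpy as np

def s_sj(rho, m=1.0):
    # Shannon-Jaynes entropy density used in the MEM prior
    return rho - m - rho * np.log(rho / m)

def s_br(rho, m=1.0):
    # BR prior density: strictly concave, diverges as rho -> 0
    return 1 - rho / m + np.log(rho / m)

rho = np.array([1e-8, 1e-4, 1.0, 10.0])
print(s_sj(rho))  # tends to the finite value -m as rho -> 0 (flat direction)
print(s_br(rho))  # tends to -inf as rho -> 0: no flat direction
```

Both densities vanish at ρ = m, so each prior is maximized on the default model; they differ precisely in how strongly deviations toward ρ = 0 are penalized.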
Abstract:
We present a novel approach for the reconstruction of spectra from Euclidean correlator data that makes close contact with modern Bayesian concepts. It is based upon an axiomatically justified dimensionless prior distribution, which in the case of a constant prior function m(ω) only imprints smoothness on the reconstructed spectrum. In addition we are able to analytically integrate out the only relevant overall hyperparameter α in the prior, removing the necessity for the Gaussian approximations found, e.g., in the Maximum Entropy Method. Using a quasi-Newton minimizer and high-precision arithmetic, we are then able to find the unique global extremum of P[ρ|D] in the full Nω ≫ Nτ dimensional search space. The method yields gradually improving reconstruction results as the quality of the supplied input data increases, without introducing the artificial peak structures often encountered in the MEM. To support these statements we present mock data analyses for the case of zero-width delta peaks and for more realistic scenarios, based on the perturbative Euclidean Wilson loop as well as the Wilson line correlator in Coulomb gauge.