844 results for Sparse mixing matrix
Abstract:
Sparse-matrix sampling using commercially available crystallization screen kits has become the most popular way of determining preliminary crystallization conditions for macromolecules. In this study, the efficiency of three commercial screening kits, Crystal Screen and Crystal Screen 2 (Hampton Research), Wizard Screens I and II (Emerald BioStructures) and Personal Structure Screens 1 and 2 (Molecular Dimensions), was compared using a set of 19 diverse proteins. Eighteen of the proteins yielded crystals with at least one crystallization screen. Surprisingly, the Crystal Screens and the Personal Structure Screens gave dramatically different results, even though most of the crystallization formulations are identical as listed by the manufacturers. Higher-molecular-weight polyethylene glycols and mixed precipitants were found to be the most effective precipitants in this study.
Abstract:
Spectral unmixing (SU) is a technique for characterizing the mixed pixels of hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed using linear mixing models. Since the number of endmembers/materials present in each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem becomes sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model in two different scenarios. In the first scenario, the library of spectral signatures is assumed to be known, and the main problem is to find the minimum number of endmembers subject to a reasonably small approximation error. Mathematically, the corresponding problem is the $\ell_0$-norm problem, which is NP-hard. The main study in the first part of the thesis is to find more accurate and reliable approximations of the $\ell_0$-norm term and to propose sparse unmixing methods based on such approximations. The resulting methods show considerable improvements in reconstructing the fractional abundances of endmembers compared with state-of-the-art methods, such as lower reconstruction errors. In the second part of the thesis, the first scenario (i.e., the dictionary-aided semiblind unmixing scheme) is generalized to the blind unmixing scenario, in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, owing to its notable advantages, such as enforcing nonnegativity constraints on the two decomposed matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and a mathematical representation of the energy of SSoM concentrated in the first few subbands. Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations over synthetic and real hyperspectral data sets. The results show considerable improvements in estimating the spectral library of materials and their fractional abundances, such as smaller values of the spectral angle distance (SAD) and the abundance angle distance (AAD).
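As a concrete illustration of the semiblind (library-aided) scenario, the sketch below estimates sparse, nonnegative abundances against a known spectral library using the common $\ell_1$ relaxation solved by projected ISTA; it is a minimal stand-in for, not a reproduction of, the $\ell_0$ surrogates proposed in the thesis, and the function names and toy data are assumptions.

```python
import numpy as np

def sparse_unmix_ista(A, y, lam=1e-2, n_iter=500):
    """Estimate sparse, nonnegative abundances x with A @ x ~= y.

    Minimizes 0.5*||A x - y||^2 + lam*||x||_1 subject to x >= 0 using
    projected ISTA; this l1 relaxation stands in for the l0 surrogates
    studied in the thesis.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the smooth term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = np.maximum(x - step * (grad + lam), 0.0)  # gradient step, soft-threshold, project
    return x

# Toy pixel mixed from 2 of the 60 signatures in a 50-band library.
rng = np.random.default_rng(0)
A = rng.random((50, 60))                 # spectral library (columns = endmembers)
x_true = np.zeros(60)
x_true[[3, 7]] = [0.6, 0.4]
y = A @ x_true
x_hat = sparse_unmix_ista(A, y)
print(np.round(x_hat[[3, 7]], 2))        # estimated abundances at the two true endmember positions
```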
Abstract:
Natural rubber latex (NRL) can be used successfully in controlled-release drug delivery owing to its excellent matrix-forming properties. Recently, NRL has been shown to stimulate angiogenesis, cellular adhesion and the formation of extracellular matrix, promoting the replacement and regeneration of tissue. A dermatological delivery system comprising a topically acceptable, inert support impregnated with a metronidazole (MET) solution was developed. MET, 2-(2-methyl-5-nitro-1H-imidazol-1-yl)ethanol, is a nitroimidazole anti-infective medication widely used in the treatment of infections caused by susceptible organisms, particularly anaerobic bacteria and protozoa. In a previous study, we tested NRL as an occlusive membrane for GBR with promising results. One possible way to decrease the inflammatory process is to incorporate MET into the NRL. MET was incorporated into the NRL by mixing it in solution for in vitro drug-delivery experiments. The solutions of latex and MET were polymerized at different temperatures, from -100 to 40 °C, in order to control the membrane morphology. SEM analysis showed that the number, size and distribution of pores in the NRL membranes, as well as their overall morphology, varied with the polymerization temperature. The results demonstrated that the best drug-delivery system was the membrane polymerized at -100 °C, which releases 77.1% of its MET content over up to 310 hours.
Abstract:
Expokit provides a set of routines aimed at computing matrix exponentials. More precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear ODEs with constant inhomogeneity. The backbone of the sparse routines consists of matrix-free Krylov subspace projection methods (Arnoldi and Lanczos processes), and that is why the toolkit is capable of coping with sparse matrices of large dimension. The software handles real and complex matrices and provides specific routines for symmetric and Hermitian matrices. The computation of matrix exponentials is a numerical issue of critical importance in the area of Markov chains, and furthermore the computed solution is subject to probabilistic constraints. In addition to addressing general matrix exponentials, particular attention is given to the computation of transient states of Markov chains.
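Expokit's central sparse operation, computing the action w = exp(A)v without ever forming exp(A), can be illustrated with SciPy, whose `expm_multiply` computes the same quantity (by a different algorithm than the Krylov projections used in Expokit); the sketch below is an assumption-laden illustration, not Expokit's own interface.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

# Large sparse matrix and operand vector: compute w = exp(A) @ v without
# ever forming the (dense) matrix exponential itself.
n = 2000
A = sp.random(n, n, density=1e-3, format="csr", random_state=42)
v = np.random.default_rng(0).standard_normal(n)
w = expm_multiply(A, v)

# Sanity check against the full matrix exponential on a small problem.
B = 0.1 * np.random.default_rng(1).standard_normal((40, 40))
u = np.ones(40)
assert np.allclose(expm_multiply(sp.csr_matrix(B), u), expm(B) @ u)
print(w[:3])
```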
Abstract:
In this work, we consider the numerical solution of a large eigenvalue problem resulting from a finite rank discretization of an integral operator. We are interested in computing a few eigenpairs, with an iterative method, so a matrix representation that allows for fast matrix-vector products is required. Hierarchical matrices are appropriate for this setting, and also provide cheap LU decompositions required in the spectral transformation technique. We illustrate the use of freely available software tools to address the problem, in particular SLEPc for the eigensolvers and HLib for the construction of H-matrices. The numerical tests are performed using an astrophysics application. Results show the benefits of the data-sparse representation compared to standard storage schemes, in terms of computational cost as well as memory requirements.
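A minimal SciPy sketch of the spectral transformation used here (shift-and-invert, with a factorization playing the role of the H-matrix LU) is shown below; it uses a 1-D Laplacian as a stand-in operator and ARPACK instead of SLEPc/HLib, so treat it as an illustration of the idea rather than the paper's setup.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Stand-in sparse symmetric operator (1-D Laplacian); the paper's matrix comes
# from a finite-rank discretization of an integral operator instead.
n = 5000
A = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csc")

# Shift-and-invert spectral transformation: the eigenvalues of (A - sigma I)^-1
# with largest magnitude correspond to the eigenvalues of A closest to sigma.
# ARPACK factorizes (A - sigma I) internally; in the paper this factorization
# is the cheap H-matrix LU decomposition.
vals, vecs = eigsh(A, k=4, sigma=0.0, which="LM")
print(vals)   # the four eigenvalues of A nearest the shift
```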
Abstract:
Tribimaximal leptonic mixing is a mass-independent mixing scheme consistent with the present solar and atmospheric neutrino data. By conveniently decomposing the effective neutrino mass matrix associated with it, we derive generic predictions in terms of the parameters governing the neutrino masses. We extend this phenomenological analysis to other mass-independent mixing schemes which are related to the tribimaximal form by a unitary transformation. We classify models that produce tribimaximal leptonic mixing through the group structure of their family symmetries in order to point out that there is often a direct connection between the group structure and the phenomenological analysis. The type of seesaw mechanism responsible for neutrino masses plays a role here, as it restricts the choices of family representations and affects the viability of leptogenesis. We also present a recipe to generalize a given tribimaximal model to an associated model with a different mass-independent mixing scheme, which preserves the connection between the group structure and phenomenology as in the original model. This procedure is explicitly illustrated by constructing toy models with the transpose tribimaximal, bimaximal, golden ratio, and hexagonal leptonic mixing patterns.
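For reference, the tribimaximal pattern discussed above corresponds (up to phase and sign conventions) to the standard mixing matrix and angles
\[
U_{\mathrm{TBM}} =
\begin{pmatrix}
\sqrt{2/3} & 1/\sqrt{3} & 0 \\
-1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\
-1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2}
\end{pmatrix},
\qquad
\sin^2\theta_{12} = \tfrac{1}{3}, \quad
\sin^2\theta_{23} = \tfrac{1}{2}, \quad
\theta_{13} = 0,
\]
so the effective mass matrix decomposed in the abstract takes the mass-independent form $M_\nu = U_{\mathrm{TBM}}\,\mathrm{diag}(m_1, m_2, m_3)\,U_{\mathrm{TBM}}^{T}$ (for Majorana neutrinos, in the basis where the charged-lepton mass matrix is diagonal).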
Abstract:
We produce five flavour models for the lepton sector. All five models fit the existing data on the neutrino mass-squared differences and on the lepton mixing angles perfectly well, at the 1 sigma level. The models are based on the type I seesaw mechanism, on a $Z_2$ symmetry for each lepton flavour, and on either a (spontaneously broken) symmetry under the interchange of two lepton flavours, a (spontaneously broken) CP symmetry incorporating that interchange, or both symmetries simultaneously. Each model makes definite predictions both for the scale of the neutrino masses and for the phase $\delta$ in lepton mixing; the fifth model also predicts a correlation between the lepton mixing angles $\theta_{12}$ and $\theta_{23}$.
Abstract:
We suggest that the weak-basis-independent condition $\det(M_\nu) = 0$ for the effective neutrino mass matrix can be used to remove the ambiguities in the reconstruction of the neutrino mass matrix from input data available from present and future feasible experiments. In this framework, we study the full reconstruction of $M_\nu$ with special emphasis on the correlation between the Majorana CP-violating phase and the various mixing angles. The impact of the recent KamLAND results on the effective neutrino mass parameter is also briefly discussed. (C) 2003 Elsevier Science B.V. All rights reserved.
Abstract:
In several industrial applications, materials with highly complex behaviour are used together with intricate mixing processes, which makes it difficult to achieve the desired properties in the produced materials. This is the case for the well-known dispersion of nano-sized fillers in a polymer melt matrix, used to improve the mechanical and/or electrical properties of the nanocomposite. This mixing is usually performed in twin-screw extruders, which promote complex flow patterns, and, since an in loco analysis of the material evolution and mixing is difficult to perform, numerical tools can be very useful to predict the evolution and behaviour of the material. This work presents a numerically based study to improve the understanding of mixing processes. Initial numerical studies were performed with generalized Newtonian fluids, but, owing to the null relaxation time that characterizes this type of fluid, the assumption of viscoelastic behaviour was required. Therefore, the polymer melt was rheologically characterized, and six-mode Phan-Thien-Tanner and Giesekus models were used to fit the rheological data. These viscoelastic rheological models were then used to model the process. The conclusions obtained in this work provide additional and useful data to correlate the type and intensity of the deformation history imposed on the polymer nanocomposite with the quality of the mixing obtained.
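For context, the single-mode forms of the two constitutive models mentioned above are, in standard notation (the six-mode fits sum six such contributions with their own relaxation times and viscosities; the exact parameter symbols here are assumptions):
\[
\text{Phan-Thien--Tanner (linear):}\qquad
\left[1 + \frac{\varepsilon \lambda}{\eta_p}\,\mathrm{tr}\,\boldsymbol{\tau}\right]\boldsymbol{\tau}
+ \lambda\,\overset{\nabla}{\boldsymbol{\tau}} = 2\,\eta_p\,\mathbf{D},
\]
\[
\text{Giesekus:}\qquad
\boldsymbol{\tau} + \lambda\,\overset{\nabla}{\boldsymbol{\tau}}
+ \frac{\alpha \lambda}{\eta_p}\,(\boldsymbol{\tau}\cdot\boldsymbol{\tau}) = 2\,\eta_p\,\mathbf{D},
\]
where $\overset{\nabla}{\boldsymbol{\tau}}$ is the upper-convected derivative of the polymer stress, $\mathbf{D}$ the rate-of-deformation tensor, $\lambda$ the relaxation time, $\eta_p$ the polymer viscosity, and $\varepsilon$, $\alpha$ the respective nonlinearity parameters.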
Abstract:
Larger and larger deformable mirrors, with ever more actuators, are currently being used in adaptive optics applications. The control of mirrors with hundreds of actuators is a topic of great interest, since classical control techniques based on the pseudoinverse of the system control matrix become too slow when dealing with matrices of such large dimensions. This doctoral thesis proposes a method for accelerating and parallelizing the control algorithms for these mirrors, based on a control technique that zeroes the smallest components of the control matrix (sparsification), followed by an optimization of the actuator ordering according to the shape of the matrix, and finally by its division into small tridiagonal blocks. These blocks are much smaller and easier to use in the computations, which allows much higher computation speeds because the null components of the control matrix are no longer processed. Moreover, this approach allows the computation to be parallelized, giving the system an additional speed gain. Even without parallelization, an increase of almost 40% in convergence speed was obtained with the proposed technique for mirrors with only 37 actuators. To validate this, a complete new experimental setup was implemented, including a programmable phase modulator for generating turbulence by means of phase screens, and a full model of the control loop was developed to investigate the performance of the proposed algorithm. The results, both in simulation and experimentally, show full equivalence in the deviation values after compensation of the different types of aberrations for the different algorithms used, although the method proposed here carries a much lower computational load. The procedure is expected to be very successful when applied to very large mirrors.
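A minimal Python sketch of the sparsification step described above (zeroing the smallest components of the control matrix and keeping only a compressed sparse representation, so that later matrix-vector products skip the null entries) is given below; the toy matrix, threshold, and names are illustrative assumptions, and the actuator-reordering and tridiagonal-blocking stages of the thesis are not reproduced.

```python
import numpy as np
import scipy.sparse as sp

def sparsify(C, rel_tol=0.05):
    """Zero every entry of the control matrix C smaller (in magnitude) than
    rel_tol times the largest entry, and return it in compressed sparse form."""
    thresh = rel_tol * np.abs(C).max()
    return sp.csr_matrix(np.where(np.abs(C) >= thresh, C, 0.0))

# Toy control matrix: each of 37 actuators mainly influences nearby modes,
# so most entries are negligibly small.
rng = np.random.default_rng(0)
n_modes, n_act = 400, 37
modes = np.linspace(0.0, 1.0, n_modes)[:, None]
acts = np.linspace(0.0, 1.0, n_act)[None, :]
C = np.exp(-((modes - acts) ** 2) / 0.002) + 1e-4 * rng.standard_normal((n_modes, n_act))

Cs = sparsify(C)
print(f"kept {Cs.nnz} of {C.size} entries")

# The control-loop product now skips the zeroed components entirely.
sensor_signal = rng.standard_normal(n_modes)
commands = Cs.T @ sensor_signal
```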
Abstract:
Nowadays, the problem of solving sparse linear systems over the field GF(2) remains a challenge. The popular approach is to improve existing methods such as the block Lanczos method (the Montgomery method) and the Wiedemann-Coppersmith method. Both of these methods are considered in detail in the thesis: their modifications and a computational cost estimate are given for each process. The thesis highlights the most complicated parts of these methods and shows how the computations can be improved from a software point of view. The research provides an implementation of a library of accelerated binary matrix operations, which helps to make the iteration steps of the Montgomery and Wiedemann-Coppersmith methods faster.
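The kind of primitive such a binary-matrix library accelerates can be sketched as follows: rows of a GF(2) matrix and the operand vector are packed into 64-bit words, and each row-vector product over GF(2) reduces to a bitwise AND followed by the parity of the popcount. This generic Python/NumPy illustration is an assumption, not the thesis's library or its Montgomery/Wiedemann-Coppersmith kernels.

```python
import numpy as np

def gf2_matvec(packed_rows, packed_x):
    """y = A x over GF(2); each row of A and the vector x are packed into
    uint64 words, so a dot product mod 2 is AND + parity of the popcount."""
    y = np.empty(len(packed_rows), dtype=np.uint8)
    for i, row in enumerate(packed_rows):
        masked = row & packed_x                                   # bitwise AND of the packed words
        y[i] = bin(int.from_bytes(masked.tobytes(), "little")).count("1") & 1
    return y

# Pack a random 0/1 matrix and vector into 64-bit words.
rng = np.random.default_rng(0)
n_rows, n_cols = 8, 128
A = rng.integers(0, 2, size=(n_rows, n_cols), dtype=np.uint8)
x = rng.integers(0, 2, size=n_cols, dtype=np.uint8)
pack = lambda bits: np.packbits(bits, bitorder="little").view(np.uint64)

y = gf2_matvec([pack(row) for row in A], pack(x))
assert np.array_equal(y, (A.astype(int) @ x.astype(int)) % 2)     # matches the naive product
```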
Abstract:
We study the application of matrix decomposition algorithms such as Non-negative Matrix Factorization (NMF) to frequency-domain representations of musical audio signals. These algorithms, driven by a reconstruction error function, learn a set of basis functions and a set of corresponding coefficients that approximate the input signal. We compare the use of three reconstruction error functions when NMF is applied to monophonic and harmonized scales: least squares, the Kullback-Leibler divergence, and a recently introduced phase-dependent divergence measure. New methods for interpreting the resulting decompositions are presented and compared with previously used methods that require knowledge of the acoustic domain. Finally, we analyse the ability of the learned basis functions to generalize with respect to three musical parameters: amplitude, duration, and instrument type. To do so, we introduce two algorithms for labelling the basis functions, which outperform the previous approach in the majority of our tests, the instrument task on monophonic audio being the only important exception.
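A minimal sketch of the basic setup described above, NMF applied to a magnitude spectrogram with either the least-squares or Kullback-Leibler reconstruction error, is given below using scikit-learn; the phase-dependent divergence and the basis-labelling algorithms of the thesis are not reproduced, and the three-tone "scale" is a stand-in for real recordings.

```python
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import NMF

# Synthetic monophonic "scale": three pure tones played one after another.
sr = 8000
tone = lambda f: np.sin(2 * np.pi * f * np.arange(sr) / sr)
audio = np.concatenate([tone(f) for f in (262.0, 330.0, 392.0)])

# Magnitude spectrogram V (frequency x time), the nonnegative input to NMF.
_, _, Z = stft(audio, fs=sr, nperseg=512)
V = np.abs(Z)

# V ~= W @ H: the columns of W are learned spectral basis functions and the
# rows of H their time-varying gains. beta_loss selects the reconstruction
# error: "frobenius" for least squares, "kullback-leibler" for KL divergence.
model = NMF(n_components=3, beta_loss="kullback-leibler", solver="mu",
            init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)
H = model.components_
print(W.shape, H.shape)   # (n_freqs, 3) and (3, n_frames)
```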
Abstract:
Supervised learning of large-scale hierarchical networks is currently enjoying tremendous success. Despite this excitement, unsupervised learning remains, according to many researchers, a key element of Artificial Intelligence, in which agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to the density-estimation problem through Boltzmann machines (BMs), the probabilistic graphical models at the heart of deep learning. Our contributions span sampling, partition-function estimation, optimization, and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm, which automatically adjusts the temperature of the simulated Markov chains in order to maintain a high convergence speed throughout training. When used in the context of stochastic maximum-likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate, as well as better convergence speed. Our results are presented for BMs, but the method is general and applicable to the training of any probabilistic model that relies on Markov chain sampling. While the maximum-likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. Unlike traditional approaches that treat a given model as a black box, we propose instead to exploit the learning dynamics by estimating the successive changes in the log-partition function incurred at each parameter update. The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and to the temperature parameter. On the optimization side, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption has been limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Unfortunately, its implementation remains inefficient in terms of computation time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify so as to model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into greater invariance at the representation level and a better classification rate when little labelled data is available.
We conclude this thesis with an ambitious topic: learning representations that can separate out the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of "pooling" in complementary vector subspaces.
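The metric-free natural gradient idea summarized above, replacing explicit construction and inversion of the Fisher information matrix with a linear solver driven by matrix-vector products, can be sketched generically as follows; the toy per-sample gradients, the damping, and the conjugate-gradient solver are illustrative assumptions, and this is not the thesis's MFNG implementation for Boltzmann machines.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n_params, n_samples = 500, 2000

# Toy per-sample gradients; in MFNG these would come from the model's
# sufficient statistics under the data and model distributions.
G = rng.standard_normal((n_samples, n_params))
grad = G.mean(axis=0)                       # ordinary (Euclidean) gradient

def fisher_vec(v, damping=1e-3):
    """Damped Fisher-vector product F v ~ E[g g^T] v, computed without ever
    forming the n_params x n_params Fisher information matrix."""
    return G.T @ (G @ v) / n_samples + damping * v

F = LinearOperator((n_params, n_params), matvec=fisher_vec)

# Natural-gradient direction: solve F d = grad with conjugate gradients,
# using only Fisher-vector products.
nat_grad, info = cg(F, grad, maxiter=200)
print(info, np.linalg.norm(nat_grad))       # info == 0 means CG converged
```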
Abstract:
Mineral dust is an important aerosol species in the Earth’s atmosphere and has a major source within North Africa, of which the Sahara forms the major part. Aerosol Time of Flight Mass Spectrometry (ATOFMS) is first used to determine the mixing state of dust particles collected from the land surface in the Saharan region, showing low abundance of species such as nitrate and sulphate internally mixed with the dust mineral matrix. These data are then compared with the ATOFMS single particle mass spectra of Saharan dust particles detected in the marine atmosphere in the vicinity of the Cape Verde islands, which are further compared with those from particles with longer atmospheric residence sampled at a coastal station at Mace Head, Ireland. Saharan dust particles collected near the Cape Verde Islands showed increased internally mixed nitrate but no sulphate, whilst Saharan dust particles collected on the coast of Ireland showed a very high degree of internally mixed secondary species including nitrate, sulphate and methanesulphonate. This uptake of secondary species will change the pH and hygroscopic properties of the aerosol dust and thus can influence the budgets of other reactive gases, as well as influencing the radiative properties of the particles and the availability of metals for dissolution.
Abstract:
The influence matrix is used in ordinary least-squares applications for monitoring statistical multiple-regression analyses. Concepts related to the influence matrix provide diagnostics on the influence of individual data on the analysis - the analysis change that would occur by leaving one observation out, and the effective information content (degrees of freedom for signal) in any sub-set of the analysed data. In this paper, the corresponding concepts have been derived in the context of linear statistical data assimilation in numerical weather prediction. An approximate method to compute the diagonal elements of the influence matrix (the self-sensitivities) has been developed for a large-dimension variational data assimilation system (the four-dimensional variational system of the European Centre for Medium-Range Weather Forecasts). Results show that, in the boreal spring 2003 operational system, 15% of the global influence is due to the assimilated observations in any one analysis, and the complementary 85% is the influence of the prior (background) information, a short-range forecast containing information from earlier assimilated observations. About 25% of the observational information is currently provided by surface-based observing systems, and 75% by satellite systems. Low-influence data points usually occur in data-rich areas, while high-influence data points are in data-sparse areas or in dynamically active regions. Background-error correlations also play an important role: high correlation diminishes the observation influence and amplifies the importance of the surrounding real and pseudo observations (prior information in observation space). Incorrect specifications of background and observation-error covariance matrices can be identified, interpreted and better understood by the use of influence-matrix diagnostics for the variety of observation types and observed variables used in the data assimilation system. Copyright © 2004 Royal Meteorological Society
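In the ordinary least-squares setting the abstract starts from, the influence matrix is the hat matrix S = X (XᵀX)⁻¹ Xᵀ: its diagonal entries are the self-sensitivities, and their sum is the degrees of freedom for signal. The NumPy sketch below shows only this textbook OLS case, not the approximate computation inside the four-dimensional variational system.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_params = 200, 5
X = rng.standard_normal((n_obs, n_params))        # regression design matrix

# Influence (hat) matrix: S @ y gives the fitted values for observations y.
S = X @ np.linalg.solve(X.T @ X, X.T)

self_sens = np.diag(S)                            # self-sensitivity of each observation
print(self_sens.sum())                            # trace = degrees of freedom for signal (= 5 here)
print(self_sens.argmax())                         # the most influential (highest-leverage) observation
```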