953 results for Phenotypic Covariance Matrices


Relevance:

20.00%

Publisher:

Abstract:

The first part of this book studies and identifies the structure of a linear equation and its graph in two and three variables. The second part presents the theory of matrix algebra, which is then used in the third part to solve systems of linear equations; here the reader will appreciate the strengths of the methods developed and the range of problems that can be posed and solved with the Gaussian methods of linear algebra. The book also presents the use of MATLAB on the computer for solving systems of linear equations and performing matrix operations.
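Since the book points to MATLAB for this workflow, a minimal Python/NumPy analogue may help illustrate it; the 3×3 system below is an arbitrary textbook-style example, not one taken from the book.

```python
import numpy as np

# Arbitrary example system A x = b (not taken from the book):
#   2x +  y -  z =   8
#  -3x -  y + 2z = -11
#  -2x +  y + 2z =  -3
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

x = np.linalg.solve(A, b)                  # LU / Gaussian-elimination based solver
print("solution x:", x)                    # expected: [ 2.  3. -1.]
print("residual A @ x - b:", A @ x - b)    # matrix-vector product as a check
```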

Relevance:

20.00%

Publisher:

Abstract:

As history has repeatedly shown, the important and far-reaching results in mathematics are generally those capable of linking two structures that are, in essence, completely different. In 1873, the Norwegian mathematician Marius Sophus Lie (1849-1925), while studying properties of solutions of systems of differential equations, originated the ideas that became what is now called Lie theory, which establishes the relationship between geometry, algebra and topology. Lie largely created the theory of continuous symmetry and applied it to the study of geometry and differential equations. With later contributions by the mathematicians Weyl, Cartan, Chevalley, Killing, Harish-Chandra and others, Lie theory took its present shape; this research work presents the basic notions underlying that theory. In Sophus Lie's early work, the underlying idea was to build a theory of continuous groups that would complement the already existing theory of groups.

Relevance:

20.00%

Publisher:

Abstract:

Diversity among individuals in a population is an important feature linking vital rates with behaviour and spatial occupation. We measured growth increments in the otoliths of individual fish collected during the annual PELGAS fisheries survey from 2001 to 2015. Individuals that grew larger at the juvenile stage occupied more offshore habitats later in life. We also analysed allozymes at 13 loci from 2001 to 2006. Alleles of the enzyme IDH showed different frequencies in inshore and offshore habitats. The population thus segregates spatially along a coastal-to-offshore gradient, with individuals differing in early growth and allele frequencies. These results show how individuals in a population segregate spatially among habitats in relation to phenotypic diversity. This implies modelling the population with individual-based and physiological approaches to fully grasp its dynamics. It also implies developing management strategies that conserve infra-population diversity as a means to guarantee the occupation of the full range of habitats.

Relevance:

20.00%

Publisher:

Abstract:

The first part of this book studies and identifies the structure of a linear equation and its graph in two and three variables. The second part presents the theory of matrix algebra, which is then used in the third part to solve systems of linear equations; here the reader will appreciate the strengths of the methods developed and the range of problems that can be posed and solved with the Gaussian methods of linear algebra. The book also presents the use of MATLAB on the computer for solving systems of linear equations and performing matrix operations.

Relevance:

20.00%

Publisher:

Abstract:

Toxoplasma gondii is the causative protozoan agent of toxoplasmosis, a common infection distributed worldwide. Studies have revealed predominantly clonal strains in North America and Europe and greater genetic diversity among South American strains. Our study aimed to differentiate the pathogenicity and sulfadiazine resistance of three T. gondii isolates obtained from livestock intended for human consumption. The cytopathic effects of the T. gondii isolates were evaluated. Pathogenicity was determined by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) using the CS3 marker and in a rodent model in vivo. Phenotypic sulfadiazine resistance was measured using a kinetic curve of drug activity in Swiss mice. IgM and IgG were measured by ELISA, and the dihydropteroate synthase (DHPS) gene sequence was analysed. The cytopathic effects and the PCR-RFLP profiles from chickens indicated different infection sources. The Ck3 isolate displayed stronger cytopathic effects in vitro than the Ck2 and ME49 strains. Additionally, the Ck2 isolate induced a humoral immune response different from that of ME49. The Ck3 and Pg1 isolates, but not the Ck2 isolate, showed sulfadiazine resistance in the sensitivity assay. We did not find any DHPS gene polymorphisms in the mouse samples. These atypical pathogenicity and sulfadiazine resistance profiles have not been reported previously and serve as a warning to local health authorities.

Relevance:

20.00%

Publisher:

Abstract:

Compressed covariance sensing using quadratic samplers is gaining increasing interest in the recent literature. The covariance matrix often plays the role of a sufficient statistic in many signal and information processing tasks. However, owing to the large dimension of the data, it may become necessary to obtain a compressed sketch of the high-dimensional covariance matrix to reduce the associated storage and communication costs. Nested sampling has been proposed in the past as an efficient sub-Nyquist sampling strategy that enables perfect reconstruction of the autocorrelation sequence of wide-sense stationary (WSS) signals, as though they were sampled at the Nyquist rate. The key idea behind nested sampling is to exploit properties of the difference set that naturally arises in the quadratic measurement model associated with covariance compression. In this thesis, we focus on developing novel versions of nested sampling for low-rank Toeplitz covariance estimation and for phase retrieval, where the latter problem finds many applications in high-resolution optical imaging, X-ray crystallography and molecular imaging. The problem of low-rank compressive Toeplitz covariance estimation is first shown to be fundamentally related to that of line spectrum recovery. In the absence of noise, this connection can be exploited to develop a particular kind of sampler, called the Generalized Nested Sampler (GNS), that can achieve optimal compression rates. In the presence of bounded noise, we develop a regularization-free algorithm that provably leads to stable recovery of the high-dimensional Toeplitz matrix from its order-wise minimal sketch acquired using a GNS. Contrary to existing TV-norm and nuclear-norm based reconstruction algorithms, our technique does not use any tuning parameters, which can be of great practical value. The idea of nested sampling also finds a surprising use in the problem of phase retrieval, which has attracted great interest in recent times for its convex formulation via PhaseLift. Using another modified version of nested sampling, namely the Partial Nested Fourier Sampler (PNFS), we show that with probability one, it is possible to achieve a certain conjectured lower bound on the necessary measurement size. Moreover, for sparse data, an l1-minimization based algorithm is proposed that can lead to stable phase retrieval using an order-wise minimal number of measurements.
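As a rough illustration of the difference-set idea behind nested sampling (this is the basic two-level nested array of Pal and Vaidyanathan, not the GNS or PNFS developed in the thesis; the array sizes and the test signal are made up), the sketch below checks that the differences of only N1 + N2 sample positions cover every lag up to N2(N1+1) − 1, and uses that to estimate the autocorrelation of a WSS toy signal from sub-Nyquist samples.

```python
import numpy as np

# Two-level nested array (hypothetical sizes): dense inner ULA of N1 positions
# plus sparse outer ULA of N2 positions with spacing N1 + 1.
N1, N2 = 4, 5
pos = np.concatenate([np.arange(1, N1 + 1), (N1 + 1) * np.arange(1, N2 + 1)])

# Key property: the difference set {n_i - n_j} contains every lag 0 .. N2*(N1+1)-1,
# so all autocorrelation lags of a WSS signal are observable from N1 + N2 samples.
max_lag = N2 * (N1 + 1) - 1
diffs = (pos[:, None] - pos[None, :]).ravel()
assert set(range(max_lag + 1)) <= set(np.abs(diffs).tolist())

# Toy estimate of r(lag) by averaging sample products over snapshots,
# using only the nested (sub-Nyquist) samples of each snapshot.
rng = np.random.default_rng(0)
r_hat = np.zeros(max_lag + 1, dtype=complex)
hits = np.zeros(max_lag + 1)
for _ in range(2000):
    phase = rng.uniform(0, 2 * np.pi, size=2)
    n = np.arange(max_lag + 1)
    x = (np.exp(1j * (0.3 * np.pi * n + phase[0]))
         + np.exp(1j * (0.7 * np.pi * n + phase[1]))
         + 0.3 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size)))
    xs = x[pos - 1]                                  # keep only the nested samples
    prod = (xs[:, None] * np.conj(xs[None, :])).ravel()
    for d, p in zip(diffs, prod):
        if 0 <= d <= max_lag:
            r_hat[d] += p
            hits[d] += 1

print("estimated autocorrelation (first lags):", np.round((r_hat / hits).real[:6], 2))
```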

Relevance:

20.00%

Publisher:

Abstract:

The aim of this study was to identify sorghum hybrids that have both high yield and phenotypic stability in Brazilian environments. Seven trials were conducted between February and March 2011. The experimental design was a randomized complete block with 25 treatments and three replicates...

Relevance:

20.00%

Publisher:

Abstract:

This study aimed to perform phenotypic and molecular characterization of cultivars and breeding lines of common bean for resistance to anthracnose.

Relevance:

20.00%

Publisher:

Abstract:

In this work, we study a version of the general question of how well a Haar-distributed orthogonal matrix can be approximated by a random Gaussian matrix. Here, we consider a Gaussian random matrix (Formula presented.) of order n and apply to it the Gram–Schmidt orthonormalization procedure by columns to obtain a Haar-distributed orthogonal matrix (Formula presented.). If (Formula presented.) denotes the vector formed by the first m coordinates of the i-th row of (Formula presented.) and (Formula presented.), our main result shows that the Euclidean norm of (Formula presented.) converges exponentially fast to (Formula presented.), up to negligible terms. To show the extent of this result, we use it to study the convergence of the supremum norm (Formula presented.), and we find a coupling that improves by a factor (Formula presented.) the recently proved best known upper bound on (Formula presented.). Our main result also has applications in Quantum Information Theory.
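A small numerical sketch of the construction described above, under assumed notation (the names Y and Gamma and the block size m are illustrative, not the paper's): Gram–Schmidt by columns is obtained here via a QR factorisation with the diagonal of R forced positive, the usual recipe for producing a Haar-distributed orthogonal matrix from a Gaussian one, and the comparison of sqrt(n)·Gamma with Y on a small leading block only hints at the kind of approximation the abstract quantifies.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 1000, 5                          # illustrative sizes, m << n

# Gaussian random matrix of order n.
Y = rng.standard_normal((n, n))

# Gram-Schmidt by columns == QR with the diagonal of R made positive.
Q, R = np.linalg.qr(Y)
Gamma = Q * np.sign(np.diag(R))         # flip column signs so R has a positive diagonal

# On a small leading block, sqrt(n) * Gamma tracks Y entrywise.
block_err = np.abs(np.sqrt(n) * Gamma[:m, :m] - Y[:m, :m]).max()
row_norm = np.linalg.norm(Gamma[0, :m])  # first m coordinates of row 0 of Gamma
print(f"max entrywise error on the {m}x{m} block: {block_err:.3e}")
print(f"norm of first {m} coords of row 0: {row_norm:.3e}  (sqrt(m/n) = {np.sqrt(m / n):.3e})")
```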

Relevance:

20.00%

Publisher:

Abstract:

The objective of this thesis is to explore new and improved methods for greater sample introduction efficiency and enhanced analytical performance with inductively coupled plasma optical emission spectrometry (ICP-OES). Three projects are discussed in which the capabilities and applications of ICP-OES are expanded.

1. In the first project, a conventional ultrasonic nebuliser was modified to replace the heater/condenser with an infrared-heated pre-evaporation tube. Continuing previous work on pre-evaporation, the current work investigated the effects of heating with infrared block and rope heaters on two different ICP-OES instruments. Comparisons were made between several methods and setups in which temperatures were varied. By monitoring changes to sensitivity, detection limit, precision and robustness, and by analyzing two certified reference materials, a method with improved sample introduction efficiency and analytical performance comparable to a previous method was established.

2. The second project involved improvements to a previous work in which a multimode sample introduction system (MSIS) was modified by inserting a pre-evaporation tube between the MSIS and the torch. The new work focused on applying an infrared-heated ceramic rope for pre-evaporation. This research was conducted in all three MSIS modes (nebulisation, hydride generation and dual mode) and on two different ICP-OES instruments, and comparisons were made with conventional setups in terms of sensitivity, detection limit, precision and robustness. By tracking both hydride-forming and non-hydride-forming elements, the effects of heating in combination with hydride generation were probed. Finally, the optimal methods were validated by analysis of two certified reference materials.

3. A final project was completed in collaboration with ZincNyx Energy Solutions. This project sought to develop a method for the overall analysis of a 12 M KOH zincate fuel, which is used in green-energy backup systems. By employing various techniques including flow injection analysis and standard additions, a final procedure was formulated for the verification of the K concentration, as well as the measurement of additives (Al, Fe, Mg, In, Si), corrosion products (such as C from CO₃²⁻) and Zn particles both in and filtered from solution. Furthermore, the effects of exposing the potassium zincate electrolyte fuel to air were assessed.
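For the standard-additions step mentioned in the third project, the concentration of the unspiked sample follows from extrapolating the signal-versus-added-concentration line to zero signal; the sketch below uses invented numbers, not data from the thesis.

```python
import numpy as np

# Hypothetical standard-additions data: known amounts of analyte added to
# aliquots of the sample, emission intensity read at each level.
added_conc = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # mg/L added
signal = np.array([12.1, 18.0, 24.2, 29.8, 36.1])       # arbitrary intensity units

# Fit signal = slope * added + intercept; the x-intercept is -C_sample,
# so the sample concentration is intercept / slope.
slope, intercept = np.polyfit(added_conc, signal, 1)
c_sample = intercept / slope
print(f"estimated sample concentration: {c_sample:.2f} mg/L")
```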

Relevance:

20.00%

Publisher:

Abstract:

The presence of organic contaminants in the environment is an issue with both scientific and political stakes. The diffuse and continuous character of this contamination (multiple and varied sources) means that these biologically active molecules are not covered by legislation. These molecules, which can be highly recalcitrant, are not systematically removed by conventional water treatment systems. New biotechnological processes based on extracellular enzymes (e.g. laccase) or on wood-decaying fungi now make it possible to remove the most recalcitrant compounds, but our understanding of the mechanisms involved in this removal remains incomplete. Biosorption and the activity of extracellular enzymes are the mechanisms most often put forward to explain the efficiency of fungal removal processes, yet they cannot account for the performance observed for some pharmaceutical compounds. These gaps in our knowledge of the mechanisms responsible for the fungal removal of organic contaminants hold back the full exploitation of these treatment processes. Moreover, it must be acknowledged that much of the work on the fungal removal of organic contaminants has been carried out at high concentrations that are poorly representative of environmental matrices, so the effects observed at higher concentrations may result from the stress of the organism in contact with the contaminants (toxicity). This thesis addresses two questions: what is the influence of trace concentrations on such processes, and how can the removal of certain organic contaminants during fungal treatment be explained? To shed light on the mechanisms at play during fungal removal, the work presented here was carried out on a model wood-decaying fungus known for its bioremediation properties. First, an analytical method was developed for quantifying a selection of organic contaminants at trace levels; it allows these molecules to be analysed from a single low-biomass environmental sample and a single instrumental injection. The results of this thesis show that the fungal removal of organic contaminants involves more complex mechanisms than previously described. In particular, degradation depends strongly on an initial step in which the contaminant is internalised by the organism and then degraded intracellularly; the mechanisms involved can thus give rise to intracellular conjugation reactions (glucuronide, glutathione). The results also show that these fungal removal processes are effective over a wide range of organic contaminant concentrations. However, low concentrations modify the physico-chemical and biological properties of the organism tested (i.e. changes in morphology and in the profile of enzyme production), and the biological response is not directly proportional to contaminant exposure. This study has increased our understanding of the mechanisms involved in the fungal degradation of organic contaminants.
It opens the way to new studies of the interactions between intra- and extracellular processes. This thesis also contributes to improving knowledge by providing the understanding needed to optimise and develop the potential of these biotechnological processes (identifying the enzymes actually involved in the biocatalytic reactions and their roles).

Relevance:

20.00%

Publisher:

Abstract:

The performance of scintillation detectors, composed of a scintillating crystal coupled to a photodetector, depends critically on how efficiently the scintillation photons are collected in the crystal and extracted towards the sensor. In highly pixelated imaging systems (e.g. PET, CT), scintillators must be arranged in compact arrays with form factors that are unfavourable for photon transport, to the detriment of detector performance. The goal of the project is to optimise the performance of these pixel detectors by identifying the sources of light loss related to the spectral, spatial and angular characteristics of the scintillation photons incident on the scintillator faces. Such information, obtained by Monte Carlo simulation, allows an adequate weighting when evaluating the gains achievable through scintillator structuring methods aimed at improved light extraction towards the photodetector. A factorial design was used to evaluate the magnitude of the parameters affecting light collection, notably the absorption of the adhesive materials that hold the crystal arrays together and the optical performance of the reflectors, both of which have a considerable impact on light yield. In addition, a reflector widely used for its exceptional optical performance was characterised under conditions more realistic than immersion in air, in which its reflectivity is always reported. A substantial loss of reflectivity when it is inserted within scintillator arrays was shown by simulation and then confirmed experimentally. This explains the high crosstalk rates observed and opens the way to array assembly methods that, depending on the application, either limit or take advantage of this unsuspected transparency.
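A deliberately crude Monte Carlo sketch in the spirit of the simulations described above (a 1-D stand-in with invented reflectivity, exit transmission and bulk-loss values; the actual study relies on full optical Monte Carlo of pixel arrays): each photon bounces between the wrapped face and the photodetector face until it is absorbed, lost through the reflector, or detected, giving a toy estimate of extraction efficiency versus reflector reflectivity.

```python
import numpy as np

rng = np.random.default_rng(42)

def extraction_efficiency(reflector_R, n_photons=50_000,
                          exit_transmission=0.8,    # chance to escape into the photodetector
                          bulk_loss_per_pass=0.02,  # absorption/scatter loss per traversal
                          max_bounces=200):
    """Toy 1-D pixel: one face wrapped in a reflector, the other coupled to a sensor."""
    detected = 0
    for _ in range(n_photons):
        direction = rng.choice([-1, 1])             # -1: towards wrapped end, +1: towards sensor
        for _ in range(max_bounces):
            if rng.random() < bulk_loss_per_pass:   # absorbed inside the crystal
                break
            if direction == 1:                      # photon reaches the photodetector face
                if rng.random() < exit_transmission:
                    detected += 1
                    break
                direction = -1                      # reflected back at the exit face
            else:                                   # photon reaches the wrapped face
                if rng.random() > reflector_R:      # lost through / absorbed by the reflector
                    break
                direction = 1
    return detected / n_photons

for R in (0.98, 0.90, 0.80):
    print(f"reflector reflectivity {R:.2f} -> extraction efficiency {extraction_efficiency(R):.3f}")
```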

Relevance:

20.00%

Publisher:

Abstract:

The deep-sea lantern shark Etmopterus spinax occurs in the northeast Atlantic on or near the bottom of the outer continental shelves and slopes, and is regularly captured as bycatch in deep-water commercial fisheries. Given the lack of knowledge on the impacts of fisheries on this species, a demographic analysis using age-based Leslie matrices was carried out. Given the uncertainties in the mortality estimates and in the available life-history parameters, several different scenarios, some incorporating stochasticity in the life-history parameters (using Monte Carlo simulation), were analyzed. If only natural mortality was considered, even after introducing uncertainties in all parameters, the estimated population growth rate (λ) suggested an increasing population. However, when fishing mortality from trawl fisheries was included, the estimates of λ indicated either increasing or declining populations. In these latter cases, the uncertainties in the species' reproductive cycle seemed to be particularly relevant: a 2-year reproductive cycle indicated a stable population, while a longer (3-year) cycle indicated a declining population. The estimated matrix elasticities were in general higher for the survivorship parameters of the younger age classes and tended to decrease for the older ages. This highlights the susceptibility of this deep-sea squaloid to increased fishing mortality, emphasizing that even though this is a small-sized species, it shows population dynamics patterns more typical of the larger-sized and in general more vulnerable species.
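A minimal sketch of an age-based Leslie-matrix calculation of the kind described above (the fecundities and survival rates are invented placeholders, not the E. spinax estimates): the population growth rate λ is the dominant eigenvalue of the matrix, and elasticities follow from the right and left eigenvectors.

```python
import numpy as np

# Hypothetical life-history values for illustration only:
# age-specific fecundities (top row) and annual survival probabilities (sub-diagonal).
fecundity = np.array([0.0, 0.0, 0.0, 1.5, 2.0, 2.0])
survival = np.array([0.6, 0.7, 0.75, 0.8, 0.8])        # survival from age class i to i+1

n = fecundity.size
L = np.zeros((n, n))
L[0, :] = fecundity                                     # births enter the first age class
L[np.arange(1, n), np.arange(n - 1)] = survival         # ageing along the sub-diagonal

# Population growth rate: dominant eigenvalue of the Leslie matrix.
vals, right = np.linalg.eig(L)
lam = vals.real.max()
w = np.abs(right[:, vals.real.argmax()].real)           # stable age distribution
valsT, left = np.linalg.eig(L.T)
v = np.abs(left[:, valsT.real.argmax()].real)           # reproductive values

# Elasticities: e_ij = (a_ij / lambda) * v_i * w_j / <v, w>; they sum to 1.
E = (L / lam) * np.outer(v, w) / (v @ w)

print(f"lambda = {lam:.3f} ({'increasing' if lam > 1 else 'stable or declining'} population)")
print("survival elasticities by age:", np.round(np.diag(E, k=-1), 3))
```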

Relevance:

20.00%

Publisher:

Abstract:

The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time-delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting for the antenna's motion. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal-component approach presented by Romano and Woan is another way of handling these noises; it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produces two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produces data free from the laser frequency noises. This result led to the idea that the principal components may actually be time-delay interferometry observables, since they produce the same outcome, that is, data free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10×10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed, unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, so analysis using principal components should give the same results as analysis using the traditional observables. This was confirmed by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noise are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, the arm lengths and the noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which appear in the covariance matrix and, in our toy-model investigations, did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation method that takes advantage of this structure. In terms of separating the two sets of data for the analysis, this was not necessary, because the laser frequency noises are very large compared to the photodetector noises, which results in a significant suppression of the data containing them after the matrix inversion. In the frequency domain, the power spectral density matrices are block diagonal, which simplifies the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and the non-stationarity do not show up, because of the summation in the Fourier transform.
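A toy two-channel sketch of the eigendecomposition idea described above (a stand-in, not the actual LISA measurement model or the Romano–Woan formulation): when the same very large "laser" noise enters both data streams, the small-eigenvalue eigenvector of the covariance matrix is approximately the difference combination, and projecting the data onto it yields a stream free of the common noise.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Two toy data streams sharing one very large common noise ("laser" noise)
# plus small independent noises ("photodetector" noises).
laser = 100.0 * rng.standard_normal(N)
x1 = laser + rng.standard_normal(N)
x2 = laser + rng.standard_normal(N)
X = np.vstack([x1, x2])                      # shape (2, N)

# Sample covariance matrix and its eigendecomposition.
C = np.cov(X)
eigvals, eigvecs = np.linalg.eigh(C)         # eigenvalues in ascending order

# Project the data onto the eigenvectors: the small-eigenvalue component is
# approximately (x1 - x2)/sqrt(2), in which the common noise cancels.
projected = eigvecs.T @ X
print("eigenvalues:", np.round(eigvals, 1))
print(f"std of small-eigenvalue component: {projected[0].std():.2f}")
print(f"std of large-eigenvalue component: {projected[1].std():.2f}")
```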