952 results for Legendre polynomial


Relevance:

10.00%

Publisher:

Abstract:

In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that such a highest scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
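The linear-time scan described in this abstract can be illustrated with a minimal, hypothetical sketch: exons are traversed simultaneously in acceptor (start) order and donor (end) order while a running maximum of the best gene score is maintained. Frame compatibility and the external Gene Model are deliberately omitted; function and variable names are illustrative, not the authors'.

```python
def assemble_best_gene_score(exons):
    """exons: list of (start, end, score) tuples; returns the best total
    score of a chain of non-overlapping exons (the empty chain scores 0)."""
    by_start = sorted(exons, key=lambda e: e[0])   # acceptor order
    by_end = sorted(exons, key=lambda e: e[1])     # donor order
    best_ending_at = {}   # exon -> best gene score ending at that exon
    running_max = 0.0     # best score among exons ending before current start
    best_overall = 0.0
    j = 0
    for start, end, score in by_start:
        # advance the donor-ordered pointer past exons that end before `start`;
        # those exons have already been processed in acceptor order
        while j < len(by_end) and by_end[j][1] < start:
            running_max = max(running_max, best_ending_at[by_end[j]])
            j += 1
        total = running_max + score
        best_ending_at[(start, end, score)] = total
        best_overall = max(best_overall, total)
    return best_overall
```

Each exon is touched once in each ordering, so after sorting the scan is linear in the number of exons, whereas the quadratic alternative maximizes over all compatible predecessors for every exon.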


Background: We address the problem of studying recombinational variations in (human) populations. In this paper, our focus is on one computational aspect of the general task: given two networks G1 and G2, with both mutation and recombination events, defined on overlapping sets of extant units, the objective is to compute a consensus network G3 with a minimum number of additional recombinations. We describe a polynomial time algorithm with a guarantee that the number of computed new recombination events is within ϵ = sz(G1, G2) (where sz is a well-behaved function of the sizes and topologies of G1 and G2) of the optimal number of recombinations. To date, this is the best known result for a network consensus problem. Results: Although the network consensus problem can be applied to a variety of domains, here we focus on the structure of human populations. With our preliminary analysis on a segment of human Chromosome X data we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. These results have been verified independently using traditional manual procedures. To the best of our knowledge, this is the first recombinations-based characterization of human populations. Conclusion: We show that our mathematical model identifies recombination spots in the individual haplotypes; the aggregate of these spots over a set of haplotypes defines a recombinational landscape that has enough signal to detect continental as well as population divides based on a short segment of Chromosome X. In particular, we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. The agreement with mutation-based analysis can be viewed as an indirect validation of our results and the model.
Since the model in principle gives us more information embedded in the networks, in our future work, we plan to investigate more non-traditional questions via these structures computed by our methodology.


Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (nonthreshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the considered open problem is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem about matroid theory, since it can be restated in terms of the representability of identically self-dual matroids by self-dual codes. A new concept, the flat-partition, is introduced that provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, forms the next class in the above classification: the identically self-dual bipartite matroids.


The properties and cosmological importance of a class of non-topological solitons, Q-balls, are studied. Aspects of Q-ball solutions and Q-ball cosmology discussed in the literature are reviewed. Q-balls are particularly considered in the Minimal Supersymmetric Standard Model with supersymmetry broken by a hidden sector mechanism mediated by either gravity or gauge interactions. Q-ball profiles, charge-energy relations and evaporation rates for realistic Q-ball profiles are calculated for general polynomial potentials and for the gravity mediated scenario. In all of the cases, the evaporation rates are found to increase with decreasing charge. Q-ball collisions are studied by numerical means in the two supersymmetry breaking scenarios. It is noted that the collision processes can be divided into three types: fusion, charge transfer and elastic scattering. Cross-sections are calculated for the different types of processes in the different scenarios. The formation of Q-balls from the fragmentation of the Affleck-Dine condensate is studied by numerical and analytical means. The charge distribution is found to depend strongly on the initial energy-charge ratio of the condensate. The final state is typically noted to consist of Q-balls and anti-Q-balls in a state of maximum entropy. By studying the relaxation of excited Q-balls, the rate at which excess energy can be emitted is calculated in the gravity mediated scenario. The Q-ball is also found to withstand excess energy well without significant charge loss. The possible cosmological consequences of these Q-ball properties are discussed.


Objective: To assess hemoglobin (Hb) levels during pregnancy before and after the fortification of flours with iron. Method: A cross-sectional study with data from 12,119 pregnant women attended in public prenatal care across the five macro-regions of Brazil. The sample was divided into two groups: before fortification (birth before June 2004) and after fortification (last menstruation after June 2005). Hb curves were compared with national and international references. Polynomial regression models were built, with a significance level of 5%. Results: Despite higher Hb levels in all gestational months after fortification, the polynomial regression did not show a fortification effect (p = 0.3). Curves in the two groups were above the references in the first trimester, followed by a decrease and stabilization at the end of pregnancy. Conclusion: Although a fortification effect was not confirmed, the study presents the variation of Hb levels during pregnancy, which is important for care practice and the evaluation of public policies.
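The kind of model this abstract describes can be sketched as a polynomial regression of Hb on gestational month with a fortification-group indicator. This is an illustrative sketch fitted by ordinary least squares on simulated data; the model form and variable names are assumptions, not the study's actual specification.

```python
import numpy as np

def fit_hb_with_group(month, hb, group):
    """Fit Hb ~ intercept + group + month + month**2 by least squares.
    Returns the four coefficients; coef[1] is the group (fortification)
    effect on the Hb level."""
    m = np.asarray(month, dtype=float)
    X = np.column_stack([np.ones_like(m), np.asarray(group, dtype=float), m, m ** 2])
    coef, *_ = np.linalg.lstsq(X, np.asarray(hb, dtype=float), rcond=None)
    return coef
```

A group coefficient that is small relative to its standard error would correspond to the study's finding that no fortification effect was detectable (p = 0.3).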



BACKGROUND: Different studies have shown circadian variation of ischemic burden among patients with ST-Elevation Myocardial Infarction (STEMI), but with controversial results. The aim of this study was to analyze circadian variation of myocardial infarction size and in-hospital mortality in a large multicenter registry. METHODS: This retrospective, registry-based study was based on data from AMIS Plus, a large multicenter Swiss registry of patients who suffered myocardial infarction between 1999 and 2013. Peak creatine kinase (CK) was used as a proxy measure for myocardial infarction size. Associations between peak CK, in-hospital mortality, and the time of day at symptom onset were modelled using polynomial-harmonic regression methods. RESULTS: 6,223 STEMI patients were admitted to 82 acute-care hospitals in Switzerland and treated with primary angioplasty within six hours of symptom onset. Only the 24-hour harmonic was significantly associated with peak CK (p = 0.0001). The maximum average peak CK value (2,315 U/L) was for patients with symptom onset at 23:00, whereas the minimum average (2,017 U/L) was for onset at 11:00. The amplitude of variation was 298 U/L. In addition, no correlation was observed between ischemic time and circadian peak CK variation. Of the 6,223 patients, 223 (3.58%) died during index hospitalization. Remarkably, only the 24-hour harmonic was significantly associated with in-hospital mortality. The risk of death from STEMI was highest for patients with symptom onset at 00:00 and lowest for those with onset at 12:00. DISCUSSION: As a part of this first large study of STEMI patients treated with primary angioplasty in Swiss hospitals, investigations confirmed a circadian pattern to both peak CK and in-hospital mortality which were independent of total ischemic time. Accordingly, this study proposes that symptom onset time be incorporated as a prognosis factor in patients with myocardial infarction.
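The polynomial-harmonic regression mentioned above can be sketched as a cosinor-style least-squares fit of a 24-hour sinusoid to an outcome against symptom-onset hour. The data and variable names below are illustrative assumptions, not AMIS Plus registry fields.

```python
import numpy as np

def fit_24h_harmonic(onset_hour, y):
    """Least-squares fit of y ~ a + b*cos(2*pi*t/24) + c*sin(2*pi*t/24);
    returns the coefficients (a, b, c)."""
    w = 2.0 * np.pi * np.asarray(onset_hour, dtype=float) / 24.0
    X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef

def peak_hour(coef):
    """Hour of day at which the fitted harmonic a + R*cos(w - phi) peaks."""
    _, b, c = coef
    return (24.0 * np.arctan2(c, b) / (2.0 * np.pi)) % 24.0
```

With such a fit, a maximum near 23:00 and a minimum near 11:00, as reported for peak CK, correspond to a phase angle phi = arctan2(c, b) near 23/24 of a full cycle.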


One aspect of person-job fit reflects congruence between personal preferences and job design; as congruence increases so should satisfaction. We hypothesized that power distance would moderate whether fit is related to satisfaction with degree of job formalization. We obtained measures of job-formalization, fit and satisfaction, as well as organizational commitment from employees (n = 772) in a multinational firm with subsidiaries in six countries. Confirming previous findings, individuals from low power-distance cultures were most satisfied with increasing fit. However, the extent to which individuals from high power-distance cultures were satisfied did not necessarily depend on increasing fit, but mostly on whether the degree of formalization received was congruent to cultural norms. Irrespective of culture, satisfaction with formalization predicted a broad measure of organizational commitment. Apart from our novel extension of fit theory, we show how moderation can be tested in the context of polynomial response surface regression and how specific hypotheses can be tested regarding different points on the response surface.


This paper analyses the robustness of Least-Squares Monte Carlo, a technique recently proposed by Longstaff and Schwartz (2001) for pricing American options. This method is based on least-squares regressions in which the explanatory variables are certain polynomial functions. We analyze the impact of different basis functions on option prices. Numerical results for American put options provide evidence that a) this approach is very robust to the choice of different alternative polynomials and b) few basis functions are required. However, these conclusions are not reached when analyzing more complex derivatives.
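A compact sketch of the Longstaff-Schwartz Least-Squares Monte Carlo method for an American put follows. The plain polynomial basis, path count, step count, and parameter values are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def lsm_american_put(S0, K, r, sigma, T, steps=50, paths=20000, degree=3, seed=0):
    """Least-Squares Monte Carlo price of an American put: at each exercise
    date, regress discounted future cash flows on polynomial basis functions
    of the spot price over in-the-money paths, and exercise where the
    immediate payoff beats the estimated continuation value."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    # simulate geometric Brownian motion paths (columns are t = dt .. T)
    z = rng.standard_normal((paths, steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.cumsum(increments, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)           # exercise value at maturity
    for t in range(steps - 2, -1, -1):
        cash *= np.exp(-r * dt)                    # discount one step back
        itm = K - S[:, t] > 0.0                    # regress on ITM paths only
        if itm.sum() <= degree:
            continue
        X = np.vander(S[itm, t], degree + 1)       # polynomial basis in S_t
        beta, *_ = np.linalg.lstsq(X, cash[itm], rcond=None)
        exercise = K - S[itm, t]
        ex_now = exercise > X @ beta               # payoff beats continuation
        cash[np.where(itm)[0][ex_now]] = exercise[ex_now]
    return float(np.exp(-r * dt) * cash.mean())
```

Swapping `np.vander` for another basis (e.g. Laguerre or Legendre polynomials) is a one-line change, which is the kind of robustness comparison the paper performs.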


In pediatric echocardiography, cardiac dimensions are often normalized for weight, height, or body surface area (BSA). The combined influence of height and weight on cardiac size is complex and likely varies with age. We hypothesized that increasing weight for height, as represented by body mass index (BMI) adjusted for age, is poorly accounted for in Z scores normalized for weight, height, or BSA. We aimed to evaluate whether a bias related to BMI was introduced when proximal aorta diameter Z scores are derived from bivariate models (only one normalizing variable), and whether such a bias was reduced when multivariable models are used. We analyzed 1,422 echocardiograms read as normal in children aged ≤18 years. We computed Z scores of the proximal aorta using allometric, polynomial, and multivariable models with four body size variables. We then assessed the level of residual association between the Z scores and BMI adjusted for age and sex. In children ≥6 years, we found a significant residual linear association between BMI-for-age and Z scores for most regression models. Only a multivariable model including weight and height as independent predictors produced a Z score free of linear association with BMI. We concluded that a bias related to BMI was present in Z scores of proximal aorta diameter when normalization was done using bivariate models, regardless of the regression model or the normalizing variable. The use of multivariable models with weight and height as independent predictors should be explored to reduce this potential pitfall when pediatric echocardiography reference values are evaluated.
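The contrast between bivariate and multivariable normalization can be illustrated on synthetic data: when the measured dimension depends on weight and height separately, a Z score from a single-predictor allometric model retains a residual association with BMI, while a multivariable Z score does not. This is a simplified sketch, not the study's actual models or data.

```python
import numpy as np

def z_scores_allometric(y, size):
    """Bivariate allometric model log(y) ~ a + b*log(size);
    the Z score is the scaled regression residual."""
    ly, ls = np.log(y), np.log(size)
    X = np.column_stack([np.ones_like(ls), ls])
    beta, *_ = np.linalg.lstsq(X, ly, rcond=None)
    resid = ly - X @ beta
    return resid / resid.std(ddof=2)

def z_scores_multivariable(y, weight, height):
    """Multivariable model log(y) ~ a + b*log(weight) + c*log(height),
    with weight and height as independent predictors."""
    ly = np.log(y)
    X = np.column_stack([np.ones(len(ly)), np.log(weight), np.log(height)])
    beta, *_ = np.linalg.lstsq(X, ly, rcond=None)
    resid = ly - X @ beta
    return resid / resid.std(ddof=3)
```

Checking the correlation of each Z score against log(BMI) = log(weight) - 2*log(height) reproduces, in miniature, the residual-association test described in the abstract.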


Macrozoobenthos samples (> 1 mm) taken at different depths in January 1987 by the BIC Humboldt on the continental shelf of two areas of the northern Peruvian coast were studied: one off Zorritos (3º35.5'–3º51.3') and another between Isla Lobos de Tierra and Pimentel (6º32.0'–6º55.5'), with different types of bottom sediment. The samples studied, as well as the bottom temperature, dissolved oxygen, and salinity data, were provided by the Peruvian-German Cooperative Program for Fisheries Research (PROCOPA) and the Instituto del Mar del Perú (IMARPE). The work consists of identifying, quantifying, and comparing the taxa present in the samples from both areas; determining biomass, density, diversity, evenness, and richness; and analyzing the type and characteristics of the sediment at the stations. For the community-level analysis, cluster analysis (Legendre and Legendre, 1983) and the Biological Index technique (Guille, 1970) were used. The statistical analysis relied mainly on the nonparametric two-sample Kolmogorov-Smirnov test and linear correlations among the abiotic factors, sediment characteristics, and biological parameters found.


Identifying adaptive genetic variation is a challenging task, in particular in non-model species for which genomic information is still limited or absent. Here, we studied distribution patterns of amplified fragment length polymorphisms (AFLPs) in response to environmental variation in 13 alpine plant species consistently sampled across the entire European Alps. Multiple linear regressions were performed between AFLP allele frequencies per site as dependent variables and two categories of independent variables, namely Moran's eigenvector map (MEM) variables (to account for spatial and unaccounted environmental variation, and for historical demographic processes) and environmental variables. These associations allowed the identification of 153 loci of ecological relevance. Univariate regressions between allele frequency and each environmental factor further showed that loci of ecological relevance were mainly correlated with MEM variables. We found that precipitation and temperature were the best environmental predictors, whereas topographic factors were rarely involved in environmental associations. Climatic factors, subject to rapid variation as a result of current global warming, are known to strongly influence the fate of alpine plants. Our study shows, for the first time for a large number of species, that the same environmental variables are drivers of plant adaptation at the scale of a whole biome, here the European Alps.


The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, thus allowing to study in an analogous manner processes on scales ranging from few meters close to the surface down to several hundreds of kilometers depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. 
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. The conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the base resistivity model used, therefore indicating that more efforts should be given to obtain high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification. 
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
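The Legendre moment decomposition used above for model reduction can be illustrated in one dimension: a plume-like profile on [-1, 1] is summarized by a handful of Legendre coefficients and reconstructed from them. This is a simplified sketch (the thesis applies the decomposition to a three-dimensional plume); the grid size and quadrature are assumptions.

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_moments(f, order, n=1001):
    """Legendre coefficients c_k = (2k+1)/2 * integral of f(x)*P_k(x) on
    [-1, 1], approximated with the trapezoidal rule on a uniform grid."""
    x = np.linspace(-1.0, 1.0, n)
    fx = f(x)
    h = x[1] - x[0]
    coeffs = []
    for k in range(order + 1):
        Pk = L.legval(x, [0.0] * k + [1.0])      # k-th Legendre polynomial
        g = fx * Pk
        integral = h * (np.sum(g) - 0.5 * (g[0] + g[-1]))
        coeffs.append((2 * k + 1) / 2.0 * integral)
    return np.array(coeffs)

def reconstruct(coeffs, x):
    """Evaluate the truncated Legendre series sum_k c_k * P_k(x)."""
    return L.legval(x, coeffs)
```

Estimating a few such coefficients instead of every cell of a resistivity grid is what makes the probabilistic time-lapse inversion tractable.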


Growth of four variables of the femur (diaphyseal length, diaphyseal length plus distal epiphysis, maximum length, and vertical diameter of the head) was analyzed by polynomial regression for the purpose of evaluating its significance and capacity for age and sex determination throughout the entire life continuum. The materials included in the analysis consisted of 346 specimens, ranging from birth to 97 years of age, from five documented osteological collections of Western European descent. Linear growth was displayed by each of the four variables. Significant sexual dimorphism was identified in two of the femoral measurements, maximum length and vertical diameter of the head, from age 15 onward. These results indicate that the two variables may be of use in sex determination from that age onward. Strong correlation coefficients were identified between femoral size and age for each of the four metric variables. These results indicate that any of the femoral measurements is likely to serve as a useful source to estimate sub-adult age in both archaeological and forensic samples.