987 results for LIKELIHOOD METHODS


Relevance: 60.00%

Abstract:

Competing hypotheses seek to explain the evolution of the oxygenic and anoxygenic processes of photosynthesis. Since chlorophyll is less reduced than bacteriochlorophyll and precedes it on the modern biosynthetic pathway, it has been proposed that chlorophyll evolved first. However, recent analyses of nucleotide sequences encoding chlorophyll and bacteriochlorophyll biosynthetic enzymes appear to support the alternative hypothesis: that bacteriochlorophyll evolved earlier than chlorophyll. Here we demonstrate that the presence of invariant sites in sequence datasets leads to inconsistency in tree building, including with maximum-likelihood methods. Homologous sequences with different biological functions often share invariant sites at the same nucleotide positions. However, different functional constraints can also produce additional invariant sites unique to particular genes. Consequently, the distribution of these sites can be uneven between the different types of homologous genes. The presence of invariant sites, both those shared by related biosynthetic genes and those unique to only some of them, has misled the recent evolutionary analysis of oxygenic and anoxygenic photosynthetic pigments. We evaluate an alternative scheme for the evolution of chlorophyll and bacteriochlorophyll.
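The effect of unevenly distributed invariant sites can be illustrated with a toy distance calculation (all counts are hypothetical, not from the study): treating invariant sites as free to vary compresses the apparent divergence between sequences, which is one route to inconsistent tree estimates.

```python
import math

def jc_distance(p):
    # Jukes-Cantor distance from the observed proportion of differing sites p
    return -0.75 * math.log(1 - 4.0 * p / 3.0)

# Hypothetical alignment: 100 sites, 20 observed differences, of which
# 40 sites are invariant across the whole gene family
n_sites, n_diff, n_invariant = 100, 20, 40

# Naive distance treats every site as free to vary
d_naive = jc_distance(n_diff / n_sites)

# Restricting to the variable sites raises the per-site difference,
# and with it the estimated divergence
d_corrected = jc_distance(n_diff / (n_sites - n_invariant))

print(d_naive, d_corrected)  # the corrected distance is larger
```

If the proportion of invariant sites differs between gene families, the size of this distortion differs too, biasing the relative branch lengths a tree-building method infers.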

Relevance: 60.00%

Abstract:

The wide range of morphological variation in the “loxurina group” makes taxon identification difficult, and despite several reviews, serious taxonomic confusion remains. We use DNA data in conjunction with morphological appearance and available information on species distribution to delimit the boundaries of the “loxurina group” species previously established on morphology. A fragment of 635 base pairs within the mtDNA gene cytochrome oxidase I (COI) was analysed for seven species of the “loxurina group”. Phylogenetic relationships among the included taxa were inferred using maximum parsimony and maximum likelihood methods. Penaincisalia sigsiga (Bálint et al.), P. cillutincarae (Draudt), P. atymna (Hewitson) and P. loxurina (C. Felder & R. Felder) were easily delimited, as the morphological, geographic and molecular data were congruent. Penaincisalia ludovica (Bálint & Wojtusiak) and P. loxurina astillero (Johnson) represent the same entity and constitute a subspecies of P. loxurina. However, incongruence among morphological, genetic, and geographic data is shown in P. chachapoya (Bálint & Wojtusiak) and P. tegulina (Bálint et al.). Our results highlight that an integrative approach is needed to clarify the taxonomy of these neotropical taxa, but more genetic and geographical studies are still required.

Relevance: 60.00%

Abstract:

Maximum likelihood methods (MLMs) offer an alternative framework to conventional frequentist statistics, moving away from using the p-value to reject a single null hypothesis and instead using likelihoods to assess the degree of support in the data for a set of alternative hypotheses (or models) of interest to the researcher. These methods have been widely applied in ecology within the framework of neighbourhood models. Such models use a spatially explicit approach to describe plant demographic processes or ecosystem processes as a function of the attributes of neighbouring individuals. They are therefore phenomenological models whose main utility lies in serving as tools for synthesising the multiple mechanisms through which species can interact with and influence their environment, providing a measure of the per-capita effect of individuals with different characteristics (e.g. size, species, physiological traits) on the processes of interest. The great advantage of applying MLMs within the neighbourhood-model framework is that they allow multiple models using different neighbour attributes and/or functional forms to be fitted and compared, in order to select the one with the greatest empirical support. In this way, each model acts as a “virtual experiment” for answering questions about the magnitude and spatial extent of the effects of different coexisting species, and for drawing conclusions about possible implications for the functioning of communities and ecosystems. This work synthesises the techniques for implementing MLMs and neighbourhood models in terrestrial ecology, summarising their use to date and highlighting new lines of application.
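The workflow described above (fit several candidate models by maximum likelihood, then rank them by empirical support, for instance via AIC) can be sketched on simulated data; the response, the neighbourhood attribute, and all numbers below are hypothetical.

```python
import numpy as np

# Simulated data standing in for a demographic response (y) and one
# neighbourhood attribute (x); all values hypothetical
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, x.size)

def normal_loglik(resid, n):
    # Profile log-likelihood of a normal model at the MLE of sigma^2
    s2 = np.mean(resid ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

n = y.size
# Candidate model 1: constant mean (parameters: mu, sigma)
ll1 = normal_loglik(y - y.mean(), n)
# Candidate model 2: linear effect of the attribute (parameters: a, b, sigma)
b, a = np.polyfit(x, y, 1)
ll2 = normal_loglik(y - (a + b * x), n)

# Lower AIC indicates greater empirical support
aic1, aic2 = 2 * 2 - 2 * ll1, 2 * 3 - 2 * ll2
print(aic1, aic2)
```

Each candidate model here plays the role of one “virtual experiment”; in a real neighbourhood analysis the linear term would be replaced by competing functional forms of neighbour effects.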

Relevance: 60.00%

Abstract:

Bioenergetics differ between males and females of many species. Human females apportion a substantial proportion of energy resources to gynoid fat storage, to support the energetic burden of reproduction. Similarly, axial calcium accrual is favoured in females compared with males. Nutritional status is a prognostic indicator in cystic fibrosis (CF), but girls and young women are at greater risk of death despite nutritional status equivalent to that of males. The aim of this study was to compare fat (energy) and calcium stores (bone density) in males and females with CF over a spectrum of disease severity. Methods: Fat as % body weight (fat%) and lumbar spine (LS) and total body (TB) bone mineral density (BMD) were measured using dual-energy X-ray absorptiometry in 127 (59 M) control and 101 (54 M) CF subjects, aged 9–25 years. An equation for predicted age at death had been determined using survival data and history of pulmonary function for the whole clinic, based on a trivariate normal model using maximum likelihood methods (1). For the CF group, a disease severity index (predicted age at death) was calculated from the derived equations according to each subject's history of pulmonary function, current age, and gender. Disease severity was classified according to percentile of predicted age at death (‘mild’ ≥75th, ‘moderate’ 25th–75th, ‘severe’ ≤25th percentile). Weight-for-age z-score was calculated. Serum testosterone and oestrogen were measured in males and females respectively. Fat% and LSBMD were compared between the groups using ANOVA. Results: There was an interaction between disease severity and gender: increasing disease severity was associated with greater deficits in TB (p=0.01), LSBMD (p
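The severity index above relies on a normal model fitted by maximum likelihood. This is not the study's trivariate model, but the mechanics can be sketched for a univariate case with simulated data; every value below is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
ages = rng.normal(25.0, 6.0, 200)  # hypothetical predicted-age-at-death values

def neg_loglik(theta, x):
    # Negative log-likelihood of a normal model (additive constant dropped)
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)  # log-parameterisation keeps sigma positive
    return 0.5 * np.sum(((x - mu) / sigma) ** 2) + x.size * np.log(sigma)

res = minimize(neg_loglik, x0=np.array([20.0, np.log(5.0)]), args=(ages,))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# Severity cut-offs at the 25th and 75th percentiles of the fitted distribution,
# mirroring the mild/moderate/severe classification
q25, q75 = norm.ppf([0.25, 0.75], loc=mu_hat, scale=sigma_hat)
print(mu_hat, sigma_hat, q25, q75)
```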

Relevance: 60.00%

Abstract:

The St. Lawrence River valley in eastern Canada is one of the most seismically active regions in eastern North America and is characterised by numerous intraplate earthquakes. After the rigid rotation of the tectonic plate, glacial isostatic adjustment is by far the largest source of geophysical signal in eastern Canada. Crustal deformations and deformation rates in this region were studied using more than 14 years of observations (9 years on average) from 112 continuously operating GPS stations. The velocity field was obtained from cleaned daily GPS coordinate time series by applying a combined model using least-squares weighting. Velocities were estimated with noise models that include the temporal correlations of the three-dimensional coordinate time series. The horizontal velocity field shows the counter-clockwise rotation of the North American plate, with a mean velocity of 16.8 ± 0.7 mm/yr in a no-net-rotation model relative to ITRF2008. The vertical velocity field confirms uplift due to glacial isostatic adjustment throughout eastern Canada, with a maximum rate of 13.7 ± 1.2 mm/yr, and subsidence to the south, mainly in the northern United States, with a typical rate of −1 to −2 mm/yr and a minimum rate of −2.7 ± 1.4 mm/yr. The noise behaviour of the three-dimensional GPS coordinate time series was analysed using spectral analysis and maximum likelihood methods to test five noise models: power law; white noise; white plus flicker noise; white plus random-walk noise; and white plus flicker plus random-walk noise. The results show that the combination of white and flicker noise is the best model for describing the stochastic part of the time series.
The amplitudes of all noise models are smallest in the north direction and largest in the vertical direction. White-noise amplitudes are roughly equal across the study area and are therefore exceeded, in all directions, by the flicker and random-walk noise. The flicker-noise model increases the uncertainty of the estimated velocities by a factor of 5 to 38 compared with the white-noise model. The estimated velocities of all noise models are statistically consistent. The estimated Euler pole parameters for this region are slightly, but significantly, different from the overall rotation of the North American plate. This difference potentially reflects local stresses in this seismic region and stresses caused by the difference in intraplate velocities between the two shores of the St. Lawrence River. Crustal deformation in the region was studied using the least-squares collocation method. The interpolated horizontal velocities show spatially coherent motion: radially outward motion from the centres of maximum uplift in the north and radially inward motion towards the centres of maximum subsidence in the south, with a typical velocity of 1 to 1.6 ± 0.4 mm/yr. However, this pattern becomes more complex near the margins of the former glaciated zones. Based on their directions, the intraplate horizontal velocities can be divided into three distinct zones. This confirms the conclusions of other researchers regarding the existence of three ice domes in the study region before the last glacial maximum. A spatial correlation is observed between the zones of higher-magnitude intraplate horizontal velocities and the seismic zones along the St. Lawrence River. The vertical velocities were then interpolated to model the vertical deformation.
The model shows a maximum uplift rate of 15.6 mm/yr southeast of Hudson Bay and a typical subsidence rate of 1 to 2 mm/yr in the south, mainly in the northern United States. Along the St. Lawrence River, the horizontal and vertical motions are spatially coherent. There is a southeastward displacement of about 1.3 mm/yr and a mean uplift of 3.1 mm/yr relative to the North American plate. The vertical deformation rate is about 2.4 times larger than the intraplate horizontal deformation rate. The results of the deformation analysis show the current state of deformation in eastern Canada as expansion in the northern part (the uplifting zone) and compression in the southern part (the subsiding zone). Rotation rates average 0.011°/Myr. We observed NNW-SSE compression at a rate of 3.6 to 8.1 nstrain/yr in the Lower St. Lawrence seismic zone. In the Charlevoix seismic zone, expansion at a rate of 3.0 to 7.1 nstrain/yr is oriented ENE-WSW. In the Western Quebec seismic zone, the deformation has a shear mechanism, with a compression rate of 1.0 to 5.1 nstrain/yr and an expansion rate of 1.6 to 4.1 nstrain/yr. These measurements agree, to first order, with glacial isostatic adjustment models and with the maximum horizontal compressive stress of the World Stress Map project, obtained from focal mechanisms.
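The noise-model comparison above rests on the distinct spectral signatures of the candidate processes: white noise has a flat spectrum, flicker noise falls roughly as 1/f, and a random walk as 1/f². This is not the maximum likelihood estimator used in the study, but the idea can be illustrated by fitting the low-frequency periodogram slope of simulated series.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
white = rng.normal(size=n)
walk = np.cumsum(rng.normal(size=n))  # random-walk noise, spectrum ~ f^-2

def spectral_index(x):
    # Slope of the periodogram in log-log space, fitted over low
    # frequencies where the power-law behaviour holds:
    # ~0 for white noise, ~-1 for flicker noise, ~-2 for a random walk
    f = np.fft.rfftfreq(x.size)[1:257]
    p = np.abs(np.fft.rfft(x)[1:257]) ** 2
    slope, _ = np.polyfit(np.log(f), np.log(p), 1)
    return slope

s_white = spectral_index(white)  # near 0
s_walk = spectral_index(walk)    # near -2
print(s_white, s_walk)
```

A full analysis would instead estimate the noise amplitudes jointly by maximum likelihood (as in the study), since periodogram slopes are noisy and mixtures of processes overlap in the spectrum.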

Relevance: 60.00%

Abstract:

In this work, the relationship between individual-tree diameter at breast height (d) and total height (h) was modelled with the aim of establishing provisional height-diameter (h-d) equations for maritime pine (Pinus pinaster Ait.) stands in the Lomba ZIF, Northeast Portugal. Using data collected locally, several local and generalized h-d equations from the literature were tested, and adaptations were also considered. Model fitting was conducted using standard nonlinear least squares (nls) methods. The best local and generalized models selected were also tested as mixed models, applying a first-order conditional expectation (FOCE) approximation procedure and maximum likelihood methods to estimate fixed and random effects. For the calibration of the mixed models, and to be consistent with the fitting procedure, the FOCE method was also used to test different sampling designs. The results showed that the local h-d equations with two parameters performed better than the analogous models with three parameters. However, a unique set of parameter values for the local model cannot be applied to all maritime pine stands in the Lomba ZIF, and thus a generalized model including stand-level covariates, in addition to d, was necessary to obtain adequate predictive performance. No evident superiority of the generalized mixed model over the generalized model with nonlinear least squares parameter estimates was observed. On the other hand, in the case of the local model, the predictive performance greatly improved when random effects were included. The results showed that the mixed model based on the selected local h-d equation is a viable alternative for estimating h when stand variables are not available. Moreover, an adequate calibrated response can be obtained using only 2 to 5 additional h-d measurements on quantile (or random) trees from the distribution of d in the plot (stand).
Balancing sampling effort, accuracy and straightforwardness in practical applications, the generalized model from the nls fit is recommended. Examples of applications of the selected generalized equation to forest management are presented, namely how to use it to complete missing information from forest inventories, and how such an equation can be incorporated into a stand-level decision support system that aims to optimize forest management for the maximization of wood volume production in Lomba ZIF maritime pine stands.
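As an illustration of the nls fitting step, a two-parameter local h-d equation can be fitted with scipy's curve_fit. A saturating exponential form is assumed here for the sketch; the paper's actual candidate equations may differ, and the diameter/height data below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def hd_model(d, a, b):
    # Two-parameter local h-d equation; the 1.3 m offset is breast height
    return 1.3 + a * (1 - np.exp(-b * d))

# Hypothetical diameter (cm) / height (m) data for illustration only
rng = np.random.default_rng(3)
d = rng.uniform(8, 40, 60)
h = hd_model(d, 18.0, 0.08) + rng.normal(0, 0.7, d.size)

# Nonlinear least squares fit of the two parameters
(a_hat, b_hat), _ = curve_fit(hd_model, d, h, p0=[15.0, 0.05])
print(a_hat, b_hat)
```

A mixed-model version would add a stand-level random effect to one of the parameters and estimate its variance by maximum likelihood, which is what the FOCE procedure in the abstract approximates.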

Relevance: 40.00%

Abstract:

Most unsignalised intersection capacity calculation procedures are based on gap acceptance models. The accuracy of critical gap estimation affects the accuracy of capacity and delay estimation. Several methods have been published to estimate drivers' sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods: the Average Central Gap (ACG) method, the Strength Weighted Central Gap (SWCG) method, and the Mode Central Gap (MCG) method, against MLE for their fidelity in rendering true sample mean critical gaps. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulation mean critical gap was varied between 3 s and 8 s, while the offered gap rate was varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close to perfect fit to simulation mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit, and has superior computational simplicity and efficiency to MLE. The SWCG method performs robustly under high flows, but poorly under low to moderate flows. Further research is recommended using field traffic data, under a variety of minor stream and major stream flow conditions and for a variety of minor stream movement types, to compare critical gap estimates using MLE against MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to its adoption for estimating critical gap parameters in guidelines.
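The MLE technique referred to here treats each driver's unobserved critical gap as lying between their maximum rejected gap and their accepted gap, and maximises the resulting interval likelihood. A minimal sketch on simulated drivers, assuming a normal critical-gap distribution for simplicity (practical implementations often use the log-normal); all numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated drivers: each has an unobserved critical gap tc; we only see
# the largest gap they rejected and the gap they finally accepted
rng = np.random.default_rng(4)
n = 300
tc = rng.normal(4.5, 0.8, n)
max_rejected = tc - rng.uniform(0.1, 1.0, n)
accepted = tc + rng.uniform(0.1, 1.0, n)

def neg_loglik(theta):
    mu, log_sd = theta
    sd = np.exp(log_sd)
    # Likelihood of each driver: the probability that their critical gap
    # lies between the maximum rejected gap and the accepted gap
    p = norm.cdf(accepted, mu, sd) - norm.cdf(max_rejected, mu, sd)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

res = minimize(neg_loglik, x0=[4.0, 0.0])
mu_hat, sd_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sd_hat)  # mean should land near the simulated 4.5 s
```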

Relevance: 30.00%

Abstract:

A plethora of methods for procuring building projects are available to meet the needs of clients. Deciding which method to use for a given project is a difficult and challenging task, as a client's objectives and priorities need to marry with the selected method so as to improve the likelihood of the project being procured successfully. The decision as to which procurement system to use should be made as early as possible and underpinned by the client's business case for the project. The risks, and how they can potentially affect the client's business, should also be considered. In this report, the need for clients to develop a procurement strategy, which outlines the key means by which the objectives of the project are to be achieved, is emphasised. Once a client has established a business case for a project, appointed a principal advisor, and determined their requirements and brief, consideration should be given to which procurement method to adopt. An understanding of the characteristics of the various procurement options is required before a recommendation can be made to a client. Procurement systems can be categorised as traditional, design and construct, management, and collaborative. The characteristics of these systems, along with the procurement methods commonly used, are described. The main advantages and disadvantages, and the circumstances under which a system could be considered applicable for a given project, are also identified.

Relevance: 30.00%

Abstract:

Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a `square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables.
Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of new results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation, and we give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
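The basic Lanczos approximation compared in the thesis can be sketched directly: f(A)b ≈ ||b|| V f(T) e1, where T is the tridiagonal matrix produced by m Lanczos steps and V holds the Lanczos vectors. In the sketch below, a small dense SPD matrix stands in for a sparse precision matrix, and the matrix size and step count are arbitrary.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos_fAb(A, b, m, f):
    # m-step Lanczos approximation to f(A) b for symmetric positive
    # definite A: f(A) b ~ ||b|| * V @ f(T) @ e1
    n = b.size
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    # Evaluate f on the small tridiagonal T via its eigendecomposition
    theta, S = eigh_tridiagonal(alpha, beta)
    fT_e1 = S @ (f(theta) * S[0, :])
    return np.linalg.norm(b) * (V @ fT_e1)

# Small, well-conditioned SPD test matrix standing in for a precision matrix
rng = np.random.default_rng(5)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.normal(size=50)

x = lanczos_fAb(A, b, 20, lambda t: t ** -0.5)  # approximates A^(-1/2) b
```

In the GMRF setting, x would be the sample A^(-1/2)z; production implementations add reorthogonalisation and the restarting/stopping machinery the thesis studies.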

Relevance: 30.00%

Abstract:

Multivariate methods are required to assess the interrelationships among multiple, concurrent symptoms. We examined the conceptual and contextual appropriateness of commonly used multivariate methods for cancer symptom cluster identification. From 178 publications identified in an online database search of Medline, CINAHL, and PsycINFO, limited to articles published in English in the 10 years prior to March 2007, 13 cross-sectional studies met the inclusion criteria. Conceptually, common factor analysis (FA) and hierarchical cluster analysis (HCA) are appropriate for symptom cluster identification, but principal component analysis is not. As a basis for new directions in symptom management, FA methods are more appropriate than HCA. Principal axis factoring or maximum likelihood factoring, the scree plot, oblique rotation, and clinical interpretation are the recommended approaches to symptom cluster identification.
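Principal axis factoring, one of the recommended approaches, iterates communality estimates on the diagonal of the correlation matrix. A numpy-only sketch of that iteration follows; the correlation matrix, the symptom labels, and the iteration count are all hypothetical, and rotation is omitted.

```python
import numpy as np

def principal_axis_factoring(R, n_factors, n_iter=50):
    # Iterated principal-axis factoring on a correlation matrix R:
    # put communality estimates on the diagonal, eigendecompose the
    # reduced matrix, and rebuild communalities from the loadings
    h2 = 1 - 1 / np.diag(np.linalg.inv(R))  # start: squared multiple correlations
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)
        w, V = np.linalg.eigh(Rr)
        idx = np.argsort(w)[::-1][:n_factors]
        loadings = V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))
        h2 = np.sum(loadings ** 2, axis=1)
    return loadings

# Hypothetical correlation matrix for two symptom clusters,
# e.g. (fatigue, drowsiness) and (nausea, appetite loss)
R = np.array([
    [1.0, 0.6, 0.1, 0.1],
    [0.6, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.5],
    [0.1, 0.1, 0.5, 1.0],
])
loadings = principal_axis_factoring(R, 2)
print(np.round(loadings, 2))
```

A full analysis would follow this with an oblique rotation and clinical interpretation of the rotated loadings, as the review recommends.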

Relevance: 30.00%

Abstract:

Traditional speech enhancement methods optimise signal-level criteria such as signal-to-noise ratio, but these approaches are sub-optimal for noise-robust speech recognition. Likelihood-maximising (LIMA) frameworks are an alternative that optimise parameters of enhancement algorithms based on state sequences generated for utterances with known transcriptions. Previous reports of LIMA frameworks have shown significant promise for improving speech recognition accuracies under additive background noise for a range of speech enhancement techniques. In this paper we discuss the drawbacks of the LIMA approach when multiple layers of acoustic mismatch are present – namely background noise and speaker accent. Experimentation using LIMA-based Mel-filterbank noise subtraction on American and Australian English in-car speech databases supports this discussion, demonstrating that inferior speech recognition performance occurs when a second layer of mismatch is seen during evaluation.

Relevance: 30.00%

Abstract:

Traditional speech enhancement methods optimise signal-level criteria such as signal-to-noise ratio, but such approaches are sub-optimal for noise-robust speech recognition. Likelihood-maximising (LIMA) frameworks, on the other hand, optimise the parameters of speech enhancement algorithms based on state sequences generated by a speech recogniser for utterances of known transcriptions. Previous applications of LIMA frameworks have generated a set of global enhancement parameters for all model states without taking into account the distribution of model occurrence, making the optimisation susceptible to favouring frequently occurring models, in particular silence. In this paper, we demonstrate the existence of highly disproportionate phonetic distributions on two corpora with distinct speech tasks, and propose to normalise the influence of each phone based on a priori occurrence probabilities. Likelihood analysis and speech recognition experiments verify this approach for improving ASR performance in noisy environments.
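The proposed normalisation can be illustrated with a toy state alignment (the phone labels and counts below are hypothetical): weighting each phone by the inverse of its a priori occurrence probability stops frequent phones, especially silence, from dominating the objective.

```python
from collections import Counter

# Hypothetical phone-level alignment for a set of utterances;
# silence ('sil') dominates, as is typical of forced alignments
alignment = ['sil'] * 60 + ['ah'] * 15 + ['s'] * 15 + ['t'] * 10

counts = Counter(alignment)
total = sum(counts.values())
priors = {ph: c / total for ph, c in counts.items()}

# Inverse-prior weights, renormalised to sum to one: frequent phones
# contribute less per occurrence, rare phones contribute more
weights = {ph: 1.0 / p for ph, p in priors.items()}
norm = sum(weights.values())
weights = {ph: w / norm for ph, w in weights.items()}
print(weights)
```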

Relevance: 30.00%

Abstract:

Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed-form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This article provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox–Ingersoll–Ross and Ornstein–Uhlenbeck equations respectively.
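The Ornstein-Uhlenbeck equation is a case where the transitional probability density is available in closed form, so exact maximum likelihood is feasible; a sketch with simulated data follows, with all parameter values chosen arbitrarily for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate an Ornstein-Uhlenbeck path dX = kappa(theta - X)dt + sigma dW
# using its exact Gaussian transition density
rng = np.random.default_rng(6)
kappa, theta, sigma, dt, n = 2.0, 1.0, 0.3, 0.1, 5000
x = np.empty(n)
x[0] = theta
e = np.exp(-kappa * dt)
sd = sigma * np.sqrt((1 - e ** 2) / (2 * kappa))
for i in range(1, n):
    x[i] = theta + (x[i - 1] - theta) * e + sd * rng.normal()

def neg_loglik(params):
    # Exact transition: X_{t+dt} | X_t ~ N(th + (X_t - th) e^{-k dt}, var)
    # log-parameterisation keeps all three parameters positive
    k, th, s = np.exp(params)
    e = np.exp(-k * dt)
    var = s ** 2 * (1 - e ** 2) / (2 * k)
    resid = x[1:] - (th + (x[:-1] - th) * e)
    return 0.5 * np.sum(resid ** 2 / var + np.log(2 * np.pi * var))

res = minimize(neg_loglik, x0=np.log([1.0, 0.5, 0.5]))
k_hat, th_hat, s_hat = np.exp(res.x)
print(k_hat, th_hat, s_hat)
```

For processes such as the Cox-Ingersoll-Ross equation the transition density is non-Gaussian (non-central chi-squared), and when no closed form exists at all, the approximate estimation procedures surveyed in the article become necessary.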