949 results for: logic tree, logicFS, Monte Carlo logic regression, genetic programming for association study, random forest, GENICA


Relevance: 100.00%

Abstract:

Irregular arterial pressure and defective folate and cholesterol metabolism contribute to the pathogenesis of hypertension. However, little is known about the combined roles of the methylenetetrahydrofolate reductase (MTHFR), apolipoprotein E (ApoE) and angiotensin-converting enzyme (ACE) genes, which are involved in metabolism and homeostasis. The objective of this study is to investigate the association of the MTHFR 677C>T and 1298A>C, ACE insertion–deletion (I/D) and ApoE genetic polymorphisms with hypertension and to further explore the epistatic interactions involved in these mechanisms. A total of 594 subjects, including 348 normotensive and 246 hypertensive ischemic stroke subjects, were recruited. The MTHFR 677C>T and 1298A>C, ACE I/D and ApoE polymorphisms were genotyped and the epistatic interactions were analyzed. The MTHFR 677C>T and ApoE polymorphisms demonstrated significant associations with susceptibility to hypertension in multiple logistic regression models, multifactor dimensionality reduction and a classification and regression tree. In addition, the logistic regression model demonstrated that significant interactions existed between the ApoE E3E3, E2E4 and E2E2 genotypes and the MTHFR 677C>T polymorphism. In conclusion, the results of this epistasis study indicate a significant association between the ApoE and MTHFR polymorphisms and hypertension.
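
As an aside for readers unfamiliar with how such interaction tests are set up, the sketch below fits a logistic regression with an ApoE × MTHFR 677C>T interaction term on simulated data; the column names, genotype coding and effect sizes are hypothetical, not the study's data or exact model.

```python
# Minimal sketch on simulated (hypothetical) data: logistic regression with an
# ApoE x MTHFR 677C>T interaction term, the kind of model used to probe epistasis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "apoe": rng.choice(["E2E2", "E2E4", "E3E3", "E3E4"], size=n),  # ApoE genotype
    "mthfr_677_T": rng.integers(0, 3, size=n),                     # count of T alleles
})
# Simulate case/control status with a weak genotype effect (purely illustrative).
logit_p = -0.5 + 0.3 * df["mthfr_677_T"] + 0.4 * (df["apoe"] == "E2E2")
df["hypertensive"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Main effects plus the ApoE x MTHFR interaction; C() treats ApoE as categorical.
model = smf.logit("hypertensive ~ C(apoe) * mthfr_677_T", data=df).fit(disp=0)
print(model.summary())
```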

Relevance: 100.00%

Abstract:

This paper presents an extension to the Rapidly-exploring Random Tree (RRT) algorithm applied to autonomous, drifting underwater vehicles. The proposed algorithm is able to plan paths that guarantee convergence in the presence of time-varying ocean dynamics. The method utilizes four-dimensional ocean model prediction data as an evolving basis for expanding the tree from the start location to the goal. The performance of the proposed method is validated through Monte Carlo simulations. Results illustrate the importance of temporal variance in path execution and demonstrate the convergence guarantee of the proposed method.
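
A minimal sketch of the time-augmented extension step such a planner builds on is given below: nodes carry (x, y, t), and each extension integrates the commanded velocity together with a time-varying current sampled from an ocean model. The analytic current field, speeds and step sizes are stand-ins, not the authors' implementation.

```python
# Minimal sketch (stand-in dynamics): one extension step of a time-augmented RRT
# for a drifting vehicle, where node state is (x, y, t) and the ocean current
# u(x, y, t) comes from a 4-D model; here it is a made-up analytic field.
import math
import random

SPEED = 1.0      # vehicle speed through the water (m/s), illustrative
DT = 60.0        # integration step (s), illustrative

def current(x, y, t):
    """Placeholder for interpolated 4-D ocean-model velocity (u, v)."""
    return 0.3 * math.sin(2e-4 * t + 1e-3 * y), 0.2 * math.cos(1e-3 * x)

def nearest(tree, sample):
    # Nearest neighbour in space-time (time scaled so 1 s ~ 1 m, illustrative).
    return min(tree, key=lambda n: (n[0] - sample[0]) ** 2
                                 + (n[1] - sample[1]) ** 2
                                 + (n[2] - sample[2]) ** 2)

def extend(node, sample):
    # Steer toward the sample, then drift with the current over one step.
    dx, dy = sample[0] - node[0], sample[1] - node[1]
    d = math.hypot(dx, dy) or 1.0
    u, v = current(*node)
    x = node[0] + (SPEED * dx / d + u) * DT
    y = node[1] + (SPEED * dy / d + v) * DT
    return (x, y, node[2] + DT)

tree = [(0.0, 0.0, 0.0)]                     # start node
for _ in range(200):
    s = (random.uniform(0, 5000), random.uniform(0, 5000), random.uniform(0, 7200))
    tree.append(extend(nearest(tree, s), s))
print(len(tree), "nodes; last:", tree[-1])
```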

Relevance: 100.00%

Abstract:

Objective Several genetic risk variants for ankylosing spondylitis (AS) have been identified in genome-wide association studies. Our objective was to examine whether familial AS cases have a higher genetic load of these susceptibility variants. Methods Overall, 502 AS patients were examined, consisting of 312 patients who had first-degree relatives (FDRs) with AS (familial) and 190 patients who had no FDRs with AS or spondylarthritis (sporadic). All patients and affected FDRs fulfilled the modified New York criteria for AS. The patients were recruited from 2 US cohorts (the North American Spondylitis Consortium and the Prospective Study of Outcomes in Ankylosing Spondylitis) and from the UK-Oxford cohort. The frequencies of AS susceptibility loci in IL-23R, IL1R2, ANTXR2, ERAP-1, 2 intergenic regions on chromosomes 2p15 and 21q22, and HLA-B27 status as determined by the tag single-nucleotide polymorphism (SNP) rs4349859 were compared between familial and sporadic cases of AS. Association between SNPs and multiplex status was assessed by logistic regression controlling for sibship size. Results HLA-B27 was significantly more prevalent in familial than sporadic cases of AS (odds ratio 4.44 [95% confidence interval 2.06, 9.55], P = 0.0001). Furthermore, the AS risk allele at chromosome 21q22 intergenic region showed a trend toward higher frequency in the multiplex cases (P = 0.08). The frequency of the other AS risk variants did not differ significantly between familial and sporadic cases, either individually or combined. Conclusion HLA-B27 is more prevalent in familial than sporadic cases of AS, demonstrating higher familial aggregation of AS in patients with HLA-B27 positivity. The frequency of the recently described non-major histocompatibility complex susceptibility loci is not markedly different between the sporadic and familial cases of AS.
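
For context, an odds ratio and Wald 95% confidence interval of the kind reported above can be computed from a 2×2 table as in the sketch below; the counts are illustrative, not the study's data.

```python
# Minimal sketch (hypothetical counts): odds ratio and Wald 95% CI for
# HLA-B27 carriage in familial vs. sporadic AS cases.
import math

# 2x2 table: rows = familial / sporadic, columns = B27+ / B27-
a, b = 295, 17   # familial: carriers, non-carriers (illustrative numbers)
c, d = 160, 30   # sporadic: carriers, non-carriers (illustrative numbers)

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```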

Relevance: 100.00%

Abstract:

Change point estimation is recognized as an essential tool in root cause analyses within quality control programs, as it enables clinical experts to search more effectively for potential causes of change in hospital outcomes. In this paper, we consider estimation of the time at which a linear trend disturbance occurred in survival time following an in-control clinical intervention, in the presence of variable patient mix. To model the process and the change point, a linear trend in the survival time of patients who underwent cardiac surgery is formulated using hierarchical models in a Bayesian framework. The data are right censored because monitoring is conducted over a limited follow-up period. We capture the effect of pre-surgical risk factors using a Weibull accelerated failure time regression model. We use Markov chain Monte Carlo to obtain posterior distributions of the change point parameters, including the location and slope of the trend, along with the corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when the estimator is used in conjunction with risk-adjusted survival time cumulative sum (CUSUM) control charts for different trend scenarios. In comparison with the alternatives, a step change point model and the built-in CUSUM estimator, the proposed Bayesian estimator gives more accurate and precise estimates for linear trends. These advantages are further strengthened when the probability quantification, flexibility and generalizability of the Bayesian change point detection model are also taken into account.
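
The sketch below is a toy version of the model described above: it simulates right-censored survival times whose Weibull scale carries a linear trend after an unknown change point and evaluates the posterior of the change-point time on a grid under a flat prior, with the slope, shape and risk adjustment reduced to fixed illustrative values rather than the paper's full hierarchical MCMC formulation.

```python
# Minimal sketch (toy version of the model): posterior of a change point tau in
# a right-censored Weibull AFT process with a linear trend after tau. The paper's
# model is hierarchical and sampled by MCMC; here the slope and shape are fixed
# and the posterior over tau is computed on a grid.
import numpy as np

rng = np.random.default_rng(1)
n, follow_up, true_tau, slope, shape = 800, 365.0, 400.0, -0.004, 1.3
surgery_day = rng.uniform(0, 730, n)          # day of surgery over two years
risk = rng.normal(0, 1, n)                    # standardised pre-op risk score

def log_scale(day, tau):
    # log Weibull scale: baseline + risk effect + linear trend after tau
    return 5.5 - 0.4 * risk + slope * np.maximum(0.0, day - tau)

t_true = rng.weibull(shape, n) * np.exp(log_scale(surgery_day, true_tau))
time = np.minimum(t_true, follow_up)          # right censoring at end of follow-up
event = t_true <= follow_up

def loglik(tau):
    lam = np.exp(log_scale(surgery_day, tau))
    z = (time / lam) ** shape
    # event: log Weibull pdf; censored: log survival function
    log_pdf = np.log(shape) - np.log(lam) + (shape - 1) * np.log(time / lam) - z
    return np.sum(np.where(event, log_pdf, -z))

taus = np.linspace(0, 730, 366)
ll = np.array([loglik(t) for t in taus])
post = np.exp(ll - ll.max())
post /= post.sum()                            # flat prior => normalised likelihood
print("posterior mean change point:", (taus * post).sum())
```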

Relevance: 100.00%

Abstract:

The relationship between major depressive disorder (MDD) and bipolar disorder (BD) remains controversial. Previous research has reported differences and similarities in risk factors for MDD and BD, such as predisposing personality traits. For example, high neuroticism is related to both disorders, whereas openness to experience is specific for BD. This study examined the genetic association between personality and MDD and BD by applying polygenic scores for neuroticism, extraversion, openness to experience, agreeableness and conscientiousness to both disorders. Polygenic scores reflect the weighted sum of multiple single-nucleotide polymorphism alleles associated with the trait for an individual and were based on a meta-analysis of genome-wide association studies for personality traits including 13,835 subjects. Polygenic scores were tested for MDD in the combined Genetic Association Information Network (GAIN-MDD) and MDD2000+ samples (N=8921) and for BD in the combined Systematic Treatment Enhancement Program for Bipolar Disorder and Wellcome Trust Case-Control Consortium samples (N=6329) using logistic regression analyses. At the phenotypic level, personality dimensions were associated with MDD and BD. Polygenic neuroticism scores were significantly positively associated with MDD, whereas polygenic extraversion scores were significantly positively associated with BD. The explained variance of MDD and BD, approximately 0.1%, was highly comparable to the variance explained by the polygenic personality scores in the corresponding personality traits themselves (between 0.1 and 0.4%). This indicates that the proportions of variance explained in mood disorders are at the upper limit of what could have been expected. This study suggests shared genetic risk factors for neuroticism and MDD on the one hand and for extraversion and BD on the other.
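
The polygenic-score construction itself is straightforward; the sketch below builds scores as GWAS-weighted allele counts on simulated genotypes and tests them against case status by logistic regression. The data, weights and effect sizes are simulated placeholders, not the GAIN-MDD, MDD2000+, STEP-BD or WTCCC samples.

```python
# Minimal sketch (simulated data): a polygenic score as the weighted sum of risk
# allele counts, then tested against case/control status by logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_subjects, n_snps = 2000, 500
geno = rng.binomial(2, 0.3, size=(n_subjects, n_snps))     # allele counts 0/1/2
betas = rng.normal(0, 0.02, n_snps)                        # discovery GWAS weights

score = geno @ betas                                       # polygenic score per subject
# Simulate a phenotype weakly related to the score (illustrative only).
case = rng.binomial(1, 1 / (1 + np.exp(-(-0.2 + 2.0 * score))))

fit = sm.Logit(case, sm.add_constant(score)).fit(disp=0)
print(fit.summary())
print("McFadden pseudo-R2:", fit.prsquared)
```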

Relevance: 100.00%

Abstract:

Population dynamics are generally viewed as the result of intrinsic (purely density dependent) and extrinsic (environmental) processes. Both components, and potential interactions between the two, have to be modelled in order to understand and predict the dynamics of natural populations, a topic of great importance in population management and conservation. This thesis focuses on modelling environmental effects in population dynamics and on how the effects of potentially relevant environmental variables can be statistically identified and quantified from time series data. Chapter I presents some useful models of multiplicative environmental effects for unstructured density dependent populations. The presented models can be written as standard multiple regression models that are easy to fit to data. Chapters II–IV constitute empirical studies that statistically model environmental effects on the population dynamics of several migratory bird species with different life history characteristics and migration strategies. In Chapter II, spruce cone crops are found to have a strong positive effect on the population growth of the great spotted woodpecker (Dendrocopos major), while cone crops of pine, another important food resource for the species, do not effectively explain population growth. The study compares rate- and ratio-dependent effects of cone availability, using state-space models that distinguish between process and observation error in the time series data. Chapter III shows how drought, in combination with settling behaviour during migration, produces asymmetric spatially synchronous patterns of population dynamics in North American ducks (genus Anas). Chapter IV investigates the dynamics of a Finnish population of skylark (Alauda arvensis) and points out effects of rainfall and habitat quality on population growth. Because the skylark time series and some of the environmental variables included show strong positive autocorrelation, statistical significance is assessed using a Monte Carlo method in which random autocorrelated time series are generated. Chapter V is a simulation-based study showing that ignoring observation error in analyses of population time series data can bias the estimated effects and measures of uncertainty if the environmental variables are autocorrelated. It is concluded that the use of state-space models is an effective way to reach more accurate results. In summary, there are several biological assumptions and methodological issues that can affect the inferential outcome when estimating environmental effects from time series data and that therefore need special attention. The functional form of the environmental effects and potential interactions between environment and population density are important to deal with. Other issues that should be considered are assumptions about density dependent regulation, modelling potential observation error and, when needed, accounting for spatial and/or temporal autocorrelation.
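
A minimal sketch of the Monte Carlo significance test mentioned for Chapter IV: the observed correlation between population growth and an environmental variable is compared against correlations computed from surrogate AR(1) series matching the variable's lag-1 autocorrelation. All series and parameter values below are simulated placeholders.

```python
# Minimal sketch (simulated series): Monte Carlo p-value for the correlation
# between population growth and an autocorrelated environmental variable, using
# surrogate AR(1) series with the same lag-1 autocorrelation.
import numpy as np

rng = np.random.default_rng(3)

def ar1(n, phi, rng):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

n_years = 30
rainfall = ar1(n_years, 0.6, rng)                    # stand-in environmental series
growth = 0.3 * rainfall + rng.normal(0, 1, n_years)  # stand-in log growth rates

obs_r = np.corrcoef(growth, rainfall)[0, 1]
phi_hat = np.corrcoef(rainfall[:-1], rainfall[1:])[0, 1]   # lag-1 autocorrelation

n_sim = 10_000
null_r = np.array([np.corrcoef(growth, ar1(n_years, phi_hat, rng))[0, 1]
                   for _ in range(n_sim)])
p_value = np.mean(np.abs(null_r) >= abs(obs_r))
print(f"observed r = {obs_r:.3f}, Monte Carlo p = {p_value:.4f}")
```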

Relevance: 100.00%

Abstract:

Background: Located in the Pacific Ocean between Australia and New Zealand, the unique population isolate of Norfolk Island has been shown to exhibit an increased prevalence of metabolic disorders (type-2 diabetes, cardiovascular disease) compared with mainland Australia. We investigated this well-established genetic isolate, utilising its unique genomic structure to increase the ability to detect related genetic markers. A pedigree-based genome-wide association study of 16 routinely collected blood-based clinical traits in 382 Norfolk Island individuals was performed. Results: A striking association peak was located at chromosome 2q37.1 for both total bilirubin and direct bilirubin, with 29 SNPs reaching statistical significance (P < 1.84 × 10⁻⁷). Strong linkage disequilibrium was observed across a 200 kb region spanning the UDP-glucuronosyltransferase family, including UGT1A1, an enzyme known to metabolise bilirubin. Given the epidemiological literature suggesting a negative association between CVD risk and serum bilirubin, we further explored potential associations using stepwise multivariate regression, revealing a significant association between direct bilirubin concentration and type-2 diabetes risk. In the Norfolk Island cohort, increased direct bilirubin was associated with a 28% reduction in type-2 diabetes risk (OR 0.72, 95% CI 0.57-0.91, P = 0.005). When adjusted for genotypic effects, the overall model was validated, with the adjusted model predicting a 30% reduction in type-2 diabetes risk with increasing direct bilirubin concentrations (OR 0.70, 95% CI 0.53-0.89, P = 0.0001). Conclusions: In summary, a pedigree-based GWAS of blood-based clinical traits in the Norfolk Island population has identified variants within the UDPGT family directly associated with serum bilirubin levels, which in turn is implicated in a reduced risk of developing type-2 diabetes within this population.

Relevance: 100.00%

Abstract:

Given an undirected unweighted graph G = (V, E) and an integer k ≥ 1, we consider the problem of computing the edge connectivities of all those (s, t) vertex pairs whose edge connectivity is at most k. We present an algorithm with expected running time Õ(m + nk^3) for this problem, where |V| = n and |E| = m. Our output is a weighted tree T whose nodes are the sets V_1, V_2, ..., V_l of a partition of V, with the property that the edge connectivity in G between any two vertices s ∈ V_i and t ∈ V_j, for i ≠ j, is equal to the weight of the lightest edge on the path between V_i and V_j in T. Also, two vertices s and t belong to the same V_i for any i if and only if they have an edge connectivity greater than k. Currently, the best algorithm for this problem needs to compute all-pairs min-cuts in an O(nk)-edge graph; this takes Õ(m + n^(5/2) k · min{k^(1/2), n^(1/6)}) time. Our algorithm is much faster for small values of k; in fact, it is faster whenever k is o(n^(5/6)). Our algorithm yields the useful corollary that in Õ(m + nc^3) time, where c is the size of the global min-cut, we can compute the edge connectivities of all those pairs of vertices whose edge connectivity is at most αc for some constant α. We also present an Õ(m + n) Monte Carlo algorithm for the approximate version of this problem. This algorithm is applicable to weighted graphs as well. Our algorithm, with some modifications, also solves another problem called the minimum T-cut problem. Given T ⊆ V of even cardinality, we present an Õ(m + nk^3) algorithm to compute a minimum cut that splits T into two odd cardinality components, where k is the size of this cut.
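
The output structure is easy to use: given the partition {V_1, ..., V_l} and the weighted tree T, the edge connectivity of a pair in different parts is the minimum edge weight on the tree path between those parts, and pairs in the same part have connectivity greater than k. The sketch below shows only this query on a small hand-built tree, not the paper's construction algorithm.

```python
# Minimal sketch: querying pairwise edge connectivity from the weighted tree T
# over the parts V_1,...,V_l (the tree here is hand-built for illustration; the
# paper's contribution is the fast construction, not this query).
from collections import deque

part_of = {"a": 0, "b": 0, "c": 1, "d": 2, "e": 2}      # vertex -> part index
tree = {0: [(1, 2)], 1: [(0, 2), (2, 3)], 2: [(1, 3)]}  # part -> [(part, weight)]
K = 4                                                   # connectivity threshold k

def edge_connectivity(s, t):
    ps, pt = part_of[s], part_of[t]
    if ps == pt:
        return f"> {K}"                  # same part => connectivity exceeds k
    # BFS on the tree, tracking the minimum edge weight along the (unique) path.
    queue, best = deque([(ps, float("inf"))]), {ps: float("inf")}
    while queue:
        node, min_w = queue.popleft()
        if node == pt:
            return min_w
        for nxt, w in tree[node]:
            if nxt not in best:
                best[nxt] = min(min_w, w)
                queue.append((nxt, best[nxt]))
    return None

print(edge_connectivity("a", "e"))   # min weight on the path 0-1-2 -> 2
print(edge_connectivity("d", "e"))   # same part -> "> 4"
```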

Relevance: 100.00%

Abstract:

A novel methodology is proposed for modeling the effects of process variations on circuit delay performance by relating variations in process parameters to variations in the delay metric of a complex digital circuit. The delay of a 2-input NAND gate with 65 nm gate-length transistors is extensively characterized by mixed-mode simulations and is then used as a library element. The variation in the saturation current I_on at the device level, and the variation in the rising/falling-edge stage delay of the NAND gate at the circuit level, are taken as performance metrics. A 4-bit × 4-bit Wallace tree multiplier circuit is used as a representative combinational circuit to demonstrate the proposed methodology. The variation in the multiplier delay is characterized by an extensive Monte Carlo analysis to obtain delay distributions. An analytical model based on the CV/I metric is proposed to extend this methodology to a generic technology library with a variety of library elements.
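
To illustrate the flavour of such a Monte Carlo delay analysis, the sketch below propagates assumed gate-length and threshold-voltage variations through a first-order CV/I delay model for a fixed-depth path; the nominal values, sensitivities and stage count are invented, not the paper's characterised 65 nm library data.

```python
# Minimal sketch (toy numbers): Monte Carlo propagation of process variation to
# path delay through a CV/I-style metric, in the spirit of the methodology above.
import numpy as np

rng = np.random.default_rng(4)
n_mc = 100_000
stages = 12                                   # assumed logic depth of the path

# Process parameters: gate length and threshold voltage with a few percent spread.
L_nom, Vth_nom = 65e-9, 0.35
L = rng.normal(L_nom, 0.02 * L_nom, n_mc)
Vth = rng.normal(Vth_nom, 0.03 * Vth_nom, n_mc)

# First-order, illustrative model of the on-current and the stage delay (CV/I).
Vdd, C_load = 1.0, 2e-15
Ion = 6e-4 * (L_nom / L) * ((Vdd - Vth) / (Vdd - Vth_nom)) ** 1.3
stage_delay = C_load * Vdd / Ion
path_delay = stages * stage_delay

mean, sigma = path_delay.mean(), path_delay.std()
print(f"mean path delay = {mean * 1e12:.2f} ps, sigma = {sigma * 1e12:.2f} ps "
      f"({100 * sigma / mean:.1f}% variation)")
```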

Relevance: 100.00%

Abstract:

Molecular markers have been demonstrated to be useful for the estimation of stock mixture proportions where the origin of individuals is determined from baseline samples. Bayesian statistical methods are widely recognized as providing a preferable strategy for such analyses. In general, Bayesian estimation is based on standard latent class models using data augmentation through Markov chain Monte Carlo techniques. In this study, we introduce a novel approach based on recent developments in the estimation of genetic population structure. Our strategy combines analytical integration with stochastic optimization to identify stock mixtures. An important enhancement over previous methods is the possibility of appropriately handling data where only partial baseline sample information is available. We address the potential use of nonmolecular, auxiliary biological information in our Bayesian model.
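
To fix ideas about estimating stock mixture proportions from baseline samples, here is a deliberately simpler stand-in: a classical EM estimator with known baseline genotype frequencies. The paper's own method is Bayesian and combines analytical integration with stochastic optimization; the baselines and data below are made up.

```python
# Minimal sketch (made-up baselines): EM estimation of stock mixture proportions
# from genotype data, given baseline genotype frequencies per stock. This
# classical EM stand-in only illustrates the underlying latent-class structure.
import numpy as np

rng = np.random.default_rng(5)
n_stocks, n_fish = 3, 500
true_props = np.array([0.6, 0.3, 0.1])

# P(genotype class | stock) for one marker with 4 genotype classes (illustrative).
baseline = np.array([[0.50, 0.30, 0.15, 0.05],
                     [0.10, 0.20, 0.40, 0.30],
                     [0.25, 0.25, 0.25, 0.25]])

stock = rng.choice(n_stocks, n_fish, p=true_props)
geno = np.array([rng.choice(4, p=baseline[s]) for s in stock])

props = np.full(n_stocks, 1.0 / n_stocks)
for _ in range(200):
    # E-step: posterior stock membership for each fish.
    like = baseline[:, geno].T * props            # shape (n_fish, n_stocks)
    resp = like / like.sum(axis=1, keepdims=True)
    # M-step: update the mixture proportions.
    props = resp.mean(axis=0)

print("estimated mixture proportions:", np.round(props, 3))
```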

Relevance: 100.00%

Abstract:

This dissertation investigates the application of quantum-inspired evolutionary algorithms to the synthesis of sequential circuits. Sequential digital systems are a class of circuits capable of executing operations in a given sequence. In sequential circuits, the values of the output signals depend not only on the values of the input signals but also on the current state of the system. The increasingly demanding requirements on the functionality and performance of digital systems call for ever more efficient designs. The design of these circuits, when carried out manually, has become time-consuming, and the importance of tools for automatic circuit synthesis has therefore grown rapidly. These tools, known as ECAD (Electronic Computer-Aided Design) tools, are computer programs usually based on heuristics. Recently, evolutionary algorithms have also begun to be used as the basis for ECAD tools; such applications are referred to in the literature as evolutionary electronics. The algorithms most commonly used in evolutionary electronics are genetic algorithms and genetic programming. This work presents a study of the application of quantum-inspired evolutionary algorithms as a tool for the automatic synthesis of sequential circuits. This class of algorithms uses the principles of quantum computing to improve the performance of evolutionary algorithms. Traditionally, the design of sequential circuits is divided into five main steps: (i) state machine specification; (ii) state reduction; (iii) state assignment; (iv) control logic synthesis; and (v) state machine implementation. The quantum-inspired evolutionary algorithm (AEICQ) proposed in this work is used in the state assignment step. The choice of an optimal state assignment is treated in the literature as a still-open problem, and the state assignment chosen for a given state machine has a direct impact on the complexity of its control logic. The results show that the state assignments obtained by the AEICQ indeed lead to circuits of lower complexity than those generated from assignments obtained by other methods. The AEICQ is also used in the control logic synthesis step, where the evolved circuits are optimized for occupied area and propagation delay. These circuits are comparable to those obtained by other methods and in some cases even superior in terms of area and performance, suggesting that this class of algorithms has potential for application in the design of electronic circuits.
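
As a rough illustration of the quantum-inspired scheme, the sketch below evolves vectors of qubit amplitudes, observes them into binary strings and rotates the amplitudes toward the best solution found; the OneMax fitness is a toy stand-in for the state-assignment cost actually optimized by the AEICQ.

```python
# Minimal sketch of a quantum-inspired evolutionary algorithm (QIEA) of the kind
# the dissertation applies to state assignment. Individuals are vectors of qubit
# probability amplitudes; "observing" them yields binary strings, and a rotation
# step pulls the amplitudes toward the best solution found. The fitness is a toy
# OneMax stand-in, not the state-assignment cost used in the work.
import numpy as np

rng = np.random.default_rng(6)
n_bits, pop_size, generations, dtheta = 24, 10, 200, 0.05 * np.pi

def fitness(bits):
    return bits.sum()                      # toy objective (replace with circuit cost)

# theta parameterises each qubit: P(bit = 1) = sin(theta)^2, start unbiased.
theta = np.full((pop_size, n_bits), np.pi / 4)
best_bits, best_fit = None, -1

for _ in range(generations):
    observed = (rng.random((pop_size, n_bits)) < np.sin(theta) ** 2).astype(int)
    fits = np.array([fitness(b) for b in observed])
    if fits.max() > best_fit:
        best_fit, best_bits = fits.max(), observed[fits.argmax()].copy()
    # Rotate each qubit toward the corresponding bit of the best solution so far.
    direction = np.where(best_bits == 1, 1.0, -1.0)
    theta = np.clip(theta + dtheta * direction, 0.01, np.pi / 2 - 0.01)

print("best fitness:", best_fit, "of", n_bits)
```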

Relevance: 100.00%

Abstract:

This work investigates an anomaly detection method based on artificial immune systems, specifically on a self/non-self recognition technique called the negative selection algorithm (NSA). A representation scheme based on hyperspheres with variable centers and radii was used, together with a model able to generate detectors with this representation efficiently. The model uses genetic algorithms in which each gene of the chromosome holds an index into a quasi-random point distribution that serves as the detector center, and a decoding function is responsible for determining the appropriate radii. The fitness of the chromosome is given by an estimate of the covered volume obtained through a Monte Carlo integral. The performance of this algorithm was assessed in different dimensions and its limitations were identified. This made it possible to focus improvements on the algorithm, implemented through genetic operators better suited to the chosen representation, techniques for reducing the number of points in the self set, and a preprocessing method based on time series bitmaps. Evaluations with synthetic data and experiments with real data demonstrate the good performance of the proposed algorithm and the reduction in execution time.
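
The Monte Carlo volume estimate used as the chromosome fitness can be sketched as below: random points in the unit hypercube are tested against a set of variable-radius hypersphere detectors and the covered fraction is the estimate. The dimension, detector centres and radii here are arbitrary; a quasi-random point set could replace the pseudo-random samples.

```python
# Minimal sketch: Monte Carlo estimate of the volume covered by a set of
# variable-radius hypersphere detectors, the quantity used as (part of) the
# chromosome fitness in the negative-selection scheme described above.
import numpy as np

rng = np.random.default_rng(7)
dim, n_detectors, n_points = 5, 50, 200_000

centers = rng.random((n_detectors, dim))          # detector centres in the unit cube
radii = rng.uniform(0.05, 0.25, n_detectors)      # variable radii

samples = rng.random((n_points, dim))             # pseudo-random sample points
covered = np.zeros(n_points, dtype=bool)
for c, r in zip(centers, radii):
    # A point is covered if it falls inside at least one detector hypersphere.
    covered |= ((samples - c) ** 2).sum(axis=1) <= r * r

volume_fraction = covered.mean()
print(f"estimated covered fraction of the unit hypercube: {volume_fraction:.3f}")
```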

Relevance: 100.00%

Abstract:

Annual abundance estimates of belugas, Delphinapterus leucas, in Cook Inlet were calculated from counts made by aerial observers and aerial video recordings. Whale group-size estimates were corrected for subsurface whales (availability bias) and for whales that were at the surface but were missed (detection bias). Logistic regression was used to estimate the probability that entire groups were missed during the systematic surveys, and the results were used to calculate a correction to account for the whales in these missed groups (1.015, CV = 0.03 in 1994–98; 1.021, CV = 0.01 in 1999–2000). Calculated abundances were 653 (CV = 0.43) in 1994, 491 (CV = 0.44) in 1995, 594 (CV = 0.28) in 1996, 440 (CV = 0.14) in 1997, 347 (CV = 0.29) in 1998, 367 (CV = 0.14) in 1999, and 435 (CV = 0.23, 95% CI = 279–679) in 2000. For management purposes the current N_best = 435 and N_min = 360. These estimates replace preliminary estimates of 749 for 1994 and 357 for 1999. Monte Carlo simulations indicate a 47% probability that the abundance of the Cook Inlet stock of belugas was depleted by 50% from June 1994 to June 1998. The decline appears to have stopped in 1998.
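
As an illustration of the kind of Monte Carlo calculation behind the depletion probability, the sketch below draws the 1994 and 1998 abundances from lognormal distributions matching the reported point estimates and CVs and counts how often the ratio implies at least a 50% decline; assuming lognormal sampling distributions is an illustrative choice, not necessarily the authors' procedure.

```python
# Minimal sketch: Monte Carlo assessment of the probability that abundance
# declined by at least 50% between two surveys, combining the point estimates
# and CVs reported above under an assumed lognormal sampling distribution.
import numpy as np

rng = np.random.default_rng(8)

def lognormal_draws(mean, cv, size):
    sigma2 = np.log(1 + cv ** 2)
    mu = np.log(mean) - sigma2 / 2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

n_sim = 100_000
n_1994 = lognormal_draws(653, 0.43, n_sim)   # 1994 estimate, CV = 0.43
n_1998 = lognormal_draws(347, 0.29, n_sim)   # 1998 estimate, CV = 0.29

p_decline_50 = np.mean(n_1998 / n_1994 <= 0.5)
print(f"P(decline of at least 50% from 1994 to 1998) ≈ {p_decline_50:.2f}")
```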

Relevance: 100.00%

Abstract:

We report a Monte Carlo representation of the long-term inter-annual variability of monthly snowfall on a detailed (1 km) grid of points throughout the Southwest. An extension of the local climate model of the southwestern United States (Stamm and Craig 1992) provides spatially based estimates of the mean and variance of monthly temperature and precipitation. The mean is the expected value from a canonical regression using independent variables that represent controls on climate in this area, including orography. Variance is computed as the standard error of the prediction and provides site-specific measures of (1) natural sources of variation and (2) errors due to limitations of the data and poor distribution of climate stations. Simulation of monthly temperature and precipitation over a sequence of years is achieved by drawing from a bivariate normal distribution. The conditional expectation of precipitation, given temperature in each month, is the basis of a numerical integration of the normal probability distribution of log precipitation below a threshold temperature (3°C) to determine snowfall as a percent of total precipitation. Snowfall predictions are tested at stations for which long-term records are available. At Donner Memorial State Park (elevation 1811 meters) a 34-year simulation, matching the length of the instrumental record, is within 15 percent of the observed mean annual snowfall. We also compute the resulting snowpack using a variation of the model of Martinec et al. (1983). This allows additional tests by examining spatial patterns of predicted snowfall and snowpack and their hydrologic implications.
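
A minimal sketch of the simulation step: monthly temperature and log precipitation are drawn from a bivariate normal, and the snow share of each month's precipitation is taken from the normal probability of temperature falling below the 3°C threshold. The site parameters are placeholders, and this snow-fraction rule is a simplification of the paper's conditional integration.

```python
# Minimal sketch (placeholder parameters): drawing monthly temperature and
# log-precipitation from a bivariate normal and converting part of the
# precipitation to snowfall using the 3 degC threshold. The site-specific means,
# variances and correlation would come from the local climate model; the
# snow-fraction rule is a simplified proxy for the paper's conditional integration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n_years = 34

# January at a hypothetical high-elevation site: mean temp (degC), mean log precip (log mm).
mu = np.array([-1.0, 4.6])
sd_t, sd_lp, rho = 3.0, 0.5, -0.3
cov = np.array([[sd_t ** 2, rho * sd_t * sd_lp],
                [rho * sd_t * sd_lp, sd_lp ** 2]])

draws = rng.multivariate_normal(mu, cov, size=n_years)
temp, log_precip = draws[:, 0], draws[:, 1]
precip = np.exp(log_precip)                                # monthly precipitation (mm)

# Fraction of the month's precipitation falling as snow: probability mass of the
# temperature distribution below 3 degC, centred on the simulated monthly mean.
snow_frac = stats.norm.cdf(3.0, loc=temp, scale=sd_t)
snowfall = precip * snow_frac

print(f"mean January snowfall over {n_years} simulated years: {snowfall.mean():.0f} mm "
      f"(mean snow fraction {snow_frac.mean():.2f})")
```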

Relevance: 100.00%

Abstract:

Algorithms are presented for detection and tracking of multiple clusters of co-ordinated targets. Based on a Markov chain Monte Carlo sampling mechanization, the new algorithms maintain a discrete approximation of the filtering density of the clusters' state. The filters' tracking efficiency is enhanced by incorporating various sampling improvement strategies into the basic Metropolis-Hastings scheme. Thus, an evolutionary stage consisting of two primary steps is introduced: 1) producing a population of different chain realizations, and 2) exchanging genetic material between samples in this population. The performance of the resulting evolutionary filtering algorithms is demonstrated in two different settings. In the first, both group and target properties are estimated whereas in the second, which consists of a very large number of targets, only the clustering structure is maintained. © 2009 IFAC.
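
A generic sketch of the evolutionary Metropolis-Hastings idea follows: a small population of chains is advanced by random-walk mutation and by a crossover move that swaps a coordinate between two chains, accepted with the Metropolis ratio on the product target. The bimodal two-dimensional density is a toy stand-in for the clusters' filtering density, not the tracking model itself.

```python
# Minimal sketch of the evolutionary Metropolis-Hastings idea: (1) random-walk
# mutation of every chain, (2) a crossover move swapping one coordinate between
# two chains, accepted with the MH ratio on the product target. The bimodal 2-D
# target is a toy stand-in for the clusters' filtering density.
import numpy as np

rng = np.random.default_rng(10)

def log_target(x):
    # Mixture of two Gaussian modes (toy stand-in for the filtering density).
    d1 = -0.5 * np.sum((x - 3.0) ** 2)
    d2 = -0.5 * np.sum((x + 3.0) ** 2)
    return np.logaddexp(d1, d2)

n_chains, n_iters, step = 8, 5000, 0.8
pop = rng.normal(0, 5, size=(n_chains, 2))
logp = np.array([log_target(x) for x in pop])

for _ in range(n_iters):
    # 1) Mutation: random-walk Metropolis update for every chain.
    for i in range(n_chains):
        prop = pop[i] + rng.normal(0, step, 2)
        lp = log_target(prop)
        if np.log(rng.random()) < lp - logp[i]:
            pop[i], logp[i] = prop, lp
    # 2) Crossover: swap one coordinate between two random chains (symmetric move).
    i, j = rng.choice(n_chains, 2, replace=False)
    k = rng.integers(2)
    a, b = pop[i].copy(), pop[j].copy()
    a[k], b[k] = pop[j][k], pop[i][k]
    la, lb = log_target(a), log_target(b)
    if np.log(rng.random()) < (la + lb) - (logp[i] + logp[j]):
        pop[i], pop[j], logp[i], logp[j] = a, b, la, lb

print("final population states:\n", np.round(pop, 2))
```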