984 results for Meta-heuristics algorithms
Abstract:
Elevated serum uric acid levels cause gout and are a risk factor for cardiovascular disease and diabetes. To investigate the polygenetic basis of serum uric acid levels, we conducted a meta-analysis of genome-wide association scans from 14 studies totalling 28,141 participants of European descent, resulting in identification of 954 SNPs distributed across nine loci that exceeded the threshold of genome-wide significance, five of which are novel. Overall, the common variants associated with serum uric acid levels fall in the following nine regions: SLC2A9 (p = 5.2×10^-201), ABCG2 (p = 3.1×10^-26), SLC17A1 (p = 3.0×10^-14), SLC22A11 (p = 6.7×10^-14), SLC22A12 (p = 2.0×10^-9), SLC16A9 (p = 1.1×10^-8), GCKR (p = 1.4×10^-9), LRRC16A (p = 8.5×10^-9), and near PDZK1 (p = 2.7×10^-9). Identified variants were analyzed for gender differences. We found that the minor allele for rs734553 in SLC2A9 has greater influence in lowering uric acid levels in women and the minor allele of rs2231142 in ABCG2 elevates uric acid levels more strongly in men compared to women. To further characterize the identified variants, we analyzed their association with a panel of metabolites. rs12356193 within SLC16A9 was associated with DL-carnitine (p = 4.0×10^-26) and propionyl-L-carnitine (p = 5.0×10^-8) concentrations, which in turn were associated with serum UA levels (p = 1.4×10^-57 and p = 8.1×10^-54, respectively), forming a triangle between SNP, metabolites, and UA levels. Taken together, these associations highlight additional pathways that are important in the regulation of serum uric acid levels and point toward novel potential targets for pharmacological intervention to prevent or treat hyperuricemia. In addition, these findings strongly support the hypothesis that transport proteins are key in regulating serum uric acid levels.
Abstract:
Randomized, controlled trials have demonstrated efficacy for second-generation antipsychotics in the treatment of acute mania in bipolar disorder. Despite depression being considered the hallmark of bipolar disorder, there are no published systematic reviews or meta-analyses to evaluate the efficacy of modern atypical antipsychotics in bipolar depression. We systematically reviewed published or registered randomized, double-blind, placebo-controlled trials (RCTs) of modern antipsychotics in adult bipolar I and/or II depressive patients (DSM-IV criteria). Efficacy outcomes were assessed based on changes in the Montgomery-Asberg Depression Rating Scale (MADRS) during an 8-wk period. Data were combined through meta-analysis using risk ratio as an effect size with a 95% confidence interval (95% CI) and with a level of statistical significance of 5% (p<0.05). We identified five RCTs; four involved antipsychotic monotherapy and one addressed both monotherapy and combination with an antidepressant. The two quetiapine trials analysed the safety and efficacy of two doses: 300 and 600 mg/d. The only olanzapine trial assessed olanzapine monotherapy within a range of 5-20 mg/d and olanzapine-fluoxetine combination within a range of 5-20 mg/d and 6-12 mg/d, respectively. The two aripiprazole placebo-controlled trials assessed doses of 5-30 mg/d. Quetiapine and olanzapine trials (3/5, 60%) demonstrated superiority over placebo (p<0.001). Only 2/5 (40%) (both aripiprazole trials) failed in the primary efficacy measure after the first 6 wk. Some modern antipsychotics (quetiapine and olanzapine) have demonstrated efficacy in bipolar depressive patients from week 1 onwards. Rapid onset of action seems to be a common feature of atypical antipsychotics in bipolar depression. Comment in: Efficacy of modern antipsychotics in placebo-controlled trials in bipolar depression: a meta-analysis--results to be interpreted with caution.
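The pooling described above (risk ratios with 95% confidence intervals) is standard inverse-variance meta-analysis on the log scale. The abstract does not specify the exact estimator, so the following is only a minimal fixed-effect sketch, not the authors' analysis; the trial arms and event counts are invented for illustration.

```python
import math

def pool_log_risk_ratios(trials):
    """Fixed-effect inverse-variance pooling of risk ratios.

    Each trial is (events_treated, n_treated, events_placebo, n_placebo).
    Returns the pooled risk ratio and its 95% confidence interval.
    """
    num, den = 0.0, 0.0
    for a, n1, c, n2 in trials:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # variance of the log risk ratio
        w = 1 / var                              # inverse-variance weight
        num += w * log_rr
        den += w
    pooled = num / den
    se = math.sqrt(1 / den)
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci

# Hypothetical responder counts for drug vs. placebo arms (not real trial data).
trials = [(60, 150, 40, 150), (80, 170, 55, 165), (45, 120, 30, 125)]
print(pool_log_risk_ratios(trials))
```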
Abstract:
Background: Despite the widespread use of interferon-gamma release assays (IGRAs), their role in diagnosing tuberculosis and targeting preventive therapy in HIV-infected patients remains unclear. We conducted a comprehensive systematic review to contribute to the evidence-based practice in HIV-infected people. Methodology/Principal Findings: We searched MEDLINE, Cochrane, and Biomedicine databases to identify articles published between January 2005 and July 2011 that assessed QuantiFERON-TB Gold In-Tube (QFT-GIT) and T-SPOT.TB in HIV-infected adults. We assessed their accuracy for the diagnosis of tuberculosis and incident active tuberculosis, and the proportion of indeterminate results. The search identified 38 evaluable studies covering a total of 6514 HIV-infected participants. The pooled sensitivity and specificity for tuberculosis were 61% and 72% for QFT-GIT, and 65% and 70% for T-SPOT.TB. The cumulative incidence of subsequent active tuberculosis was 8.3% for QFT-GIT and 10% for T-SPOT.TB in patients tested positive (one study each), and 0% for QFT-GIT (two studies) and T-SPOT.TB (one study) respectively in those tested negative. Pooled indeterminate rates were 8.2% for QFT-GIT and 5.9% for T-SPOT.TB. Rates were higher in high burden settings (12.0% for QFT-GIT and 7.7% for T-SPOT.TB) than in low-intermediate burden settings (3.9% for QFT-GIT and 4.3% for T-SPOT.TB). They were also higher in patients with CD4+ T-cell counts <200 (11.6% for QFT-GIT and 11.4% for T-SPOT.TB) than in those with CD4+ T-cell counts ≥200 (3.1% for QFT-GIT and 7.9% for T-SPOT.TB). Conclusions/Significance: IGRAs have suboptimal accuracy for confirming or ruling out active tuberculosis disease in HIV-infected adults. While their predictive value for incident active tuberculosis is modest, a negative QFT-GIT implies a very low short- to medium-term risk. Identifying the factors associated with indeterminate results will help to optimize the use of IGRAs in clinical practice, particularly in resource-limited countries with a high prevalence of HIV-coinfection.
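The review reports pooled sensitivity and specificity across studies. Published diagnostic meta-analyses usually rely on more elaborate models (e.g. bivariate random-effects), so the sketch below is only a crude first approximation of what "pooled" means here, with invented per-study counts.

```python
def pooled_proportion(counts):
    """Simple pooled proportion across studies (events / totals).

    Diagnostic reviews typically use bivariate random-effects models;
    this naive pooling is only a first approximation for illustration.
    """
    events = sum(k for k, n in counts)
    totals = sum(n for k, n in counts)
    return events / totals

# Hypothetical per-study (true positives, TB cases) and (true negatives, non-TB) counts.
sensitivity_data = [(30, 48), (22, 35), (41, 70)]
specificity_data = [(90, 120), (150, 210), (60, 88)]
print("pooled sensitivity:", round(pooled_proportion(sensitivity_data), 2))
print("pooled specificity:", round(pooled_proportion(specificity_data), 2))
```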
Abstract:
The objective of this work was to evaluate, through meta-analysis, the effect of phytase and xylanase on the apparent ileal digestibility (AID) of amino acids, calcium, and phosphorus in growing pigs. The database consisted of 21 articles published between 1998 and 2009, totaling 82 treatments and 644 pigs. The meta-analysis was performed through graphical, correlation, and variance-covariance analyses. The concentrations of phytic phosphorus and the neutral detergent fiber, acid detergent fiber, and acid detergent lignin fractions in the diets showed low, negative correlations with the AID of calcium, phosphorus, and amino acids. The addition of phytase to the diets increased the AID of arginine by 2%, that of calcium by 14%, and that of phosphorus by 34%. The AID of arginine, phenylalanine, isoleucine, and lysine was 3.3% higher in pigs fed diets with xylanase than in those fed diets without the enzyme. Dietary phytic phosphorus and fiber reduce the AID of calcium, phosphorus, and essential amino acids. The use of phytase and xylanase in the diets improves the utilization of calcium, phosphorus, and some amino acids. However, excess calcium and phosphorus in the diets reduces the action of phytase on the ileal digestibility of nutrients.
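The meta-analysis above relies in part on correlation analysis between dietary composition (phytic phosphorus, fiber fractions) and apparent ileal digestibility. As a minimal illustration of that single step, the sketch below computes a Pearson correlation on invented treatment-level values; it is not the authors' dataset or model.

```python
import numpy as np

# Hypothetical treatment-level values: dietary phytic phosphorus (g/kg)
# and apparent ileal digestibility of phosphorus (%).
phytic_p = np.array([1.8, 2.1, 2.4, 2.7, 3.0, 3.3])
aid_p    = np.array([61.0, 63.5, 58.8, 60.0, 57.1, 59.9])

# Pearson correlation, one of the analyses the meta-analysis relies on.
r = np.corrcoef(phytic_p, aid_p)[0, 1]
print(f"correlation between phytic P and AID of P: {r:.2f}")  # negative with these invented values
```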
Abstract:
Synthesis report: Description: this thesis systematically evaluates the studies on the association between subclinical thyroid dysfunction on the one hand and coronary heart disease and mortality on the other. Subclinical hypothyroidism affects about 4-5% of the adult population, whereas the prevalence of subclinical hyperthyroidism is lower (about 1%). A possible association between them could justify systematic screening for subclinical thyroid dysfunction. Previous studies on the association between subclinical hypothyroidism and coronary heart disease have yielded conflicting results. The recent publication of new articles based on large prospective cohorts allowed us to perform a meta-analysis restricted to prospective cohort studies, thereby increasing the validity of the results. Results: 10 of the 12 studies identified for our systematic review are population-based cohorts, comprising a total of 14,449 participants. All 10 studies examine the risk associated with subclinical hypothyroidism (with 2,134 coronary events and 2,822 deaths), while 5 also examine the risk associated with subclinical hyperthyroidism (with 1,392 coronary events and 1,993 deaths). Using a random-effects model, the relative risk [RR] of coronary heart disease associated with subclinical hypothyroidism is 1.20 (95% confidence interval [CI], 0.97 to 1.49). The risk decreases when only the higher-quality studies are pooled (RR between 1.02 and 1.08). It is higher among participants younger than 65 years (RR, 1.51 [CI, 1.09 to 2.09], versus 1.05 [CI, 0.90 to 1.22] for studies in which the mean age of participants is ≥65 years). The RR is 1.18 (CI, 0.98 to 1.42) for cardiovascular mortality and 1.12 (CI, 0.99 to 1.26) for total mortality. For subclinical hyperthyroidism, the RRs are 1.21 (CI, 0.88 to 1.68) for coronary heart disease, 1.19 (CI, 0.81 to 1.76) for cardiovascular mortality, and 1.12 (CI, 0.89 to 1.42) for total mortality. Conclusions and perspectives: our results show that subclinical thyroid dysfunction (subclinical hypothyroidism and hyperthyroidism) is a modifiable, although moderate, risk factor for coronary heart disease and mortality. The benefit of treating these subclinical thyroid dysfunctions with respect to cardiovascular outcomes and mortality remains to be demonstrated. Placebo-controlled trials with cardiovascular risk and mortality as efficacy endpoints are needed before recommendations on screening for these thyroid dysfunctions in the adult population can be proposed.
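Random-effects pooling of relative risks, as mentioned above, is commonly done with the DerSimonian-Laird estimator. The sketch below is a minimal version of that estimator under the assumption that study-level log relative risks and standard errors are available; all numbers are invented and this is not the thesis' actual analysis.

```python
import math

def dersimonian_laird(log_rr, se):
    """Random-effects pooling of log relative risks (DerSimonian-Laird).

    log_rr, se: per-study log relative risks and their standard errors.
    Returns the pooled RR and its 95% confidence interval.
    """
    w_fixed = [1 / s**2 for s in se]
    fixed_mean = sum(w * y for w, y in zip(w_fixed, log_rr)) / sum(w_fixed)
    q = sum(w * (y - fixed_mean) ** 2 for w, y in zip(w_fixed, log_rr))
    df = len(log_rr) - 1
    c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_rand = [1 / (s**2 + tau2) for s in se]
    mean = sum(w * y for w, y in zip(w_rand, log_rr)) / sum(w_rand)
    se_mean = math.sqrt(1 / sum(w_rand))
    return math.exp(mean), (math.exp(mean - 1.96 * se_mean),
                            math.exp(mean + 1.96 * se_mean))

# Invented study-level estimates, for illustration only.
log_rr = [math.log(x) for x in (1.35, 1.10, 0.95, 1.40, 1.22)]
se = [0.20, 0.15, 0.25, 0.30, 0.18]
print(dersimonian_laird(log_rr, se))
```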
Abstract:
The objective of this work was to estimate, through meta-analysis, the heritability (h²) and the genetic (rg) and phenotypic (rf) correlations of residual feed intake (RFI) and of its component traits in cattle of 19 breeds or genetic groups. Twenty-two scientific papers published between 1963 and 2011, from eight countries, were used, totaling 52,637 cattle with ages ranging from 28 days to slaughter age. The estimates of RFI, dry matter intake (DMI), average daily gain (ADG), and metabolic weight (BW^0.75) were weighted by the inverse of the sampling variance. The variation in the h² of each trait among studies was analyzed by weighted least squares. The effects of sex, country, and breed were significant for the h² of RFI and explained 67% of the variation among studies. For DMI, the effects of country and breed were significant and explained 96% of the variation. The combined h² estimates were 0.255±0.008, 0.278±0.012, 0.321±0.015, and 0.397±0.032 for RFI, DMI, ADG, and BW^0.75, respectively. The combined genetic and phenotypic correlation estimates were low between RFI and ADG and between RFI and BW^0.75 (from -0.021±0.034 to 0.025±0.035), and of medium magnitude between RFI and DMI (0.636±0.035 to 0.698±0.041) and among DMI, ADG, and BW^0.75 (0.441±0.062 to 0.688±0.032). RFI has a lower heritability estimate than its component traits.
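Two computational steps named in the abstract, inverse-variance weighting of estimates and weighted least squares for the between-study variation, can be sketched as follows. The heritability values, sampling variances, and the moderator are invented; this illustrates the technique, not the study's data or code.

```python
import numpy as np

# Invented per-study heritability estimates for RFI, their sampling
# variances, and a binary moderator (e.g. sex coded 0/1).
h2  = np.array([0.20, 0.28, 0.33, 0.24, 0.31])
var = np.array([0.004, 0.006, 0.010, 0.005, 0.008])
sex = np.array([0, 1, 1, 0, 1])

w = 1.0 / var                               # inverse-variance weights

# Combined (weighted mean) estimate and its standard error.
h2_combined = np.sum(w * h2) / np.sum(w)
se_combined = np.sqrt(1.0 / np.sum(w))
print(f"combined h2 = {h2_combined:.3f} +/- {se_combined:.3f}")

# Weighted least squares: does the moderator explain between-study variation?
X = np.column_stack([np.ones_like(h2), sex])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ h2)
print("WLS intercept and sex effect:", beta)
```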
Abstract:
Inference of Markov random field image segmentation models is usually performed using iterative methods that adapt the well-known expectation-maximization (EM) algorithm for independent mixture models. However, some of these adaptations are ad hoc and may turn out to be numerically unstable. In this paper, we review three EM-like variants for Markov random field segmentation and compare their convergence properties at both the theoretical and practical levels. We specifically advocate a numerical scheme involving asynchronous voxel updating, for which general convergence results can be established. Our experiments on brain tissue classification in magnetic resonance images provide evidence that this algorithm may achieve significantly faster convergence than its competitors while yielding segmentation results that are at least as good.
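As a rough illustration of the kind of scheme discussed (an EM-like loop for a Gaussian mixture with a spatial prior, where voxel memberships are updated asynchronously, i.e. in place and sequentially), here is a minimal mean-field-style sketch on a synthetic 2-D image. It is not the algorithm evaluated in the paper; the Potts coupling, class count, and initialization are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class image: a bright square on a dark background, plus noise.
img = np.zeros((40, 40)) + rng.normal(0, 0.4, (40, 40))
img[10:30, 10:30] += 1.5

K, beta, n_iter = 2, 1.0, 10
mu, sigma = np.array([0.0, 1.5]), np.array([0.5, 0.5])
q = rng.dirichlet(np.ones(K), size=img.shape)        # soft labels q[i, j, k]

def neighbors(i, j, shape):
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < shape[0] and 0 <= nj < shape[1]:
            yield ni, nj

for _ in range(n_iter):
    # E-like step: asynchronous (in-place) update of each voxel's posterior,
    # so later voxels already see the updated memberships of earlier ones.
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            prior = np.zeros(K)
            for ni, nj in neighbors(i, j, img.shape):
                prior += q[ni, nj]                    # mean-field Potts term
            log_lik = -0.5 * ((img[i, j] - mu) / sigma) ** 2 - np.log(sigma)
            logits = log_lik + beta * prior
            logits -= logits.max()
            q[i, j] = np.exp(logits) / np.exp(logits).sum()
    # M-step: re-estimate class means and standard deviations.
    for k in range(K):
        w = q[..., k]
        mu[k] = (w * img).sum() / w.sum()
        sigma[k] = np.sqrt((w * (img - mu[k]) ** 2).sum() / w.sum()) + 1e-6

print("class means:", mu)
print("labels at corner and centre:", q[0, 0].argmax(), q[20, 20].argmax())
```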
Abstract:
Combinatorial optimization involves finding an optimal solution in a finite set of options; many everyday problems are of this kind. However, the number of options grows exponentially with the size of the problem, so that an exhaustive search for the best solution is practically infeasible beyond a certain problem size. When efficient algorithms are not available, a practical approach to obtaining an approximate solution to the problem at hand is to start with an educated guess and gradually refine it until we have a good-enough solution. Roughly speaking, this is how local search heuristics work. These stochastic algorithms navigate the problem search space by iteratively turning the current solution into new candidate solutions, guiding the search towards better solutions. The search performance therefore depends on structural aspects of the search space, which in turn depend on the move operator used to modify solutions. A common way to characterize the search space of a problem is through the study of its fitness landscape, a mathematical object comprising the space of all possible solutions, their value with respect to the optimization objective, and a neighborhood relation defined by the move operator. The landscape metaphor is used to explain the search dynamics as a sort of potential function. The concept is indeed similar to that of potential energy surfaces in physical chemistry. Borrowing ideas from that field, we propose to extend to combinatorial landscapes the notion of the inherent network formed by energy minima in energy landscapes. In our case, energy minima are the local optima of the combinatorial problem, and we explore several definitions for the network edges. First, we perform an exhaustive sampling of local optima basins of attraction, and define weighted transitions between basins by accounting for all the possible ways of crossing the basin frontiers via one random move. Then, we reduce the computational burden by only counting the chances of escaping a given basin via random kick moves that start at the local optimum. Finally, we approximate network edges from the search trajectory of simple search heuristics, mining the frequency and inter-arrival time with which the heuristic visits local optima. Through these methodologies, we build a weighted directed graph that provides a synthetic view of the whole landscape, and that we can characterize using the tools of complex network science. We argue that this network characterization can advance our understanding of the structural and dynamical properties of hard combinatorial landscapes. We apply our approach to prototypical problems such as the Quadratic Assignment Problem, the NK model of rugged landscapes, and the Permutation Flow-shop Scheduling Problem. We show that some network metrics can differentiate problem classes, correlate with problem non-linearity, and predict problem hardness as measured from the performance of trajectory-based local search heuristics.
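The third edge-construction strategy described above (estimating transitions between local optima from the trajectory of a simple heuristic) can be sketched on a toy landscape. Everything below is simplified and hypothetical: a random fitness table over short bit-strings, best-improvement hill climbing under one-bit flips, random kick moves, and consecutive local optima recorded as weighted directed edges.

```python
import random
from collections import defaultdict

random.seed(1)
N = 10                                   # bit-string length (tiny toy landscape)
table = defaultdict(random.random)       # random fitness per solution, memoised

def fitness(s):
    return table[s]

def hill_climb(s):
    """Best-improvement local search under the one-bit-flip move operator."""
    while True:
        nbrs = [s[:i] + ('1' if s[i] == '0' else '0') + s[i+1:] for i in range(N)]
        best = max(nbrs, key=fitness)
        if fitness(best) <= fitness(s):
            return s                     # local optimum reached
        s = best

def kick(s, strength=3):
    """Random perturbation: flip a few bits to try to escape the current basin."""
    idx = set(random.sample(range(N), strength))
    return ''.join(('1' if s[i] == '0' else '0') if i in idx else s[i]
                   for i in range(N))

# Sample the local optima network from the trajectory of an iterated local search.
edges = defaultdict(int)
current = hill_climb(''.join(random.choice('01') for _ in range(N)))
for _ in range(500):
    nxt = hill_climb(kick(current))
    if nxt != current:
        edges[(current, nxt)] += 1       # weighted directed edge between optima
    current = nxt

nodes = {u for e in edges for u in e}
print(f"sampled {len(nodes)} local optima and {len(edges)} directed edges")
```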
Abstract:
BACKGROUND: Health professionals and policymakers aspire to make healthcare decisions based on the entire relevant research evidence. This, however, can rarely be achieved because a considerable amount of research findings are not published, especially in case of 'negative' results - a phenomenon widely recognized as publication bias. Different methods of detecting, quantifying and adjusting for publication bias in meta-analyses have been described in the literature, such as graphical approaches and formal statistical tests to detect publication bias, and statistical approaches to modify effect sizes to adjust a pooled estimate when the presence of publication bias is suspected. An up-to-date systematic review of the existing methods is lacking. METHODS/DESIGN: The objectives of this systematic review are as follows: • To systematically review methodological articles which focus on non-publication of studies and to describe methods of detecting and/or quantifying and/or adjusting for publication bias in meta-analyses. • To appraise strengths and weaknesses of methods, the resources they require, and the conditions under which the method could be used, based on findings of included studies. We will systematically search Web of Science, Medline, and the Cochrane Library for methodological articles that describe at least one method of detecting and/or quantifying and/or adjusting for publication bias in meta-analyses. A dedicated data extraction form is developed and pilot-tested. Working in teams of two, we will independently extract relevant information from each eligible article. As this will be a qualitative systematic review, data reporting will involve a descriptive summary. DISCUSSION: Results are expected to be publicly available in mid-2013. This systematic review together with the results of other systematic reviews of the OPEN project (To Overcome Failure to Publish Negative Findings) will serve as a basis for the development of future policies and guidelines regarding the assessment and handling of publication bias in meta-analyses.
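One example of the "formal statistical tests" this protocol will survey is Egger's regression test for funnel-plot asymmetry; whether it is included in the final review is of course up to the authors. A minimal sketch with invented study effects:

```python
import numpy as np

# Invented study effects (log odds ratios) and their standard errors.
effects = np.array([0.42, 0.35, 0.60, 0.20, 0.55, 0.15, 0.70, 0.48])
se      = np.array([0.10, 0.15, 0.25, 0.12, 0.30, 0.18, 0.35, 0.22])

# Egger's test: regress the standardised effect (effect / SE) on precision (1 / SE).
# An intercept far from zero suggests funnel-plot asymmetry (possible publication bias).
x = 1.0 / se                      # precision
y = effects / se                  # standardised effect
X = np.column_stack([np.ones_like(x), x])
beta, _, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (len(y) - 2)                 # residual variance
cov = s2 * np.linalg.inv(X.T @ X)                 # coefficient covariance
t_intercept = beta[0] / np.sqrt(cov[0, 0])        # t statistic for the intercept
print(f"Egger intercept = {beta[0]:.2f}, t = {t_intercept:.2f}")
```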
Abstract:
Networks are evolving toward a ubiquitous model in which heterogeneous devices are interconnected. Cryptographic algorithms are required for developing security solutions that protect network activity. However, the computational and energy limitations of network devices jeopardize the actual implementation of such mechanisms. In this paper, we perform a wide analysis of the cost of running symmetric and asymmetric cryptographic algorithms, hash chain functions, elliptic curve cryptography and pairing-based cryptography on personal agendas, and compare them with the costs of basic operating system functions. Results show that although cryptographic power costs are high and such operations shall be restricted in time, they are not the main limiting factor of the autonomy of a device.
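In the spirit of the measurements described (timing cryptographic primitives and comparing them with basic operations), here is a minimal benchmarking sketch for one of the primitives mentioned, a hash chain, using only the Python standard library. It is not the paper's measurement setup, and absolute timings will of course depend on the device.

```python
import hashlib
import time

def hash_chain(seed: bytes, length: int) -> bytes:
    """Compute a hash chain h^length(seed) with SHA-256."""
    value = seed
    for _ in range(length):
        value = hashlib.sha256(value).digest()
    return value

def benchmark(fn, *args, repeats=20):
    """Return the average wall-clock time of fn(*args) over several runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats

seed = b"\x00" * 32
for length in (10, 100, 1000, 10000):
    avg = benchmark(hash_chain, seed, length)
    print(f"hash chain of length {length:>5}: {avg * 1e3:.3f} ms on average")
```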
Abstract:
The World Wide Web, the world's largest resource for information, has evolved from organizing information using controlled, top-down taxonomies to a bottom-up approach that emphasizes assigning meaning to data via mechanisms such as the Social Web (Web 2.0). Tagging adds meta-data (weak semantics) to the content available on the web. This research investigates the potential for repurposing this layer of meta-data. We propose a multi-phase approach that exploits user-defined tags to identify and extract domain-level concepts. We operationalize this approach and assess its feasibility by application to a publicly available tag repository. The paper describes insights gained from implementing and applying the heuristics contained in the approach, as well as challenges and implications of repurposing tags for extraction of domain-level concepts.
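The abstract does not detail the multi-phase heuristics, so the sketch below is only a plausible, hypothetical example of the general idea: count tag frequency and co-occurrence across resources, and keep frequent, well-connected tags as candidate domain-level concepts. The tag data are invented.

```python
from collections import Counter
from itertools import combinations

# Invented tag assignments: resource -> set of user-defined tags.
tagged_resources = {
    "r1": {"python", "programming", "tutorial"},
    "r2": {"python", "web", "django"},
    "r3": {"programming", "algorithms", "python"},
    "r4": {"cooking", "recipes"},
    "r5": {"algorithms", "optimization", "programming"},
}

def candidate_concepts(resources, min_freq=2, min_cooc=2):
    """Heuristic extraction of candidate domain-level concepts from tags."""
    freq = Counter(tag for tags in resources.values() for tag in tags)
    cooc = Counter()
    for tags in resources.values():
        cooc.update(frozenset(p) for p in combinations(sorted(tags), 2))
    # A tag is a candidate concept if it is frequent and co-occurs
    # repeatedly with at least one other tag.
    connected = {t for pair, c in cooc.items() if c >= min_cooc for t in pair}
    return sorted(t for t, c in freq.items() if c >= min_freq and t in connected)

print(candidate_concepts(tagged_resources))  # e.g. ['algorithms', 'programming', 'python']
```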
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics are concentrated on the decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends and on sequential simulations of the residuals. ML algorithms deliver a non-linear solution for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from data with ML algorithms. Sequential simulations provide efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be efficiently used in the decision-making process.
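The hybrid decomposition described above (an ML model for the long-range trend, geostatistical sequential simulation of the residuals) can be illustrated in its first half with scikit-learn; the geostatistical part (variography, sequential simulation) is only indicated by a comment. The data are synthetic, and SVR is used here merely as a stand-in for the paper's ML models.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(42)

# Synthetic spatial data: coordinates, a smooth large-scale trend,
# plus short-range structured "residual" variability and noise.
coords = rng.uniform(0, 100, size=(300, 2))
trend = 0.05 * coords[:, 0] + 0.02 * coords[:, 1]
short_range = 0.5 * np.sin(coords[:, 0] / 5.0)
values = trend + short_range + rng.normal(0, 0.1, size=300)

# Step 1: machine-learning model for the long-range spatial trend.
svr = SVR(kernel="rbf", C=10.0, gamma=0.01).fit(coords, values)
trend_hat = svr.predict(coords)

# Step 2: residuals left for geostatistical analysis.
residuals = values - trend_hat
print("residual mean:", residuals.mean().round(3),
      "residual std:", residuals.std().round(3))
# In the hybrid approach, these residuals would next be characterized with
# variography and simulated sequentially to assess local uncertainty.
```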
Abstract:
To newly identify loci for age at natural menopause, we carried out a meta-analysis of 22 genome-wide association studies (GWAS) in 38,968 women of European descent, with replication in up to 14,435 women. In addition to four known loci, we identified 13 loci newly associated with age at natural menopause (at P < 5 × 10^-8). Candidate genes located at these newly associated loci include genes implicated in DNA repair (EXO1, HELQ, UIMC1, FAM175A, FANCI, TLK1, POLG and PRIM1) and immune function (IL11, NLRP11 and PRRC2A (also known as BAT2)). Gene-set enrichment pathway analyses using the full GWAS data set identified exoDNase, NF-κB signaling and mitochondrial dysfunction as biological processes related to timing of menopause.