996 results for Encoding methods


Relevance:

20.00%

Abstract:

The objective of this work was to assess, over six years, the temporal stability of natural rubber yield of 25 superior Hevea brasiliensis genotypes, using the Wricke, Eberhart & Russell, and Lin & Binns methods, the additive main effects and multiplicative interaction (AMMI) analysis, and the harmonic mean of the relative performance of genetic values (HMRPGV). The IAC 40 and IAC 300 genotypes were identified as stable and high yielding by the Eberhart & Russell, Lin & Binns, HMRPGV, and AMMI biplot methods, although the rankings of the other stable genotypes differed among these analyses. The AMMI biplot results agreed with those of the Wricke method in identifying stable but lower-yielding genotypes. The simultaneous use of different methods allows a more accurate indication of stable genotypes, and stability analyses based on different principles agree in the genotypes they indicate as stable.
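
A minimal sketch of the Eberhart & Russell approach named above may help make the stability criteria concrete: each genotype's yield is regressed on an environmental index, and stability is judged by a slope near 1 with a small deviation from regression. The yield matrix below is hypothetical, and the deviation term is simplified (no pooled-error subtraction).

```python
import numpy as np

# rows: genotypes; columns: environments (years); values are hypothetical yields
yields = np.array([
    [52.1, 48.3, 60.2, 55.7, 49.9, 58.4],
    [45.0, 61.2, 50.8, 47.3, 63.1, 44.6],
    [50.5, 50.9, 51.2, 50.1, 50.7, 51.0],
])

env_index = yields.mean(axis=0) - yields.mean()  # environmental index I_j

for g, row in enumerate(yields):
    slope, intercept = np.polyfit(env_index, row, 1)  # regression coefficient b_i
    resid = row - (slope * env_index + intercept)
    s2d = (resid ** 2).sum() / (len(row) - 2)         # deviation from regression
    print(f"genotype {g}: mean={row.mean():.1f}, b={slope:.2f}, S2d={s2d:.2f}")
# A stable, widely adapted genotype combines high mean yield, b close to 1,
# and S2d close to 0.
```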

Relevance:

20.00%

Abstract:

This paper presents a validation study of statistical unsupervised brain tissue classification techniques in magnetic resonance (MR) images. Several image models assuming different hypotheses regarding the intensity distribution model, the spatial model, and the number of classes are assessed. The methods are tested on simulated data for which the classification ground truth is known, with different noise levels and intensity nonuniformities added to simulate real imaging conditions. No enhancement of image quality is performed either before or during the classification process, so that both the accuracy of the methods and their robustness against image artifacts are tested. Classification is also performed on real data, where a quantitative validation compares the methods' results with a ground truth estimated from manual segmentations by experts. The validity of the various classification methods, both in labeling the image and in estimating tissue volume, is assessed with different local and global measures. Results demonstrate that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities. We also demonstrate that partial volume is not perfectly modeled, even though methods that account for mixture classes outperform those that consider only pure Gaussian classes. Finally, we show that results on simulated data also extend to real data.
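
As a concrete illustration of the intensity-only end of the model spectrum assessed above, the sketch below fits a three-class Gaussian mixture to synthetic voxel intensities; the spatial models evaluated in the study, such as Markov random field regularization, are deliberately omitted here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic intensities for three tissue classes (CSF, gray, white matter)
voxels = np.concatenate([
    rng.normal(60, 8, 4000),
    rng.normal(110, 10, 6000),
    rng.normal(150, 9, 5000),
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(voxels)
labels = gmm.predict(voxels)            # hard tissue labels
posteriors = gmm.predict_proba(voxels)  # soft labels, relevant to partial volume
print("estimated class means:", np.sort(gmm.means_.ravel()).round(1))
```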

Relevance:

20.00%

Abstract:

In anticipation of regulations imposing numeric turbidity limits at highway construction sites, research was conducted into the most appropriate and affordable methods for surface water monitoring. Sediment concentration in streams can be measured in a number of ways. As part of a project funded by the Iowa Department of Transportation, several testing methods were explored to determine the most affordable and appropriate methods for data collection, both in the field and in the lab. The primary purpose of the research was to determine whether the acrylic transparency tube can be used interchangeably with the turbidimeter for water clarity analysis.
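
For illustration, one way to quantify that exchangeability is to fit an inverse power-law calibration between transparency tube readings and turbidimeter readings. The paired values below are hypothetical, and the power-law form is an assumption commonly used for such calibrations, not a result of this study.

```python
import numpy as np

transparency_cm = np.array([10, 15, 22, 30, 45, 60, 90])  # tube readings
turbidity_ntu = np.array([240, 150, 95, 65, 38, 26, 15])  # meter readings

# fit log(NTU) = a + b * log(cm), i.e., NTU ~ exp(a) * cm^b
b, a = np.polyfit(np.log(transparency_cm), np.log(turbidity_ntu), 1)
predicted = np.exp(a) * transparency_cm ** b
ss_res = np.sum((turbidity_ntu - predicted) ** 2)
ss_tot = np.sum((turbidity_ntu - turbidity_ntu.mean()) ** 2)
print(f"NTU ~= {np.exp(a):.1f} * cm^{b:.2f}  (R^2 = {1 - ss_res / ss_tot:.3f})")
```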

Relevance:

20.00%

Abstract:

Elevated plasma levels of lipoprotein-associated phospholipase A2 (Lp-PLA2) activity have been shown to be associated with increased risk of coronary heart disease, and an inhibitor of this enzyme is under development for the treatment of that condition. A Val279Phe null allele of PLA2G7, the gene encoding Lp-PLA2, may influence patient eligibility for treatment; it is relatively common in East Asians but has not been observed in Europeans. We investigated the existence and functional effects of low-frequency alleles in a Western European population by re-sequencing the exons of PLA2G7 in 2,000 samples. In all, 19 non-synonymous single-nucleotide polymorphisms (nsSNPs) were found, 14 of them in fewer than four subjects (minor allele frequency <0.1%). Lp-PLA2 activity was significantly lower in rare nsSNP carriers than in non-carriers (167.8±63.2 vs 204.6±41.8, P=0.01), and seven variants had enzyme activities consistent with a null allele. The cumulative frequency of these null alleles was 0.25%, so fewer than 1 in 10,000 Europeans would be expected to be homozygous and thus not potentially benefit from treatment with an Lp-PLA2 inhibitor.
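
Assuming Hardy-Weinberg equilibrium, the homozygote figure quoted above follows directly from the cumulative null-allele frequency:

```latex
q = 0.0025 \;\Rightarrow\; q^{2} = 6.25 \times 10^{-6} \approx \frac{1}{160\,000} < \frac{1}{10\,000}
```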

Relevance:

20.00%

Abstract:

Disasters are often perceived as sudden, random events. While the triggers may be sudden, disasters themselves are the result of an accumulation of consequences of inappropriate actions and decisions, and of global change. To modify this perception of risk, advocacy tools are needed. Quantitative methods were developed to identify the distribution and the underlying factors of risk.

Disaster risk results from the intersection of hazards, exposure, and vulnerability. The frequency and intensity of hazards can be influenced by climate change or by the decline of ecosystems; population growth increases exposure, while changes in the level of development affect vulnerability. Since each of these components may change, risk is dynamic and should be reassessed periodically by governments, insurance companies, or development agencies. At the global level, such analyses are often performed using databases of reported losses. Our results show that these are likely to be biased, in particular by improvements in access to information. International loss databases are not exhaustive and give no information on exposure, intensity, or vulnerability. A new approach, independent of reported losses, is therefore necessary.

The research presented here was mandated by the United Nations and by agencies working in development and the environment (UNDP, UNISDR, GTZ, UNEP, and IUCN). These organizations needed a quantitative assessment of the underlying factors of risk, to raise awareness among policymakers and to prioritize disaster risk reduction projects.

The method is based on geographic information systems, remote sensing, databases, and statistical analysis. It required a large amount of data (1.7 TB, covering both the physical environment and socioeconomic parameters) and several thousand hours of processing. A comprehensive global risk model was developed to reveal the distribution of hazards, exposure, and risk, and to identify the underlying risk factors for several hazards (floods, tropical cyclones, earthquakes, and landslides). Two multiple-risk indexes were generated to compare countries. The results include an evaluation of the roles of hazard intensity, exposure, poverty, and governance in the pattern and trends of risk. It appears that the vulnerability factors change depending on the type of hazard and that, contrary to exposure, their weight decreases as intensity increases.

At the local level, the method was tested to highlight the influence of climate change and ecosystem decline on the hazard itself. In northern Pakistan, deforestation increases landslide susceptibility. Research in Peru (based on satellite imagery and ground data collection) revealed rapid glacier retreat and provides an assessment of the remaining ice volume, as well as scenarios of its possible evolution.

These results were presented to various audiences, including 160 governments. The results and the data generated are available online through an open-source SDI (http://preview.grid.unep.ch). The method is flexible and easily transferable to other scales and issues, offering good prospects for adaptation to other research areas.

The characterization of risk at the global level and the identification of the role of ecosystems in disaster risk are developing rapidly. This research revealed many challenges; some were resolved, while others remain limitations. It is nonetheless clear that the level of development, and more specifically unsustainable development, shapes a large part of disaster risk, and that the dynamics of risk are governed primarily by global change.
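
As a toy illustration of the hazard-exposure-vulnerability intersection described above, the sketch below combines three randomly generated raster layers into a multiplicative risk index. The layers, distributions, and grid size are hypothetical; the actual model aggregates many hazard types and socioeconomic indicators.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (4, 4)                                 # a tiny grid for the example
hazard = rng.gamma(2.0, 0.5, shape)            # e.g., expected event frequency
population = rng.lognormal(8.0, 1.0, shape)    # exposed population per cell
vulnerability = rng.uniform(0.1, 1.0, shape)   # unitless 0-1 index

risk = hazard * population * vulnerability     # per-cell expected impact
print("highest-risk cell:", np.unravel_index(risk.argmax(), shape))
print("total expected impact:", round(float(risk.sum()), 1))
```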

Relevance:

20.00%

Abstract:

Dose kernel convolution (DK) methods have been proposed to speed up absorbed dose calculations in molecular radionuclide therapy. Our aim was to evaluate the impact of tissue density heterogeneities (TDH) on dosimetry when using a DK method, and to propose a simple density-correction method. METHODS: This study was conducted on 3 clinical cases: case 1, non-Hodgkin lymphoma treated with (131)I-tositumomab; case 2, a neuroendocrine tumor treatment simulated with (177)Lu-peptides; and case 3, hepatocellular carcinoma treated with (90)Y-microspheres. Absorbed dose calculations were performed using a direct Monte Carlo approach accounting for TDH (3D-RD) and a DK approach (VoxelDose, or VD). For each individual voxel, the VD absorbed dose, D(VD), calculated assuming uniform density, was corrected for density, giving D(VDd). The average 3D-RD absorbed dose values, D(3DRD), were compared with D(VD) and D(VDd) using the relative difference Δ(VD/3DRD). At the voxel level, density-binned Δ(VD/3DRD) and Δ(VDd/3DRD) were plotted against density (ρ) and fitted with a linear regression. RESULTS: The D(VD) calculations showed good agreement with D(3DRD). Δ(VD/3DRD) was less than 3.5%, except for the tumor of case 1 (5.9%) and the renal cortex of case 2 (5.6%). At the voxel level, the Δ(VD/3DRD) range was 0%-14% for cases 1 and 2, and -3% to 7% for case 3. All 3 cases showed a linear relationship between voxel bin-averaged Δ(VD/3DRD) and ρ: case 1 (Δ = -0.56ρ + 0.62, R² = 0.93), case 2 (Δ = -0.91ρ + 0.96, R² = 0.99), and case 3 (Δ = -0.69ρ + 0.72, R² = 0.91). The density correction improved the agreement of the DK method with the Monte Carlo approach (Δ(VDd/3DRD) < 1.1%), although to a lesser extent for the tumor of case 1 (3.1%). At the voxel level, the Δ(VDd/3DRD) range decreased for all 3 clinical cases (case 1, -1% to 4%; case 2, -0.5% to 1.5%; case 3, -1.5% to 2%). After correction, the linear relationship with density disappeared for cases 2 and 3 and was much less pronounced for case 1 (Δ = 0.41ρ - 0.38, R² = 0.88). CONCLUSION: This study shows a small influence of TDH in the abdominal region for 3 representative clinical cases. The proposed simple density-correction method improved the agreement of absorbed dose calculations using our voxel S value implementation.
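
A minimal sketch of the two ingredients discussed above, dose kernel convolution and a per-voxel density correction, is given below. The toy kernel, activity map, and the form of the correction (rescaling the uniform-density dose by the inverse of local relative density) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(2)
activity = np.zeros((32, 32, 32))
activity[12:20, 12:20, 12:20] = 1.0            # toy cumulated-activity map
kernel = np.ones((5, 5, 5)) / 125.0            # toy voxel S-value kernel

# absorbed dose assuming uniform (water-equivalent) density
dose_uniform = fftconvolve(activity, kernel, mode="same")

density = rng.uniform(0.9, 1.1, activity.shape)  # relative density map
dose_corrected = dose_uniform / density          # simple energy-per-mass rescale
print("max relative change from correction: %.1f%%"
      % (100 * np.abs(dose_corrected / dose_uniform - 1).max()))
```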

Relevance:

20.00%

Abstract:

Motivation: The comparative analysis of gene gain and loss rates is critical for understanding the role of natural selection and adaptation in shaping gene family sizes. Studying complete genome data from closely related species allows accurate estimation of gene family turnover rates. Current methods and software tools, however, are not well designed for dealing with certain kinds of functional elements, such as microRNAs or transcription factor binding sites. Results: Here, we describe BadiRate, a new software tool for estimating family turnover rates, as well as the number of elements at internal phylogenetic nodes, by likelihood-based methods and parsimony. It implements two stochastic population models, which provide the appropriate statistical framework for testing hypotheses such as lineage-specific gene family expansions or contractions. We have assessed the accuracy of BadiRate by computer simulations and have also illustrated its functionality by analyzing a representative empirical dataset.
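
For readers unfamiliar with the stochastic machinery behind such turnover models, a linear birth-death process on family size is the usual starting point: transition probabilities along a branch come from the exponential of a tridiagonal rate matrix. The rates, branch length, and size cap below are hypothetical, and this sketch omits the innovation component of BadiRate's richer models.

```python
import numpy as np
from scipy.linalg import expm

birth, death, max_size = 0.002, 0.003, 20      # per-copy rates, truncation
Q = np.zeros((max_size + 1, max_size + 1))
for n in range(max_size + 1):
    if n < max_size:
        Q[n, n + 1] = birth * n                # gain: n -> n + 1
    if n > 0:
        Q[n, n - 1] = death * n                # loss: n -> n - 1
    Q[n, n] = -Q[n].sum()                      # rows sum to zero

t = 50.0                                       # branch length
P = expm(Q * t)                                # P[i, j] = Pr(j copies | i copies)
print("Pr(5 -> 4 copies over the branch):", round(P[5, 4], 4))
```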

Relevance:

20.00%

Abstract:

Contemporary coronary magnetic resonance angiography techniques suffer from signal-to-noise ratio (SNR) constraints. We propose a method to enhance SNR in gradient echo coronary magnetic resonance angiography by using sensitivity encoding (SENSE). While using sensitivity encoding to improve SNR seems counterintuitive, the undersampling it affords can be exploited to reduce the number of radiofrequency excitations during the acquisition window while lowering the signal readout bandwidth, thereby improving the radiofrequency receive-to-transmit duty cycle. Under certain conditions, this leads to improved SNR. The use of sensitivity encoding for improved SNR in three-dimensional coronary magnetic resonance angiography is investigated using numerical simulations, an in vitro study, and an in vivo study. A maximum SNR enhancement of 55% was found both in vitro and in vivo, consistent with the numerical simulations. The method is most suitable for spoiled gradient echo coronary magnetic resonance angiography in which high temporal and spatial resolution is required.
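
The trade-off can be sketched by combining the standard SENSE SNR relation with the bandwidth dependence of SNR, under the simplifying assumption that the sequence timing is otherwise unchanged (g is the coil geometry factor, R the undersampling factor, BW the readout bandwidth):

```latex
\frac{\mathrm{SNR}_{\mathrm{SENSE}}}{\mathrm{SNR}_{\mathrm{full}}}
  = \frac{1}{g\sqrt{R}}\,\sqrt{\frac{\mathrm{BW}_{\mathrm{full}}}{\mathrm{BW}_{\mathrm{SENSE}}}}
```

A net gain therefore results whenever the bandwidth reduction permitted by acquiring fewer k-space lines outweighs the g√R penalty.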

Relevance:

20.00%

Abstract:

Methods used to analyze one type of nonstationary stochastic process, the periodically correlated process, are considered. Two methods of one-step-ahead prediction of periodically correlated time series are examined. Predictions made with an autoregression model were compared, in terms of efficiency, with those of an artificial neural network with one hidden layer whose parameters are adapted in a moving time window. The comparison showed that, for one-step-ahead prediction of time series of mean monthly water discharge, the simpler autoregression model is more efficient.
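
A minimal sketch of the periodic autoregressive idea compared above: a separate AR(1) fit for each calendar month, so the coefficients track the phase of the annual cycle. The synthetic discharge series and the AR order are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 12 * 30                                    # 30 years of monthly values
t = np.arange(n)
flow = 100 + 40 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, n)

# fit y_t = a_m + b_m * y_{t-1}, with (a_m, b_m) specific to calendar month m
coefs = {}
for m in range(12):
    idx = np.arange(1, n)[np.arange(1, n) % 12 == m]
    X = np.column_stack([np.ones(len(idx)), flow[idx - 1]])
    coefs[m], *_ = np.linalg.lstsq(X, flow[idx], rcond=None)

a, b = coefs[n % 12]                           # month of the next time step
print("one-step-ahead forecast:", round(a + b * flow[-1], 1))
```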

Relevance:

20.00%

Abstract:

The objective of this work was to test methods for assessing pre-harvest sprouting in wheat cultivars. Fourteen wheat cultivars were grown in the Londrina and Ponta Grossa municipalities, Paraná state, Brazil. They were sampled at 10 and 17 days after physiological maturity and evaluated using three methods: germination under rainfall simulation (in a greenhouse), in-ear grain sprouting, and sprouting of grains removed from the ears. The in-ear grain sprouting method allowed the differentiation of cultivars, but indicated resistance levels that differed from the available cultivar descriptions. The sprouting of grains removed from the ears did not allow cultivars to be reliably distinguished at either harvest date or location. The rainfall simulation method is the most suitable for assessing pre-harvest sprouting in cultivars, regardless of sampling date and location.

Relevance:

20.00%

Abstract:

BACKGROUND: In mice, a partial loss of function of the epithelial sodium channel (ENaC), which regulates sodium excretion in the distal nephron, causes pseudohypoaldosteronism, a salt-wasting syndrome. The purpose of the present experiments was to examine how alpha ENaC knockout heterozygous (+/-) mice, which have only one allele of the gene encoding the alpha subunit of ENaC, control their blood pressure (BP) and sodium balance. METHODS: BP, urinary electrolyte excretion, plasma renin activity, and urinary aldosterone were measured in wild-type (+/+) and heterozygous (+/-) mice on a low, regular, or high sodium diet. In addition, the BP response to angiotensin II (Ang II) and to Ang II receptor blockade, and the number and affinity of Ang II subtype 1 (AT1) receptors in renal tissue, were analyzed in both mouse strains on the three diets. RESULTS: In comparison with wild-type (+/+) mice, alpha ENaC heterozygous mutant (+/-) mice showed an intact capacity to maintain BP and sodium balance when studied on the different sodium diets. However, no change in plasma renin activity was found in response to changes in sodium intake in alpha ENaC +/- mice. On a normal salt diet, heterozygous mice had an increased vascular responsiveness to exogenous Ang II (P < 0.01). Moreover, on normal and low sodium intakes, these mice exhibited an increase in the number of AT1 receptors in renal tissue; their BP decreased markedly during Ang II receptor blockade (P < 0.01), and there was a clear tendency toward increased urinary aldosterone excretion. CONCLUSIONS: Alpha ENaC heterozygous mice have developed an unusual compensatory mechanism leading to activation of the renin-angiotensin system, namely the up-regulation of AT1 receptors. This up-regulation may be due to an increase in aldosterone production.

Relevance:

20.00%

Abstract:

Massively parallel signature sequencing (MPSS) generates millions of short sequence tags corresponding to transcripts from a single RNA preparation. Most MPSS tags can be unambiguously assigned to genes, thereby generating a comprehensive expression profile of the tissue of origin. From the comparison of MPSS data from 32 normal human tissues, we identified 1,056 genes that are predominantly expressed in the testis. Further evaluation using MPSS tags from cancer cell lines and EST data from a wide variety of tumors identified 202 of these genes as candidates for encoding cancer/testis (CT) antigens. Expression in normal tissues was assessed by RT-PCR for a subset of 166 of these intron-containing genes, and those with confirmed testis-predominant expression were further evaluated for their expression in 21 cancer cell lines. Thus, 20 CT or CT-like genes were identified, with several exhibiting expression in five or more of the cancer cell lines examined. One of these genes is a member of a CT gene family that we designated as CT45. The CT45 family comprises six highly similar (>98% cDNA identity) genes that are clustered in tandem within a 125-kb region on Xq26.3. CT45 was found to be frequently expressed in both cancer cell lines and lung cancer specimens. Thus, MPSS analysis has resulted in a significant extension of our knowledge of CT antigens, leading to the discovery of a distinctive X-linked CT-antigen gene family.
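
A sketch of the tissue-predominance screen described above: flag genes whose testis expression dominates every other tissue in an expression matrix. The matrix, the reduced tissue panel, and the 10-fold dominance threshold are hypothetical stand-ins for the 32-tissue MPSS comparison.

```python
import numpy as np

tissues = ["testis", "brain", "liver", "lung", "kidney"]  # stand-in for 32
expr = np.array([                      # rows: genes; values: expression levels
    [850.0,  2.0,  0.5,   1.2,  0.8],  # testis-predominant candidate
    [120.0, 95.0, 80.0, 110.0, 60.0],  # broadly expressed
    [300.0,  0.0,  0.0,  25.0,  0.0],  # borderline case
])

testis = expr[:, tissues.index("testis")]
others_max = np.delete(expr, tissues.index("testis"), axis=1).max(axis=1)
predominant = testis > 10 * others_max           # 10-fold dominance rule
print("testis-predominant gene rows:", np.where(predominant)[0])
```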

Relevance:

20.00%

Abstract:

Advancements in high-throughput technologies to measure increasingly complex biological phenomena at the genomic level are rapidly changing the face of biological research, from the single-gene, single-protein experimental approach to studying the behavior of a gene in the context of the entire genome (and proteome). This shift in research methodologies has resulted in a new field of network biology that deals with modeling cellular behavior in terms of network structures such as signaling pathways and gene regulatory networks. In these networks, different biological entities such as genes, proteins, and metabolites interact with each other, giving rise to a dynamical system. Even though there exists a mature field of dynamical systems theory to model such network structures, some technical challenges are unique to biology, such as the inability to measure precise kinetic information on gene-gene or gene-protein interactions and the need to model increasingly large networks comprising thousands of nodes. These challenges have renewed interest in developing new computational techniques for modeling complex biological systems. This chapter presents a modeling framework based on Boolean algebra and finite-state machines, reminiscent of the approach used for digital circuit synthesis and simulation in the field of very-large-scale integration (VLSI). The proposed formalism enables a common mathematical framework to develop computational techniques for modeling different aspects of regulatory networks, such as steady-state behavior, stochasticity, and gene perturbation experiments.
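
A minimal sketch of the Boolean, finite-state-machine formalism described above: a three-gene synchronous Boolean network whose full state space (2^3 states) is enumerated to find its fixed points, i.e., steady states. The update rules are hypothetical, for illustration only.

```python
from itertools import product

def update(state):
    """Synchronous update of a toy three-gene Boolean network."""
    a, b, c = state
    return (
        b and not c,   # A: activated by B, repressed by C
        a,             # B: follows A
        a and b,       # C: requires both A and B
    )

# enumerate the finite-state machine's full state space
for state in product([False, True], repeat=3):
    nxt = update(state)
    print(state, "->", nxt, "<- steady state" if nxt == state else "")
```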

Relevance:

20.00%

Abstract:

BACKGROUND: Health professionals and policymakers aspire to make healthcare decisions based on the entire relevant research evidence. This can rarely be achieved, however, because a considerable proportion of research findings are not published, especially in the case of 'negative' results, a phenomenon widely recognized as publication bias. Different methods of detecting, quantifying, and adjusting for publication bias in meta-analyses have been described in the literature, such as graphical approaches and formal statistical tests to detect publication bias, and statistical approaches that modify effect sizes to adjust a pooled estimate when the presence of publication bias is suspected. An up-to-date systematic review of the existing methods is lacking. METHODS/DESIGN: The objectives of this systematic review are as follows: • To systematically review methodological articles that focus on non-publication of studies, and to describe methods of detecting and/or quantifying and/or adjusting for publication bias in meta-analyses. • To appraise the strengths and weaknesses of these methods, the resources they require, and the conditions under which they can be used, based on the findings of the included studies. We will systematically search Web of Science, Medline, and the Cochrane Library for methodological articles that describe at least one method of detecting and/or quantifying and/or adjusting for publication bias in meta-analyses. A dedicated data extraction form has been developed and pilot-tested. Working in teams of two, we will independently extract relevant information from each eligible article. As this will be a qualitative systematic review, data reporting will involve a descriptive summary. DISCUSSION: Results are expected to be publicly available in mid-2013. This systematic review, together with the results of other systematic reviews of the OPEN project (To Overcome Failure to Publish Negative Findings), will serve as a basis for the development of future policies and guidelines regarding the assessment and handling of publication bias in meta-analyses.
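
As a concrete example of one of the formal statistical tests mentioned above, the sketch below implements Egger's regression test: the standardized effect is regressed on precision, and an intercept significantly different from zero suggests small-study effects such as publication bias. The effect sizes and standard errors are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

effects = np.array([0.42, 0.35, 0.51, 0.60, 0.75, 0.90])  # e.g., log odds ratios
se = np.array([0.08, 0.10, 0.15, 0.20, 0.28, 0.35])       # standard errors

X = sm.add_constant(1.0 / se)     # column of ones (intercept) plus precision
y = effects / se                  # standardized effects
fit = sm.OLS(y, X).fit()
print(f"Egger intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.3f}")
```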

Relevance:

20.00%

Abstract:

Introduction: The prevalence of non-alcoholic fatty liver disease (NAFLD) in industrialized countries is increasing exponentially. NAFLD progresses from simple hepatic steatosis to hepatitis and then to cirrhosis. Moreover, hepatic steatosis is frequently accompanied by insulin resistance, one of the main causes of diabetes. Lipid intermediates such as ceramides and diacylglycerols have been described as inducing insulin resistance. However, we have shown in our model of hepatic steatosis that mice with a liver-specific inactivation of the microsomal triglyceride transfer protein (Mttp) do not develop insulin resistance, which strongly suggests the existence of other mechanisms capable of inducing insulin resistance. Results: Using a microarray analysis, we observed an increased hepatic expression of the genes cell-death inducing DFFA-like effector c (CIDEC), lipid storage droplet protein 5 (LSDP5), and Berardinelli-Seip congenital lipodystrophy 2 homolog (Seipin) in Mttp mice. These genes have recently been identified as encoding proteins localized around lipid droplets. We also found that Mttp mice develop microsteatosis (small lipid droplets) rather than the macrosteatosis normally observed in patients with NAFLD. We then studied the expression of lipid-droplet-associated genes in obese patients with hepatic steatosis, with or without insulin resistance. Compared with healthy subjects without hepatic steatosis, patients with steatosis had significantly higher expression of these genes; interestingly, this expression was decreased in patients with insulin resistance. Conclusion: These data suggest that lipid droplet genes are involved in the development of hepatic steatosis in humans and may contribute to the onset of insulin resistance.