856 results for Population set-based methods
Abstract:
PhD thesis in Biomedical Engineering
Abstract:
OBJECTIVE: To establish the allelic and genotypic frequencies related to apolipoprotein E (ApoE) polymorphism and the association of the genotypes with risk factors and cardiovascular morbidity in an elderly population with longevity. METHODS: We analyzed 70 elderly patients aged 80 years or more who were part of the Projeto Veranópolis. We used gene amplification through polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) and cleavage with the restriction enzyme Hha I to identify the ApoE genotypes. The most frequent genotypes were compared with respect to biological variables and cardiovascular risks and morbidity. RESULTS: The frequencies of the E2, E3, and E4 alleles were 0.05, 0.84, and 0.11, respectively, and those of the genotypes were as follows: E3E3 (0.70), E3E4 (0.22), E2E3 (0.06), and E2E2 (0.02). Individuals with the E3E4 genotype had a greater mean age than those with the E3E3 genotype. No association was observed between the genotypes and the variables analyzed, except for obesity, which was associated with the E3E3 genotype. Individuals with the E3E4 genotype had higher levels of LDL-cholesterol and fibrinogen than those with the E3E3 genotype. CONCLUSION: The results suggest that the E4E4 genotype may be associated with early mortality. A balance between protective or neutral factors and cardiovascular risk factors may occur among individuals with different genotypes, attenuating the negative effects of the E4 allele.
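The reported allele and genotype frequencies can be cross-checked against Hardy-Weinberg expectations. This is an illustrative calculation only, not part of the study; the abstract does not state that the population was tested for Hardy-Weinberg equilibrium.

```python
# Illustrative check (not from the study): compare observed genotype
# frequencies with Hardy-Weinberg expectations p^2 and 2pq derived from
# the reported allele frequencies E2 = 0.05, E3 = 0.84, E4 = 0.11.
allele = {"E2": 0.05, "E3": 0.84, "E4": 0.11}

def hw_expected(freqs):
    """Expected genotype frequencies under Hardy-Weinberg equilibrium."""
    names = sorted(freqs)
    out = {}
    for i, a in enumerate(names):
        for b in names[i:]:
            out[a + b] = freqs[a] ** 2 if a == b else 2 * freqs[a] * freqs[b]
    return out

expected = hw_expected(allele)
print(round(expected["E3E3"], 3))  # 0.706, close to the observed 0.70
print(round(expected["E3E4"], 3))  # 0.185, versus the observed 0.22
```

The expected E3E3 frequency (0.84² ≈ 0.71) matches the observed 0.70 closely, which is consistent with the study's genotyping results.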
Abstract:
Objective: The aim of this study was to determine the smallest changes in health-related quality of life (HRQOL) scores in the European Organization for Research and Treatment of Cancer quality of life questionnaire (EORTC QLQ-C30) and the EORTC Brain Cancer Module (QLQ-BN20) that could be considered clinically meaningful in brain cancer patients. Methods: World Health Organization (WHO) performance status (PS) and the Mini Mental State Examination (MMSE) were used as clinical anchors to determine minimal clinically important differences (MCID) in HRQOL change scores (range 0-100) in the EORTC QLQ-C30 and QLQ-BN20. Anchor-based MCID estimates less than 0.2 SD (a small effect) were not recommended for interpretation. Other selected distribution-based methods were also used for comparison purposes. Results: Based on WHO PS, our findings support the following whole-number estimates of the MCID for improvement and deterioration, respectively: physical functioning (6, 9), role functioning (14, 12), cognitive functioning (8, 8), global health status (7, 4*), fatigue (12, 9) and motor dysfunction (4*, 5). Anchoring with the MMSE, cognitive functioning MCID estimates for improvement and deterioration were (11, 2*) and those for communication deficit were (9, 7). The estimates marked with asterisks were less than the set 0.2 SD threshold and are therefore not recommended for interpretation. Our MCID estimates therefore range from 5 to 14. Conclusion: These estimates can help clinicians to evaluate changes in HRQOL over time and, in conjunction with other measures of efficacy, help to assess the value of a health care intervention or to compare treatments. Furthermore, the estimates can be useful in determining sample sizes in the design of future clinical trials.
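The screening rule described above, discarding anchor-based MCID estimates smaller than 0.2 SD, can be sketched as follows. The baseline SD used here is hypothetical, not a value from the study.

```python
# Sketch of the abstract's screening rule: anchor-based MCID estimates
# below 0.2 SD (a "small" effect) are not recommended for interpretation.
def screen_mcid(estimates, sd, threshold=0.2):
    """Map scale name -> (estimate, recommended?) given a baseline SD."""
    cut = threshold * sd
    return {scale: (est, est >= cut) for scale, est in estimates.items()}

# Hypothetical baseline SD of 25 points on the 0-100 scale, so cut = 5.
flags = screen_mcid({"global health status (deterioration)": 4,
                     "physical functioning (improvement)": 6}, sd=25)
# The estimate of 4 falls below the cut and is flagged as not recommended,
# mirroring the asterisked entries in the abstract.
```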
Abstract:
About one third of the world population is infected with tubercle bacilli, causing eight million new cases of tuberculosis (TB) and three million deaths each year. After years of lack of interest in the disease, the World Health Organization recently declared TB a global emergency, and it is clear that there is a need for more efficient national TB programs and newly defined research priorities. A more complete epidemiology of tuberculosis will lead to better identification of index cases and more efficient treatment of the disease. Recently, new molecular tools became available for the identification of strains of Mycobacterium tuberculosis (M. tuberculosis), allowing better recognition of the transmission routes of defined strains. Both a standardized restriction-fragment-length-polymorphism-based methodology for epidemiological studies on a large scale and deoxyribonucleic acid (DNA) amplification-based methods that allow rapid detection of outbreaks of multidrug-resistant (MDR) strains, often characterized by high mortality rates, have been developed. This review comments on the existing methods of DNA-based recognition of M. tuberculosis strains and their peculiarities. It also summarizes literature data on the application of molecular fingerprinting for detection of outbreaks of M. tuberculosis, for identification of index cases, for study of the interaction between TB and infection with the human immunodeficiency virus, for analysis of the behavior of MDR strains, for a better understanding of risk factors for transmission of TB within communities, and for population-based studies of TB transmission within and between countries.
Abstract:
Abstract: Copy number variation (CNV) of DNA segments has recently gained considerable interest as a source of genetic variation likely to play a role in phenotypic diversity and evolution. Much effort has been put into the identification and mapping of regions that vary in copy number among seemingly normal individuals, both in humans and in a number of model organisms, using both bioinformatic and hybridization-based methods. Synteny studies suggest the existence of CNV hotspots in mammalian genomes, often in connection with regions of segmental duplication. CNV alleles can be in equilibrium within a population, but can also arise de novo between generations, illustrating the highly dynamic nature of these regions. A small number of studies have assessed the effect of CNV on single loci; at the genome-wide scale, however, the functional impact of CNV remains poorly studied. We have explored the influence of CNV on gene expression, first using the Williams-Beuren syndrome (WBS) associated deletion as a model, and second at the genome-wide scale in inbred mouse strains. We found that the WBS deletion influences the expression levels not only of the hemizygous genes, but also of the euploid genes mapping nearby. Consistently, on a genome-wide scale we observe that CNV genes are expressed at more variable levels than genes that do not vary in copy number. Likewise, CNVs influence the relative expression levels of genes that map to the flanks of the genome rearrangements, thus globally influencing tissue transcriptomes. Further studies are warranted to complete the cataloguing and fine mapping of CNV regions, as well as to elucidate the different mechanisms by which CNVs influence gene expression.
Lay summary: Many diseases are caused by a genetic defect. Among the types of mutation are the loss (deletion) of part of our genome and its duplication. Although the anomalies associated with certain diseases are known, the molecular mechanisms by which these rearrangements of our genetic material cause disease remain poorly understood. We therefore focused on the regulation of genes in regions prone to deletion or duplication. In this work, we showed that deletions and duplications influence the regulation of nearby genes, and that these changes occur in several organs.
Abstract:
Summary: Global warming has led to an average earth surface temperature increase of about 0.7 °C in the 20th century, according to the 2007 IPCC report. In Switzerland, the temperature increase over the same period was even higher: 1.3 °C in the Northern Alps and 1.7 °C in the Southern Alps. The impacts of this warming on ecosystems - especially on climatically sensitive systems like the treeline ecotone - are already visible today. Alpine treeline species show increased growth rates, more establishment of young trees in forest gaps is observed in many locations, and treelines are migrating upwards. With the forecasted warming, this globally visible phenomenon is expected to continue. This PhD thesis aimed to develop a set of methods and models to investigate current and future climatic treeline positions and treeline shifts in the Swiss Alps in a spatial context. The focus was therefore on: 1) the quantification of current treeline dynamics and its potential causes, 2) the evaluation and improvement of temperature-based treeline indicators, and 3) the spatial analysis and projection of past, current and future climatic treeline positions and their respective elevational shifts. The methods used involved a combination of field temperature measurements, statistical modeling and spatial modeling in a geographical information system. To determine treeline shifts and assign the respective drivers, neighborhood relationships between forest patches were analyzed using moving window algorithms. Time series regression modeling was used in the development of an air-to-soil temperature transfer model to calculate thermal treeline indicators. The indicators were then applied spatially to delineate the climatic treeline, based on interpolated temperature data. Observation of recent forest dynamics in the Swiss treeline ecotone showed that changes were mainly due to forest in-growth, but also partly to upward altitudinal shifts.
The recent reduction in agricultural land-use was found to be the dominant driver of these changes. Climate-driven changes were identified only at the uppermost limits of the treeline ecotone. Seasonal mean temperature indicators were found to be the best for predicting climatic treelines. Applying dynamic seasonal delimitations and the air-to-soil temperature transfer model improved the indicators' applicability for spatial modeling. Reproducing the climatic treelines of the past 45 years revealed regionally different altitudinal shifts, the largest being located near the highest mountain mass. Modeling climatic treelines based on two IPCC climate warming scenarios predicted major shifts in treeline altitude. However, the currently observed treeline is not expected to reach this limit easily, due to lagged reaction, possible climate feedback effects and other limiting factors.
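The air-to-soil temperature transfer step can be illustrated with a deliberately simplified linear fit. The thesis uses time series regression models; the temperatures below are invented for illustration only.

```python
# Minimal sketch of an air-to-soil temperature transfer model as a plain
# linear regression soil ~ a + b * air (the thesis uses time series
# regression; the data here are hypothetical).
def ols_fit(x, y):
    """Closed-form least-squares fit y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

air = [2.0, 5.0, 8.0, 11.0, 14.0]   # hypothetical air temperatures (deg C)
soil = [3.0, 4.5, 6.0, 7.5, 9.0]    # hypothetical soil temperatures (deg C)
a, b = ols_fit(air, soil)
print(round(a, 2), round(b, 2))  # 2.0 0.5: soil varies half as strongly as air
```

The fitted transfer function can then be applied to interpolated air temperature grids to estimate soil temperature, the quantity behind thermal treeline indicators.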
Abstract:
Background. During the last few years, PCR-based methods have been developed to simplify and reduce the time required for genotyping Mycobacterium tuberculosis (MTB) by standard approaches based on IS6110-Restriction Fragment Length Polymorphism (RFLP). Of these, MIRU-12-VNTR (mycobacterial interspersed repetitive units - variable number of tandem repeats) (MIRU-12) has been considered a good alternative. Nevertheless, some limitations and discrepancies with RFLP, which are minimized if the technique is complemented with spoligotyping, have been found. Recently, a new version of MIRU-VNTR targeting 15 loci (MIRU-15) has been proposed to improve the MIRU-12 format. Results. We evaluated the new MIRU-15 tool in two different samples. First, we analyzed the same convenience sample that had been used to evaluate MIRU-12 in a previous study, and the new 15-loci version offered higher discriminatory power (Hunter-Gaston discriminatory index [HGDI]: 0.995 vs 0.978; 34.4% of clustered cases vs 57.5%) and better correlation (full or high correlation with RFLP for 82% of the clusters vs 47%). Second, we evaluated MIRU-15 on a population-based sample and, once again, good correlation with the RFLP clustering data was observed (for 83% of the RFLP clusters). To understand the meaning of the discrepancies still found between MIRU-15 and RFLP, we analyzed the epidemiological data for the clustered patients. In most cases, splitting of RFLP-clustered patients by MIRU-15 occurred for those without epidemiological links, and RFLP-clustered patients with epidemiological links were also clustered by MIRU-15, suggesting a good epidemiological background for clustering defined by MIRU-15. Conclusion. The data obtained by MIRU-15 suggest that the new design is very efficient at assigning clusters confirmed by epidemiological data. If we add this to the speed with which it provides results, MIRU-15 could be considered a suitable tool for real-time genotyping.
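The Hunter-Gaston discriminatory index (HGDI) quoted above has a standard closed form; here is a minimal sketch with an invented cluster distribution, not the study's data.

```python
# Hunter-Gaston discriminatory index:
#   D = 1 - (1 / (N * (N - 1))) * sum over types j of n_j * (n_j - 1),
# where n_j is the number of isolates of type j and N the total. D near 1
# means the typing method separates isolates well.
def hgdi(type_counts):
    n = sum(type_counts)
    return 1.0 - sum(c * (c - 1) for c in type_counts) / (n * (n - 1))

# Toy example: 10 isolates split into clusters of 4, 3 and 2 plus one
# unique type.
print(round(hgdi([4, 3, 2, 1]), 3))  # 0.778
```

If every isolate had a unique type, the index would be 1.0; large clusters pull it down, which is why the finer MIRU-15 typing yields a higher HGDI than MIRU-12.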
Abstract:
Stable isotope labels are routinely introduced into proteomes for quantification purposes. Full labeling of cells in varying biological states, followed by sample mixing, fractionation and intensive data acquisition, is used to obtain accurate large-scale quantification of total protein levels. However, biological processes often affect only a small group of proteins for a short time, resulting in changes that are difficult to detect against the total proteome background. An alternative approach could be the targeted analysis of the proteins synthesized in response to a given biological stimulus. Such proteins can be pulse-labeled with a stable isotope by metabolic incorporation of 'heavy' amino acids. In this study we investigated the specific detection and identification of labeled proteins using acquisition methods based on Precursor Ion Scans (PIS) on a triple-quadrupole ion trap mass spectrometer. PIS-based methods were set to detect unique immonium ions originating from labeled peptides. Different labels and methods were tested in standard mixtures to optimize performance. We showed that, in comparison with an untargeted analysis on the same instrument, the approach allowed a several-fold increase in the specificity of detection of labeled proteins over unlabeled ones. The technique was applied to the identification of proteins secreted by human cells into growth media containing bovine serum proteins, allowing the preferential detection of labeled cellular proteins over unlabeled bovine ones. However, compared with untargeted acquisitions on two different instruments, the PIS-based strategy showed some limitations in sensitivity. We discuss possible perspectives of the technique.
Abstract:
BACKGROUND Most textbooks contain messages relating to health. This profusion of information requires analysis with regard to its quality. The objective was to identify the scientific evidence on which the health messages in textbooks are based. METHODS The degree of evidence on which such messages are based was identified, and the messages were subsequently classified into three categories: messages with high, medium or low levels of evidence; messages with an unknown level of evidence; and messages with no known evidence. RESULTS 844 messages were studied. Of this total, 61% were classified as messages with an unknown level of evidence. Less than 15% fell into the category where the level of evidence was known, and less than 6% were classified as possessing high levels of evidence. More than 70% of the messages relating to "Balanced Diets and Malnutrition", "Food Hygiene", "Tobacco", "Sexual behaviour and AIDS" and "Rest and ergonomics" are based on an unknown level of evidence. "Oral health" registered the highest percentage of messages based on a high level of evidence (37.5%), followed by "Pregnancy and newly born infants" (35%). Of the total, 24.6% are not based on any known evidence. Two of the messages appeared to contravene known evidence. CONCLUSION Many of the messages included in school textbooks are not based on scientific evidence. Standards must be established to facilitate the production of texts that include messages that are based on the best available evidence and which can improve children's health more effectively.
Abstract:
High-resolution tomographic imaging of the shallow subsurface is becoming increasingly important for a wide range of environmental, hydrological and engineering applications. Because of their superior resolution power, their sensitivity to pertinent petrophysical parameters, and their far-reaching complementarities, both seismic and georadar crosshole imaging are of particular importance. To date, corresponding approaches have largely relied on asymptotic, ray-based approaches, which only account for a very small part of the observed wavefields, inherently suffer from limited resolution, and in complex environments may prove to be inadequate. These problems can potentially be alleviated through waveform inversion. We have developed an acoustic waveform inversion approach for crosshole seismic data whose kernel is based on a finite-difference time-domain (FDTD) solution of the 2-D acoustic wave equations. This algorithm is tested on and applied to synthetic data from seismic velocity models of increasing complexity and realism, and the results are compared to those obtained using state-of-the-art ray-based traveltime tomography. Regardless of the heterogeneity of the underlying models, the waveform inversion approach has the potential to reliably resolve both the geometry and the acoustic properties of features smaller than half a dominant wavelength in size. Our results do, however, also indicate that, within their inherent resolution limits, ray-based approaches provide an effective and efficient means to obtain satisfactory tomographic reconstructions of the seismic velocity structure in the presence of mild to moderate heterogeneity and in the absence of strong scattering. Conversely, the excess effort of waveform inversion provides the greatest benefits for the most heterogeneous, and arguably most realistic, environments where multiple scattering effects tend to be prevalent and ray-based methods lose most of their effectiveness.
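The FDTD kernel at the heart of the waveform inversion can be illustrated in one dimension. The study solves the 2-D acoustic wave equations; this toy 1-D leapfrog update with invented parameters only shows the scheme.

```python
# Toy 1-D sketch of the finite-difference time-domain (FDTD) scheme that
# underlies the waveform-inversion kernel (the study uses 2-D).
def fdtd_step(p_prev, p, c, dt, dx):
    """One leapfrog update of the 1-D acoustic wave equation, fixed ends."""
    r2 = (c * dt / dx) ** 2   # squared Courant number; stability needs r2 <= 1
    nxt = p[:]                # boundary samples are kept fixed
    for i in range(1, len(p) - 1):
        nxt[i] = 2 * p[i] - p_prev[i] + r2 * (p[i + 1] - 2 * p[i] + p[i - 1])
    return nxt

# Point pressure source in the middle of a quiet model; with a Courant
# number of 1 the pulse splits into two unit pulses travelling outwards.
n = 11
p_prev = [0.0] * n
p = [0.0] * n
p[n // 2] = 1.0
p_next = fdtd_step(p_prev, p, c=1500.0, dt=1e-4, dx=0.15)
```

In a waveform inversion loop, such a forward solver is run repeatedly while the velocity model is updated to reduce the misfit between modeled and observed waveforms.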
Abstract:
Background: We address the problem of studying recombinational variations in (human) populations. In this paper, our focus is on one computational aspect of the general task: given two networks G1 and G2, with both mutation and recombination events, defined on overlapping sets of extant units, the objective is to compute a consensus network G3 with a minimum number of additional recombinations. We describe a polynomial time algorithm with a guarantee that the number of computed new recombination events is within ϵ = sz(G1, G2) (where sz is a well-behaved function of the sizes and topologies of G1 and G2) of the optimal number of recombinations. To date, this is the best known result for a network consensus problem. Results: Although the network consensus problem can be applied to a variety of domains, here we focus on the structure of human populations. With our preliminary analysis on a segment of the human Chromosome X data we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. These results have been verified independently using traditional manual procedures. To the best of our knowledge, this is the first recombinations-based characterization of human populations. Conclusion: We show that our mathematical model identifies recombination spots in the individual haplotypes; the aggregate of these spots over a set of haplotypes defines a recombinational landscape that has enough signal to detect continental as well as population divide based on a short segment of Chromosome X. In particular, we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. The agreement with mutation-based analysis can be viewed as an indirect validation of our results and the model.
Since the model in principle gives us more information embedded in the networks, in our future work, we plan to investigate more non-traditional questions via these structures computed by our methodology.
Abstract:
Demosaicking is a particular case of interpolation problems where, from a scalar image in which each pixel has either the red, the green or the blue component, we want to interpolate the full-color image. State-of-the-art demosaicking algorithms perform interpolation along edges, but these edges are estimated locally. We propose a level-set-based geometric method to estimate image edges, inspired by the image in-painting literature. This method has a time complexity of O(S), where S is the number of pixels in the image, and compares favorably with the state-of-the-art algorithms both visually and in most relevant image quality measures.
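The paper's level-set method is not reproduced here, but the general idea of "interpolation along edges" in O(S) time can be sketched with a classic edge-directed step: each pixel takes O(1) work, hence O(S) overall. The intensity values are invented.

```python
# Not the paper's level-set method: a minimal edge-directed interpolation
# step often used in demosaicking. At a pixel missing the green value,
# interpolate along the direction in which the neighbors vary least.
def interp_green(up, down, left, right):
    """Average the neighbor pair across which green varies least."""
    if abs(up - down) < abs(left - right):
        return (up + down) / 2.0   # vertical direction is smoother here
    return (left + right) / 2.0

# A vertical edge: left/right neighbors differ strongly, up/down agree,
# so interpolation follows the edge instead of blurring across it.
print(interp_green(up=100, down=102, left=30, right=200))  # 101.0
```

Interpolating across the edge instead (averaging 30 and 200) would produce a visible zipper artifact; edge-aware choices like this are what the level-set estimate of edges improves upon.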
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator, with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: a) those that use weights that involve area-specific estimates of bias and variance; and b) those that use weights that involve a common variance and a common squared-bias estimate for all the areas. We assess their precision and discuss alternatives for optimizing composite estimation in applications.
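A composite estimator of the kind described above can be sketched as follows. The MSE-minimizing weight shown is one classical choice for independent estimators, not necessarily the paper's weighting, and the numbers are invented.

```python
# Sketch of a composite small-area estimator: a linear combination of a
# direct and an indirect estimator. The weight is the classic
# MSE-minimizing choice for independent estimators, shown for illustration.
def composite(direct, indirect, mse_direct, mse_indirect):
    w = mse_indirect / (mse_direct + mse_indirect)  # favor the more precise part
    return w * direct + (1 - w) * indirect, w

est, w = composite(direct=12.0, indirect=10.0, mse_direct=4.0, mse_indirect=1.0)
print(round(est, 2), round(w, 2))  # 10.4 0.2: the noisier direct estimate gets weight 0.2
```

The paper's two families of estimators correspond to how the MSE inputs are obtained: area-specific bias and variance estimates versus a common variance and squared-bias estimate shared across areas.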
Abstract:
This paper shows how recently developed regression-based methods for the decomposition of health inequality can be extended to incorporate individual heterogeneity in the responses of health to the explanatory variables. We illustrate our method with an application to the Canadian NPHS of 1994. Our strategy for the estimation of heterogeneous responses is based on the quantile regression model. The results suggest that there is an important degree of heterogeneity in the association of health to explanatory variables which, in turn, accounts for a substantial percentage of inequality in observed health. A particularly interesting finding is that the marginal response of health to income is zero for healthy individuals but positive and significant for unhealthy individuals. The heterogeneity in the income response reduces both overall health inequality and income-related health inequality.
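The core of the quantile regression model is the "pinball" (check) loss. As a minimal, self-contained illustration with invented data (not the NPHS), the constant prediction minimizing this loss at level tau is the sample tau-quantile, which is why quantile regression captures responses at different points of the health distribution.

```python
# Illustration of the building block of quantile regression: the value
# minimizing the pinball (check) loss over a sample is the sample quantile.
def pinball(q, ys, tau):
    """Mean check loss of the constant prediction q at quantile level tau."""
    return sum((tau if y >= q else tau - 1) * (y - q) for y in ys) / len(ys)

ys = [1.0, 2.0, 3.0, 4.0, 100.0]   # invented data with one outlier
# Grid-search the minimizer of the median (tau = 0.5) loss over the sample.
best = min(ys, key=lambda q: pinball(q, ys, tau=0.5))
print(best)  # 3.0, the sample median: robust to the outlier 100.0
```

Fitting this loss with covariates at several values of tau yields the heterogeneous response curves exploited in the decomposition.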
Abstract:
BACKGROUND: Hyperoxaluria is a major risk factor for kidney stone formation. Although urinary oxalate measurement is part of all basic stone risk assessment, there is no standardized method for this measurement. METHODS: Urine samples from 24-h urine collection covering a broad range of oxalate concentrations were aliquoted and sent, in duplicate, to six blinded international laboratories for oxalate, sodium and creatinine measurement. In a second set of experiments, ten pairs of native urine and urine spiked with 10 mg/L of oxalate were sent for oxalate measurement. Three laboratories used a commercially available oxalate oxidase kit, two laboratories used a high-performance liquid chromatography (HPLC)-based method and one laboratory used both methods. RESULTS: Intra-laboratory reliability for oxalate measurement expressed as the intraclass correlation coefficient (ICC) varied between 0.808 [95% confidence interval (CI): 0.427-0.948] and 0.998 (95% CI: 0.994-1.000), with lower values for HPLC-based methods. Acidification of urine samples prior to analysis led to significantly higher oxalate concentrations. The ICC for inter-laboratory reliability varied between 0.745 (95% CI: 0.468-0.890) and 0.986 (95% CI: 0.967-0.995). Recovery of the 10 mg/L oxalate-spiked samples varied between 8.7 ± 2.3 and 10.7 ± 0.5 mg/L. Overall, HPLC-based methods showed more variability compared to the oxalate oxidase kit-based methods. CONCLUSIONS: Significant variability was noted in the quantification of urinary oxalate concentration by different laboratories, which may partially explain the differences in hyperoxaluria prevalence reported in the literature. Our data stress the need for standardization of the method of oxalate measurement.
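The spike-recovery experiment reduces to a simple calculation: 10 mg/L of oxalate is added to a native aliquot, and recovery is the measured difference between the spiked and native samples. The concentrations below are invented; the study reports recoveries between 8.7 ± 2.3 and 10.7 ± 0.5 mg/L.

```python
# Spike-recovery check as used in the study design: recovery is the
# measured increase relative to the known spiked amount.
def recovery(native, spiked, added=10.0):
    """Absolute recovery in mg/L and as a percentage of the added amount."""
    rec = spiked - native
    return rec, 100.0 * rec / added

# Hypothetical measurements for one native/spiked urine pair.
rec, pct = recovery(native=25.4, spiked=34.1)
print(round(rec, 1), round(pct, 1))  # 8.7 87.0, at the low end of the reported range
```

Recoveries well below 100% of the spiked amount indicate an assay underestimating oxalate, one of the sources of inter-laboratory variability the study quantifies.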