968 results for Random Number of Ancestors
Abstract:
OBJECTIVE: We examined the correlation between clinical wear rates of restorative materials and enamel (TRAC Research Foundation, Provo, USA) and the results of six laboratory test methods (ACTA, Alabama (generalized, localized), Ivoclar (vertical, volumetric), Munich, OHSU (abrasion, attrition), Zurich). METHODS: Individual clinical wear data were available from clinical trials that were conducted by TRAC Research Foundation (formerly CRA) together with general practitioners. For each of the n=28 materials (21 composite resins for intra-coronal restorations [20 direct and 1 indirect], 5 resin materials for crowns, 1 amalgam, enamel) a minimum of 30 restorations had been placed in posterior teeth, mainly molars. The recall intervals were up to 5 years, although the majority of materials (n=27) were monitored only for up to 2 years. For the laboratory data, the databases MEDLINE and IADR abstracts were searched for wear data on materials which were also clinically tested by TRAC Research Foundation. Only those data for which the same test parameters (e.g. number of cycles, loading force, type of antagonist) had been published were included in the study. The amount of available data differed between laboratory methods: Ivoclar (n=22), Zurich (n=20), Alabama (n=17), OHSU and ACTA (n=12), Munich (n=7). The clinical results were summarized in an index, and a linear mixed model was fitted to the log wear measurements including the following factors: material, time (0.5, 1, 2 and 3 years), tooth (premolar/molar) and gender (male/female) as fixed effects, and patient as a random effect. Relative ranks were created for each material and method; the same was done with the clinical results. RESULTS: The mean age of the subjects was 40 (±12) years. The materials had mostly been applied in molars (81%) and 95% of the intracoronal restorations were Class II restorations. The mean number of individual wear data per material was 25 (range 14-42).
The mean coefficient of variation of clinical wear data was 53%. The only significant correlation was reached by OHSU (abrasion) with a Spearman r of 0.86 (p=0.001). Zurich, ACTA, Alabama generalized wear and Ivoclar (volume) had correlation coefficients between 0.3 and 0.4. For Zurich, Alabama generalized wear and Munich, the correlation coefficient improved if only composites for direct use were taken into consideration. The combination of different laboratory methods did not significantly improve the correlation. SIGNIFICANCE: The clinical wear of composite resins is mainly dependent on differences between patients and less on the differences between materials. Laboratory methods to test conventional resins for wear are therefore less important, especially since most of them do not reflect the clinical wear.
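The correlation step described above — comparing the relative ranks of materials under a laboratory method with their clinical ranks — is a Spearman rank correlation. A minimal tie-free sketch (the index construction and the mixed-model fit are not reproduced here):

```python
def ranks(values):
    # rank 1 = smallest value; assumes no ties for simplicity
    order = sorted(range(len(values)), key=lambda k: values[k])
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def spearman_rho(x, y):
    # classical 1 - 6*sum(d^2) / (n*(n^2-1)) formula for tie-free data
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

With identical material orderings the coefficient is 1; the r=0.86 reported for OHSU (abrasion) means the laboratory ranking nearly reproduced the clinical one.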
Abstract:
Although knowledge of heavy metal hyperaccumulation mechanisms is increasing, the genetic basis of cadmium (Cd) hyperaccumulation remains to be elucidated. Thlaspi caerulescens is an attractive model since the Cd accumulation polymorphism observed in this species suggests genetic differences between populations with low versus high Cd hyperaccumulation capacities. In our study, a methodology is proposed to analyse at a regional scale the genetic differentiation of T. caerulescens natural populations in relation to Cd hyperaccumulation capacity while controlling for different environmental, soil and plant parameters and the geographic origins of populations. Twenty-two populations were characterised with AFLP markers and cpDNA polymorphism. Over all loci, a partial Mantel test showed no significant genetic structure with regard to Cd hyperaccumulation capacity. Nevertheless, when comparing the marker variation to a neutral model, seven AFLP fragments (9% of markers) were identified as presenting particularly high genetic differentiation between populations with low and high Cd hyperaccumulation capacity. Using simulations, the number of outlier loci was shown to be significantly higher than expected at random. These loci presented a genetic structure linked to Cd hyperaccumulation capacity independently of the geography, environment, soil parameters and Zn, Pb, Fe and Cu concentrations in plants. Using a canonical correspondence analysis, we identified three of them as particularly related to Cd hyperaccumulation capacity. This study demonstrates that populations with low and high hyperaccumulation capacities can be significantly distinguished based on molecular data. Further investigations with candidate genes and mapped markers may allow identification and characterization of genomic regions linked to factors involved in Cd hyperaccumulation.
Abstract:
Mitochondrial (M) and lipid droplet (L) volume density (vd) are often used in exercise research. Vd is the fraction of muscle volume occupied by M or L. These percentages are calculated by applying a grid to a 2D image taken with transmission electron microscopy; however, it is not known which grid best predicts these values. PURPOSE: To determine the grid with the least variability of Mvd and Lvd in human skeletal muscle. METHODS: Muscle biopsies were taken from the vastus lateralis of 10 healthy adults, trained (N=6) and untrained (N=4). Samples of 5-10mg were fixed in 2.5% glutaraldehyde and embedded in EPON. Longitudinal sections of 60 nm were cut and 20 images were taken at random at 33,000x magnification. Vd was calculated as the number of times M or L touched two intersecting grid lines (called a point) divided by the total number of points, using 3 different sizes of grids with squares of 1000x1000nm sides (corresponding to 1µm2), 500x500nm (0.25µm2) and 250x250nm (0.0625µm2). Statistics included the coefficient of variation (CV), one-way between-subjects ANOVA and Spearman correlations. RESULTS: Mean age was 67 ± 4 years, mean VO2peak 2.29 ± 0.70 L/min and mean BMI 25.1 ± 3.7 kg/m2. Mean Mvd was 6.39% ± 0.71 for the 1000nm squares, 6.01% ± 0.70 for the 500nm and 6.37% ± 0.80 for the 250nm. Lvd was 1.28% ± 0.03 for the 1000nm, 1.41% ± 0.02 for the 500nm and 1.38% ± 0.02 for the 250nm. The mean CV of the three grids was 6.65% ± 1.15 for Mvd, with no significant differences between grids (P>0.05). Mean CV for Lvd was 13.83% ± 3.51, with a significant difference between the 1000nm squares and the two other grids (P<0.05). The 500nm squares grid showed the least variability between subjects. Mvd showed a positive correlation with VO2peak (r = 0.89, p < 0.05) but not with weight, height, or age. No correlations were found with Lvd. CONCLUSION: Different size grids have different variability in assessing skeletal muscle Mvd and Lvd.
The grid size of 500x500nm (240 points) was more reliable than 1000x1000nm (56 points). 250x250nm (1023 points) did not show better reliability compared with the 500x500nm, but was more time consuming. Thus, choosing a grid with square size of 500x500nm seems the best option. This is particularly relevant as most grids used in the literature are either 100 points or 400 points without clear information on their square size.
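The point-counting rule above (grid intersections landing on the structure, divided by total intersections) can be sketched as follows; the grid spacing here is in pixels rather than nm, and the segmentation mask is assumed to be given:

```python
import numpy as np

def volume_density(mask, spacing):
    # mask: 2D boolean array, True where the structure (M or L) is present
    # spacing: grid-line spacing in pixels; intersections are the "points"
    ys = np.arange(0, mask.shape[0], spacing)
    xs = np.arange(0, mask.shape[1], spacing)
    hits = mask[np.ix_(ys, xs)]      # grid points that land on the structure
    return hits.sum() / hits.size    # hit points / total points
```

Halving the spacing quadruples the number of points per image (cf. the 56-, 240- and 1023-point grids above), trading counting effort against sampling error.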
Abstract:
Urease is an important virulence factor for Helicobacter pylori and is critical for bacterial colonization of the human gastric mucosa. Specific inhibition of urease activity has been proposed as a possible strategy to fight this bacterium, which infects billions of individuals throughout the world and can lead to severe pathological conditions in a limited number of cases. We have selected peptides which specifically bind and inhibit H. pylori urease from libraries of random peptides displayed on filamentous phage in the context of the pIII coat protein. Screening of a highly diverse 25-mer combinatorial library and two newly constructed random 6-mer peptide libraries on solid-phase H. pylori urease holoenzyme allowed the identification of two peptides, the 24-mer TFLPQPRCSALLRYLSEDGVIVPS and the 6-mer YDFYWW, that can bind and inhibit the activity of urease purified from H. pylori. These two peptides were chemically synthesized and their inhibition constants (Ki) were found to be 47 microM for the 24-mer and 30 microM for the 6-mer peptide. Both peptides specifically inhibited the activity of H. pylori urease but not that of Bacillus pasteurii urease.
Abstract:
SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences through steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by high-throughput DNA sequencing. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and previously unknown artifacts of the method. Sequence tag distribution in the genome does not follow a uniform distribution, and we have found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence tag accumulations will create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some of the important biological properties of Nuclear Factor I DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with the DNA wrapped around the nucleosome.
We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
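The abstract does not spell out the sampling algorithm; reservoir sampling (Algorithm R) is one standard way to draw a fixed-size uniform sample from a massive tag stream in a single pass, which is the kind of unbiased subsampling described:

```python
import random

def reservoir_sample(stream, k, rng=random):
    # keeps a uniform random sample of k items from a stream of unknown length
    sample = []
    for n, item in enumerate(stream):
        if n < k:
            sample.append(item)        # fill the reservoir first
        else:
            r = rng.randrange(n + 1)   # item survives with probability k/(n+1)
            if r < k:
                sample[r] = item
    return sample
```

Because it never needs the full dataset in memory, the same routine works whether the "stream" is a list of tags or a file iterated line by line.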
Abstract:
Approximate Quickselect, a simple modification of the well known Quickselect algorithm for selection, can be used to efficiently find an element with rank k in a given range [i..j], out of n given elements. We study basic cost measures of Approximate Quickselect by computing exact and asymptotic results for the expected number of passes, comparisons and data moves during the execution of this algorithm. The key element appearing in the analysis of Approximate Quickselect is a trivariate recurrence that we solve in full generality. The general solution of the recurrence proves to be very useful, as it allows us to tackle several related problems, besides the analysis that originally motivated us. In particular, we have been able to carry out a precise analysis of the expected number of moves of the ith element when selecting the jth smallest element with standard Quickselect, where we are able to give both exact and asymptotic results. Moreover, we can apply our general results to obtain exact and asymptotic results for several parameters in binary search trees, namely the expected number of common ancestors of the nodes with rank i and j, the expected size of the subtree rooted at the least common ancestor of the nodes with rank i and j, and the expected distance between the nodes of ranks i and j.
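The paper analyses cost measures rather than giving code; one standard formulation of Approximate Quickselect is sketched below: partition around a random pivot and keep narrowing the subarray only while the pivot's rank falls outside the target range [i..j].

```python
import random

def approx_quickselect(a, i, j):
    # returns some element whose 1-based rank lies in [i..j]
    a = list(a)                      # work on a copy
    lo, hi = 0, len(a) - 1
    while True:
        p = random.randint(lo, hi)
        a[p], a[hi] = a[hi], a[p]    # move pivot to the end
        pivot, store = a[hi], lo
        for k in range(lo, hi):
            if a[k] < pivot:
                a[k], a[store] = a[store], a[k]
                store += 1
        a[store], a[hi] = a[hi], a[store]
        rank = store + 1             # pivot's 1-based rank in the whole array
        if i <= rank <= j:
            return a[store]
        if rank < i:
            lo = store + 1           # target range lies to the right
        else:
            hi = store - 1           # target range lies to the left
```

The wider the range [i..j], the earlier a random pivot's rank falls inside it, which is why the expected numbers of passes, comparisons and moves depend on i, j and n in the way the paper analyses.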
Abstract:
The usual way to investigate the statistical properties of finitely generated subgroups of free groups, and of finite presentations of groups, is based on the so-called word-based distribution: subgroups are generated (finite presentations are determined) by randomly chosen k-tuples of reduced words, whose maximal length is allowed to tend to infinity. In this paper we adopt a different, though equally natural, point of view: we investigate the statistical properties of the same objects, but with respect to the so-called graph-based distribution, recently introduced by Bassino, Nicaud and Weil. Here, subgroups (and finite presentations) are determined by randomly chosen Stallings graphs whose number of vertices tends to infinity. Our results show that these two distributions behave quite differently from each other, shedding new light on which properties of finitely generated subgroups can be considered frequent or rare. For example, we show that malnormal subgroups of a free group are negligible in the graph-based distribution, while they are exponentially generic in the word-based distribution. Quite surprisingly, a random finite presentation generically presents the trivial group in this new distribution, while in the classical one it is known to generically present an infinite hyperbolic group.
Abstract:
1. Species distribution modelling is used increasingly in both applied and theoretical research to predict how species are distributed and to understand attributes of species' environmental requirements. In species distribution modelling, various statistical methods are used that combine species occurrence data with environmental spatial data layers to predict the suitability of any site for that species. While the number of data sharing initiatives involving species' occurrences in the scientific community has increased dramatically over the past few years, various data quality and methodological concerns related to using these data for species distribution modelling have not been addressed adequately. 2. We evaluated how uncertainty in georeferences and associated locational error in occurrences influence species distribution modelling using two treatments: (1) a control treatment where models were calibrated with original, accurate data and (2) an error treatment where data were first degraded spatially to simulate locational error. To introduce this error, we displaced each coordinate by a random amount drawn from a normal distribution with a mean of zero and a standard deviation of 5 km. We evaluated the influence of error on the performance of 10 commonly used distributional modelling techniques applied to 40 species in four distinct geographical regions. 3. Locational error in occurrences reduced model performance in three of these regions; relatively accurate predictions of species distributions were possible for most species, even with degraded occurrences. Two species distribution modelling techniques, boosted regression trees and maximum entropy, were the best performing models in the face of locational errors. The results obtained with boosted regression trees were only slightly degraded by errors in location, and the results obtained with the maximum entropy approach were not affected by such errors. 4. Synthesis and applications.
To use the vast array of occurrence data that exists currently for research and management relating to the geographical ranges of species, modellers need to know the influence of locational error on model quality and whether some modelling techniques are particularly robust to error. We show that certain modelling techniques are particularly robust to a moderate level of locational error and that useful predictions of species distributions can be made even when occurrence data include some error.
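The error treatment — displacing each coordinate by Gaussian noise with mean zero and SD 5 km — can be sketched as follows, assuming occurrence positions already projected to km (working directly in degrees would require a latitude-dependent conversion):

```python
import numpy as np

def degrade_coordinates(coords_km, sd_km=5.0, rng=None):
    # coords_km: (n, 2) array of projected x/y occurrence positions in km
    # returns a copy with independent N(0, sd_km) noise added to each coordinate
    rng = rng or np.random.default_rng()
    return coords_km + rng.normal(0.0, sd_km, size=coords_km.shape)
```

Models are then calibrated once on the original coordinates and once on the degraded copy, and their predictive performance compared.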
Abstract:
BACKGROUND AND PURPOSE: To assess whether the combined analysis of all phase III trials of non-vitamin-K-antagonist (non-VKA) oral anticoagulants in patients with atrial fibrillation and previous stroke or transient ischemic attack shows a significant difference in efficacy or safety compared with warfarin. METHODS: We searched PubMed until May 31, 2012, for randomized clinical trials using the following search items: atrial fibrillation, anticoagulation, warfarin, and previous stroke or transient ischemic attack. Studies had to be phase III trials in atrial fibrillation patients comparing warfarin with a non-VKA currently on the market or with the intention to be brought to the market in North America or Europe. Analysis was performed on an intention-to-treat basis. A fixed-effects model was used as more appropriate than a random-effects model when combining a small number of studies. RESULTS: Among 47 potentially eligible articles, 3 were included in the meta-analysis. In 14 527 patients, non-VKAs were associated with a significant reduction of stroke/systemic embolism (odds ratio, 0.85 [95% CI, 0.74-0.99]; relative risk reduction, 14%; absolute risk reduction, 0.7%; number needed to treat, 134 over 1.8-2.0 years) compared with warfarin. Non-VKAs were also associated with a significant reduction of major bleeding compared with warfarin (odds ratio, 0.86 [95% CI, 0.75-0.99]; relative risk reduction, 13%; absolute risk reduction, 0.8%; number needed to treat, 125), mainly driven by the significant reduction of hemorrhagic stroke (odds ratio, 0.44 [95% CI, 0.32-0.62]; relative risk reduction, 57.9%; absolute risk reduction, 0.7%; number needed to treat, 139).
CONCLUSIONS: In the context of the significant limitations of combining the results of disparate trials of different agents, non-VKAs seem to be associated with a significant reduction in rates of stroke or systemic embolism, hemorrhagic stroke, and major bleeding when compared with warfarin in patients with previous stroke or transient ischemic attack.
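A fixed-effects combination of odds ratios is typically an inverse-variance weighted average on the log scale; a minimal sketch (the inputs below are illustrative, not the trials' data, which are not reproduced in the abstract):

```python
import math

def fixed_effect_pooled_or(odds_ratios, std_errors):
    # inverse-variance weighted mean of log odds ratios, back-transformed;
    # std_errors are the standard errors of the log odds ratios
    weights = [1.0 / se ** 2 for se in std_errors]
    log_ors = [math.log(o) for o in odds_ratios]
    pooled = sum(w * l for w, l in zip(weights, log_ors)) / sum(weights)
    return math.exp(pooled)
```

Larger trials (smaller standard errors) dominate the pooled estimate, which is why a fixed-effects model is a common choice when only a handful of studies are combined.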
Abstract:
BACKGROUND: Standard indicators of quality of care have been developed in the United States. Limited information exists about quality of care in countries with universal health care coverage. OBJECTIVE: To assess the quality of preventive care and care for cardiovascular risk factors in a country with universal health care coverage. DESIGN AND PARTICIPANTS: Retrospective cohort of a random sample of 1,002 patients aged 50-80 years followed for 2 years from all Swiss university primary care settings. MAIN MEASURES: We used indicators derived from RAND's Quality Assessment Tools. Each indicator was scored by dividing the number of episodes when recommended care was delivered by the number of times patients were eligible for indicators. Aggregate scores were calculated by taking into account the number of eligible patients for each indicator. KEY RESULTS: Overall, patients (44% women) received 69% of recommended preventive care, but rates differed by indicators. Indicators assessing annual blood pressure and weight measurements (both 95%) were more likely to be met than indicators assessing smoking cessation counseling (72%), breast (40%) and colon cancer screening (35%; all p < 0.001 for comparisons with blood pressure and weight measurements). Eighty-three percent of patients received the recommended care for cardiovascular risk factors, including > 75% for hypertension, dyslipidemia and diabetes. However, foot examination was performed only in 50% of patients with diabetes. Prevention indicators were more likely to be met in men (72.2% vs 65.3% in women, p < 0.001) and patients < 65 years (70.1% vs 68.0% in those ≥65 years, p = 0.047). CONCLUSIONS: Using standardized tools, these adults received 69% of recommended preventive care and 83% of care for cardiovascular risk factors in Switzerland, a country with universal coverage. Prevention indicator rates were lower for women and the elderly, and for cancer screening.
Our study helps pave the way for targeted quality improvement initiatives and broader assessment of health care in Continental Europe.
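The scoring rule described — episodes of recommended care delivered divided by eligible episodes, with aggregate scores weighted by each indicator's number of eligible patients — can be sketched as follows (the numbers in the test are illustrative, not the study's):

```python
def indicator_score(delivered, eligible):
    # proportion of eligible episodes in which recommended care was delivered
    return delivered / eligible

def aggregate_score(indicators):
    # indicators: list of (delivered, eligible) pairs; weighting each
    # indicator by its eligible count is equivalent to pooling all episodes
    total_delivered = sum(d for d, _ in indicators)
    total_eligible = sum(e for _, e in indicators)
    return total_delivered / total_eligible
```

Weighting by eligibility prevents rarely applicable indicators from carrying the same weight as indicators assessed in nearly every patient.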
Abstract:
Development of Schistosoma mansoni in the intermediate host Biomphalaria glabrata is influenced by a number of parasite and snail genes. Understanding the genetics involved in this complex host/parasite relationship may lead to an often discussed approach of introducing resistant B. glabrata into the field as a means of biological control for the parasite. For the snail, juvenile susceptibility to the parasite is controlled by at least four genes, whereas one gene seems to be responsible for adult nonsusceptibility. Obtaining DNA from F2 progeny snails from crosses between parasite-resistant and -susceptible snails, we have searched for molecular markers that show linkage to either the resistant or susceptible phenotype. Both restriction fragment length polymorphism (RFLP) and random amplified polymorphic DNA (RAPD) approaches have been used. To date, using a variety of snail and heterologous species probes, no RFLP marker has been found that segregates with either the resistant or susceptible phenotype in F2 progeny snails. More promising results, however, have been found with the RAPD approach, where a 1.3 kb marker appears in nearly all resistant progeny, and a 1.1 kb marker appears in all susceptible progeny.
Abstract:
I study large random assignment economies with a continuum of agents and a finite number of object types. I consider the existence of weak priorities discriminating among agents with respect to their rights concerning the final assignment. The respect for priorities ex ante (ex-ante stability) usually precludes ex-ante envy-freeness. Therefore I define a new concept of fairness, called no unjustified lower chances: priorities with respect to one object type cannot justify different achievable chances regarding another object type. This concept, which applies to the assignment mechanism rather than to the assignment itself, implies ex-ante envy-freeness among agents of the same priority type. I propose a variation of Hylland and Zeckhauser's (1979) pseudomarket that meets ex-ante stability, no unjustified lower chances and ex-ante efficiency among agents of the same priority type. Assuming enough richness in preferences and priorities, the converse is also true: any random assignment with these properties could be achieved through an equilibrium in a pseudomarket with priorities. If priorities are acyclical (the ordering of agents is the same for each object type), this pseudomarket achieves ex-ante efficient random assignments.
Abstract:
The greenhead ant Rhytidoponera metallica has long been recognized as posing a potential challenge to kin selection theory because it has large queenless colonies where apparently many of the morphological workers are mated and reproducing. However, this species has never been studied genetically, and important elements of its breeding system and kin structure remain uncertain. We used microsatellite markers to measure the relatedness among nestmates, unravel the fine-scale population genetic structure and infer the breeding system of R. metallica. The genetic relatedness among worker nestmates is very low but significantly greater than zero (r = 0.082 +/- 0.015), which demonstrates that nests contain many distantly related breeders. The inbreeding coefficient is very close to and not significantly different from zero, indicating random mating and a lack of microgeographic genetic differentiation. On average, closely located nests are not more similar genetically than distant nests, which is surprising as new colonies form by budding and female dispersal is restricted. Lack of inbreeding and absence of population viscosity indicate high gene flow mediated by males. Overall, the genetic pattern detected in R. metallica suggests that a high number of moderately related workers mate with unrelated males from distant nests. This breeding system results in the lowest relatedness among nestmates reported for social insect species where breeders and helpers are not morphologically differentiated.
Abstract:
This study aimed to obtain information on homeless people appearing before the courts and in custody in the Dublin Metropolitan area, and to track and determine how homeless persons progress through the court and prison systems. The overall objective was to provide information for the Probation and Welfare Service's processes of policy formation, service development and planning. Findings on the number of homeless offenders, their profile, their progression routes into the criminal justice system and prisoner reintegration are presented. Recommendations are made regarding sentencing policy, agency responsibility for ex-prisoners and appropriate issues for discussion by the Cross Departmental Committee on Homelessness. It is also recommended that drug-free units be available across all closed regime prison establishments. This resource was contributed by The National Documentation Centre on Drug Use.
Abstract:
Analyzing the relationship between the baseline value and subsequent change of a continuous variable is a frequent matter of inquiry in cohort studies. These analyses are surprisingly complex, particularly if only two waves of data are available. It is unclear to non-biostatisticians where the complexity of this analysis lies and which statistical method is adequate. With the help of simulated longitudinal data of body mass index in children, we review statistical methods for the analysis of the association between the baseline value and subsequent change, assuming linear growth with time. Key issues in such analyses are mathematical coupling, measurement error, variability of change between individuals, and regression to the mean. Ideally, one should rely on multiple repeated measurements at different times, and a linear random effects model is a standard approach if more than two waves of data are available. If only two waves of data are available, our simulations show that Blomqvist's method - which consists in adjusting the estimated regression coefficient of observed change on baseline value for the measurement error variance - provides accurate estimates. The adequacy of the methods to assess the relationship between the baseline value and subsequent change depends on the number of data waves, the availability of information on measurement error, and the variability of change between individuals.
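Blomqvist's adjustment can be illustrated with simulated two-wave data; every parameter value below is an assumption for the demo, not from the abstract. With lambda the measurement-error share of the observed baseline variance, the naive slope of change on baseline satisfies beta_naive ≈ beta*(1 - lambda) - lambda, so beta = (beta_naive + lambda) / (1 - lambda):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
beta_true = -0.3              # assumed true slope of change on baseline
sigma_t, sigma_e = 3.0, 1.0   # assumed true-BMI SD and measurement-error SD

t1 = rng.normal(20.0, sigma_t, n)                            # true baseline BMI
t2 = t1 + beta_true * (t1 - 20.0) + rng.normal(0.0, 0.5, n)  # true follow-up
x1 = t1 + rng.normal(0.0, sigma_e, n)                        # observed baseline
x2 = t2 + rng.normal(0.0, sigma_e, n)                        # observed follow-up

d = x2 - x1                                                  # observed change
beta_naive = np.cov(d, x1)[0, 1] / np.var(x1)                # biased estimate
lam = sigma_e ** 2 / np.var(x1)      # error share of observed baseline variance
beta_blomqvist = (beta_naive + lam) / (1 - lam)              # corrected slope
```

The naive estimate is biased because the baseline error e1 enters both the change (with a minus sign) and the baseline itself — the two-wave face of regression to the mean; the correction requires an external estimate of the measurement-error variance.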