17 results for exceedance probabilities
in National Center for Biotechnology Information - NCBI
Abstract:
Plasma levels of corticosterone are often used as a measure of “stress” in wild animal populations. However, we lack conclusive evidence that different stress levels reflect different survival probabilities between populations. Galápagos marine iguanas offer an ideal test case because island populations are affected differently by recurring El Niño famine events, and population-level survival can be quantified by counting iguanas locally. We surveyed corticosterone levels in six populations during the 1998 El Niño famine and the 1999 La Niña feast period. Iguanas had higher baseline and handling stress-induced corticosterone concentrations during famine than feast conditions. Corticosterone levels differed between islands and predicted survival through an El Niño period. However, among individuals, baseline corticosterone was only elevated when body condition dropped below a critical threshold. Thus, the population-level corticosterone response was variable but nevertheless predicted overall population health. Our results lend support to the use of corticosterone as a rapid quantitative predictor of survival in wild animal populations.
Abstract:
Li and Chakravarti [Li, C.C. & Chakravarti, A. (1994) Hum. Hered. 44, 100-109] compared the probability (MO) of a random match between the two DNA profiles of a pair of individuals drawn from a random-mating population to the probability (MF) of the match between a pair of random individuals drawn from a subdivided population. The level of heterogeneity in this subdivided population is measured by the parameter F, where there is no subdivision when F = 0 and increasing values of F indicate increasing subdivision. Li and Chakravarti concluded that it is conservative to use the match probability MO, which is derived under the assumption that the two individuals are drawn from a homogeneous random-mating population without subdivision. However, MO may not always be greater than MF, even for biologically reasonable values of F. Here we explore the mathematical conditions under which MO is less than MF, and we find that MO is not conservative mainly when one allele has a much higher frequency than all the others. When empirical data for both variable number of tandem repeat (VNTR) and short tandem repeat (STR) systems are evaluated, we find that in the majority of cases MO represents a conservative probability of a match, and so the subdivision of human populations may usually be ignored for a random match, although not, of course, for relatives. Loci for which MO is not conservative should be avoided for forensic inference.
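The comparison can be illustrated numerically. The sketch below is a simplified illustration, not the paper's full analysis: it uses the standard genotype frequencies under Wright's subdivision parameter F (homozygote AiAi at pi² + F·pi(1−pi), heterozygote AiAj at 2·pi·pj·(1−F)), and the allele frequencies and value of F are hypothetical placeholders chosen to include one dominant allele.

```python
def genotype_freqs(p, F=0.0):
    """Genotype frequencies under subdivision parameter F (Wright's F)."""
    n = len(p)
    homo = [p[i] ** 2 + F * p[i] * (1 - p[i]) for i in range(n)]
    het = [2 * p[i] * p[j] * (1 - F)
           for i in range(n) for j in range(i + 1, n)]
    return homo + het

def match_prob(geno_freqs):
    """Probability that two randomly drawn genotypes are identical."""
    return sum(g * g for g in geno_freqs)

p = [0.70, 0.15, 0.10, 0.05]       # hypothetical; one dominant allele
MO = match_prob(genotype_freqs(p, F=0.00))
MF = match_prob(genotype_freqs(p, F=0.03))
print(f"MO = {MO:.4f}, MF = {MF:.4f}, MO conservative: {MO >= MF}")
```

With this skewed frequency spectrum the code gives MO < MF, the non-conservative case described above; with flatter frequencies such as p = [0.3, 0.3, 0.2, 0.2] the same code gives MO > MF.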
Abstract:
Understanding the relationship between animal community dynamics and landscape structure has become a priority for biodiversity conservation. In particular, predicting the effects of habitat destruction that confines species to networks of small patches is an important prerequisite to developing conservation plans. Theoretical models exist that predict the occurrence of species in fragmented landscapes and the relationship between stability and diversity. However, reliable empirical investigations of the dynamics of biodiversity have been prevented by differences in species detection probabilities among landscapes. Using long-term data sampled at a large spatial scale in conjunction with a capture-recapture approach, we estimated parameters of community change over a 22-year period for forest breeding birds in selected areas of the eastern United States. We show that forest fragmentation was associated not only with a reduced number of forest bird species, but also with increased temporal variability in the number of species. This higher temporal variability was associated with higher local extinction and turnover rates. These results have major conservation implications. Moreover, the approach used provides a practical tool for studying the dynamics of biodiversity.
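The capture-recapture logic can be shown with a deliberately minimal sketch: a two-occasion Chapman estimator applied to species lists, so that species missed on both occasions are accounted for. The species lists are invented, and the paper's actual estimators of richness, extinction, and turnover are considerably more elaborate.

```python
def chapman_richness(list1, list2):
    """Two-occasion Chapman estimate of total species richness."""
    s1, s2 = set(list1), set(list2)
    n1, n2, m = len(s1), len(s2), len(s1 & s2)   # m = species seen both times
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

year_a = ["wood thrush", "ovenbird", "red-eyed vireo", "scarlet tanager"]
year_b = ["ovenbird", "red-eyed vireo", "hooded warbler"]
print(f"estimated richness: {chapman_richness(year_a, year_b):.1f}")
```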
Abstract:
Two variables define the topological state of closed double-stranded DNA: the knot type, K, and ΔLk, the linking number difference from relaxed DNA. The equilibrium distribution of probabilities of these states, P(ΔLk, K), is related to two conditional distributions, P(ΔLk|K), the distribution of ΔLk for a particular K, and P(K|ΔLk), and also to two simple distributions: P(ΔLk), the distribution of ΔLk irrespective of K, and P(K). We explored the relationships between these distributions. P(ΔLk, K), P(ΔLk), and P(K|ΔLk) were calculated from the simulated distributions of P(ΔLk|K) and P(K). The calculated distributions agreed with previous experimental and theoretical results and extended substantially beyond them. Our major focus was on P(K|ΔLk), the distribution of knot types for a particular value of ΔLk, which had not been evaluated previously. We found that beyond small values of ΔLk, unknotted circular DNA is not the most probable state. Highly chiral knotted DNA has a lower free energy because it has less torsional deformation. Surprisingly, even at |ΔLk| > 12, only one or two knot types dominate the P(K|ΔLk) distribution, despite the huge number of knots of comparable complexity. A large fraction of the knots found belong to the small family of torus knots. The relationship between supercoiling and knotting in vivo is discussed.
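The relationship among these distributions is just the product rule, so P(K|ΔLk) follows from the two simulated inputs by Bayes' rule. A sketch with placeholder numbers (not the paper's simulated values):

```python
# Placeholder inputs: P(K) and P(dLk | K) at one fixed dLk value.
P_K = {"unknot": 0.95, "trefoil": 0.04, "torus 5_1": 0.01}
P_dLk_given_K = {"unknot": 1e-6, "trefoil": 5e-5, "torus 5_1": 4e-4}

joint = {k: P_dLk_given_K[k] * P_K[k] for k in P_K}       # P(dLk, K)
P_dLk = sum(joint.values())                               # P(dLk)
P_K_given_dLk = {k: v / P_dLk for k, v in joint.items()}  # P(K | dLk)

for k, pr in P_K_given_dLk.items():
    print(f"P({k} | dLk) = {pr:.3f}")
```

Even with the unknot dominating P(K), knotted states dominate the posterior here because their conditional likelihoods at large |ΔLk| are so much higher, which is the qualitative effect the abstract describes.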
Abstract:
Structural genomics aims to solve a large number of protein structures that represent the protein space. Because an exhaustive solution for all structures currently seems prohibitively expensive, the challenge is to define a relatively small set of proteins with new, currently unknown folds. This paper presents a method that assigns each protein a probability of having an unsolved fold. The method makes extensive use of protomap, a sequence-based classification, and scop, a structure-based classification. According to protomap, the protein space encodes the relationship among proteins as a graph whose vertices correspond to 13,354 clusters of proteins. A representative fold for a cluster with at least one solved protein is determined after superposition of all scop (release 1.37) folds onto protomap clusters. Distances within the protomap graph are computed from each representative fold to the neighboring folds. The distribution of these distances is used to create a statistical model for distances among folds that are already known and those that have yet to be discovered. The distributions of distances for solved and unsolved proteins are significantly different. This difference makes it possible to use Bayes' rule to derive a statistical estimate that any protein has a yet undetermined fold. Proteins with the highest probability of representing a new fold constitute the target list for structural determination. Our predicted probabilities for unsolved proteins correlate very well with the proportion of new folds among recently solved structures (new scop 1.39 records) that are disjoint from our original training set.
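The Bayes'-rule step can be sketched as follows. The two distance densities and the prior here are hypothetical Gaussians standing in for the empirical distance distributions the paper builds from protomap and scop:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def p_new_fold(d, prior_new=0.3):
    """Posterior that a cluster at graph distance d has an unsolved fold."""
    like_solved = normal_pdf(d, mu=2.0, sigma=1.0)  # placeholder density
    like_new = normal_pdf(d, mu=5.0, sigma=1.5)     # placeholder density
    num = like_new * prior_new
    return num / (num + like_solved * (1.0 - prior_new))

for d in (1.0, 3.0, 6.0):
    print(f"distance {d}: P(new fold) = {p_new_fold(d):.2f}")
```

Clusters far from every representative fold receive high posteriors and would head the target list.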
Abstract:
Recent studies have demonstrated the importance of recipient HLA-DRB1 allele disparity in the development of acute graft-versus-host disease (GVHD) after unrelated donor marrow transplantation. The role of HLA-DQB1 allele disparity in this clinical setting is unknown. To elucidate the biological importance of HLA-DQB1, we conducted a retrospective analysis of 449 HLA-A, -B, and -DR serologically matched unrelated donor transplants. Molecular typing of HLA-DRB1 and HLA-DQB1 alleles revealed 335 DRB1 and DQB1 matched pairs; 41 DRB1 matched and DQB1 mismatched pairs; 48 DRB1 mismatched and DQB1 matched pairs; and 25 DRB1 and DQB1 mismatched pairs. The conditional probabilities of grades III-IV acute GVHD were 0.42, 0.61, 0.55, and 0.71, respectively. The relative risk of acute GVHD associated with a single locus HLA-DQB1 mismatch was 1.8 (1.1, 2.7; P = 0.01), and the risk associated with any HLA-DQB1 and/or HLA-DRB1 mismatch was 1.6 (1.2, 2.2; P = 0.003). These results provide evidence that HLA-DQ is a transplant antigen and suggest that evaluation of both HLA-DQB1 and HLA-DRB1 is necessary in selecting potential donors.
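As a rough illustration of the reported statistics, the sketch below computes a crude relative risk with a log-RR normal-approximation confidence interval, using event counts back-calculated from the conditional probabilities above (0.61 × 41 ≈ 25 versus 0.42 × 335 ≈ 141); the crude value differs from the paper's estimate of 1.8, which presumably comes from a fuller model.

```python
from math import exp, log, sqrt

def relative_risk(a, n1, b, n2, z=1.96):
    """Crude RR of event in exposed (a/n1) vs. unexposed (b/n2), 95% CI."""
    rr = (a / n1) / (b / n2)
    se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)   # SE of log(RR)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Approximate, back-calculated counts; not the paper's raw data.
rr, lo, hi = relative_risk(a=25, n1=41, b=141, n2=335)
print(f"crude RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```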
Abstract:
Plasma processing is a standard industrial method for the modification of material surfaces and the deposition of thin films. Polyatomic ions and neutrals larger than a triatomic play a critical role in plasma-induced surface chemistry, especially in the deposition of polymeric films from fluorocarbon plasmas. In this paper, low-energy CF3+ and C3F5+ ions are used to modify a polystyrene surface. Experimental and computational studies are combined to quantify the effect of the unique chemistry and structure of the incident ions on the outcome of ion-polymer collisions. C3F5+ ions are more effective at growing films than CF3+, both at a similar energy per atom of ≈6 eV/atom and at similar total kinetic energies of 25 and 50 eV. The composition of the films grown experimentally also varies with both the structure and the kinetic energy of the incident ion. Both C3F5+ and CF3+ should be thought of as covalently bound polyatomic precursors or fragments that can react and become incorporated within the polystyrene surface, rather than as merely donating F atoms. The size and structure of the ions affect polymer film formation via differing chemical structure, reactivity, sticking probabilities, and energy transfer to the surface. The different reactivity of these two ions with the polymer surface supports the argument that larger species contribute to the deposition of polymeric films from fluorocarbon plasmas. These results indicate that complete understanding and accurate computer modeling of plasma–surface modification require accurate measurement of the identities, number densities, and kinetic energies of higher-mass ions and energetic neutrals.
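The energy-per-atom comparison is simple arithmetic: CF3+ carries 4 atoms and C3F5+ carries 8, so 25 eV and 50 eV beams deposit the same energy per atom.

```python
# Atom counts: CF3+ = 1 C + 3 F = 4 atoms; C3F5+ = 3 C + 5 F = 8 atoms.
ions = {"CF3+": (4, 25.0), "C3F5+": (8, 50.0)}  # (atoms, total kinetic eV)
for ion, (atoms, total_ev) in ions.items():
    print(f"{ion}: {total_ev / atoms:.2f} eV/atom")
```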
Abstract:
The availability of complete genome sequences and mRNA expression data for all genes creates new opportunities and challenges for identifying DNA sequence motifs that control gene expression. An algorithm, “MobyDick,” is presented that decomposes a set of DNA sequences into the most probable dictionary of motifs or words. This method is applicable to any set of DNA sequences: for example, all upstream regions in a genome or all genes expressed under certain conditions. Identification of words is based on a probabilistic segmentation model in which the significance of longer words is deduced from the frequency of shorter ones of various lengths, eliminating the need for a separate set of reference data to define probabilities. We have built a dictionary with 1,200 words for the 6,000 upstream regulatory regions in the yeast genome; the 500 most significant words (some with as few as 10 copies in all of the upstream regions) match 114 of 443 experimentally determined sites (a significance level of 18 standard deviations). When analyzing all of the genes up-regulated during sporulation as a group, we find many motifs in addition to the few previously identified by analyzing the expression subclusters individually. Applying MobyDick to the genes derepressed when the general repressor Tup1 is deleted, we find known as well as putative binding sites for its regulatory partners.
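The core idea, that a longer word is significant when it occurs more often than its shorter sub-words predict, can be caricatured in a few lines. This maximal-order Markov estimate and the counts below are illustrative simplifications, not MobyDick's actual dictionary model:

```python
from math import sqrt

def expected_count(word, counts):
    """Markov estimate of a word's count from its two longest sub-words."""
    prefix, suffix, overlap = word[:-1], word[1:], word[1:-1]
    return counts[prefix] * counts[suffix] / counts[overlap]

# Hypothetical substring counts from a set of upstream sequences.
counts = {"TGACT": 60, "GACTC": 55, "GACT": 400}
observed = 40
expected = expected_count("TGACTC", counts)
z = (observed - expected) / sqrt(expected)      # Poisson-like z-score
print(f"expected {expected:.1f}, observed {observed}, z = {z:.1f}")
```

A large z-score marks the longer word as a candidate dictionary entry rather than a chance concatenation of shorter ones.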
Abstract:
Single-stranded regions in RNA secondary structure are important for RNA–RNA and RNA–protein interactions. We present a probability profile approach for the prediction of these regions based on a statistical algorithm for sampling RNA secondary structures. For the prediction of phylogenetically determined single-stranded regions in secondary structures of representative RNA sequences, the probability profile offers substantial improvement over the minimum free energy structure. In designing antisense oligonucleotides, a practical problem is how to select a secondary structure for the target mRNA from the optimal structure(s) and the many suboptimal structures with similar free energies. By summarizing the information from a statistical sample of probable secondary structures in a single plot, the probability profile not only presents a solution to this dilemma, but also reveals ‘well-determined’ single-stranded regions through the assignment of probabilities as measures of confidence in predictions. In an antisense application to the rabbit β-globin mRNA, a significant correlation between the hybridization potential predicted by the probability profile and the degree of inhibition of in vitro translation suggests that the probability profile approach is valuable for the identification of effective antisense target sites. Coupling computational design with the DNA–RNA array technique provides a rational, efficient framework for antisense oligonucleotide screening. This framework has the potential for high-throughput applications in functional genomics and drug target validation.
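A probability profile itself is simple to compute once a statistical sample of structures is in hand: for each base, take the fraction of sampled structures in which it is unpaired. A minimal sketch with invented structures (in practice the sample would come from a partition-function-based sampler):

```python
def probability_profile(length, sampled_paired_sets):
    """Fraction of sampled structures in which each base is unpaired."""
    n = len(sampled_paired_sets)
    return [sum(1 for paired in sampled_paired_sets if i not in paired) / n
            for i in range(length)]

# Invented sample: each set holds the paired positions of one structure.
samples = [{0, 1, 5, 6}, {0, 1, 2, 4, 5, 6}, {1, 5}]
print(probability_profile(7, samples))  # values near 1 = likely single-stranded
```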
Abstract:
Transformed-rule up and down psychophysical methods have gained great popularity, mainly because they combine criterion-free responses with an adaptive procedure allowing rapid determination of an average stimulus threshold at various criterion levels of correct responses. The statistical theory underlying the methods now in routine use is based on sets of consecutive responses with assumed constant probabilities of occurrence. Response rules requiring consecutive responses preclude the most desirable response criterion, that of 75% correct responses. The earliest transformed-rule up and down method, whose rules included nonconsecutive responses, did not have this limitation but failed to become generally accepted, lacking a published theoretical foundation. Such a foundation is provided in this article and is validated empirically with the help of experiments on human subjects and a computer simulation. In addition to allowing the criterion of 75% correct responses, the method is more efficient than the methods that exclude nonconsecutive responses from their rules.
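The behavior of a consecutive-response rule is easy to simulate. The sketch below runs a 2-down/1-up staircase against a toy logistic psychometric function (the function, starting level, and step size are arbitrary choices, not from the paper); such a rule converges near the level giving p = sqrt(0.5) ≈ 70.7% correct, which is why rules admitting nonconsecutive responses are needed to target 75%.

```python
import math
import random

def p_correct(level, threshold=0.0, slope=1.0):
    """Toy logistic psychometric function (guessing floor omitted)."""
    return 1.0 / (1.0 + math.exp(-(level - threshold) / slope))

def staircase_2down_1up(trials=2000, step=0.1):
    level, run, last_dir, reversals = 2.0, 0, 0, []
    for _ in range(trials):
        if random.random() < p_correct(level):
            run += 1
            if run < 2:
                continue
            level, run, d = level - step, 0, -1  # 2 consecutive correct: down
        else:
            level, run, d = level + step, 0, +1  # any error: up
        if last_dir and d != last_dir:
            reversals.append(level)              # direction change = reversal
        last_dir = d
    return sum(reversals[-10:]) / 10             # mean of last 10 reversals

random.seed(1)
print(f"staircase converges near level {staircase_2down_1up():.2f}")
```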
Abstract:
We have studied the HA1 domain of 254 human influenza A(H3N2) virus genes for clues that might help identify characteristics of hemagglutinins (HAs) of circulating strains that are predictive of a strain's epidemic potential. Our preliminary findings include the following. (i) The most parsimonious tree found requires 1,260 substitutions, of which 712 are silent and 548 are replacement substitutions. (ii) The HA1 portion of the HA gene is evolving at a rate of 5.7 nucleotide substitutions/year, or 5.7 × 10⁻³ substitutions/site per year. (iii) The replacement substitutions are distributed randomly across the three positions of the codon when allowance is made for the number of ways each codon can change the encoded amino acid. (iv) The replacement substitutions are not distributed randomly over the branches of the tree, there being 2.2 times more changes per tip branch than per non-tip branch. This result is independent of how the virus was amplified (egg-grown or kidney-cell-grown) prior to sequencing, or whether sequencing was carried out directly on the original clinical specimen by PCR. (v) These excess changes on the tip branches are probably the result of a bias in the choice of strains to sequence and the detection of deleterious mutations that had not yet been removed by negative selection. (vi) There are six hypervariable codons accumulating replacement substitutions at an average rate 7.2 times that of the other varied codons. (vii) The number of variable codons in the trunk branches (the winners of the competitive race against the immune system) is 47 ± 5, significantly fewer than in the twigs (90 ± 7), which in turn have significantly fewer variable codons than the tip branches (175 ± 8). (viii) A minimum of one of every 12 branches has nodes at opposite ends representing viruses that reside on different continents. This is, however, no more than would be expected if one were to randomly reassign the continent of origin of the isolates. (ix) Of 99 codons with at least four mutations, 31 have ratios of non-silent to silent changes with probabilities less than 0.05 of occurring by chance, and 14 of those have probabilities < 0.005. These observations strongly support positive Darwinian selection. We suggest that the small number of variable positions along the successful trunk lineage, together with knowledge of the codons that have shown positive selection, may provide clues that permit improved prediction of which strains will cause epidemics and therefore should be used for vaccine production.
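The per-codon test in point (ix) can be sketched as a binomial tail probability: given n mutations at a codon and an expected replacement fraction under no selection, how likely is it to see at least k replacement changes? The expected fraction below is a placeholder; in the actual analysis it depends on the codon's structure.

```python
from math import comb

def p_excess_replacement(n, k, p_replacement=0.5):
    """P(>= k replacement changes out of n mutations) under a null model."""
    return sum(comb(n, j) * p_replacement**j * (1 - p_replacement)**(n - j)
               for j in range(k, n + 1))

# All 6 observed changes non-silent under a placeholder null of 0.5:
print(f"{p_excess_replacement(n=6, k=6):.4f}")  # 0.0156 < 0.05
```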
Abstract:
Earthquake prediction research has searched both for informational phenomena, those that provide information about earthquake hazards useful to the public, and for causal phenomena, those causally related to the physical processes governing failure on a fault, which improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors into informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be obtained by assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than merely that it is causally related to the earthquake.
Abstract:
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"; that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any such test must be based on the ratio of earthquakes inside and outside the forecast regions.
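Test (i) is straightforward when the fully specified hypothesis predicts a Poisson-distributed count; the assumption of a Poisson count and the numbers below are illustrative.

```python
from math import exp, factorial

def poisson_cdf(k, lam):
    """P(N <= k) for a Poisson(lam) count."""
    return sum(lam**j * exp(-lam) / factorial(j) for j in range(k + 1))

predicted, observed = 10.0, 17          # forecast rate vs. actual count
p_high = 1.0 - poisson_cdf(observed - 1, predicted)
print(f"P(N >= {observed} | rate {predicted}) = {p_high:.3f}")
# A small tail probability means the count alone is inconsistent
# with the hypothesis, before any likelihood-based comparison.
```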
Abstract:
The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena, including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip-rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, the scaling of Dc is presently an open question, and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from the sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. In the first mechanism, foreshocks are a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes, including the eventual mainshock. In the second, accelerating fault slip on the mainshock nucleation zone triggers the foreshocks.
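A commonly quoted form of the rate-and-state seismicity response to a sudden stress step (after Dieterich, 1994) makes the Omori-like behavior concrete: the rate jumps by exp(Δτ/aσ) and relaxes back to the background rate r over the aftershock duration ta. This sketch is a standard paraphrase of that result with illustrative parameter values, not the paper's own calculation.

```python
from math import exp

def seismicity_rate(t, r=1.0, dtau=2.0, a_sigma=1.0, t_a=100.0):
    """Rate at time t after a stress step dtau (background rate r)."""
    return r / ((exp(-dtau / a_sigma) - 1.0) * exp(-t / t_a) + 1.0)

for t in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"t = {t:7.1f}   rate = {seismicity_rate(t):7.3f}")
# Rate jumps by exp(dtau/a_sigma) at t = 0 and decays, Omori-like, to r.
```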
Abstract:
We have used a solution-based DNA cyclization assay and a gel-phasing method to show that, contrary to previous reports [Kerppola, T. K. & Curran, T. (1991) Cell 66, 317-326], the basic region leucine zipper proteins Fos and Jun do not significantly bend their AP-1 recognition site. We made two sets of DNA constructs containing the 7-bp 5'-TGACTCA-3' AP-1 binding site, from either the yeast or the human collagenase gene, well separated from and phased by 3-4 helical turns against an A tract-directed bend. The cyclization probabilities of DNAs with altered phasings are not significantly affected by Fos-Jun binding. Similarly, Fos-Jun and Jun-Jun bound to differently phased DNA constructs show insignificant variations in gel mobilities. Both methods independently indicate that Fos and Jun bend their AP-1 target site by <5 degrees, an observation with important implications for understanding their mechanism of transcriptional regulation.