951 results for Probabilities


Relevance: 10.00%

Abstract:

Recent studies have demonstrated the importance of recipient HLA-DRB1 allele disparity in the development of acute graft-versus-host disease (GVHD) after unrelated donor marrow transplantation. The role of HLA-DQB1 allele disparity in this clinical setting is unknown. To elucidate the biological importance of HLA-DQB1, we conducted a retrospective analysis of 449 HLA-A, -B, and -DR serologically matched unrelated donor transplants. Molecular typing of HLA-DRB1 and HLA-DQB1 alleles revealed 335 DRB1 and DQB1 matched pairs; 41 DRB1 matched and DQB1 mismatched pairs; 48 DRB1 mismatched and DQB1 matched pairs; and 25 DRB1 and DQB1 mismatched pairs. The conditional probabilities of grades III-IV acute GVHD were 0.42, 0.61, 0.55, and 0.71, respectively. The relative risk of acute GVHD associated with a single locus HLA-DQB1 mismatch was 1.8 (1.1, 2.7; P = 0.01), and the risk associated with any HLA-DQB1 and/or HLA-DRB1 mismatch was 1.6 (1.2, 2.2; P = 0.003). These results provide evidence that HLA-DQ is a transplant antigen and suggest that evaluation of both HLA-DQB1 and HLA-DRB1 is necessary in selecting potential donors.
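The crude group risks reported above can be turned into a relative risk directly; a minimal sketch (note that the published 1.8 is a regression-adjusted estimate, so the crude ratio below comes out lower):

```python
def relative_risk(p_exposed, p_unexposed):
    """Ratio of the risk in the exposed group to that in the unexposed group."""
    return p_exposed / p_unexposed

# Conditional probabilities of grades III-IV acute GVHD reported above:
p_matched = 0.42          # DRB1 and DQB1 matched
p_dqb1_mismatched = 0.61  # DRB1 matched, DQB1 mismatched

crude_rr = relative_risk(p_dqb1_mismatched, p_matched)
print(f"crude relative risk: {crude_rr:.2f}")  # ~1.45, vs the adjusted 1.8
```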

Relevance: 10.00%

Abstract:

Plasma processing is a standard industrial method for the modification of material surfaces and the deposition of thin films. Polyatomic ions and neutrals larger than a triatomic play a critical role in plasma-induced surface chemistry, especially in the deposition of polymeric films from fluorocarbon plasmas. In this paper, low energy CF3+ and C3F5+ ions are used to modify a polystyrene surface. Experimental and computational studies are combined to quantify the effect of the unique chemistry and structure of the incident ions on the result of ion-polymer collisions. C3F5+ ions are more effective at growing films than CF3+, both at a similar energy per atom of ≈6 eV and at similar total kinetic energies of 25 and 50 eV. The composition of the films grown experimentally also varies with both the structure and kinetic energy of the incident ion. Both C3F5+ and CF3+ should be thought of as covalently bound polyatomic precursors or fragments that can react and become incorporated within the polystyrene surface, rather than merely donating F atoms. The size and structure of the ions affect polymer film formation via differing chemical structure, reactivity, sticking probabilities, and energy transfer to the surface. The different reactivity of these two ions with the polymer surface supports the argument that larger species contribute to the deposition of polymeric films from fluorocarbon plasmas. These results indicate that complete understanding and accurate computer modeling of plasma–surface modification require accurate measurement of the identities, number densities, and kinetic energies of higher mass ions and energetic neutrals.
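A back-of-envelope check of the energy scales quoted above (the atom counts are simply C plus F atoms; this is arithmetic on the stated conditions, not a model of the collision):

```python
# Atom counts of the incident ions (C plus F atoms; the charge does not
# change the count):
atoms = {"CF3+": 1 + 3, "C3F5+": 3 + 5}
ev_per_atom = 6.0  # the ~6 eV/atom condition quoted above

for ion, n in atoms.items():
    print(f"{ion}: {n} atoms x {ev_per_atom:.0f} eV/atom = {n * ev_per_atom:.0f} eV")
```

This gives 24 and 48 eV, consistent with the quoted total kinetic energies of 25 and 50 eV.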

Relevance: 10.00%

Abstract:

The availability of complete genome sequences and mRNA expression data for all genes creates new opportunities and challenges for identifying DNA sequence motifs that control gene expression. An algorithm, “MobyDick,” is presented that decomposes a set of DNA sequences into the most probable dictionary of motifs or words. This method is applicable to any set of DNA sequences: for example, all upstream regions in a genome or all genes expressed under certain conditions. Identification of words is based on a probabilistic segmentation model in which the significance of longer words is deduced from the frequency of shorter ones of various lengths, eliminating the need for a separate set of reference data to define probabilities. We have built a dictionary with 1,200 words for the 6,000 upstream regulatory regions in the yeast genome; the 500 most significant words (some with as few as 10 copies in all of the upstream regions) match 114 of 443 experimentally determined sites (a significance level of 18 standard deviations). When analyzing all of the genes up-regulated during sporulation as a group, we find many motifs in addition to the few previously identified by analyzing the expression subclusters individually. Applying MobyDick to the genes derepressed when the general repressor Tup1 is deleted, we find known as well as putative binding sites for its regulatory partners.
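The idea of judging a longer word against the frequencies of its shorter constituents can be illustrated with the simplest possible null model, independent letters (MobyDick's actual segmentation model is richer than this sketch, and the sequence length here is a toy value):

```python
def expected_count(word, letter_freq, total_positions):
    """Expected occurrences of `word` if letters were drawn independently
    with the observed single-letter frequencies (simplest null model)."""
    p = 1.0
    for ch in word:
        p *= letter_freq[ch]
    return p * total_positions

# Toy example: a 1 Mb sequence with uniform base composition.
letter_freq = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}
n = expected_count("TGACTCA", letter_freq, 1_000_000)
print(f"expected copies of TGACTCA: {n:.1f}")
```

A word observed far more often than this expectation is a candidate motif; MobyDick generalizes the null to concatenations of shorter dictionary words rather than single letters.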

Relevance: 10.00%

Abstract:

Single-stranded regions in RNA secondary structure are important for RNA–RNA and RNA–protein interactions. We present a probability profile approach for the prediction of these regions based on a statistical algorithm for sampling RNA secondary structures. For the prediction of phylogenetically-determined single-stranded regions in secondary structures of representative RNA sequences, the probability profile offers substantial improvement over the minimum free energy structure. In designing antisense oligonucleotides, a practical problem is how to select a secondary structure for the target mRNA from the optimal structure(s) and many suboptimal structures with similar free energies. By summarizing the information from a statistical sample of probable secondary structures in a single plot, the probability profile not only presents a solution to this dilemma, but also reveals ‘well-determined’ single-stranded regions through the assignment of probabilities as measures of confidence in predictions. In antisense application to the rabbit β-globin mRNA, a significant correlation between hybridization potential predicted by the probability profile and the degree of inhibition of in vitro translation suggests that the probability profile approach is valuable for the identification of effective antisense target sites. Coupling computational design with DNA–RNA array technique provides a rational, efficient framework for antisense oligonucleotide screening. This framework has the potential for high-throughput applications to functional genomics and drug target validation.
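A probability profile of the kind described can be sketched as follows, assuming each sampled structure is represented by the set of paired base positions (a toy representation, not the authors' data structure):

```python
def probability_profile(sampled_structures, seq_len):
    """Fraction of sampled structures in which each base is unpaired."""
    counts = [0] * seq_len
    for paired in sampled_structures:  # `paired` = set of paired indices
        for i in range(seq_len):
            if i not in paired:
                counts[i] += 1
    n_samples = len(sampled_structures)
    return [c / n_samples for c in counts]

# Toy sample of 4 structures for a 5-nt sequence (0-based indices):
sample = [{0, 1}, {0, 1}, {0}, set()]
profile = probability_profile(sample, 5)
print(profile)
```

Bases with unpaired probability near 1 across the sample are the 'well-determined' single-stranded regions, the preferred antisense target sites.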

Relevance: 10.00%

Abstract:

Transformed-rule up and down psychophysical methods have gained great popularity, mainly because they combine criterion-free responses with an adaptive procedure allowing rapid determination of an average stimulus threshold at various criterion levels of correct responses. The statistical theory underlying the methods now in routine use is based on sets of consecutive responses with assumed constant probabilities of occurrence. The response rules requiring consecutive responses prevent the possibility of using the most desirable response criterion, that of 75% correct responses. The earliest transformed-rule up and down method, whose rules included nonconsecutive responses, did not contain this limitation but failed to become generally accepted, lacking a published theoretical foundation. Such a foundation is provided in this article and is validated empirically with the help of experiments on human subjects and a computer simulation. In addition to allowing the criterion of 75% correct responses, the method is more efficient than the methods excluding nonconsecutive responses in their rules.
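The limitation described, that rules requiring consecutive correct responses cannot target 75%, follows from the convergence condition of an n-down/1-up staircase, sketched here:

```python
# An n-down/1-up rule with *consecutive* correct responses converges where
# the probability of n consecutive correct responses equals 1/2: p**n = 0.5.
def consecutive_rule_target(n_down):
    return 0.5 ** (1.0 / n_down)

for n in (1, 2, 3):
    print(f"{n}-down/1-up converges at {100 * consecutive_rule_target(n):.1f}% correct")
```

The targets come out at 50%, 70.7%, and 79.4%: none of the consecutive rules lands on 75%, which is why rules admitting nonconsecutive responses are needed to reach that criterion.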

Relevance: 10.00%

Abstract:

We have studied the HA1 domain of 254 human influenza A(H3N2) virus genes for clues that might help identify characteristics of hemagglutinins (HAs) of circulating strains that are predictive of that strain’s epidemic potential. Our preliminary findings include the following. (i) The most parsimonious tree found requires 1,260 substitutions of which 712 are silent and 548 are replacement substitutions. (ii) The HA1 portion of the HA gene is evolving at a rate of 5.7 nucleotide substitutions/year or 5.7 × 10⁻³ substitutions/site per year. (iii) The replacement substitutions are distributed randomly across the three positions of the codon when allowance is made for the number of ways each codon can change the encoded amino acid. (iv) The replacement substitutions are not distributed randomly over the branches of the tree, there being 2.2 times more changes per tip branch than for non-tip branches. This result is independent of how the virus was amplified (egg grown or kidney cell grown) prior to sequencing or if sequencing was carried out directly on the original clinical specimen by PCR. (v) These excess changes on the tip branches are probably the result of a bias in the choice of strains to sequence and the detection of deleterious mutations that had not yet been removed by negative selection. (vi) There are six hypervariable codons accumulating replacement substitutions at an average rate that is 7.2 times that of the other varied codons. (vii) The number of variable codons in the trunk branches (the winners of the competitive race against the immune system) is 47 ± 5, significantly fewer than in the twigs (90 ± 7), which in turn is significantly fewer variable codons than in tip branches (175 ± 8). (viii) A minimum of one of every 12 branches has nodes at opposite ends representing viruses that reside on different continents. This is, however, no more than would be expected if one were to randomly reassign the continent of origin of the isolates.
(ix) Of 99 codons with at least four mutations, 31 have ratios of non-silent to silent changes with probabilities less than 0.05 of occurring by chance, and 14 of those have probabilities <0.005. These observations strongly support positive Darwinian selection. We suggest that the small number of variable positions along the successful trunk lineage, together with knowledge of the codons that have shown positive selection, may provide clues that permit an improved prediction of which strains will cause epidemics and therefore should be used for vaccine production.
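Observation (ix), codon-level ratios of non-silent to silent changes unlikely under chance, can be illustrated with a binomial tail probability, using the genome-wide replacement fraction as the null (the paper's exact test may differ, and the codon below is hypothetical):

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Genome-wide fractions from the tree above: 548 replacement vs 712 silent.
p_replacement = 548 / 1260

# Hypothetical codon with 6 mutations, all of them replacements:
p_value = binom_tail(6, 6, p_replacement)
print(f"P(>=6 replacements out of 6 | neutral) = {p_value:.4f}")
```

The tail probability here (~0.007) falls well below 0.05, the kind of excess of replacement changes the authors read as positive Darwinian selection.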

Relevance: 10.00%

Abstract:

Earthquake prediction research has searched both for informational phenomena, which provide information about earthquake hazards useful to the public, and for causal phenomena, which are causally related to the physical processes governing failure on a fault and thus improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors into informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be obtained by assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake.

Relevance: 10.00%

Abstract:

Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"; that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
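Test (i), comparing the number of actual earthquakes to the number predicted, is commonly done with Poisson tail probabilities; a minimal sketch with illustrative numbers:

```python
from math import exp, factorial

def n_test(n_obs, predicted_rate):
    """Poisson tail probabilities for an observed earthquake count."""
    pmf = lambda k: exp(-predicted_rate) * predicted_rate ** k / factorial(k)
    cdf = lambda n: sum(pmf(k) for k in range(0, n + 1))
    return 1.0 - cdf(n_obs - 1), cdf(n_obs)  # P(X >= n_obs), P(X <= n_obs)

# Hypothetical test period: the forecast predicted 8 events, 12 occurred.
p_ge, p_le = n_test(12, 8.0)
print(f"P(X >= 12) = {p_ge:.3f}, P(X <= 12) = {p_le:.3f}")
```

Neither tail probability is small here, so observing 12 events would not by itself reject a forecast rate of 8.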

Relevance: 10.00%

Abstract:

The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
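The characteristic Omori aftershock decay law mentioned above is commonly written in its modified form n(t) = K / (t + c)^p; a sketch with illustrative parameter values (not fitted to any real sequence):

```python
def omori_rate(t, K=100.0, c=0.05, p=1.0):
    """Modified Omori law n(t) = K / (t + c)**p; the parameter values here
    are illustrative defaults, not fitted to any aftershock sequence."""
    return K / (t + c) ** p

for t in (0.1, 1.0, 10.0):
    print(f"t = {t:5.1f} days  rate = {omori_rate(t):8.2f} events/day")
```

The rapid early decay and long tail are what the nucleation model above reproduces, with K, c, and p given physical interpretations.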

Relevance: 10.00%

Abstract:

We have used a solution-based DNA cyclization assay and a gel-phasing method to show that contrary to previous reports [Kerppola, T. K. & Curran, T. (1991) Cell 66, 317-326], basic region leucine zipper proteins Fos and Jun do not significantly bend their AP-1 recognition site. We have constructed two sets of DNA constructs that contain the 7-bp 5'-TGACTCA-3' AP-1 binding site, from either the yeast or the human collagenase gene, which is well separated from and phased by 3-4 helical turns against an A tract-directed bend. The cyclization probabilities of DNAs with altered phasings are not significantly affected by Fos-Jun binding. Similarly, Fos-Jun and Jun-Jun bound to differently phased DNA constructs show insignificant variations in gel mobilities. Both these methods independently indicate that Fos and Jun bend their AP-1 target site by <5 degrees, an observation that has important implications in understanding their mechanism of transcriptional regulation.

Relevance: 10.00%

Abstract:

The controversy over the interpretation of DNA profile evidence in forensic identification can be attributed in part to confusion over the mode(s) of statistical inference appropriate to this setting. Although there has been substantial discussion in the literature of, for example, the role of population genetics issues, few authors have made explicit the inferential framework which underpins their arguments. This lack of clarity has led both to unnecessary debates over ill-posed or inappropriate questions and to the neglect of some issues which can have important consequences. We argue that the mode of statistical inference which seems to underlie the arguments of some authors, based on a hypothesis testing framework, is not appropriate for forensic identification. We propose instead a logically coherent framework in which, for example, the roles both of the population genetics issues and of the nonscientific evidence in a case are incorporated. Our analysis highlights several widely held misconceptions in the DNA profiling debate. For example, the profile frequency is not directly relevant to forensic inference. Further, very small match probabilities may in some settings be consistent with acquittal. Although DNA evidence is typically very strong, our analysis of the coherent approach highlights situations which can arise in practice where alternative methods for assessing DNA evidence may be misleading.
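The coherent framework argued for here is Bayesian: the DNA evidence enters as a likelihood ratio that scales the prior odds set by the nonscientific evidence. A minimal sketch (all numbers hypothetical) of why a very small match probability can still be consistent with acquittal when the prior is diffuse enough:

```python
def source_probability(prior_odds, match_probability):
    """Posterior probability the suspect is the source: posterior odds =
    prior odds x likelihood ratio, with LR taken as 1 / match probability
    (a common simplification; all numbers below are hypothetical)."""
    post_odds = prior_odds * (1.0 / match_probability)
    return post_odds / (1.0 + post_odds)

match_p = 1e-9  # hypothetical match probability

# Other evidence narrows the pool to ~1,000 people:
p_narrow = source_probability(1 / 1_000, match_p)
# No other evidence at all (anyone among ~7 billion people):
p_diffuse = source_probability(1 / 7_000_000_000, match_p)
print(f"{p_narrow:.6f} vs {p_diffuse:.3f}")
```

With the diffuse prior the posterior is only 0.125 despite the one-in-a-billion match probability, which is the sense in which very small match probabilities may in some settings be consistent with acquittal.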

Relevance: 10.00%

Abstract:

Speech recognition involves three processes: extraction of acoustic indices from the speech signal, estimation of the probability that the observed index string was caused by a hypothesized utterance segment, and determination of the recognized utterance via a search among hypothesized alternatives. This paper is not concerned with the first process. Estimation of the probability of an index string involves a model of index production by any given utterance segment (e.g., a word). Hidden Markov models (HMMs) are used for this purpose [Makhoul, J. & Schwartz, R. (1995) Proc. Natl. Acad. Sci. USA 92, 9956-9963]. Their parameters are state transition probabilities and output probability distributions associated with the transitions. The Baum algorithm that obtains the values of these parameters from speech data via their successive reestimation will be described in this paper. The recognizer wishes to find the most probable utterance that could have caused the observed acoustic index string. That probability is the product of two factors: the probability that the utterance will produce the string and the probability that the speaker will wish to produce the utterance (the language model probability). Even if the vocabulary size is moderate, it is impossible to search for the utterance exhaustively. One practical algorithm is described [Viterbi, A. J. (1967) IEEE Trans. Inf. Theory IT-13, 260-267] that, given the index string, has a high likelihood of finding the most probable utterance.
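The cited search algorithm [Viterbi, 1967] finds the most probable state path by dynamic programming; a self-contained sketch on a toy two-state HMM (all probabilities illustrative, and real recognizers work in log space over far larger models):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path for an observation sequence (HMM)."""
    # Each table row maps state -> (best path probability, predecessor state).
    table = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        prev = table[-1]
        table.append({
            s: max(
                ((prev[r][0] * trans_p[r][s] * emit_p[s][o], r) for r in states),
                key=lambda pair: pair[0],
            )
            for s in states
        })
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: table[-1][s][0])
    path = [last]
    for row in reversed(table[1:]):
        path.append(row[path[-1]][1])
    return path[::-1]

# Toy two-state HMM (numbers invented for illustration):
states = ("A", "B")
start = {"A": 0.6, "B": 0.4}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit = {"A": {"x": 0.5, "y": 0.5}, "B": {"x": 0.1, "y": 0.9}}
best_path = viterbi(["x", "y", "y"], states, start, trans, emit)
print(best_path)
```

The dynamic program keeps only the best-scoring predecessor per state per step, which is what makes the search tractable where exhaustive enumeration of utterances is not.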

Relevance: 10.00%

Abstract:

Introduction: The histological diagnosis of biliary strictures is fundamental for defining the therapy to be employed. Given the heterogeneity of the results of studies comparing cytological brushing and transpapillary biopsy at endoscopic retrograde cholangiopancreatography (ERCP) with endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) for the histological diagnosis of malignant biliary stricture, and the absence of systematic reviews and meta-analyses comparing these methods, this study set out to compare the two methods for the histological diagnosis of malignant biliary stricture through a systematic review and meta-analysis of the literature. Methods: The Medline, Embase, Cochrane, LILACS, CINAHL, and Scopus electronic databases were searched for studies dated before November 2014. From a total of 1009 published studies, three prospective studies comparing EUS-FNA and ERCP for the histological diagnosis of malignant biliary stricture were selected, together with five cross-sectional studies comparing EUS-FNA against the same gold standard used in the three comparative studies. All patients underwent the same gold standard. The study variables (prevalence, sensitivity, specificity, positive and negative predictive values, and accuracy) were calculated, and the meta-analysis was performed with the Rev Man 5 and Meta-DiSc 1.4 software packages. Results: A total of 294 patients were included in the analysis. The pre-test probability of malignant biliary stricture was 76.66%. The mean sensitivities of ERCP and EUS-FNA for the histological diagnosis of malignant biliary stricture were 49% and 76.5%, respectively; specificities were 96.33% and 100%, respectively. The post-test probabilities were also determined: positive predictive values of 98.33% and 100%, respectively, and negative predictive values of 34% and 58.87%. Accuracies were 60.66% and 82.25%, respectively.
Conclusion: EUS-FNA is superior to ERCP with cytological brushing and/or transpapillary biopsy for the histological diagnosis of malignant biliary stricture. However, a negative histological sample from either EUS-FNA or ERCP cannot rule out malignant biliary stricture, since both tests have a low negative predictive value.
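The post-test probabilities above follow from Bayes' theorem applied to sensitivity, specificity, and the pre-test probability; a sketch using the pooled EUS-FNA (ECO-PAAF) figures reported above (the review's pooled NPV of 58.87% comes from study-level pooling, so this direct calculation lands slightly lower):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from Bayes' theorem."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# Pooled figures reported above: sensitivity 76.5%, specificity 100%,
# pre-test probability 76.66%.
ppv, npv = predictive_values(0.765, 1.00, 0.7666)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```

The NPV stays low mainly because the pre-test probability is high, which is exactly why a negative sample cannot rule out malignancy.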

Relevance: 10.00%

Abstract:

The purposes of this study were (1) to validate the item-attribute matrix using two levels of attributes (Level 1 attributes and Level 2 sub-attributes), and (2) through retrofitting the diagnostic models to the mathematics test of the Trends in International Mathematics and Science Study (TIMSS), to evaluate the construct validity of TIMSS mathematics assessment by comparing the results of two assessment booklets. Item data were extracted from Booklets 2 and 3 for the 8th grade in TIMSS 2007, which included a total of 49 mathematics items and every student's response to every item. The study developed three categories of attributes at two levels: content, cognitive process (TIMSS or new), and comprehensive cognitive process (or IT) based on the TIMSS assessment framework, cognitive procedures, and item type. At level one, there were 4 content attributes (number, algebra, geometry, and data and chance), 3 TIMSS process attributes (knowing, applying, and reasoning), and 4 new process attributes (identifying, computing, judging, and reasoning). At level two, the level 1 attributes were further divided into 32 sub-attributes. There was only one level of IT attributes (multiple steps/responses, complexity, and constructed-response). Twelve Q-matrices (4 originally specified, 4 random, and 4 revised) were investigated with eleven Q-matrix models (QM1 ~ QM11) using multiple regression and the least squares distance method (LSDM). Comprehensive analyses indicated that the proposed Q-matrices explained most of the variance in item difficulty (i.e., 64% to 81%). The cognitive process attributes contributed to the item difficulties more than the content attributes, and the IT attributes contributed much more than both the content and process attributes. The new retrofitted process attributes explained the items better than the TIMSS process attributes. Results generated from the level 1 attributes and the level 2 attributes were consistent.
Most attributes could be used to recover students' performance, but some attributes' probabilities showed unreasonable patterns. The analysis approaches could not demonstrate if the same construct validity was supported across booklets. The proposed attributes and Q-matrices explained the items of Booklet 2 better than the items of Booklet 3. The specified Q-matrices explained the items better than the random Q-matrices.

Relevance: 10.00%

Abstract:

Purpose: Citations received by papers published within a journal serve to increase its bibliometric impact. The objective of this paper was to assess the influence of publication language, article type, number of authors, and year of publication on the citations received by papers published in Gaceta Sanitaria, a Spanish-language journal of public health. Methods: The information sources were the journal website and the Web of Knowledge, of the Institute for Scientific Information. The period analyzed was from 2007 to 2010. We included original articles, brief original articles, and reviews published within that period. We manually extracted information on the variables analyzed and differentiated between total citations and self-citations. We constructed logistic regression models to analyze the probability of a Gaceta Sanitaria paper being cited or not, taking into account the aforementioned independent variables. We also analyzed the probability of receiving citations from non-Spanish authors. Results: Two hundred forty papers fulfilled the inclusion criteria. The included papers received a total of 287 citations, which became 202 when self-citations were excluded. The only variable influencing the probability of being cited was the publication year. After excluding never-cited papers, longer time since publication and review articles were associated with the highest probabilities of being cited. Papers in English and review articles had a higher probability of citation from non-Spanish authors. Conclusions: Publication language has no influence on the citations received by a national, non-English journal. Reviews in English have the highest probability of citation from abroad. Editors should decide how to manage this information when deciding policies to raise the bibliometric impact factor of their journals.
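The logistic regression used here models the probability of citation as a sigmoid of the predictors; a minimal sketch with invented coefficients (the fitted values are not given in the abstract):

```python
from math import exp

def cited_probability(years_since_publication, intercept=-1.5, slope=0.8):
    """Logistic model of the probability of receiving at least one citation.
    The coefficients are invented for illustration, not the study's estimates."""
    z = intercept + slope * years_since_publication
    return 1.0 / (1.0 + exp(-z))

for years in (1, 2, 3, 4):
    print(f"{years} year(s) since publication: "
          f"P(cited) = {cited_probability(years):.2f}")
```

A positive slope on publication year (here, time since publication) reproduces the study's finding that older papers are more likely to have been cited.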