968 results for Weibull Probability Plot
Abstract:
The effects of cell toxicity are known to be inherent in carcinogenesis induced by radiation or chemical carcinogens. Cell death precludes tumor induction. A long-standing problem is to estimate the proportion of initiated cells that die before tumor induction. No experimental techniques are currently available for directly gauging the rate of cell death over extended periods of time. This obstacle can be surmounted by newly developed theoretical methods of carcinogenesis modeling. In this paper, we apply such methods to published data on multiple lung tumors in mice receiving different schedules of urethane. Bioassays of this type play an important role in testing environmental chemicals for carcinogenic activity. Our estimates for urethane-induced carcinogenesis show that, unexpectedly, many initiated cells die early in the course of tumor promotion. We present numerical estimates of the probability of initiated-cell death for different schedules (and doses) of urethane administration.
Abstract:
Structural genomics aims to solve a large number of protein structures that represent the protein space. Currently, an exhaustive solution for all structures seems prohibitively expensive, so the challenge is to define a relatively small set of proteins with new, currently unknown folds. This paper presents a method that assigns each protein a probability of having an unsolved fold. The method makes extensive use of protomap, a sequence-based classification, and scop, a structure-based classification. According to protomap, the protein space encodes the relationships among proteins as a graph whose vertices correspond to 13,354 clusters of proteins. A representative fold for a cluster with at least one solved protein is determined after superposition of all scop (release 1.37) folds onto protomap clusters. Distances within the protomap graph are computed from each representative fold to the neighboring folds. The distribution of these distances is used to create a statistical model for distances among folds that are already known and folds that have yet to be discovered. The distributions of distances for solved and unsolved proteins are significantly different. This difference makes it possible to use Bayes' rule to derive a statistical estimate that any given protein has a yet-undetermined fold. Proteins that score the highest probability of representing a new fold constitute the target list for structural determination. Our predicted probabilities for unsolved proteins correlate very well with the proportion of new folds among recently solved structures (new scop 1.39 records) that are disjoint from our original training set.
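A rough illustration of the Bayes step described above, under the assumption (not stated by the authors) that the two empirical distance distributions are smoothed with kernel density estimates and combined with a flat prior; the function and variable names are illustrative, not the published implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde

def prob_new_fold(distance, solved_distances, unsolved_distances, prior_new=0.5):
    """Estimate P(new fold | graph distance) by Bayes' rule, using kernel
    density estimates of the distance distributions for clusters with solved
    folds and for clusters assumed to hold unsolved folds (both assumptions)."""
    p_given_solved = gaussian_kde(solved_distances)(distance)[0]
    p_given_new = gaussian_kde(unsolved_distances)(distance)[0]
    numerator = p_given_new * prior_new
    return numerator / (numerator + p_given_solved * (1.0 - prior_new))

# Hypothetical usage: distances measured within the protomap graph
p = prob_new_fold(3.0, solved_distances=[0.5, 1.0, 1.2, 2.0],
                  unsolved_distances=[2.5, 3.0, 4.0, 5.0])
```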
Abstract:
Single-stranded regions in RNA secondary structure are important for RNA–RNA and RNA–protein interactions. We present a probability profile approach for the prediction of these regions based on a statistical algorithm for sampling RNA secondary structures. For the prediction of phylogenetically determined single-stranded regions in secondary structures of representative RNA sequences, the probability profile offers substantial improvement over the minimum free energy structure. In designing antisense oligonucleotides, a practical problem is how to select a secondary structure for the target mRNA from the optimal structure(s) and many suboptimal structures with similar free energies. By summarizing the information from a statistical sample of probable secondary structures in a single plot, the probability profile not only presents a solution to this dilemma, but also reveals ‘well-determined’ single-stranded regions through the assignment of probabilities as measures of confidence in predictions. In an antisense application to the rabbit β-globin mRNA, a significant correlation between the hybridization potential predicted by the probability profile and the degree of inhibition of in vitro translation suggests that the probability profile approach is valuable for the identification of effective antisense target sites. Coupling computational design with the DNA–RNA array technique provides a rational, efficient framework for antisense oligonucleotide screening. This framework has the potential for high-throughput applications to functional genomics and drug target validation.
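A minimal sketch of how such a probability profile can be computed from a statistical sample of secondary structures; the representation of each structure as a set of paired (i, j) positions and all names below are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def probability_profile(sampled_structures, seq_len):
    """For each sequence position, return the fraction of sampled secondary
    structures in which that position is single-stranded (unpaired)."""
    counts = np.zeros(seq_len)
    for pairs in sampled_structures:                  # one sampled structure
        paired = {pos for ij in pairs for pos in ij}  # positions involved in a base pair
        counts += [0 if pos in paired else 1 for pos in range(seq_len)]
    return counts / len(sampled_structures)

# Hypothetical usage: three sampled structures of a 10-nt sequence
profile = probability_profile([{(0, 9), (1, 8)}, {(0, 9)}, {(2, 7)}], 10)
```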
Abstract:
ATP-binding cassette (ABC) transporters bind and hydrolyze ATP. In the cystic fibrosis transmembrane conductance regulator Cl− channel, this interaction with ATP generates a gating cycle between a closed (C) and two open (O1 and O2) conformations. To understand better how ATP controls channel activity, we examined gating transitions from the C to the O1 and O2 states and from these open states to the C conformation. We made three main observations. First, we found that the channel can open into either the O1 or O2 state, that the frequency of transitions to both states increased with ATP concentration, and that ATP increased the relative proportion of openings into O1 vs. O2. These results indicate that ATP can interact with the closed state to open the channel in at least two ways, which may involve binding to nucleotide-binding domains (NBDs) NBD1 and NBD2. Second, ATP prolonged the burst duration and altered the way in which the channel closed. These data suggest that ATP also interacts with the open channel. Third, the channel showed runs of specific types of open–closed transitions. This finding suggests a mechanism with more than one cycle of gating transitions. These data suggest models to explain how ATP influences conformational transitions in the cystic fibrosis transmembrane conductance regulator and perhaps other ABC transporters.
Abstract:
The reconstruction of multitaxon trees from molecular sequences is confounded by the variety of algorithms and criteria used to evaluate trees, making it difficult to compare the results of different analyses. A global method of multitaxon phylogenetic reconstruction described here, Bootstrappers Gambit, can be used with any four-taxon algorithm, including distance, maximum likelihood, and parsimony methods. It incorporates a Bayesian-Jeffreys'-bootstrap analysis to provide a uniform probability-based criterion for comparing the results from diverse algorithms. To examine the usefulness of the method, the origin of the eukaryotes has been investigated by the analysis of ribosomal small subunit RNA sequences. Three common algorithms (paralinear distances, Jukes-Cantor distances, and Kimura distances) support the eocyte topology, whereas one (maximum parsimony) supports the archaebacterial topology, suggesting that the eocyte prokaryotes are the closest prokaryotic relatives of the eukaryotes.
Abstract:
We have studied enhancer function in transient and stable expression assays in mammalian cells by using systems that distinguish expressing from nonexpressing cells. When expression is studied in this way, enhancers are found to increase the probability of a construct being active but not the level of expression per template. In stably integrated constructs, large differences in expression level are observed but these are not related to the presence of an enhancer. Together with earlier studies, these results suggest that enhancers act to affect a binary (on/off) switch in transcriptional activity. Although this idea challenges the widely accepted model of enhancer activity, it is consistent with much, if not all, experimental evidence on this subject. We hypothesize that enhancers act to increase the probability of forming a stably active template. When randomly integrated into the genome, enhancers may affect a metastable state of repression/activity, permitting expression in regions that would not permit activity of an isolated promoter.
Abstract:
Knowledge of how the rootstock affects scion development is very important, because these interactions can vary for each rootstock and scion cultivar combination. Accordingly, the objective of this work was to evaluate the effect of the rootstocks 'IAC 766', 'IAC 572', 'IAC 313', 'IAC 571-6' and 'Ripária do Traviú' on the development and bud fertility of the scion cultivar 'Niagara Rosada'. The study was carried out on 'Niagara Rosada' grapevines trained on a vertical trellis system in the region of Jundiaí - SP, Brazil. Evaluations were performed over three production cycles: two in the traditional cycle, in which production pruning is carried out in winter and harvest takes place in late spring and early summer, and one in the so-called "safrinha" (off-season) cycle, in which production pruning occurs in summer and harvest takes place in late autumn and early winter. For the traditional production cycles, bud fertility from the first to the fourth or fifth bud, the number of shoots, the number of clusters and the final shoot length were evaluated in 2014 and 2015. In the off-season production cycle, bud fertility from the fifth to the eighth bud and the other biometric data were evaluated during cultivation, and grapevine yield and fruit quality were also assessed. The experimental design was a randomized complete block design in a split-plot scheme for bud fertility, and a randomized complete block design for the other variables. The collected data were subjected to analysis of variance and, when significant, to Tukey's mean comparison test at the 0.05 probability level. No effect of the different rootstocks was observed on bud fertility, plant development, yield or fruit quality of 'Niagara Rosada' grapevines.
Abstract:
In this work, a new family of distributions is proposed that allows survival data to be modeled when the hazard function has unimodal or U-shaped (bathtub) forms. Modifications of the Weibull, Fréchet, generalized half-normal, log-logistic and lognormal distributions were also considered. For uncensored and censored data, maximum likelihood estimators of the proposed model were considered in order to verify the flexibility of the new family. In addition, a location-scale regression model was used to assess the influence of covariates on survival times, and a residual analysis based on modified deviance residuals was conducted. Simulation studies, using different parameter settings, censoring percentages and sample sizes, were carried out to examine the empirical distribution of the martingale-type and modified deviance residuals. To detect influential observations, local influence measures were used, which are diagnostic measures based on small perturbations of the data or of the proposed model. Situations may arise in which the assumption of independence between failure and censoring times does not hold; thus, another objective of this work is to consider an informative censoring mechanism, based on the marginal likelihood, using the log-odd log-logistic Weibull distribution in the modeling. Finally, the described methodologies are applied to real data sets.
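For reference, the sketch below fits the simplest baseline in this setting, an ordinary Weibull model, to right-censored data by maximum likelihood; it is not the proposed log-odd log-logistic Weibull family, and the simulated data and starting values are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def weibull_negloglik(log_params, t, delta):
    """Negative log-likelihood for right-censored Weibull data.
    t: observed times; delta: 1 = failure observed, 0 = censored."""
    shape, scale = np.exp(log_params)         # optimize on the log scale to keep both positive
    z = t / scale
    log_pdf = np.log(shape / scale) + (shape - 1.0) * np.log(z) - z**shape
    log_surv = -(z**shape)                    # log S(t), the contribution of censored observations
    return -np.sum(delta * log_pdf + (1.0 - delta) * log_surv)

# Hypothetical example with simulated, administratively censored data
rng = np.random.default_rng(0)
t = rng.weibull(1.5, size=200) * 10.0
delta = (t < 12.0).astype(float)
t = np.minimum(t, 12.0)
fit = minimize(weibull_negloglik, x0=np.log([1.0, 5.0]), args=(t, delta))
shape_hat, scale_hat = np.exp(fit.x)
```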
Abstract:
It is well known that quantum correlations for bipartite dichotomic measurements are those of the form (Formula presented.), where the vectors u_i and v_j are in the unit ball of a real Hilbert space. In this work we study the probability of the nonlocal nature of these correlations as a function of (Formula presented.), where the previous vectors are sampled according to the Haar measure in the unit sphere of (Formula presented.). In particular, we prove the existence of an (Formula presented.) such that if (Formula presented.), (Formula presented.) is nonlocal with probability tending to 1 as (Formula presented.), while for (Formula presented.), (Formula presented.) is local with probability tending to 1 as (Formula presented.).
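For readability, and purely as an assumption about what the garbled "(Formula presented.)" placeholders stand for, the standard (Tsirelson-type) form of such correlation matrices can be written as

```latex
% Assumed reconstruction: bipartite dichotomic quantum correlations
\[
  \gamma_{ij} \;=\; \langle u_i , v_j \rangle ,
  \qquad \|u_i\| \le 1,\ \|v_j\| \le 1 ,
\]
% with u_i, v_j in (the unit ball of) a real Hilbert space; in the sampling
% question above the vectors are drawn from the Haar (uniform) measure on
% the unit sphere.
```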
Abstract:
In this paper we introduce the concept of the Lateral Trigger Probability (LTP) function, i.e., the probability for an Extensive Air Shower (EAS) to trigger an individual detector of a ground-based array as a function of distance to the shower axis, taking into account the energy, mass and direction of the primary cosmic ray. We apply this concept to the surface array of the Pierre Auger Observatory, consisting of a 1.5 km spaced grid of about 1600 water-Cherenkov stations. Using Monte Carlo simulations of ultra-high-energy showers, the LTP functions are derived for energies in the range between 10^17 and 10^19 eV and zenith angles up to 65 degrees. A parametrization combining a step function with an exponential is found to reproduce them very well in the considered range of energies and zenith angles. The LTP functions can also be obtained from data, using events simultaneously observed by the fluorescence and the surface detector of the Pierre Auger Observatory (hybrid events). We validate the Monte Carlo results by showing that the LTP functions derived from data are in good agreement with the simulations.
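A toy illustration of a parametrization that combines a step function with an exponential; the exact functional form, the parameter values, and their dependence on energy and zenith angle are not given in the abstract, so everything below is an assumed sketch rather than the published fit:

```python
import numpy as np

def ltp(r, r_full=800.0, decay_length=300.0, r_cut=1500.0):
    """Toy Lateral Trigger Probability: full trigger efficiency up to r_full
    metres from the shower axis, exponential fall-off beyond it, and zero past
    r_cut (all three parameters are hypothetical placeholders)."""
    r = np.asarray(r, dtype=float)
    p = np.where(r <= r_full, 1.0, np.exp(-(r - r_full) / decay_length))
    return np.where(r <= r_cut, p, 0.0)

# Hypothetical usage: trigger probability at several axis distances (in metres)
probs = ltp([200.0, 900.0, 1400.0, 2000.0])
```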
Abstract:
This paper proposes an adaptive algorithm for clustering cumulative probability distribution functions (c.p.d.f.) of a continuous random variable, observed in different populations, into the minimum number of homogeneous clusters, making no parametric assumptions about the c.p.d.f.'s. The proposed distance function for clustering c.p.d.f.'s is based on the Kolmogorov–Smirnov two-sample statistic, which is able to detect differences in position, dispersion or shape of the c.p.d.f.'s. In our context, this statistic allows us to cluster the recorded data with a homogeneity criterion based on the whole distribution of each data set, and to decide whether it is necessary to add more clusters. In this sense, the proposed algorithm is adaptive, as it automatically increases the number of clusters only as necessary; therefore, there is no need to fix the number of clusters in advance. The outputs of the algorithm are, for each cluster, the common c.p.d.f. of all observed data in the cluster (the centroid) and the Kolmogorov–Smirnov statistic between the centroid and the most distant c.p.d.f. The proposed algorithm has been applied to a large data set of solar global irradiation spectra distributions. The results make it possible to reduce the information of more than 270,000 c.p.d.f.'s to only 6 different clusters that correspond to 6 different c.p.d.f.'s.
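A compact sketch, in the spirit of the algorithm described above, of adaptive clustering of empirical samples using the Kolmogorov–Smirnov two-sample statistic as the distance; the greedy assignment rule and the threshold value are simplifying assumptions, not the authors' exact procedure:

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_distance(sample_a, sample_b):
    # Kolmogorov-Smirnov two-sample statistic between two empirical c.p.d.f.'s
    return ks_2samp(sample_a, sample_b).statistic

def adaptive_cluster(samples, threshold=0.05):
    """Assign each sample to the nearest existing centroid if its KS distance
    is below `threshold`; otherwise open a new cluster. Each centroid is the
    pooled data of its cluster, i.e. the cluster's common empirical c.p.d.f."""
    centroids, members = [], []
    for s in samples:
        s = np.asarray(s)
        dists = [ks_distance(s, c) for c in centroids]
        if dists and min(dists) < threshold:
            k = int(np.argmin(dists))
            members[k].append(s)
            centroids[k] = np.concatenate(members[k])
        else:
            centroids.append(s)
            members.append([s])
    return centroids, members
```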
Abstract:
In the cs.index.zip file we provide an R script that lets the user plot the conditioned Gini (or skewness) coefficient used in the working paper entitled "On conditional skewness with applications in environmental data", submitted to Environmental and Ecological Statistics. The ReadMe.pdf file explains how to use the cs.index.R script.
Abstract:
In 1991, Bryant and Eckard estimated the annual probability that a cartel would be detected by the US Federal authorities, conditional on being detected, to be at most between 13% and 17%. Fifteen years later, we estimated the same probability over a European sample and found an annual probability that falls between 12.9% and 13.3%. We also develop a detection model to clarify this probability. Our estimate is based on detection durations, calculated from data reported for all the cartels convicted by the European Commission from 1969 to the present date, and on a statistical birth-and-death process model describing the onset and detection of cartels.
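As a back-of-the-envelope illustration only (a geometric waiting-time argument, not the authors' birth-and-death process model), the sketch below turns observed detection durations into an annual detection probability; the numbers are hypothetical:

```python
import numpy as np

def annual_detection_probability(durations_years):
    """If detection in each year of a cartel's life is an independent event
    with constant probability p, the mean time to detection is 1/p, so p can
    be estimated as the reciprocal of the mean observed detection duration
    (conditional on the cartel eventually being detected)."""
    return 1.0 / np.mean(durations_years)

# Hypothetical durations (years from cartel formation to detection)
p_hat = annual_detection_probability([5.0, 9.0, 7.5, 6.0, 11.0])
```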
Abstract:
Both the EU and its member states are in a period of rethinking security strategy to adapt to contemporary challenges both in the European region and beyond, including Northeast Asia. In this Security Policy Brief, Mason Richey discusses what difficulties and risks a North Korean regime collapse would pose, the likelihood that it will occur sooner rather than later, and how Europe will be affected by such a scenario.