951 results for exceedance probabilities
Abstract:
The Integrated Safety Analysis (ISA) methodology, developed in the Modeling and Simulation branch (MOSI) of the Consejo de Seguridad Nuclear (CSN), is an integrated safety analysis method that is being assessed and analyzed through several applications promoted by the CSN. Integrated safety analysis combines the evolved techniques of the safety analyses in current use, deterministic and probabilistic, and is considered suitable for supporting Risk-Informed Regulation (RIR), the current approach to nuclear safety that is being developed and applied worldwide. This is the context of the Safety Margin Action Plan (SMAP) and Safety Margin Assessment Application (SM2A) projects, promoted by the Committee on the Safety of Nuclear Installations (CSNI) of the Nuclear Energy Agency (NEA) of the Organization for Economic Co-operation and Development (OECD) to develop a suitable approach for using integrated methodologies to evaluate the change in safety margins caused by changes in nuclear power plant conditions. The committee is a forum for the exchange of technical information and for collaboration among member organizations, which contribute their own ideas in research, development and engineering. The CSN proposal is the application of the ISA methodology, which is especially well suited to analysis under the approach developed in the SMAP project: obtaining best-estimate values, with uncertainty, of the safety variables that are compared with the safety limits, in order to obtain the frequency with which those limits are exceeded. The advantage offered by ISA is that it allows a selective, discrete analysis of the ranges of the uncertain parameters that most influence the exceedance of the safety limits, or limit exceedance frequency, making it possible to evaluate changes produced by variations in the design or operation of the plant that would be imperceptible, or difficult to quantify, with other types of methodology. ISA belongs to the family of discrete dynamic PSA methodologies that generate dynamic event trees (DET) and is based on the Theory of Stimulated Dynamics (TSD), a simplified dynamic reliability theory that allows the risk of each sequence to be quantified. With ISA, all the relevant interactions in a plant are modeled and simulated: design, operating conditions, maintenance, operator actions, stochastic events, etc. It therefore requires the integration of codes for thermal-hydraulic simulation and operating procedures, dynamic event tree delineation, fault tree and event tree quantification, uncertainty treatment and risk integration. The thesis contains the application of the ISA methodology to the integrated analysis of the initiating event of loss of the component cooling water system (CCWS), which generates sequences of loss of reactor coolant through the seals of the main reactor coolant pumps (SLOCA). It is used to test the change in margins, with respect to the maximum peak cladding temperature limit (1477 K), that would be possible under a potential 10% power uprate in the pressurized water reactor of the Zion nuclear power plant.
The work carried out for this thesis, the result of the collaboration of the Escuela Técnica Superior de Ingenieros de Minas y Energía and the technological solutions company Ekergy Software S.L. (NFQ Solutions) with the MOSI branch of the CSN, has been the basis of the CSN contribution to the SM2A exercise. This exercise has been used to assess the development of some of the ideas, suggestions and algorithms behind the ISA methodology. The result is a slight increase in the damage exceedance frequency (DEF) caused by the power uprate. This result demonstrates the viability of the ISA methodology for measuring the variations in safety margins caused by plant modifications. It has also been shown to be especially suitable for scenarios in which stochastic events or operator recovery and mitigation actions can play a relevant role in the risk. The results obtained have no validity beyond demonstrating the viability of the ISA methodology: the nuclear power plant analyzed has been shut down and the information on its safety analyses is limited, so unverified assumptions and approximations based on generic studies or on other plants have been necessary. Three phases were established in the analysis process: first, obtaining the reference dynamic event tree; second, uncertainty analysis and obtaining the damage domains; and third, risk quantification. Several applications of the methodology and its advantages over classical PSA have been shown, and a contribution has been made to the development of the prototype tool for applying the ISA methodology (SCAIS).
ABSTRACT The Integrated Safety Analysis (ISA) methodology, developed by the Consejo de Seguridad Nuclear (CSN), is being assessed in various applications encouraged by the CSN. An integrated safety analysis merges the evolved techniques of the usual safety analysis methodologies, deterministic and probabilistic. It is considered a suitable tool for assessing risk in a Risk-Informed Regulation framework, the approach to nuclear safety under development and being adopted around the world. Within this policy framework, the Safety Margin Action Plan (SMAP) and Safety Margin Assessment Application (SM2A) projects, set up by the Committee on the Safety of Nuclear Installations (CSNI) of the Nuclear Energy Agency within the Organization for Economic Co-operation and Development (OECD), were aimed at obtaining a methodology, and its application, for integrating risk and safety margins in the assessment of changes to the overall safety resulting from changes in plant condition. The committee provides a forum for the exchange of technical information and cooperation among member organizations, which contribute their respective approaches in research, development and engineering. The ISA methodology, proposed by the CSN, fits especially well with the SMAP approach, which aims at obtaining best-estimate-plus-uncertainty values of the safety variables to be compared with the safety limits. This makes it possible to obtain the exceedance frequencies of the safety limits. ISA has the advantage over other methods of allowing a specific, discrete evaluation of the uncertain parameters that most influence the limit exceedance frequency. In this way, changes due to design or operation variations that would be imperceptible or difficult to quantify by other methods are correctly evaluated. The ISA methodology is one of the discrete methodologies of the dynamic PSA framework that use the generation of dynamic event trees (DET). It is based on the Theory of Stimulated Dynamics (TSD), a simplified version of the theory of probabilistic dynamics that allows the risk of each sequence to be quantified. ISA models and simulates all the important interactions in a nuclear power plant: design, operating conditions, maintenance, human actions, stochastic events, etc. To do so, it requires the integration of codes for thermal-hydraulic simulation and operating procedures, event tree delineation, fault tree and event tree quantification, and uncertainty analysis and risk integration. This dissertation describes the application of the ISA methodology to the initiating event of loss of the component cooling system (CCWS), which generates sequences of loss of reactor coolant through the seals of the reactor coolant pumps (SLOCA). It is used to test the change in margins, with respect to the maximum clad temperature limit (1477 K), that would be possible under a potential 10% power uprate in the pressurized water reactor of Zion NPP. The work done to achieve the thesis, fruit of the collaboration agreement of the School of Mining and Energy Engineering and the technological solutions company Ekergy Software S.L. (NFQ Solutions) with the specialized modeling and simulation branch of the CSN, has been the basis for the contribution of the CSN to the SM2A exercise. This exercise has been used as an assessment of the development of some of the ideas, suggestions and algorithms behind the ISA methodology. A slight increase in the damage exceedance frequency (DEF) caused by the power uprate was obtained. This result shows that the ISA methodology allows the change in safety margin to be quantified when design modifications are performed in an NPP, and that it is especially suitable for scenarios where stochastic events or human responses play an important role in preventing or mitigating the accident consequences and the total risk. The results have no validity beyond showing the viability of the ISA methodology: Zion NPP has been retired and information on its safety analyses is scarce, so unverified assumptions and approximations based on generic studies have been required. Three phases are established in the analysis process: first, obtaining the reference dynamic event tree; second, uncertainty analysis and obtaining the damage domains; and third, risk quantification. Various applications of the methodology and its advantages over classical PSA have been shown. The work has also contributed to the development of the prototype tool for the implementation of the ISA methodology (SCAIS).
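The central quantity above, the damage exceedance frequency, can be illustrated with a minimal Monte Carlo sketch: uncertain parameters are sampled, a plant simulation returns the peak cladding temperature, and the fraction of samples exceeding 1477 K is weighted by the initiating-event frequency. The surrogate model, the parameter names and all numbers below are illustrative assumptions, not values from the thesis, which uses coupled thermal-hydraulic and procedure simulation on a dynamic event tree rather than blind sampling.

```python
import random

PCT_LIMIT_K = 1477.0      # peak cladding temperature safety limit (K)
CCWS_IE_FREQ = 1.0e-3     # assumed CCWS initiating-event frequency (/year), illustrative only

def peak_clad_temperature(seal_loca_area_cm2, operator_delay_s):
    """Hypothetical surrogate for the thermal-hydraulic simulation.
    A real ISA analysis runs the plant and operating-procedure codes here."""
    return 900.0 + 120.0 * seal_loca_area_cm2 + 0.08 * operator_delay_s

def damage_exceedance_frequency(n_samples=100_000, seed=0):
    rng = random.Random(seed)
    exceedances = 0
    for _ in range(n_samples):
        # Sample the uncertain parameters (distributions are assumptions).
        area = rng.uniform(0.5, 5.0)        # seal LOCA break area, cm^2
        delay = rng.uniform(0.0, 3600.0)    # operator action delay, s
        if peak_clad_temperature(area, delay) > PCT_LIMIT_K:
            exceedances += 1
    prob_exceed = exceedances / n_samples
    return CCWS_IE_FREQ * prob_exceed       # DEF = IE frequency x P(limit exceeded)

if __name__ == "__main__":
    print(f"Estimated DEF: {damage_exceedance_frequency():.2e} /year")
```

In the methodology itself, the parameter ranges that lead to exceedance (the damage domains) are delineated on the dynamic event tree, which is what allows the selective analysis of the most influential uncertain parameters described above.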
Abstract:
This paper analyses the effects of policy making for air pollution abatement in Spain between 2000 and 2020 under an integrated assessment approach with the AERIS model for a number of pollutants (NOx/NO2, PM10/PM2.5, O3, SO2, NH3 and VOC). The analysis of the effects of air pollution focused on different aspects: compliance with the European limit values of Directive 2008/50/EC for NO2 and PM10 in the Spanish air quality management areas; the evaluation of impacts caused by the deposition of atmospheric sulphur and nitrogen on ecosystems; the exceedance of critical levels of NO2 and SO2 in forest areas; the analysis of O3-induced crop damage for grapes, maize, potato, rice, tobacco, tomato, watermelon and wheat; health impacts caused by human exposure to O3 and PM2.5; and costs to society due to crop losses (O3), disability-related absence of work staff and damage to buildings and public property due to soot-related soiling (PM2.5). In general, air quality policy making has delivered improvements in air quality levels throughout Spain and has mitigated the severity of the impacts on ecosystems, health and vegetation for the 2020 target year. The findings of this work provide an appropriate diagnosis for policy makers and stakeholders in Spain to identify potential for further mitigation.
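As a hedged illustration of the compliance check described above, the sketch below tests a year of daily NO2 and PM10 concentrations against the Directive 2008/50/EC limit values (annual means of 40 µg/m³ for both pollutants, and no more than 35 days above 50 µg/m³ for PM10). The data and zone are invented; AERIS computes such indicators on gridded model output rather than on a single station series.

```python
from statistics import mean

NO2_ANNUAL_LIMIT = 40.0        # µg/m3, annual mean limit value for NO2
PM10_ANNUAL_LIMIT = 40.0       # µg/m3, annual mean limit value for PM10
PM10_DAILY_LIMIT = 50.0        # µg/m3, daily limit value for PM10
PM10_ALLOWED_EXCEEDANCES = 35  # permitted days above the daily limit per year

def check_zone(no2_daily, pm10_daily):
    """Return compliance flags for one air quality management zone."""
    pm10_exceed_days = sum(1 for c in pm10_daily if c > PM10_DAILY_LIMIT)
    return {
        "no2_annual_ok": mean(no2_daily) <= NO2_ANNUAL_LIMIT,
        "pm10_annual_ok": mean(pm10_daily) <= PM10_ANNUAL_LIMIT,
        "pm10_daily_ok": pm10_exceed_days <= PM10_ALLOWED_EXCEEDANCES,
    }

# Invented daily series (365 values) for a hypothetical zone.
no2 = [35.0 + 10.0 * (d % 7 == 0) for d in range(365)]
pm10 = [30.0 + 25.0 * (d % 10 == 0) for d in range(365)]
print(check_zone(no2, pm10))
```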
Abstract:
This study seeks to identify the extent to which social class membership affects the meanings and perspectives that young Brazilians attach to spectacle football, and to understand the social mechanisms that determine the decision of individuals to invest in a professional sporting career rather than in a long school trajectory. We begin with a sociological approach, a critical analysis of the processes of diffusion, massification and professionalization of football that took place in the context of post-industrial society. We focus on the conflict-ridden transformation of the sport, initially elitist and socially distinctive, into a mass sport and an ideal of social ascent for the popular class, as well as on its dependence on the media, the reasons that led it to become a product of the cultural industry, and, finally, the transformation of football into a commodity subject to the laws and logic of consumer society. We then interweave the relations of power, alliance and competition among the social agents who take part in the complex field of sporting practices with the channels through which young people come into contact with spectacle sport. In this respect, special attention was given to the media, which disseminates the ideology of an open capitalist society and reinforces the idea of sport as a route of social ascent for low-income individuals, while concealing in its discourse the real probabilities of achieving sporting success. The field research was carried out in two schools in the municipality of São Bernardo do Campo (SP), one from the state public network and the other from the private school network. We thus formed two groups of students from different social classes, composed of male first-year high school students, 15 years old, who play football in Physical Education classes. Methodologically, we used participant observation and interviews as data collection instruments. In line with the objectives of the study, we structured the analysis of the collected material into six categories, articulating questions about continuing in school and work, tendencies towards professional sporting practice, the social representations surrounding football, habits and customs in free time, and the expectations of sporting practice shaped by family cultural heritage. As a theoretical framework for analysis, we used Pierre Bourdieu's concepts of field, habitus, strategy, and economic, social and, above all, cultural capital, starting from the hypothesis that the cultural level of the students and their families affects the meanings and forms of appropriation of sport. Among other conclusions, we found the configuration of a professional sporting trajectory oriented towards students from the lower social class, in opposition to the long school trajectory strategically adopted by students from the upper social class. (AU)
Abstract:
Understanding the relationship between animal community dynamics and landscape structure has become a priority for biodiversity conservation. In particular, predicting the effects of habitat destruction that confine species to networks of small patches is an important prerequisite to conservation plan development. Theoretical models that predict the occurrence of species in fragmented landscapes, and relationships between stability and diversity do exist. However, reliable empirical investigations of the dynamics of biodiversity have been prevented by differences in species detection probabilities among landscapes. Using long-term data sampled at a large spatial scale in conjunction with a capture-recapture approach, we developed estimates of parameters of community changes over a 22-year period for forest breeding birds in selected areas of the eastern United States. We show that forest fragmentation was associated not only with a reduced number of forest bird species, but also with increased temporal variability in the number of species. This higher temporal variability was associated with higher local extinction and turnover rates. These results have major conservation implications. Moreover, the approach used provides a practical tool for the study of the dynamics of biodiversity.
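As a hedged, one-line illustration of why detection probability matters here (the usual capture-recapture correction, not the authors' full model): if $C_t$ species are detected in year $t$ and repeated sampling yields an estimated detection probability $\hat{p}_t$, species richness is estimated as

$$\hat{N}_t = \frac{C_t}{\hat{p}_t},$$

so that comparisons of richness, local extinction and turnover among landscapes with different $\hat{p}_t$ are not biased by detectability.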
Abstract:
Two variables define the topological state of closed double-stranded DNA: the knot type, K, and ΔLk, the linking number difference from relaxed DNA. The equilibrium distribution of probabilities of these states, P(ΔLk, K), is related to two conditional distributions, P(ΔLk|K), the distribution of ΔLk for a particular K, and P(K|ΔLk), and also to two simple distributions: P(ΔLk), the distribution of ΔLk irrespective of K, and P(K). We explored the relationships between these distributions. P(ΔLk, K), P(ΔLk), and P(K|ΔLk) were calculated from the simulated distributions of P(ΔLk|K) and of P(K). The calculated distributions agreed with previous experimental and theoretical results and substantially extended them. Our major focus was on P(K|ΔLk), the distribution of knot types for a particular value of ΔLk, which had not been evaluated previously. We found that unknotted circular DNA is not the most probable state beyond small values of ΔLk. Highly chiral knotted DNA has a lower free energy because it has less torsional deformation. Surprisingly, even at |ΔLk| > 12, only one or two knot types dominate the P(K|ΔLk) distribution despite the huge number of knots of comparable complexity. A large fraction of the knots found belong to the small family of torus knots. The relationship between supercoiling and knotting in vivo is discussed.
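The relationships used above follow directly from the product rule; with $P(\Delta Lk)=\sum_{K} P(\Delta Lk\mid K)\,P(K)$, the joint and the knot-conditional distributions are

$$P(\Delta Lk, K) = P(\Delta Lk \mid K)\,P(K), \qquad
P(K \mid \Delta Lk) = \frac{P(\Delta Lk \mid K)\,P(K)}{\sum_{K'} P(\Delta Lk \mid K')\,P(K')},$$

which is how $P(K\mid\Delta Lk)$ is obtained from the simulated $P(\Delta Lk\mid K)$ and $P(K)$.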
Abstract:
Structural genomics aims to solve a large number of protein structures that represent the protein space. Currently an exhaustive solution for all structures seems prohibitively expensive, so the challenge is to define a relatively small set of proteins with new, currently unknown folds. This paper presents a method that assigns each protein with a probability of having an unsolved fold. The method makes extensive use of protomap, a sequence-based classification, and scop, a structure-based classification. According to protomap, the protein space encodes the relationship among proteins as a graph whose vertices correspond to 13,354 clusters of proteins. A representative fold for a cluster with at least one solved protein is determined after superposition of all scop (release 1.37) folds onto protomap clusters. Distances within the protomap graph are computed from each representative fold to the neighboring folds. The distribution of these distances is used to create a statistical model for distances among those folds that are already known and those that have yet to be discovered. The distribution of distances for solved/unsolved proteins is significantly different. This difference makes it possible to use Bayes' rule to derive a statistical estimate that any protein has a yet undetermined fold. Proteins that score the highest probability to represent a new fold constitute the target list for structural determination. Our predicted probabilities for unsolved proteins correlate very well with the proportion of new folds among recently solved structures (new scop 1.39 records) that are disjoint from our original training set.
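The Bayes step mentioned above can be written explicitly. Writing $d$ for the protomap graph distance from a cluster to the neighboring representative folds, and "new" for the event that the cluster's fold is not yet solved,

$$P(\text{new}\mid d) = \frac{P(d\mid \text{new})\,P(\text{new})}
{P(d\mid \text{new})\,P(\text{new}) + P(d\mid \text{solved})\,P(\text{solved})},$$

where the two conditional distance distributions are the empirical ones described in the abstract; the priors actually used are not given here, so this is only the generic form of the estimate.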
Abstract:
Recent studies have demonstrated the importance of recipient HLA-DRB1 allele disparity in the development of acute graft-versus-host disease (GVHD) after unrelated donor marrow transplantation. The role of HLA-DQB1 allele disparity in this clinical setting is unknown. To elucidate the biological importance of HLA-DQB1, we conducted a retrospective analysis of 449 HLA-A, -B, and -DR serologically matched unrelated donor transplants. Molecular typing of HLA-DRB1 and HLA-DQB1 alleles revealed 335 DRB1 and DQB1 matched pairs; 41 DRB1 matched and DQB1 mismatched pairs; 48 DRB1 mismatched and DQB1 matched pairs; and 25 DRB1 and DQB1 mismatched pairs. The conditional probabilities of grades III-IV acute GVHD were 0.42, 0.61, 0.55, and 0.71, respectively. The relative risk of acute GVHD associated with a single locus HLA-DQB1 mismatch was 1.8 (1.1, 2.7; P = 0.01), and the risk associated with any HLA-DQB1 and/or HLA-DRB1 mismatch was 1.6 (1.2, 2.2; P = 0.003). These results provide evidence that HLA-DQ is a transplant antigen and suggest that evaluation of both HLA-DQB1 and HLA-DRB1 is necessary in selecting potential donors.
Abstract:
Plasma processing is a standard industrial method for the modification of material surfaces and the deposition of thin films. Polyatomic ions and neutrals larger than a triatomic play a critical role in plasma-induced surface chemistry, especially in the deposition of polymeric films from fluorocarbon plasmas. In this paper, low energy CF3+ and C3F5+ ions are used to modify a polystyrene surface. Experimental and computational studies are combined to quantify the effect of the unique chemistry and structure of the incident ions on the result of ion-polymer collisions. C3F5+ ions are more effective at growing films than CF3+, both at similar energy/atom of ≈6 eV/atom and similar total kinetic energies of 25 and 50 eV. The composition of the films grown experimentally also varies with both the structure and kinetic energy of the incident ion. Both C3F5+ and CF3+ should be thought of as covalently bound polyatomic precursors or fragments that can react and become incorporated within the polystyrene surface, rather than merely donating F atoms. The size and structure of the ions affect polymer film formation via differing chemical structure, reactivity, sticking probabilities, and energy transfer to the surface. The different reactivity of these two ions with the polymer surface supports the argument that larger species contribute to the deposition of polymeric films from fluorocarbon plasmas. These results indicate that complete understanding and accurate computer modeling of plasma–surface modification requires accurate measurement of the identities, number densities, and kinetic energies of higher mass ions and energetic neutrals.
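The statement that 25 eV CF3+ and 50 eV C3F5+ deposit a similar energy per atom is simple atom counting:

$$\frac{25\ \text{eV}}{4\ \text{atoms}} \approx 6.3\ \text{eV/atom}\ (\mathrm{CF_3^+}), \qquad
\frac{50\ \text{eV}}{8\ \text{atoms}} \approx 6.3\ \text{eV/atom}\ (\mathrm{C_3F_5^+}),$$

consistent with the ≈6 eV/atom quoted above.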
Abstract:
The availability of complete genome sequences and mRNA expression data for all genes creates new opportunities and challenges for identifying DNA sequence motifs that control gene expression. An algorithm, “MobyDick,” is presented that decomposes a set of DNA sequences into the most probable dictionary of motifs or words. This method is applicable to any set of DNA sequences: for example, all upstream regions in a genome or all genes expressed under certain conditions. Identification of words is based on a probabilistic segmentation model in which the significance of longer words is deduced from the frequency of shorter ones of various lengths, eliminating the need for a separate set of reference data to define probabilities. We have built a dictionary with 1,200 words for the 6,000 upstream regulatory regions in the yeast genome; the 500 most significant words (some with as few as 10 copies in all of the upstream regions) match 114 of 443 experimentally determined sites (a significance level of 18 standard deviations). When analyzing all of the genes up-regulated during sporulation as a group, we find many motifs in addition to the few previously identified by analyzing the expression subclusters individually. Applying MobyDick to the genes derepressed when the general repressor Tup1 is deleted, we find known as well as putative binding sites for its regulatory partners.
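As a hedged sketch of how the significance of a longer word can be judged from the frequencies of shorter ones (the spirit of the dictionary-building step, not the actual MobyDick algorithm), the snippet below compares the observed count of a candidate word with its expectation under a model in which the word is only the chance juxtaposition of two shorter words; the toy sequences and the crude per-position frequency estimates are assumptions for illustration.

```python
import math

def expected_if_concatenation(seqs, word, split):
    """Expected count of `word` if it arose only as the chance juxtaposition
    of word[:split] and word[split:], estimated from their own frequencies."""
    left, right = word[:split], word[split:]
    total_positions = sum(max(len(s) - len(word) + 1, 0) for s in seqs)
    total_length = max(sum(len(s) for s in seqs), 1)
    p_left = sum(s.count(left) for s in seqs) / total_length
    p_right = sum(s.count(right) for s in seqs) / total_length
    return total_positions * p_left * p_right

def poisson_tail(observed, expected):
    """P(X >= observed) for X ~ Poisson(expected); a crude significance measure."""
    return 1.0 - sum(math.exp(-expected) * expected**k / math.factorial(k)
                     for k in range(observed))

upstream = ["TTACGCGGCATTACG", "GGACGCGGCTT", "ACGCGGCACGCGGC"]  # toy sequences
word = "ACGCGGC"
obs = sum(s.count(word) for s in upstream)
exp = expected_if_concatenation(upstream, word, split=3)
print(obs, round(exp, 4), poisson_tail(obs, exp))
```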
Abstract:
Single-stranded regions in RNA secondary structure are important for RNA–RNA and RNA–protein interactions. We present a probability profile approach for the prediction of these regions based on a statistical algorithm for sampling RNA secondary structures. For the prediction of phylogenetically-determined single-stranded regions in secondary structures of representative RNA sequences, the probability profile offers substantial improvement over the minimum free energy structure. In designing antisense oligonucleotides, a practical problem is how to select a secondary structure for the target mRNA from the optimal structure(s) and many suboptimal structures with similar free energies. By summarizing the information from a statistical sample of probable secondary structures in a single plot, the probability profile not only presents a solution to this dilemma, but also reveals ‘well-determined’ single-stranded regions through the assignment of probabilities as measures of confidence in predictions. In antisense application to the rabbit β-globin mRNA, a significant correlation between hybridization potential predicted by the probability profile and the degree of inhibition of in vitro translation suggests that the probability profile approach is valuable for the identification of effective antisense target sites. Coupling computational design with DNA–RNA array technique provides a rational, efficient framework for antisense oligonucleotide screening. This framework has the potential for high-throughput applications to functional genomics and drug target validation.
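A minimal version of the probability profile can be computed directly from a statistical sample of structures: for each position, the profile is the fraction of sampled structures in which that base is unpaired. The dot-bracket sample below is invented; in practice the structures would come from the statistical sampling algorithm referenced above.

```python
def unpaired_probability_profile(structures):
    """Fraction of sampled secondary structures in which each position is unpaired.
    Structures are dot-bracket strings of equal length; '.' means unpaired."""
    n = len(structures[0])
    assert all(len(s) == n for s in structures)
    return [sum(s[i] == "." for s in structures) / len(structures) for i in range(n)]

# Invented sample of structures for a 12-nt sequence.
sample = [
    "((((....))))",
    "(((......)))",
    "((((....))))",
    "..((....))..",
]
profile = unpaired_probability_profile(sample)
print([round(p, 2) for p in profile])
# Positions with a high profile value are 'well-determined' single-stranded
# regions and hence candidate antisense target sites.
```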
Abstract:
Transformed-rule up and down psychophysical methods have gained great popularity, mainly because they combine criterion-free responses with an adaptive procedure allowing rapid determination of an average stimulus threshold at various criterion levels of correct responses. The statistical theory underlying the methods now in routine use is based on sets of consecutive responses with assumed constant probabilities of occurrence. The response rules requiring consecutive responses prevent the possibility of using the most desirable response criterion, that of 75% correct responses. The earliest transformed-rule up and down method, whose rules included nonconsecutive responses, did not contain this limitation but failed to become generally accepted, lacking a published theoretical foundation. Such a foundation is provided in this article and is validated empirically with the help of experiments on human subjects and a computer simulation. In addition to allowing the criterion of 75% correct responses, the method is more efficient than the methods excluding nonconsecutive responses in their rules.
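The limitation discussed above can be seen from the convergence points of the consecutive-response rules: a rule that lowers the stimulus only after k consecutive correct responses converges on the level where $p^{k} = 0.5$, i.e.

$$p = 0.5^{1/k}: \quad k=1 \Rightarrow 50\%, \quad k=2 \Rightarrow 70.7\%, \quad k=3 \Rightarrow 79.4\%,$$

so no choice of k tracks the often-preferred 75% correct point, whereas rules that also count nonconsecutive responses can be tuned to it.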
Abstract:
We have studied the HA1 domain of 254 human influenza A(H3N2) virus genes for clues that might help identify characteristics of hemagglutinins (HAs) of circulating strains that are predictive of that strain’s epidemic potential. Our preliminary findings include the following. (i) The most parsimonious tree found requires 1,260 substitutions of which 712 are silent and 548 are replacement substitutions. (ii) The HA1 portion of the HA gene is evolving at a rate of 5.7 nucleotide substitutions/year or 5.7 × 10−3 substitutions/site per year. (iii) The replacement substitutions are distributed randomly across the three positions of the codon when allowance is made for the number of ways each codon can change the encoded amino acid. (iv) The replacement substitutions are not distributed randomly over the branches of the tree, there being 2.2 times more changes per tip branch than for non-tip branches. This result is independent of how the virus was amplified (egg grown or kidney cell grown) prior to sequencing or if sequencing was carried out directly on the original clinical specimen by PCR. (v) These excess changes on the tip branches are probably the result of a bias in the choice of strains to sequence and the detection of deleterious mutations that had not yet been removed by negative selection. (vi) There are six hypervariable codons accumulating replacement substitutions at an average rate that is 7.2 times that of the other varied codons. (vii) The number of variable codons in the trunk branches (the winners of the competitive race against the immune system) is 47 ± 5, significantly fewer than in the twigs (90 ± 7), which in turn is significantly fewer variable codons than in tip branches (175 ± 8). (viii) A minimum of one of every 12 branches has nodes at opposite ends representing viruses that reside on different continents. This is, however, no more than would be expected if one were to randomly reassign the continent of origin of the isolates. (ix) Of 99 codons with at least four mutations, 31 have ratios of non-silent to silent changes with probabilities less than 0.05 of occurring by chance, and 14 of those have probabilities <0.005. These observations strongly support positive Darwinian selection. We suggest that the small number of variable positions along the successful trunk lineage, together with knowledge of the codons that have shown positive selection, may provide clues that permit an improved prediction of which strains will cause epidemics and therefore should be used for vaccine production.
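A hedged sketch of the kind of per-codon test implied in point (ix): given the observed numbers of replacement and silent changes at a codon and an assumed neutral probability that a random change is a replacement (which in a real analysis comes from the codon's mutational opportunities, not the 0.7 placeholder used here), a one-sided binomial tail probability indicates whether replacements are in significant excess.

```python
from math import comb

def excess_replacement_pvalue(replacement, silent, p_replacement_neutral=0.7):
    """One-sided binomial P(X >= replacement) for X ~ Binomial(n, p_neutral),
    where n is the total number of changes observed at the codon."""
    n = replacement + silent
    p = p_replacement_neutral
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(replacement, n + 1))

# Example: 6 replacement and 0 silent changes at a codon.
print(round(excess_replacement_pvalue(6, 0), 4))   # 0.7**6 ~= 0.1176
```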
Abstract:
Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors of informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be gotten from assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake.
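One common way to state the requirement in the last sentence (a standard formulation, not taken verbatim from this paper) is through the probability gain of a candidate precursor,

$$G = \frac{P(\text{earthquake in } (t, t+\Delta t) \mid \text{precursor observed})}{P(\text{earthquake in } (t, t+\Delta t))},$$

so a causal phenomenon qualifies as a predictor in the sense above only if it yields $G > 1$ by a demonstrable margin, i.e. it raises the hazard estimate above the random (unconditional) baseline.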
Abstract:
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior", that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
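For a fully specified hypothesis given as a conditional rate density $\lambda(t, x, m)$, the three tests listed above have a standard Poisson form (written here generically, not copied from the paper): the predicted number of events is $N_{\text{pred}} = \int \lambda\, dt\, dx\, dm$, to be compared with the observed count; the log-likelihood score of the observed catalogue $\{(t_i, x_i, m_i)\}$ is

$$L = \sum_i \ln \lambda(t_i, x_i, m_i) - \int \lambda \, dt\, dx\, dm,$$

and the third test compares $L$ with the corresponding score $L_0$ of the null hypothesis through the log-likelihood ratio $L - L_0$.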