959 results for pretest probability
Abstract:
This study focuses on the experiences of 91 Grade 4 students who had been introduced to expectation and variation through trials of tossing a single coin many times. They were then given two coins to toss simultaneously and asked to state their expectation of the chances for the possible outcomes, in a manner similar to that expressed for a single coin. This paper documents the journey of the students in discovering that generally their initial expectation for two coins was incorrect and that, despite variation, a large number of tosses could confirm a new expectation.
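Not part of the study's materials, but a minimal Python sketch (trial counts and seed are made up) of the experiment the abstract describes: tossing two fair coins many times shows why the intuitive "all outcomes equally likely" expectation is incorrect.

```python
import random

# Simulate tossing two fair coins many times: HH and TT each occur about 1/4
# of the time, while a mixed result (one head, one tail) occurs about 1/2.
def toss_two_coins(n_trials=10_000, seed=1):
    counts = {"two heads": 0, "two tails": 0, "one of each": 0}
    rng = random.Random(seed)
    for _ in range(n_trials):
        a, b = rng.choice("HT"), rng.choice("HT")
        if a == b:
            counts["two heads" if a == "H" else "two tails"] += 1
        else:
            counts["one of each"] += 1
    return {k: v / n_trials for k, v in counts.items()}

print(toss_two_coins())  # roughly {'two heads': 0.25, 'two tails': 0.25, 'one of each': 0.5}
```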
Abstract:
By the time students reach the middle years they have experienced many chance activities based on dice. Common among these are rolling one die to explore the relationship of frequency and theoretical probability, and rolling two dice and summing the outcomes to consider their probabilities. Although dice may be considered overused by some, the advantage they offer is a familiar context within which to explore much more complex concepts. If the basic chance mechanism of the device is understood, it is possible to enter quickly into an arena of more complex concepts. This is what happened with a two-hour activity engaged in by four classes of Grade 6 students in the same school. The activity targeted the concepts of variation and expectation. The teachers held extended discussions with their classes on variation and expectation at the beginning of the activity, with students contributing examples of the two concepts from their own experience. These notions are quite sophisticated for Grade 6, but the underlying concepts describe phenomena that students encounter every day. For example, time varies continuously; sporting results vary from game to game; the maximum temperature varies from day to day. However, there is an expectation about tomorrow’s maximum temperature based on the expert advice from the weather bureau. There may also be an expectation about a sporting result based on the participants’ previous results. It is this juxtaposition that makes life interesting. Variation hence describes the differences we see in phenomena around us. In a scenario displaying variation, expectation describes the effort to characterise or summarise the variation and perhaps make a prediction about the message arising from the scenario. The explicit purpose of the activity described here was to use the familiar scenario of rolling a die to expose these two concepts. Because the students had previously experienced rolling physical dice, they knew instinctively about the variation that occurs across many rolls and about the theoretical expectation that each side should “come up” one-sixth of the time. They had observed instances of the concepts in action, but had not consolidated the underlying terminology to describe them. As the two concepts are so fundamental to understanding statistics, we felt it would be useful to begin building in the familiar environment of rolling a die. Because hand-held dice limit the explorations students can undertake, the classes used the software TinkerPlots (Konold & Miller, 2011) to simulate rolling a die multiple times.
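TinkerPlots itself is not reproduced here; the plain-Python sketch below (roll counts are arbitrary) mirrors the activity's core idea: observed proportions vary from run to run, yet settle toward the expectation of one-sixth per face as the number of rolls grows.

```python
from collections import Counter
import random

# Roll a fair die many times and compare the varying observed proportions
# with the theoretical expectation of 1/6 per face.
def roll_die(n_rolls, seed=0):
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) for _ in range(n_rolls))
    return {face: round(counts[face] / n_rolls, 3) for face in range(1, 7)}

for n in (60, 600, 6000):   # variation shrinks as the number of rolls grows
    print(n, roll_die(n))
```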
Abstract:
Background Genetic testing is recommended when the probability of a disease-associated germline mutation exceeds 10%. Germline mutations are found in approximately 25% of individuals with phaeochromocytoma (PCC) or paraganglioma (PGL); however, genetic heterogeneity for PCC/PGL means many genes may require sequencing. A phenotype-directed iterative approach may limit costs but may also delay diagnosis, and will not detect mutations in genes not previously associated with PCC/PGL. Objective To assess whether whole exome sequencing (WES) was efficient and sensitive for mutation detection in PCC/PGL. Methods Whole exome sequencing was performed on blinded samples from eleven individuals with PCC/PGL and known mutations. Illumina TruSeq™ (Illumina Inc, San Diego, CA, USA) was used for exome capture of seven samples, and NimbleGen SeqCap EZ v3.0 (Roche NimbleGen Inc, Basel, Switzerland) for five samples (one sample was repeated). Massively parallel sequencing was performed on multiplexed samples. Sequencing data were called using Genome Analysis Toolkit and annotated using ANNOVAR. Data were assessed for coding variants in RET, NF1, VHL, SDHD, SDHB, SDHC, SDHA, SDHAF2, KIF1B, TMEM127, EGLN1 and MAX. Target capture of five exome capture platforms was compared. Results Six of seven mutations were detected using Illumina TruSeq™ exome capture. All five mutations were detected using NimbleGen SeqCap EZ v3.0 platform, including the mutation missed using Illumina TruSeq™ capture. Target capture for exons in known PCC/PGL genes differs substantially between platforms. Exome sequencing was inexpensive (<$A800 per sample for reagents) and rapid (results <5 weeks from sample reception). Conclusion Whole exome sequencing is sensitive, rapid and efficient for detection of PCC/PGL germline mutations. However, capture platform selection is critical to maximize sensitivity.
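The study's actual pipeline is not shown; the sketch below (the file name, column names and the pandas-based approach are assumptions, not the authors' scripts) only illustrates the final filtering step of restricting an annotated variant table to coding variants in the listed PCC/PGL genes.

```python
import pandas as pd

# Hypothetical post-annotation filtering step: keep coding/splicing variants
# that fall in the known PCC/PGL genes named in the abstract.
PCC_PGL_GENES = {"RET", "NF1", "VHL", "SDHD", "SDHB", "SDHC", "SDHA",
                 "SDHAF2", "KIF1B", "TMEM127", "EGLN1", "MAX"}

variants = pd.read_csv("annotated_variants.tsv", sep="\t")   # hypothetical ANNOVAR-style export
coding = variants[variants["func"].isin({"exonic", "splicing"})]  # column names are illustrative
candidates = coding[coding["gene"].isin(PCC_PGL_GENES)]
print(candidates[["gene", "func", "exonic_func", "aa_change"]])
```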
Abstract:
Japanese encephalitis (JE) is the most common cause of viral encephalitis and an important public health concern in the Asia-Pacific region, particularly in China where 50% of global cases are notified. To explore the association between environmental factors and human JE cases and identify the high risk areas for JE transmission in China, we used annual notified data on JE cases at the center of administrative township and environmental variables with a pixel resolution of 1 km×1 km from 2005 to 2011 to construct models using ecological niche modeling (ENM) approaches based on maximum entropy. These models were then validated by overlaying reported human JE case localities from 2006 to 2012 onto each prediction map. ENMs had good discriminatory ability with the area under the curve (AUC) of the receiver operating curve (ROC) of 0.82-0.91, and low extrinsic omission rate of 5.44-7.42%. Resulting maps showed JE being presented extensively throughout southwestern and central China, with local spatial variations in probability influenced by minimum temperatures, human population density, mean temperatures, and elevation, with contribution of 17.94%-38.37%, 15.47%-21.82%, 3.86%-21.22%, and 12.05%-16.02%, respectively. Approximately 60% of JE cases occurred in predicted high risk areas, which covered less than 6% of areas in mainland China. Our findings will help inform optimal geographical allocation of the limited resources available for JE prevention and control in China, find hidden high-risk areas, and increase the effectiveness of public health interventions against JE transmission.
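As an illustration of the validation metric only (the data below are synthetic, not the study's case localities), the AUC of the ROC curve measures how well predicted suitability scores separate presence localities from background points; a hedged sklearn sketch:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic example: suitability scores for presumed JE case localities (1)
# versus background points (0); AUC near 1 means good discrimination.
rng = np.random.default_rng(0)
labels = np.concatenate([np.ones(200), np.zeros(800)])
scores = np.concatenate([rng.beta(4, 2, 200), rng.beta(2, 4, 800)])  # made-up model outputs
print("AUC:", round(roc_auc_score(labels, scores), 3))
```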
Abstract:
Spontaneous emission (SE) of a quantum emitter depends mainly on the transition strength between the upper and lower energy levels as well as the local density of states (LDOS) [1]. When a QD is placed near a plasmon waveguide, the LDOS of the QD is increased due to the addition of a non-radiative decay channel and a plasmonic decay channel to free-space emission [2-4]. The slow velocity and dramatic concentration of the electric field of the plasmon can capture the majority of the SE into the guided plasmon mode (Γpl). This paper focuses on studying the effect of waveguide height on the efficiency of coupling QD decay into the plasmon mode using a numerical model based on the finite element method (FEM). The symmetric gap waveguide considered in this paper supports a single mode, and the QD is modelled as a dipole emitter. 2D simulation models are used to find the normalized Γpl, and 3D models are used to find the probability of SE decaying into the plasmon mode (β), including all three decay channels. It is found that changing the gap height can increase QD-plasmon coupling by up to a factor of 5, and for an optimally placed QD by up to a factor of 8. To make the paper more realistic, we briefly studied the effect of the sharpness of the waveguide edge on SE emission into the guided plasmon mode. Preliminary nano-gap waveguide fabrication and testing are already underway. The authors expect to compare the theoretical results with experimental outcomes in the future.
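The abstract does not state the formula, but the quantity β it describes is conventionally defined as the fraction of the total decay rate captured by the guided plasmon channel; in standard notation (ours, not quoted from the paper):

```latex
% Conventional beta-factor definition, consistent with the three decay
% channels (plasmonic, radiative, non-radiative) mentioned in the abstract.
\[
\beta = \frac{\Gamma_{\mathrm{pl}}}{\Gamma_{\mathrm{pl}} + \Gamma_{\mathrm{rad}} + \Gamma_{\mathrm{nr}}},
\qquad
\tilde{\Gamma}_{\mathrm{pl}} = \frac{\Gamma_{\mathrm{pl}}}{\Gamma_{0}},
\]
% where Gamma_0 is the emitter's free-space spontaneous emission rate and
% tilde-Gamma_pl is the normalized plasmonic decay rate from the 2D models.
```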
Abstract:
Messenger RNAs (mRNAs) can be repressed and degraded by small non-coding RNA molecules. In this paper, we formulate a coarse-grained Markov-chain description of the post-transcriptional regulation of mRNAs by either small interfering RNAs (siRNAs) or microRNAs (miRNAs). We calculate the probability of an mRNA escaping from its domain before it is repressed by siRNAs/miRNAs via calculation of the mean time to threshold: when the number of bound siRNAs/miRNAs exceeds a certain threshold value, the mRNA is irreversibly repressed. In some cases, the analysis can be reduced to counting certain paths in a reduced Markov model. We obtain explicit expressions when the small RNAs bind irreversibly to the mRNA, and we also discuss the reversible binding case. We apply our models to the study of RNA interference in the nucleus, examining the probability of mRNAs escaping via small nuclear pores before being degraded by siRNAs. Using the same modelling framework, we further investigate the effect of small, decoy RNAs (decoys) on the process of post-transcriptional regulation by studying regulation of the tumor suppressor gene PTEN: decoys are able to block binding sites on PTEN mRNAs, thereby reducing the number of sites available to siRNAs/miRNAs and helping to protect them from repression. We calculate the probability of a cytoplasmic PTEN mRNA translocating to the endoplasmic reticulum before being repressed by miRNAs. We support our results with stochastic simulations.
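A toy Monte Carlo version of the escape-versus-repression question (the rates, threshold and irreversible-binding assumption below are illustrative choices, not the paper's parameters):

```python
import random

# An mRNA either escapes its domain at rate k_esc or binds another small RNA
# at rate k_bind; it is irreversibly repressed once the number of bound
# siRNAs/miRNAs reaches the threshold.
def p_escape(k_esc=0.5, k_bind=1.0, threshold=3, n_runs=100_000, seed=42):
    rng = random.Random(seed)
    escapes = 0
    for _ in range(n_runs):
        bound = 0
        while bound < threshold:
            # next event: escape with prob k_esc/(k_esc+k_bind), else one more binding
            if rng.random() < k_esc / (k_esc + k_bind):
                escapes += 1
                break
            bound += 1
    return escapes / n_runs

# With constant rates this should approach 1 - (k_bind / (k_bind + k_esc)) ** threshold.
print(p_escape())
```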
Abstract:
In this paper the issue of finding uncertainty intervals for queries in a Bayesian Network is reconsidered. The investigation focuses on Bayesian Nets with discrete nodes and finite populations. An earlier asymptotic approach is compared with a simulation-based approach, together with further alternatives, one based on a single sample of the Bayesian Net of a particular finite population size, and another which uses expected population sizes together with exact probabilities. We conclude that a query of a Bayesian Net should be expressed as a probability embedded in an uncertainty interval. Based on an investigation of two Bayesian Net structures, the preferred method is the simulation method. However, both the single sample method and the expected sample size methods may be useful and are simpler to compute. Any method at all is more useful than none, when assessing a Bayesian Net under development, or when drawing conclusions from an ‘expert’ system.
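A minimal sketch of the simulation-based interval idea, assuming a made-up two-node network and population size (not the structures investigated in the paper): repeatedly generate finite populations, compute the query in each, and report a percentile interval around the exact probability.

```python
import numpy as np

# Tiny two-node net A -> B with made-up probabilities; the query is P(A=1 | B=1).
p_a = 0.3
p_b_given_a = {1: 0.8, 0: 0.1}
exact = p_a * 0.8 / (p_a * 0.8 + (1 - p_a) * 0.1)

def sampled_query(pop_size, rng):
    a = rng.random(pop_size) < p_a
    b = rng.random(pop_size) < np.where(a, p_b_given_a[1], p_b_given_a[0])
    return (a & b).sum() / b.sum() if b.sum() > 0 else np.nan

rng = np.random.default_rng(7)
queries = np.array([sampled_query(200, rng) for _ in range(2000)])
lo, hi = np.nanpercentile(queries, [2.5, 97.5])
print(f"exact P(A|B) = {exact:.3f}, 95% interval for a population of 200: ({lo:.3f}, {hi:.3f})")
```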
Abstract:
Change point estimation is recognized as an essential tool of root cause analyses within quality control programs, as it enables clinical experts to search for potential causes of change in hospital outcomes more effectively. In this paper, we consider estimation of the time when a linear trend disturbance has occurred in survival time following an in-control clinical intervention in the presence of variable patient mix. To model the process and change point, a linear trend in the survival time of patients who underwent cardiac surgery is formulated using hierarchical models in a Bayesian framework. The data are right censored since the monitoring is conducted over a limited follow-up period. We capture the effect of risk factors prior to the surgery using a Weibull accelerated failure time regression model. We use Markov Chain Monte Carlo to obtain posterior distributions of the change point parameters, including the location and the slope size of the trend, and also corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when the estimator is used in conjunction with risk-adjusted survival time cumulative sum (CUSUM) control charts for different trend scenarios. In comparison with the alternatives, a step change point model and the built-in CUSUM estimator, more accurate and precise estimates are obtained by the proposed Bayesian estimator over linear trends. These superiorities are enhanced when probability quantification, flexibility and generalizability of the Bayesian change point detection model are also considered.
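One standard way to write down the kind of risk-adjusted change point model described above, in our own notation (the paper's exact parameterisation may differ):

```latex
% Weibull accelerated failure time regression with a linear trend of slope
% delta beginning at an unknown change point tau; notation is ours.
\[
\log T_i = \mathbf{x}_i^{\top}\boldsymbol{\beta}
           + \delta\,(t_i - \tau)^{+}
           + \sigma\,\varepsilon_i ,
\qquad \varepsilon_i \sim \text{standard extreme value},
\]
% where (u)^+ = max(u, 0), x_i are the pre-surgery risk factors, t_i is the
% time of the i-th surgery, and right censoring enters through the usual
% censored Weibull likelihood.
```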
Abstract:
Derailments are a significant cost to the Australian sugar industry, with damage to rail infrastructure and rolling stock in excess of $2M per annum. Many factors can contribute to cane rail derailments. The more prevalent factors are discussed. Derailment statistics on likely causes for cane rail derailments are presented, with the case of empty wagons on the main line being the highest contributor to business cost. Historically, the lateral to vertical wheel load ratio, termed the derailment ratio, has been used to indicate the derailment probability of rolling stock. When the derailment ratio reaches the Nadal limit of 0.81 for cane rail operations, there is a high probability that a derailment will occur. Contributing factors for derailments include the operating forces, the geometric variables of the rolling stock and the geometric deviations of the railway track. Combined, these have the capacity to affect the risk of derailment for a cane rail transport operating system. The derailment type responsible for creating the most damage to assets and creating mill stops is the flange climb derailment, as these derailments usually occur at speed with a full rake of empty wagons. The typical forces that contribute to the flange climb derailment case for cane rail operations are analysed and a practical derailment model is developed to enable operators to better appreciate the most significant contributing factors to this type of derailment. The paper aims to: (a) improve awareness of the significance of physical operating parameters so that these principles can be included in locomotive driver training and (b) improve awareness of track and wagon variables related to the risk of derailment so that maintainers of the rail system can allocate funds for maintenance more effectively.
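The Nadal limit quoted above comes from the classical single-wheel flange-climb criterion; in standard notation (the specific flange angle and friction coefficient used for cane rail operations are not given in the abstract):

```latex
% Classical Nadal flange-climb criterion assumed to underlie the quoted
% derailment-ratio limit of 0.81.
\[
\left(\frac{L}{V}\right)_{\lim} \;=\; \frac{\tan\delta - \mu}{1 + \mu\,\tan\delta},
\]
% where L and V are the lateral and vertical wheel loads, delta is the maximum
% wheel-flange contact angle and mu is the wheel-rail friction coefficient;
% flange climb becomes likely once the measured L/V ratio approaches this limit.
```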
Abstract:
In continuum one-dimensional space, a coupled directed continuous time random walk model is proposed, in which the random walker jumps in one direction and the waiting time between jumps affects the subsequent jump. In the proposed model, the Laplace-Laplace transform of the probability density function P(x,t) of finding the walker at position x at time t is completely determined by the Laplace transform of the probability density function φ(t) of the waiting time. In terms of the probability density function of the waiting time in the Laplace domain, the limit distribution of the random process and the corresponding evolving equations are derived.
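For orientation only, the classical decoupled one-sided CTRW obeys the Montroll-Weiss relation below (our notation, not the paper's coupled model, which makes the jump depend on the preceding waiting time):

```latex
% Montroll-Weiss relation for a decoupled one-sided CTRW in Laplace-Laplace space.
\[
\widehat{P}(k,s) \;=\; \frac{1-\widehat{\varphi}(s)}{s}\,
\frac{1}{1-\widehat{\varphi}(s)\,\widehat{\lambda}(k)},
\]
% where hats denote Laplace transforms, \widehat{\varphi}(s) is the
% waiting-time density in the temporal Laplace variable and
% \widehat{\lambda}(k) is the jump-length density in the spatial Laplace
% variable; in a coupled model the product \widehat{\varphi}(s)\widehat{\lambda}(k)
% is replaced by a joint transform of the waiting-time-dependent jump density.
```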
Abstract:
This paper presents stylized models for conducting performance analysis of a manufacturing supply chain network (SCN) in a stochastic setting with batch ordering. We use queueing models to capture the behavior of the SCN. The analysis is combined with an inventory optimization model, which can be used for designing inventory policies. In the first case, we model one manufacturer with one warehouse, which supplies various retailers. We determine the optimal inventory level at the warehouse that minimizes the total expected cost of carrying inventory, the backorder cost associated with serving orders in the backlog queue, and the ordering cost. In the second model, we impose a service level constraint in terms of fill rate (the probability that an order is filled from stock at the warehouse), assuming that customers do not balk from the system. We present several numerical examples to illustrate the model and its various features. In the third case, we extend the model to a three-echelon inventory model which explicitly considers the logistics process.
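A toy simulation of the order fill rate used as the service measure in the second model (the base-stock level, demand rate and lead time below are hypothetical, and the base-stock policy is a simplifying assumption, not the paper's exact policy):

```python
import numpy as np

# Under a base-stock policy with level S and a fixed replenishment lead time,
# the order arriving in a period is filled from stock only if net inventory
# covers it; the fill rate is the long-run fraction of such orders.
def estimate_fill_rate(S=25, mean_demand=4.0, lead_time=5, n_periods=50_000, seed=3):
    rng = np.random.default_rng(seed)
    demand = rng.poisson(mean_demand, n_periods)
    filled = 0
    for t in range(lead_time, n_periods):
        # net inventory (on hand minus backorders) at the start of period t
        # equals S minus the demand still in the replenishment pipeline
        net_inventory = S - demand[t - lead_time:t].sum()
        if net_inventory >= demand[t]:
            filled += 1
    return filled / (n_periods - lead_time)

print(f"estimated order fill rate: {estimate_fill_rate():.3f}")
```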
Abstract:
This article presents the results of probabilistic seismic hazard analysis (PSHA) for Bangalore, South India. Analyses have been carried out considering the seismotectonic parameters of the region covering a radius of 350 km keeping Bangalore as the center. Seismic hazard parameter 'b' has been evaluated considering the available earthquake data using (1) Gutenberg-Richter (G-R) relationship and (2) Kijko and Sellevoll (1989, 1992) method utilizing extreme and complete catalogs. The 'b' parameter was estimated to be 0.62 to 0.98 from G-R relation and 0.87 ± 0.03 from Kijko and Sellevoll method. The results obtained are a little higher than the 'b' values published earlier for southern India. Further, probabilistic seismic hazard analysis for Bangalore region has been carried out considering six seismogenic sources. From the analysis, mean annual rate of exceedance and cumulative probability hazard curve for peak ground acceleration (PGA) and spectral acceleration (Sa) have been generated. The quantified hazard values in terms of the rock level peak ground acceleration (PGA) are mapped for 10% probability of exceedance in 50 years on a grid size of 0.5 km x 0.5 km. In addition, Uniform Hazard Response Spectrum (UHRS) at rock level is also developed for the 5% damping corresponding to 10% probability of exceedance in 50 years. The peak ground acceleration (PGA) value of 0.121 g obtained from the present investigation is slightly lower (but comparable) than the PGA values obtained from the deterministic seismic hazard analysis (DSHA) for the same area. However, the PGA value obtained in the current investigation is higher than PGA values reported in the global seismic hazard assessment program (GSHAP) maps of Bhatia et al. (1999) for the shield area.
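Two standard relations behind the numbers quoted above (our restatement, not text from the paper): the Gutenberg-Richter recurrence law that defines 'b', and the Poisson link between the 10%-in-50-years criterion and its mean return period:

```latex
% Gutenberg-Richter recurrence law and the return period implied by a 10%
% probability of exceedance in a 50-year design life.
\[
\log_{10} N(M) = a - bM,
\qquad
P_{\mathrm{exc}} = 1 - e^{-\lambda T}
\;\Rightarrow\;
\lambda = -\frac{\ln(1-0.10)}{50\ \mathrm{yr}} \approx \frac{1}{475\ \mathrm{yr}},
\]
% so the PGA mapped for 10% probability of exceedance in 50 years corresponds
% to a mean return period of roughly 475 years.
```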
Abstract:
Using surface charts at 0330 GMT, the movement of the monsoon trough during the months June to September 1990 at two fixed longitudes, namely 79 degrees E and 85 degrees E, is studied. The probability distribution of trough position shows that the median, mean and mode occur at progressively more northern latitudes, especially at 85 degrees E, with a pronounced mode that is close to the northern-most limit reached by the trough. A spectral analysis of the fluctuating latitudinal position of the trough is carried out using FFT and the Maximum Entropy Method (MEM). Both methods show significant peaks around 7.5 and 2.6 days, and a less significant one around 40-50 days. The two peaks at the shorter period are more prominent at the eastern longitude. MEM shows an additional peak around 15 days. A study of the weather systems that occurred during the season shows them to have a duration around 3 days and an interval between systems of around 9 days, suggesting a possible correlation with the dominant short periods observed in the spectrum of trough position.
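Illustrative only (the series below is synthetic, not the 1990 trough record): a minimal FFT periodogram of a daily latitude series, of the kind used to look for the reported peaks near 7.5 and 2.6 days:

```python
import numpy as np

# Synthetic daily trough-latitude record with embedded 7.5-day and 2.6-day
# oscillations plus noise; the periodogram should recover those periods.
rng = np.random.default_rng(1)
days = np.arange(122)                                   # June-September: ~122 daily positions
latitude = (22.0 + 1.5 * np.sin(2 * np.pi * days / 7.5)
                 + 0.8 * np.sin(2 * np.pi * days / 2.6)
                 + rng.normal(0, 0.5, days.size))        # degrees north, made up

detrended = latitude - latitude.mean()
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(days.size, d=1.0)                # cycles per day
top = freqs[np.argsort(power)[::-1][:3]]                 # three strongest frequencies
print("dominant periods (days):", np.round(1.0 / top[top > 0], 1))
```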
Abstract:
In this paper, we first recast the generalized symmetric eigenvalue problem, where the underlying matrix pencil consists of symmetric positive definite matrices, into an unconstrained minimization problem by constructing an appropriate cost function. We then extend it to the case of multiple eigenvectors using an inflation technique. Based on this asymptotic formulation, we derive a quasi-Newton-based adaptive algorithm for estimating the required generalized eigenvectors in the data case. The resulting algorithm is modular and parallel, and it is globally convergent with probability one. We also analyze the effect of inexact inflation on the convergence of this algorithm and that of inexact knowledge of one of the matrices (in the pencil) on the resulting eigenstructure. Simulation results demonstrate that the performance of this algorithm is almost identical to that of the rank-one updating algorithm of Karasalo. Further, the performance of the proposed algorithm has been found to remain stable even over 1 million updates without suffering from any error accumulation problems.
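One common unconstrained reformulation for such a pencil (the paper's exact cost function and inflation terms may differ) is minimisation of the generalized Rayleigh quotient:

```latex
% For the symmetric positive definite pencil (A, B), minimising
\[
J(\mathbf{w}) \;=\; \frac{\mathbf{w}^{\top} A\, \mathbf{w}}{\mathbf{w}^{\top} B\, \mathbf{w}},
\qquad \mathbf{w} \neq \mathbf{0},
\]
% yields the eigenvector of the smallest generalized eigenvalue of
% A w = lambda B w; further eigenvectors are then obtained by inflating the
% cost so that previously found directions no longer minimise it.
```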
Abstract:
This paper presents a chance-constrained linear programming formulation for reservoir operation of a multipurpose reservoir. The release policy is defined by a chance constraint that the probability of the irrigation release in any period equalling or exceeding the irrigation demand is at least equal to a specified value P (called the reliability level). The model determines the maximum annual hydropower produced while meeting the irrigation demand at a specified reliability level. The model considers variation in reservoir water level elevation and also the operating range within which the turbine operates. A linear approximation of the nonlinear power production function is assumed, and the solution is obtained within a specified tolerance limit. The inflow into the reservoir is considered random. The chance constraint is converted into its deterministic equivalent using a linear decision rule and the inflow probability distribution. The model application is demonstrated through a case study.
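A generic form of the conversion described above, in our own notation (the paper's linear decision rule may differ in detail):

```latex
% If the release in period t follows a simple linear decision rule
% R_t = Q_t + b_t in the random inflow Q_t with deterministic decision b_t, then
\[
\Pr\!\left(R_t \ge D_t\right) \ge P
\;\Longleftrightarrow\;
\Pr\!\left(Q_t \ge D_t - b_t\right) \ge P
\;\Longleftrightarrow\;
b_t \;\ge\; D_t - F_{Q_t}^{-1}(1 - P),
\]
% where D_t is the irrigation demand and F_{Q_t} is the cumulative distribution
% function of the inflow, so the chance constraint is replaced by a single
% linear deterministic constraint.
```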