528 results for PROBABILITY


Relevance:

10.00%

Publisher:

Abstract:

Objective: National guidelines for the management of intermediate-risk patients with suspected acute coronary syndrome, in whom acute myocardial infarction (AMI) has been excluded, advocate provocative testing for final risk stratification of these patients into low risk (negative testing) or high risk (positive testing, suggestive of unstable angina). Adults aged less than 40 years have a low pretest probability of acute coronary syndrome. The utility of exercise stress testing in young adults with chest pain suspected of acute coronary syndrome who have National Heart Foundation intermediate-risk features was evaluated. Methods: A retrospective analysis was conducted of exercise stress tests performed on patients aged less than 40 years. Patients were enrolled on a chest pain pathway and had negative serial ECGs and cardiac biomarkers before exercise stress testing to rule out acute coronary syndrome. Chart review was completed on patients with positive stress tests. Results: In total, 3987 patients with suspected intermediate-risk acute coronary syndrome underwent exercise stress testing. One thousand and twenty-seven (25.8%) were aged less than 40 years (age 33.3 ± 4.8 years). Four of these 1027 patients had a positive exercise stress test (0.4% incidence of positive exercise stress testing). Of those, three patients had subsequent non-invasive functional testing that yielded a negative result. One patient declined further investigations. Assuming this was a true positive exercise stress test, the incidence of true positive exercise stress testing would have been 0.097% (95% confidence interval: 0.079–0.115%) (one of 1027 patients). Conclusions: Routine exercise stress testing has limited value in the risk stratification of adults aged less than 40 years with suspected intermediate-risk acute coronary syndrome.
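As a quick check on the arithmetic, the incidence and an exact binomial interval can be computed as below; this Clopper-Pearson interval is a generic illustration and will not reproduce the abstract's quoted interval, which was presumably derived by another method.

```python
# Incidence of a true positive stress test with an exact (Clopper-Pearson)
# binomial confidence interval. Generic illustration only; it does not
# reproduce the interval quoted in the abstract.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided CI for a binomial proportion with k successes in n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

k, n = 1, 1027                      # one presumed true positive of 1027 patients
lo, hi = clopper_pearson(k, n)
print(f"incidence = {k / n:.3%}, 95% CI = ({lo:.3%}, {hi:.3%})")
```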

Relevance:

10.00%

Publisher:

Abstract:

Spontaneous emission (SE) of a quantum emitter depends mainly on the transition strength between the upper and lower energy levels as well as the local density of states (LDOS) [1]. When a quantum dot (QD) is placed near a plasmonic waveguide, the LDOS at the QD is increased due to the addition of a non-radiative decay channel and a plasmonic decay channel to the free-space emission [2-4]. The slow velocity and dramatic concentration of the electric field of the plasmon can capture the majority of the SE into the guided plasmon mode (Γpl). This paper focuses on the effect of waveguide height on the efficiency of coupling QD decay into the plasmon mode, using a numerical model based on the finite element method (FEM). The symmetric gap waveguide considered in this paper supports a single mode, and the QD is treated as a dipole emitter. 2D simulation models are used to find the normalized Γpl, and 3D models are used to find the probability of SE decaying into the plasmon mode (β), including all three decay channels. It is found that changing the gap height can increase QD-plasmon coupling by up to a factor of 5, and optimal placement of the QD by up to a factor of 8. To make the analysis more realistic, we briefly study the effect of the sharpness of the waveguide edge on SE emission into the guided plasmon mode. Preliminary nano-gap waveguide fabrication and testing are already underway. The authors expect to compare the theoretical results with experimental outcomes in the future.
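Assuming the conventional definition used in QD-plasmon coupling studies (the abstract does not state it explicitly), the β factor is the fraction of the total decay rate captured by the guided plasmon channel:

```latex
% Beta factor: fraction of spontaneous emission captured by the guided
% plasmon mode, out of the three decay channels named in the abstract
% (radiative free-space, non-radiative, and plasmonic).
\beta = \frac{\Gamma_{\mathrm{pl}}}{\Gamma_{\mathrm{pl}} + \Gamma_{\mathrm{rad}} + \Gamma_{\mathrm{nr}}}
```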

Relevance:

10.00%

Publisher:

Abstract:

Messenger RNAs (mRNAs) can be repressed and degraded by small non-coding RNA molecules. In this paper, we formulate a coarse-grained Markov-chain description of the post-transcriptional regulation of mRNAs by either small interfering RNAs (siRNAs) or microRNAs (miRNAs). We calculate the probability of an mRNA escaping from its domain before it is repressed by siRNAs/miRNAs via calculation of the mean time to threshold: when the number of bound siRNAs/miRNAs exceeds a certain threshold value, the mRNA is irreversibly repressed. In some cases, the analysis can be reduced to counting certain paths in a reduced Markov model. We obtain explicit expressions when the small RNAs bind irreversibly to the mRNA, and we also discuss the reversible binding case. We apply our models to the study of RNA interference in the nucleus, examining the probability of mRNAs escaping via small nuclear pores before being degraded by siRNAs. Using the same modelling framework, we further investigate the effect of small, decoy RNAs (decoys) on the process of post-transcriptional regulation by studying regulation of the tumor suppressor gene PTEN: decoys are able to block binding sites on PTEN mRNAs, thereby reducing the number of sites available to siRNAs/miRNAs and helping to protect it from repression. We calculate the probability of a cytoplasmic PTEN mRNA translocating to the endoplasmic reticulum before being repressed by miRNAs. We support our results with stochastic simulations.
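As a toy instance of this kind of calculation (the rates, site count, and threshold below are invented, not the paper's), with irreversible binding the mRNA escapes only if the escape event wins the race against binding at some step before the threshold is reached:

```python
# Escape-before-repression probability for a linear Markov chain with
# irreversible small-RNA binding: from state n the mRNA escapes at rate mu
# or gains a bound small RNA at rate lam(n); repression occurs at threshold N.
# All rates here are hypothetical illustration values.

def escape_probability(mu, lam, N):
    """P(escape before the bound count reaches N), irreversible binding."""
    p_repressed = 1.0
    for n in range(N):
        p_repressed *= lam(n) / (lam(n) + mu)  # binding wins the race at step n
    return 1.0 - p_repressed

mu = 0.5                                  # escape rate through a nuclear pore
lam = lambda n: 2.0 * (10 - n) / 10       # binding rate falls as sites fill (10 sites)
print(escape_probability(mu, lam, N=4))
```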

Relevance:

10.00%

Publisher:

Abstract:

In this paper, the issue of finding uncertainty intervals for queries in a Bayesian Net is reconsidered. The investigation focuses on Bayesian Nets with discrete nodes and finite populations. An earlier asymptotic approach is compared with a simulation-based approach, together with further alternatives: one based on a single sample of the Bayesian Net at a particular finite population size, and another which uses expected population sizes together with exact probabilities. We conclude that a query of a Bayesian Net should be expressed as a probability embedded in an uncertainty interval. Based on an investigation of two Bayesian Net structures, the preferred method is the simulation method. However, both the single sample method and the expected sample size method may be useful and are simpler to compute. Any method at all is more useful than none when assessing a Bayesian Net under development, or when drawing conclusions from an 'expert' system.
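A minimal sketch of the simulation-based approach on an invented two-node net (structure, probabilities, and population size are illustrative only): draw many finite populations from the net, evaluate the query frequency in each, and take percentiles as the uncertainty interval.

```python
# Simulation-based uncertainty interval for a Bayesian Net query.
# Toy net: A -> B with P(A)=0.3, P(B|A)=0.9, P(B|~A)=0.2; query P(A|B).
# Net structure, probabilities, and population size are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def sample_query(n):
    a = rng.random(n) < 0.3
    b = rng.random(n) < np.where(a, 0.9, 0.2)
    if b.sum() == 0:
        return np.nan
    return (a & b).sum() / b.sum()          # empirical P(A | B) in this population

estimates = np.array([sample_query(n=200) for _ in range(5000)])
lo, hi = np.nanpercentile(estimates, [2.5, 97.5])
print(f"query interval for P(A|B): ({lo:.3f}, {hi:.3f})")
```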

Relevance:

10.00%

Publisher:

Abstract:

Change point estimation is recognized as an essential tool in root cause analyses within quality control programs, as it enables clinical experts to search for potential causes of change in hospital outcomes more effectively. In this paper, we consider estimation of the time when a linear trend disturbance has occurred in survival time following an in-control clinical intervention in the presence of variable patient mix. To model the process and change point, a linear trend in the survival time of patients who underwent cardiac surgery is formulated using hierarchical models in a Bayesian framework. The data are right censored since the monitoring is conducted over a limited follow-up period. We capture the effect of risk factors prior to the surgery using a Weibull accelerated failure time regression model. We use Markov chain Monte Carlo to obtain posterior distributions of the change point parameters, including the location and the slope of the trend, and also corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when the estimator is used in conjunction with risk-adjusted survival time cumulative sum (CUSUM) control charts for different trend scenarios. In comparison with the alternatives, a step change point model and the built-in CUSUM estimator, the proposed Bayesian estimator yields more accurate and precise estimates over linear trends. These advantages are enhanced when the probability quantification, flexibility, and generalizability of the Bayesian change point detection model are also considered.
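For reference, a minimal sketch of the right-censored Weibull accelerated failure time log-likelihood that such a model builds on (parameterization and names are illustrative; the paper's hierarchical change point structure sits on top of this):

```python
# Right-censored Weibull AFT log-likelihood: scale_i = exp(x_i @ beta),
# shape k shared; delta_i = 1 for an observed event, 0 for censoring.
# Parameterization and variable names are illustrative.
import numpy as np

def weibull_aft_loglik(beta, log_k, X, t, delta):
    k = np.exp(log_k)
    lam = np.exp(X @ beta)                  # accelerated-failure-time scale
    z = (t / lam) ** k
    log_f = np.log(k / lam) + (k - 1) * np.log(t / lam) - z   # log density
    log_S = -z                                                # log survival
    return np.sum(delta * log_f + (1 - delta) * log_S)
```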

Relevance:

10.00%

Publisher:

Abstract:

Derailments are a significant cost to the Australian sugar industry, with damage to rail infrastructure and rolling stock in excess of $2M per annum. Many factors can contribute to cane rail derailments; the more prevalent factors are discussed. Derailment statistics on likely causes of cane rail derailments are presented, with the case of empty wagons on the main line being the highest contributor to business cost. Historically, the lateral-to-vertical wheel load ratio, termed the derailment ratio, has been used to indicate the derailment probability of rolling stock. When the derailment ratio reaches the Nadal limit of 0.81 for cane rail operations, there is a high probability that a derailment will occur. Contributing factors for derailments include the operating forces, the geometric variables of the rolling stock and the geometric deviations of the railway track. Combined, these have the capacity to affect the risk of derailment for a cane rail transport operating system. The derailment type responsible for the most damage to assets and for mill stops is the flange climb derailment, as these derailments usually occur at speed with a full rake of empty wagons. The typical forces that contribute to the flange climb derailment case for cane rail operations are analysed, and a practical derailment model is developed to enable operators to better appreciate the most significant contributing factors to this type of derailment. The paper aims to: (a) improve awareness of the significance of physical operating parameters so that these principles can be included in locomotive driver training; and (b) improve awareness of track and wagon variables related to the risk of derailment so that maintainers of the rail system can allocate funds for maintenance more effectively.
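For context, the quoted Nadal limit derives from the classical quasi-static balance of forces at the flange contact, with flange contact angle δ and wheel-rail friction coefficient μ (the paper's practical model itself is not reproduced here):

```latex
% Nadal's limiting lateral-to-vertical wheel load ratio for flange climb:
% delta is the flange contact angle, mu the wheel-rail friction coefficient.
\frac{L}{V} = \frac{\tan\delta - \mu}{1 + \mu\tan\delta}
```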

Relevance:

10.00%

Publisher:

Abstract:

In continuous one-dimensional space, a coupled directed continuous time random walk model is proposed, in which the random walker jumps in one direction and the waiting time between jumps affects the subsequent jump. In the proposed model, the Laplace-Laplace transform of the probability density function P(x,t) of finding the walker at position x at time t is completely determined by the Laplace transform of the probability density function φ(t) of the waiting time. In terms of the probability density function of the waiting time in the Laplace domain, the limit distribution of the random process and the corresponding evolving equations are derived.
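For context (this is the textbook relation rather than the paper's specific result), the Montroll-Weiss equation for a wait-first coupled CTRW with joint jump-length/waiting-time density ψ(x,t) reads, in transform space:

```latex
% Montroll-Weiss relation for a coupled CTRW (wait-first convention):
% psi(x,t) is the joint jump/waiting density, and phi-hat(s) = psi-hat(k=0, s)
% is the marginal waiting-time density in Laplace space.
\hat{P}(k,s) = \frac{1 - \hat{\varphi}(s)}{s}\,\frac{1}{1 - \hat{\psi}(k,s)}
```

The abstract's statement that P(x,t) is completely determined by the waiting-time density suggests that, in the proposed model, the coupling ties each jump to the preceding waiting time, so that ψ̂(k,s) collapses to a functional of φ̂(s) alone.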

Relevance:

10.00%

Publisher:

Abstract:

Poor compliance with speed limits is a serious safety concern at roadworks. While considerable research has been undertaken worldwide to understand drivers' speeding behaviour at roadworks and to identify treatments for improving compliance with speed limits, little is known about the speeding behaviour of drivers at Australian roadworks and how their compliance rates with speed limits could be improved. This paper presents findings from two Queensland studies targeted at (1) examining drivers' speed profiles at three long-term roadwork sites, and (2) understanding the effectiveness of speed control treatments at roadworks. The first study analysed driver speeds at various locations in the sites using a Tobit regression model. Results show that the probability of speeding was higher for light vehicles and their followers, for leaders of platoons with larger front gaps, during late afternoon and early morning, when higher proportions of surrounding vehicles were speeding, and upstream of work areas. The second study provided a comprehensive understanding of the effectiveness of various speed control treatments used at roadworks by undertaking a critical review of the literature. Results showed that enforcement has the greatest effect on reducing speeds among all treatments, while roadwork signage and information-related treatments have small to moderate effects on speed reduction. Findings from the studies have potential for designing programs to effectively improve speed limit compliance at Australian roadworks.
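A minimal sketch of a Tobit (censored regression) likelihood of the kind such an analysis fits; here the latent speed is treated as left-censored at a floor c, and the design matrix, censoring point, and parameterization are illustrative rather than the paper's:

```python
# Left-censored Tobit negative log-likelihood: the latent speed y* = X @ beta + e,
# e ~ N(0, sigma^2), is observed only when above the censoring point c.
# Variable names and the censoring setup are illustrative.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def tobit_nll(params, X, y, c):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    mu = X @ beta
    censored = y <= c
    ll = np.where(censored,
                  stats.norm.logcdf((c - mu) / sigma),       # mass at/below the floor
                  stats.norm.logpdf(y, loc=mu, scale=sigma)) # observed speeds
    return -ll.sum()

# Fit by maximum likelihood once X, y, and c are defined:
# fit = minimize(tobit_nll, x0=np.zeros(X.shape[1] + 1), args=(X, y, c))
```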

Relevance:

10.00%

Publisher:

Abstract:

Ship seakeeping operability refers to the quantification of motion performance in waves relative to mission requirements. This is used to make decisions about preferred vessel designs, but it can also be used as a comprehensive assessment of the benefits of ship-motion-control systems. Traditionally, operability computation aggregates statistics of motion computed over the envelope of likely environmental conditions in order to determine a coefficient in the range from 0 to 1 called operability. When used for assessment of motion-control systems, the increase in operability is taken as the key performance indicator. The operability coefficient is often given the interpretation of the percentage of time operable. This paper considers an alternative probabilistic approach to this traditional computation of operability. It characterises operability not as a number to which a frequency interpretation is attached, but as a hypothesis that a vessel will attain the desired performance in one mission, considering the envelope of likely operational conditions. This enables the use of Bayesian theory to compute the probability that this hypothesis is true, conditional on data from simulations. Thus, the metric considered is the probability of operability. This formulation not only adheres to recent developments in reliability and risk analysis, but also allows incorporating into the analysis more accurate descriptions of ship-motion-control systems, since the analysis is not limited to linear ship responses in the frequency domain. The paper also discusses an extension of the approach to the case of assessment of increased levels of autonomy for unmanned marine craft.
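One simple way to realise such a computation is a conjugate Beta-Binomial sketch (not necessarily the paper's model): treat each simulated mission as a Bernoulli trial of the operability hypothesis and report the posterior over the success probability.

```python
# Posterior probability of operability from simulated missions under a
# Beta-Binomial model (uniform Beta(1,1) prior). Counts are illustrative.
from scipy.stats import beta

n_missions, n_ok = 200, 184        # simulated missions and successes (made up)
posterior = beta(1 + n_ok, 1 + (n_missions - n_ok))

print("posterior mean P(operable):", posterior.mean())
print("P(theta > 0.9 | data):", posterior.sf(0.9))   # chance operability exceeds 0.9
```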

Relevance:

10.00%

Publisher:

Abstract:

We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments of the future length, given an individual's previous length measurement and time at liberty, are then derived. We moment-match the true conditional distributions with skew-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag-recapture data and tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
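A minimal sketch of how STM entries can be filled by integrating a skew-normal over destination size bins; for brevity it centres the skew-normal at a von Bertalanffy expectation rather than doing the paper's full moment matching, and all parameter values are invented:

```python
# Build a size-transition matrix by integrating a skew-normal distribution
# of future length over destination size bins. Parameters are illustrative.
import numpy as np
from scipy import stats

def vb_future_length(L, L_inf=210.0, k=1.2, dt=0.25):
    """Expected future length after dt years under von Bertalanffy growth."""
    return L + (L_inf - L) * (1.0 - np.exp(-k * dt))

def size_transition_matrix(bin_edges, sd=4.0, skew=3.0, dt=0.25):
    mids = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    stm = np.zeros((mids.size, mids.size))
    for i, L in enumerate(mids):
        dist = stats.skewnorm(skew, loc=vb_future_length(L, dt=dt), scale=sd)
        stm[i] = np.diff(dist.cdf(bin_edges))    # mass landing in each size class
        stm[i] /= stm[i].sum()                   # renormalise within modelled range
    return stm

edges = np.arange(100.0, 205.0, 5.0)             # 5 mm size classes (illustrative)
P = size_transition_matrix(edges)
```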

Relevance:

10.00%

Publisher:

Abstract:

The contemporary methodology for growth models of organisms is based on continuous trajectories, which hinders the modelling of stepwise growth in crustacean populations. Growth models for fish are normally assumed to follow a continuous function, but a different type of model is needed for crustacean growth. Crustaceans must moult in order to grow. The growth of crustaceans is a discontinuous process due to the periodic shedding of the exoskeleton in moulting. This stepwise growth through the moulting process makes growth estimation more complex. Stochastic approaches can be used to model discontinuous growth, or what are commonly known as "jumps" (Figure 1). However, a stochastic growth model must produce only positive jumps. In view of this, we will introduce a subordinator, which is a special case of a Lévy process. A subordinator is a non-decreasing Lévy process, and it assists in modelling crustacean growth for a better understanding of individual variability and stochasticity in moulting periods and increments. We develop the estimation methods for parameter estimation and illustrate them with a dataset from laboratory experiments. The motivating dataset is from the ornate rock lobster, Panulirus ornatus, which is found between Australia and Papua New Guinea. Due to the presence of sex effects on growth (Munday et al., 2004), we estimate the growth parameters separately for each sex. Since all hard parts are shed at moulting, exact age determination of a lobster can be challenging. However, the growth parameters for the moult process can be estimated from tank data through: (i) inter-moult periods, and (ii) moult increments. We will attempt to derive a joint density made up of two functions: one for moult increments and the other for the time intervals between moults. We claim these functions are conditionally independent given pre-moult length and the inter-moult periods; the moult increments and inter-moult periods are conditionally independent by the Markov property. Hence, the parameters in each function can be estimated separately. Subsequently, we integrate both functions through a Monte Carlo method, and can therefore obtain a population mean for crustacean growth (e.g. the red curve in Figure 1).
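As a minimal illustration of a non-decreasing jump growth path (a compound renewal process with gamma-distributed increments; the parameter values are invented and the paper's subordinator may differ):

```python
# Simulate a non-decreasing growth path: moults arrive as a renewal process
# and each moult adds a positive, gamma-distributed increment, so the path
# only jumps upward. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def growth_path(L0=50.0, horizon=3.0, mean_wait=0.4, inc_shape=2.0, inc_scale=1.5):
    t, L = 0.0, L0
    times, lengths = [t], [L]
    while True:
        t += rng.exponential(mean_wait)       # inter-moult period
        if t > horizon:
            break
        L += rng.gamma(inc_shape, inc_scale)  # positive moult increment
        times.append(t); lengths.append(L)
    return np.array(times), np.array(lengths)

times, lengths = growth_path()
```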

Relevance:

10.00%

Publisher:

Abstract:

This paper considers the one-sample sign test for data obtained from general ranked set sampling, where the numbers of observations for each rank are not necessarily the same, and proposes a weighted sign test, because observations with different ranks are not identically distributed. The optimal weight for each observation is distribution free and depends only on its associated rank. It is shown analytically that (1) the weighted version always improves the Pitman efficiency for all distributions; and (2) the optimal design is to select the median from each ranked set.
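One way to form such a weighted statistic (the weights below are placeholders, since the paper derives the optimal rank-dependent weights): under the null hypothesis that theta0 is the population median, the count of rank-r observations above theta0 is Binomial with a distribution-free success probability given by the Beta law of the r-th order statistic of a set of size m.

```python
# Generic weighted sign statistic for ranked set samples. Under H0,
# counts above the hypothesised median are Binomial(n_r, p_r) with p_r
# distribution free. Weights are placeholders, not the paper's optimum.
import numpy as np
from scipy.stats import beta, norm

def weighted_sign_test(samples_by_rank, weights, theta0, m):
    num, var = 0.0, 0.0
    for r, x in samples_by_rank.items():
        x = np.asarray(x)
        p_r = beta.sf(0.5, r, m - r + 1)        # P(X_(r) > median) under H0
        b_r = np.sum(x > theta0)
        num += weights[r] * (b_r - len(x) * p_r)
        var += weights[r] ** 2 * len(x) * p_r * (1.0 - p_r)
    z = num / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))               # two-sided normal p-value
```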

Relevance:

10.00%

Publisher:

Abstract:

So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that it stops when the posterior probability of the treatment effect is within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, which we call Bayesian errors in this article because of their similarities to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of different designs for error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
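A minimal sketch of the posterior quantity such stopping rules monitor (conjugate Beta-Binomial; the prior, null response rate, and counts are illustrative, not the article's design):

```python
# Posterior probability that the response rate exceeds an uninteresting
# level p0, given r responses in n patients, under a Beta(a, b) prior.
# All numbers are illustrative.
from scipy.stats import beta

def posterior_prob_efficacious(r, n, p0, a=1.0, b=1.0):
    """P(p > p0 | r responses in n patients) under a Beta(a, b) prior."""
    return beta.sf(p0, a + r, b + n - r)

# Stage-1 look of a two-stage design: stop for futility if this is small.
print(posterior_prob_efficacious(r=4, n=17, p0=0.2))
```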

Relevance:

10.00%

Publisher:

Abstract:

The Bernoulli/exponential target process is considered. Such processes have been found useful in modelling the search for active compounds in pharmaceutical research. An inequality is presented which improves a result of Gittins (1989), thus providing a better approximation to the Gittins indices which define the optimal search policy.

Relevance:

10.00%

Publisher:

Abstract:

We propose an iterative estimating equations procedure for analysis of longitudinal data. We show that, under very mild conditions, the probability that the procedure converges at an exponential rate tends to one as the sample size increases to infinity. Furthermore, we show that the limiting estimator is consistent and asymptotically efficient, as expected. The method applies to semiparametric regression models with unspecified covariances among the observations. In the special case of linear models, the procedure reduces to iterative reweighted least squares. Finite sample performance of the procedure is studied by simulations, and compared with other methods. A numerical example from a medical study is considered to illustrate the application of the method.
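As a toy instance of the linear-model special case, here is a sketch of iteratively reweighted least squares with cluster-level variance weights (the weighting scheme is a simplified stand-in, not the paper's general procedure for unspecified covariances):

```python
# Iteratively reweighted least squares for a linear model with longitudinal
# data, re-estimating an observation weight per cluster at each pass.
# The cluster-variance weighting scheme here is a simplified illustration.
import numpy as np

def irls(X, y, cluster, n_iter=25, tol=1e-8):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # OLS start
    for _ in range(n_iter):
        resid = y - X @ beta
        w = np.empty_like(y)
        for c in np.unique(cluster):
            m = cluster == c
            w[m] = 1.0 / max(resid[m].var(), 1e-8)  # inverse cluster variance
        Xw = X * w[:, None]
        beta_new = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        if np.linalg.norm(beta_new - beta) < tol:
            return beta_new
        beta = beta_new
    return beta
```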