959 results for pretest probability


Relevance:

10.00%

Publisher:

Abstract:

Poor compliance with speed limits is a serious safety concern at roadworks. While considerable research has been undertaken worldwide to understand drivers' speeding behaviour at roadworks and to identify treatments for improving compliance with speed limits, little is known about the speeding behaviour of drivers at Australian roadworks and how their compliance with speed limits could be improved. This paper presents findings from two Queensland studies targeted at 1) examining drivers' speed profiles at three long-term roadwork sites, and 2) understanding the effectiveness of speed control treatments at roadworks. The first study analysed driver speeds at various locations in the sites using a Tobit regression model. Results show that the probability of speeding was higher for light vehicles and their followers, for leaders of platoons with larger front gaps, during late afternoon and early morning, when higher proportions of surrounding vehicles were speeding, and upstream of work areas. The second study provided a comprehensive understanding of the effectiveness of various speed control treatments used at roadworks through a critical review of the literature. Results showed that enforcement has the greatest effect on reducing speeds among all treatments, while roadwork signage and information-related treatments have small to moderate effects on speed reduction. Findings from the two studies have the potential to inform the design of programs that effectively improve speed limit compliance at Australian roadworks.
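As an illustration of the censored-regression idea behind the Tobit model used in the first study, here is a minimal sketch of a Tobit log-likelihood on simulated data; the censoring cap, covariates and parameter values are hypothetical, not the study's:

```python
import numpy as np
from scipy import stats

def tobit_loglik(params, y, X, cap):
    """Log-likelihood of a right-censored Tobit model: y* = X b + e,
    e ~ N(0, s^2); we observe y = min(y*, cap)."""
    *beta, log_s = params
    s = np.exp(log_s)
    mu = X @ np.array(beta)
    censored = y >= cap
    ll = np.where(
        censored,
        stats.norm.logsf(cap, loc=mu, scale=s),   # P(y* >= cap) for censored obs
        stats.norm.logpdf(y, loc=mu, scale=s),    # density for uncensored obs
    )
    return ll.sum()

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_star = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)
y = np.minimum(y_star, 2.0)                       # right-censor at cap = 2.0

ll_true = tobit_loglik([1.0, 2.0, np.log(0.5)], y, X, 2.0)
ll_bad = tobit_loglik([0.0, 0.0, np.log(0.5)], y, X, 2.0)
```

In practice the parameters would be found by maximising this log-likelihood numerically; here the true parameters simply score higher than an arbitrary alternative.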

Relevance:

10.00%

Publisher:

Abstract:

Part I (Manjunath et al., 1994, Chem. Engng Sci. 49, 1451-1463) of this paper showed that the random particle numbers and size distributions in precipitation processes in very small drops, obtained by stochastic simulation techniques, deviate substantially from the predictions of conventional population balance. This problem is considered here in terms of a mean field approximation obtained by applying a first-order closure to the unclosed set of mean field equations presented in Part I. The mean field approximation consists of two mutually coupled partial differential equations featuring (i) the probability distribution for residual supersaturation and (ii) the mean number density of particles for each size and supersaturation, from which all average properties and fluctuations can be calculated. The mean field equations have been solved by finite difference methods for (i) crystallization and (ii) precipitation of a metal hydroxide, each occurring in a single drop of specified initial supersaturation. The results for the average number of particles, average residual supersaturation, the average size distribution, and fluctuations about the average values have been compared with those obtained by stochastic simulation techniques and by population balance. This comparison shows that the mean field predictions are substantially superior to those of population balance, as judged by the close proximity of the mean field results to those from stochastic simulations. The agreement is excellent for broad initial supersaturation distributions at short times but deteriorates progressively at longer times. For steep initial supersaturation distributions, the predictions of the mean field theory are not satisfactory, thus calling for higher-order approximations. The merit of the mean field approximation over stochastic simulation lies in its potential to reduce the expensive computation times involved in simulation.
More effective computational techniques could not only enhance this advantage of the mean field approximation but also make it possible to use higher-order approximations, eliminating the constraints under which the stochastic dynamics of the process can be predicted accurately.

Relevance:

10.00%

Publisher:

Abstract:

The random early detection (RED) technique has seen a lot of research over the years. However, the functional relationship between RED performance and its parameters, viz., queue weight (w_q), marking probability (max_p), minimum threshold (min_th) and maximum threshold (max_th), is not analytically available. In this paper, we formulate a probabilistic constrained optimization problem by assuming a nonlinear relationship between the RED average queue length and its parameters. This problem involves all the RED parameters as the variables of the optimization problem. We use the barrier and the penalty function approaches for its solution. However, as above, the exact functional relationship between the barrier and penalty objective functions and the optimization variables is not known; only noisy samples of these are available for different parameter values. Thus, for obtaining the gradient and Hessian of the objective, we use certain recently developed simultaneous perturbation stochastic approximation (SPSA) based estimates of these. We propose two four-timescale stochastic approximation algorithms based on certain modified second-order SPSA updates for finding the optimum RED parameters. We present the results of detailed simulation experiments conducted over different network topologies and network/traffic conditions/settings, comparing the performance of our algorithms with variants of RED and a few other well-known active queue management (AQM) techniques discussed in the literature.
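The core SPSA idea used above (estimating a gradient from only two noisy function evaluations per step) can be sketched minimally as basic first-order SPSA on a noisy quadratic; this is not the authors' four-timescale second-order scheme, and the gain sequences and objective are illustrative:

```python
import numpy as np

def spsa_minimise(f, theta0, a=0.1, c=0.1, n_iter=2000, seed=1):
    """Basic first-order SPSA: estimate the gradient of a noisy objective
    from two perturbed evaluations per iteration."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602                               # standard gain decay
        ck = c / k ** 0.101                               # perturbation decay
        delta = rng.choice([-1.0, 1.0], size=theta.size)  # Rademacher perturbation
        g_hat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat
    return theta

rng = np.random.default_rng(0)
noisy = lambda th: np.sum((th - np.array([2.0, -1.0])) ** 2) + 0.01 * rng.normal()
theta = spsa_minimise(noisy, [0.0, 0.0])
```

The key design point is that the cost per iteration is two function evaluations regardless of dimension, which is what makes SPSA attractive when, as here, the objective is only available through simulation.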

Relevance:

10.00%

Publisher:

Abstract:

We report an experimental study of a new type of turbulent flow that is driven purely by buoyancy. The flow is due to an unstable density difference, created using brine and water, across the ends of a long (length/diameter = 9) vertical pipe. The Schmidt number Sc is 670, and the Rayleigh number (Ra) based on the density gradient and diameter is about 10^8. Under these conditions the convection is turbulent, and the time-averaged velocity at any point is zero. The Reynolds number based on the Taylor microscale, Re_lambda, is about 65. The pipe is long enough for there to be an axially homogeneous region, with a linear density gradient, about 6-7 diameters long in the midlength of the pipe. In the absence of a mean flow and, therefore, mean shear, turbulence is sustained purely by buoyancy. The flow can thus be considered to be axially homogeneous turbulent natural convection driven by a constant (unstable) density gradient. We characterize the flow using flow visualization and particle image velocimetry (PIV). Measurements show that the mean velocities and the Reynolds shear stresses are zero across the cross-section; the root mean squared (r.m.s.) vertical velocity is larger than the lateral velocities (by about one and a half times at the pipe axis). We identify some features of the turbulent flow using velocity correlation maps and the probability density functions of velocities and velocity differences. The flow away from the wall, affected mainly by buoyancy, consists of vertically moving fluid masses continually colliding and interacting, while the flow near the wall appears similar to that in wall-bounded shear-free turbulence. The turbulence is anisotropic, with the anisotropy increasing to large values as the wall is approached. A mixing length model with the diameter of the pipe as the length scale predicts the scalings for the velocity fluctuations and the flux well.
This model implies that the Nusselt number would scale as (Ra Sc)^(1/2) and the Reynolds number as (Ra/Sc)^(1/2). The velocity and flux measurements appear to be consistent with the Ra^(1/2) scaling, although it must be pointed out that the Rayleigh number range spanned less than a decade. The Schmidt number was not varied to check the Sc scaling. The fluxes and the Reynolds numbers obtained in the present configuration are much higher than would be obtained in Rayleigh-Benard (R-B) convection for similar density differences.

Relevance:

10.00%

Publisher:

Abstract:

Ship seakeeping operability refers to the quantification of motion performance in waves relative to mission requirements. This is used to make decisions about preferred vessel designs, but it can also be used as a comprehensive assessment of the benefits of ship-motion-control systems. Traditionally, operability computation aggregates statistics of motion computed over the envelope of likely environmental conditions in order to determine a coefficient in the range from 0 to 1 called operability. When used for the assessment of motion-control systems, the increase of operability is taken as the key performance indicator. The operability coefficient is often given the interpretation of the percentage of time operable. This paper considers an alternative probabilistic approach to this traditional computation of operability. It characterises operability not as a number to which a frequency interpretation is attached, but as a hypothesis that a vessel will attain the desired performance in one mission considering the envelope of likely operational conditions. This enables the use of Bayesian theory to compute the probability that this hypothesis is true conditional on data from simulations. Thus, the metric considered is the probability of operability. This formulation not only adheres to recent developments in reliability and risk analysis, but also allows incorporating into the analysis more accurate descriptions of ship-motion-control systems, since the analysis is not limited to linear ship responses in the frequency domain. The paper also discusses an extension of the approach to the assessment of increased levels of autonomy for unmanned marine craft.
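The probability-of-operability idea can be sketched minimally by treating each simulated mission as a Bernoulli trial (performance attained or not) and updating a conjugate Beta prior; the counts and the 0.85 requirement below are hypothetical, and the paper's analysis is considerably richer than this:

```python
from scipy import stats

# Hypothetical simulation outcomes: in 180 of 200 simulated missions over the
# environmental envelope, the vessel met its motion-performance criteria.
successes, trials = 180, 200

# Beta(1, 1) prior on the per-mission probability of attaining performance;
# the conjugate update gives the posterior below.
posterior = stats.beta(1 + successes, 1 + trials - successes)

p_op = posterior.mean()        # point estimate of the probability of operability
p_above = posterior.sf(0.85)   # posterior probability that operability exceeds 0.85
```

Unlike a single operability coefficient, the posterior also quantifies how certain we are about the metric given the amount of simulation data.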

Relevance:

10.00%

Publisher:

Abstract:

The probable modes of binding for methyl-α-d-sophoroside, methyl-β-d-sophoroside, laminaribiose and cellobiose to concanavalin A have been determined using theoretical methods. Methyl-d-sophorosides can bind to concanavalin A in two modes, i.e. by placing their reducing as well as non-reducing sugar units in the carbohydrate-specific binding site, whereas laminaribiose and cellobiose can reach the binding site only with their non-reducing glucose units. However, the probability for methyl-α-d-sophoroside to bind to concanavalin A with its reducing sugar residue as the occupant of the binding site is much higher than it is with its non-reducing sugar residue as the occupant of the sugar binding site. A few of the probable conformers of methyl-β-d-sophoroside can bind to concanavalin A with either the reducing or non-reducing glucose unit. Higher-energy conformers of cellobiose or laminaribiose can reach the binding site with their non-reducing residues alone. The relative differences in the binding affinities of these disaccharides are mainly due to the differences in the availability of proper conformers which can reach the binding site and to non-covalent interactions between the sugar and the protein. This study also suggests that, although the sugar binding site of concanavalin A accommodates a single sugar residue, the residue outwards from the binding site also interacts with concanavalin A, indicating the existence of extended concanavalin A carbohydrate interactions.

Relevance:

10.00%

Publisher:

Abstract:

We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length given an individual's previous length measurement and time at liberty are then derived. We moment match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag-recapture data and tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
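The construction of an STM from a growth distribution while forbidding negative growth can be sketched minimally as follows; the class bounds and skew-normal parameters are hypothetical, and the paper's tag-recapture likelihood and moment-matching steps are omitted:

```python
import numpy as np
from scipy import stats

# Size-class bounds (mm) and a hypothetical skew-normal distribution for the
# future length of an individual currently in each class (one time step).
edges = np.arange(10, 60, 5)            # class bounds: [10,15), [15,20), ...
mids = (edges[:-1] + edges[1:]) / 2

def stm_row(length, growth=3.0, scale=2.0, skew=4.0):
    """Probability of moving from `length` into each size class, using a
    skew-normal future-length distribution with negative growth forbidden."""
    dist = stats.skewnorm(skew, loc=length + growth, scale=scale)
    probs = np.diff(dist.cdf(edges))    # mass falling in each class
    probs[mids < length] = 0.0          # zero out negative-growth classes
    return probs / probs.sum()          # renormalise over admissible classes

stm = np.array([stm_row(m) for m in mids])
```

Each row of `stm` then sums to one and places no mass below the current size class, which is exactly the property that naive population-level growth models can violate.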

Relevance:

10.00%

Publisher:

Abstract:

The contemporary methodology for growth models of organisms is based on continuous trajectories, which hinders the modelling of stepwise growth in crustacean populations. Growth models for fish are normally assumed to follow a continuous function, but a different type of model is needed for crustacean growth. Crustaceans must moult in order to grow. The growth of crustaceans is a discontinuous process due to the periodic shedding of the exoskeleton in moulting. The stepwise growth of crustaceans through the moulting process makes growth estimation more complex. Stochastic approaches can be used to model discontinuous growth, or what are commonly known as "jumps" (Figure 1). However, in a stochastic growth model we need to ensure that the model produces only positive jumps. In view of this, we introduce a subordinator, which is a special case of a Lévy process. A subordinator is a non-decreasing Lévy process, which assists in modelling crustacean growth for a better understanding of the individual variability and stochasticity in moulting periods and increments. We develop methods for parameter estimation and illustrate them with a dataset from laboratory experiments. The motivating dataset is from the ornate rock lobster, Panulirus ornatus, which is found between Australia and Papua New Guinea. Due to the presence of sex effects on growth (Munday et al., 2004), we estimate the growth parameters separately for each sex. Because all hard parts are shed at each moult, exact age determination of a lobster is challenging. However, the growth parameters for the moult process can be estimated from tank data through: (i) inter-moult periods, and (ii) moult increments. We derive a joint density, which is made up of two functions: one for moult increments and the other for time intervals between moults.
We claim these functions are conditionally independent given pre-moult length and the inter-moult periods. The moult increments and inter-moult periods are taken to be conditionally independent because of the Markov property. Hence, the parameters in each function can be estimated separately. Subsequently, we integrate both functions through a Monte Carlo method. We can therefore obtain a population mean for crustacean growth (e.g. the red curve in Figure 1).
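A gamma subordinator is one convenient concrete example of a non-decreasing Lévy process; a minimal simulation sketch with hypothetical parameters:

```python
import numpy as np

def gamma_subordinator(t_grid, shape_rate=2.0, scale=0.5, seed=42):
    """Simulate a gamma subordinator on a time grid: independent gamma
    increments over each interval make the path non-decreasing, the
    property required for modelling cumulative growth."""
    rng = np.random.default_rng(seed)
    dt = np.diff(t_grid)
    increments = rng.gamma(shape_rate * dt, scale)   # Gamma(c*dt, scale) increments
    return np.concatenate([[0.0], np.cumsum(increments)])

t = np.linspace(0.0, 10.0, 101)
path = gamma_subordinator(t)
```

Because every increment is a positive gamma variate, negative "growth" is impossible by construction, in contrast with Brownian-driven models.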

Relevance:

10.00%

Publisher:

Abstract:

Formulation of the quantum first passage problem is attempted in terms of a restricted Feynman path integral that simulates an absorbing barrier, as in the corresponding classical case. The positivity of the resulting probability density, however, remains to be demonstrated.

Relevance:

10.00%

Publisher:

Abstract:

A novel method is proposed to treat the problem of the random resistance of a strictly one-dimensional conductor with static disorder. For the probability distribution of the transfer matrix of the conductor, we suggest the distribution of maximum information-entropy, constrained by the following physical requirements: 1) flux conservation, 2) time-reversal invariance and 3) scaling, with the length of the conductor, of the two lowest cumulants of ζ, where λ = sinh^2(ζ). The preliminary results discussed in the text are in qualitative agreement with those obtained by sophisticated microscopic theories.

Relevance:

10.00%

Publisher:

Abstract:

This paper considers the one-sample sign test for data obtained from general ranked set sampling when the number of observations for each rank is not necessarily the same, and proposes a weighted sign test, because observations with different ranks are not identically distributed. The optimal weight for each observation is distribution-free and depends only on its associated rank. It is shown analytically that (1) the weighted version always improves the Pitman efficiency for all distributions; and (2) the optimal design is to select the median from each ranked set.
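The statistic in question can be sketched minimally as a rank-weighted sum of signs over ranked-set data with unequal counts per rank; the data and weights below are illustrative, not the paper's optimal rank-dependent weights:

```python
import numpy as np

def weighted_sign_stat(samples_by_rank, weights, theta0=0.0):
    """Weighted one-sample sign statistic for ranked-set samples: each
    observation contributes its rank's weight times the sign of
    (observation - hypothesised median theta0)."""
    total = 0.0
    for rank, obs in samples_by_rank.items():
        total += weights[rank] * np.sign(np.asarray(obs) - theta0).sum()
    return total

# Hypothetical ranked-set data with unequal counts per rank and
# illustrative (not optimised) weights.
data = {1: [-0.8, -0.2, 0.1], 2: [0.3, 0.5], 3: [0.9, 1.2, 1.4, 2.0]}
w = {1: 1.0, 2: 1.2, 3: 1.0}
stat = weighted_sign_stat(data, w)
```

With equal weights this reduces to the ordinary sign statistic; the paper's contribution is the choice of rank-dependent weights that maximises Pitman efficiency.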

Relevance:

10.00%

Publisher:

Abstract:

So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that the trial stops when the posterior probability of treatment efficacy is within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, which are called Bayesian errors in this article because of their similarities to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss implications of different designs on error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate differences between the designs.
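The frequentist error rates of a Simon-type two-stage design can be computed by direct binomial calculation, as sketched below; the design constants are hypothetical, and the paper's Bayesian stopping rule is not implemented here:

```python
from scipy import stats

def two_stage_reject_prob(p, n1, r1, n, r):
    """Probability of declaring the treatment promising under response rate p
    in a Simon-type two-stage design: stop for futility after n1 patients if
    at most r1 responses; otherwise continue to n patients and reject H0 if
    the total number of responses exceeds r."""
    prob = 0.0
    for x1 in range(r1 + 1, n1 + 1):                 # outcomes that continue
        p_x1 = stats.binom.pmf(x1, n1, p)
        p_stage2 = stats.binom.sf(r - x1, n - n1, p) # P(X2 > r - x1)
        prob += p_x1 * p_stage2
    return prob

# Hypothetical design: n1=10, r1=1, n=29, r=5, testing p0=0.10 vs p1=0.30.
alpha = two_stage_reject_prob(0.10, 10, 1, 29, 5)    # Type I error
power = two_stage_reject_prob(0.30, 10, 1, 29, 5)    # 1 - Type II error
```

Evaluating the same function at p0 and p1 gives the Type I error and power, which is exactly the calculation a design search iterates over candidate (n1, r1, n, r).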

Relevance:

10.00%

Publisher:

Abstract:

The Bernoulli/exponential target process is considered. Such processes have been found useful in modelling the search for active compounds in pharmaceutical research. An inequality is presented which improves a result of Gittins (1989), thus providing a better approximation to the Gittins indices which define the optimal search policy.

Relevance:

10.00%

Publisher:

Abstract:

We propose an iterative estimating equations procedure for analysis of longitudinal data. We show that, under very mild conditions, the probability that the procedure converges at an exponential rate tends to one as the sample size increases to infinity. Furthermore, we show that the limiting estimator is consistent and asymptotically efficient, as expected. The method applies to semiparametric regression models with unspecified covariances among the observations. In the special case of linear models, the procedure reduces to iterative reweighted least squares. Finite sample performance of the procedure is studied by simulations, and compared with other methods. A numerical example from a medical study is considered to illustrate the application of the method.
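As a classical illustration of iterative reweighted least squares (here with Huber weights for robust linear regression, a standard textbook case, not the paper's semiparametric estimating-equations procedure; data are simulated):

```python
import numpy as np

def huber_irls(X, y, k=1.345, n_iter=50):
    """Iteratively reweighted least squares with Huber weights: alternate
    between computing residual-based weights and solving a weighted
    least-squares problem."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting value
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12    # robust scale (MAD)
        u = np.abs(r / s)
        w = np.where(u <= k, 1.0, k / u)             # Huber weight function
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)   # solve X'WX b = X'Wy
    return beta

rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, 300)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=300)
y[:10] += 8.0                                        # gross outliers
beta = huber_irls(X, y)
```

Each iteration is an ordinary weighted least-squares solve, which is the sense in which more general iterative estimating-equations procedures reduce to IRLS in the linear case.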

Relevance:

10.00%

Publisher:

Abstract:

The primary goal of a phase I trial is to find the maximally tolerated dose (MTD) of a treatment. The MTD is usually defined in terms of a tolerable probability, q*, of toxicity. Our objective is to find the highest dose with toxicity risk that does not exceed q*, a criterion that is often desired in designing phase I trials. This criterion differs from that of finding the dose with toxicity risk closest to q*, which is used in methods such as the continual reassessment method. We use the theory of decision processes to find optimal sequential designs that maximize the expected number of patients within the trial allocated to the highest dose with toxicity not exceeding q*, among the doses under consideration. The proposed method is very general in the sense that criteria other than the one considered here can be optimized and that optimal dose assignment can be defined in terms of patients within or outside the trial. It includes as an important special case the continual reassessment method. A numerical study indicates that the strategy compares favourably with other phase I designs.
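The selection criterion (highest dose whose toxicity risk does not exceed q*) can be sketched minimally with independent Beta posteriors per dose; this is a naive posterior screen on hypothetical data, not the paper's optimal sequential design:

```python
from scipy import stats

def highest_safe_dose(tox_events, patients, q_star=0.3, confidence=0.6):
    """Return the index of the highest dose whose posterior probability of
    toxicity risk not exceeding q_star is at least `confidence`, using
    independent Beta(1, 1) priors per dose. Doses are ordered low to high."""
    best = None
    for dose, (t, n) in enumerate(zip(tox_events, patients)):
        post = stats.beta(1 + t, 1 + n - t)          # posterior toxicity risk
        if post.cdf(q_star) >= confidence:           # P(risk <= q*) large enough
            best = dose
    return best

# Hypothetical trial data: toxicities / patients at four increasing doses.
dose = highest_safe_dose([0, 1, 1, 4], [6, 6, 6, 6])
```

The contrast with the continual reassessment method is visible even in this sketch: the target is the highest dose satisfying the risk bound, not the dose whose estimated risk is closest to q*.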