941 results for Biodosimetry errors


Relevance:

10.00%

Publisher:

Abstract:

To detect errors in decision tables one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that at least satisfy the simple constraints; this is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable that is not yet assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained efficiently by updating older lower bounds. The algorithm also incorporates a check of whether or not an (m - 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
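The pruning idea lends itself to a compact illustration. Below is a minimal sketch (in Python; the names and the interface are illustrative assumptions, not taken from the paper) of a backtrack search in which "simple" constraints of the form x_j >= x_i + w are held as a topologically ordered, weighted acyclic graph, and each partial vector is extended only over values that respect the lower bounds the graph implies:

```python
# Minimal sketch of the pruning idea (names and the interface are
# illustrative, not from the paper).  "Simple" constraints x_j >= x_i + w
# are held as edges (i, j, w) of a weighted acyclic graph; variables are
# assumed to be indexed in topological order of that graph.

def lower_bounds(n, edges, base, assignment):
    """One pass over the topologically ordered edges propagates x_j >= x_i + w."""
    lb = [assignment.get(i, base[i]) for i in range(n)]
    for i, j, w in edges:
        lb[j] = max(lb[j], lb[i] + w)
    return lb

def feasible(n, edges, base, upper, check_full):
    """Backtrack over integer assignments, pruning with the simple constraints."""
    def extend(assignment):
        if len(assignment) == n:
            return check_full(assignment)        # remaining, non-simple constraints
        lb = lower_bounds(n, edges, base, assignment)
        if any(lb[i] > assignment.get(i, upper[i]) for i in range(n)):
            return False                         # some variable can no longer fit
        k = len(assignment)                      # next variable in index order
        for v in range(lb[k], upper[k] + 1):     # values below lb[k] are pruned
            if extend({**assignment, k: v}):
                return True
        return False
    return extend({})

# Example: x0, x1 in {0..3} with the simple constraint x1 >= x0 + 2 and a
# non-simple parity constraint checked only on complete assignments.
print(feasible(2, [(0, 1, 2)], [0, 0], [3, 3],
               lambda a: (a[0] + a[1]) % 2 == 1))   # True (e.g. x0 = 0, x1 = 3)
```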

Relevance:

10.00%

Publisher:

Abstract:

Despite great advances in very large scale integrated-circuit design and manufacturing, the performance of even the best available high-speed, high-resolution analog-to-digital converter (ADC) is known to deteriorate while acquiring fast-rising, high-frequency, and nonrepetitive waveforms. Waveform digitizers (ADCs) used in high-voltage impulse recordings and measurements are invariably subjected to such waveforms. Errors resulting from lowered ADC performance can be unacceptably high, especially when higher accuracies have to be achieved (e.g., when part of a reference measuring system). Static and dynamic nonlinearities (estimated independently) are vital indices for evaluating the performance and suitability of ADCs to be used in such environments. Typically, the estimation of static nonlinearity takes 10-12 h or more (for a 12-b ADC), and dynamic characterization requires the acquisition of millions of samples at high input frequencies. ADCs with even higher resolution and faster sampling speeds will soon become available, so there is a need to reduce the testing time for evaluating these parameters. This paper proposes a novel and time-efficient method for the simultaneous estimation of static and dynamic nonlinearity from a single test. This is achieved by conceiving a test signal comprising a high-frequency sinusoid (which addresses the dynamic assessment) modulated by a low-frequency ramp (relevant to the static part). Details of implementation and results on two digitizers are presented and compared with nonlinearities determined by the existing standardized approaches. The good agreement in results and the achievable time savings indicate its suitability.
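As a concrete illustration of the proposed stimulus, the sketch below (Python/NumPy; the frequencies, sampling rate, and the amplitude-modulation reading of "modulated" are illustrative assumptions, not the authors' exact signal) builds a high-frequency sinusoid modulated by a low-frequency ramp:

```python
# A plausible construction of the composite test signal (all numeric
# values are assumptions for illustration): the sinusoid exercises the
# dynamic behaviour of the ADC while the slow ramp sweeps its static
# transfer characteristic.
import numpy as np

fs     = 10e6     # digitizer sampling rate (assumed)
f_sine = 1e6      # high-frequency sinusoid, dynamic part (assumed)
f_ramp = 100.0    # low-frequency ramp, static part (assumed)

t = np.arange(0, 0.02, 1 / fs)            # 20 ms record, two ramp periods
ramp = (t * f_ramp) % 1.0                 # unit sawtooth, 0 -> 1
test_signal = ramp * np.sin(2 * np.pi * f_sine * t)
```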

Relevance:

10.00%

Publisher:

Abstract:

So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that the trial stops when the posterior probability of treatment efficacy lies within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, called Bayesian errors in this article because of their similarity to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of the different designs for error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
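For reference, the frequentist error rates such a design controls can be computed directly. The sketch below (Python; this is the standard frequentist calculation rather than the article's Bayesian variant, and the cutoffs shown are the classic Simon optimal design for p0 = 0.05, p1 = 0.25, used purely as an example) evaluates the Type I and Type II errors of a two-stage design:

```python
# Error rates of a Simon-style two-stage design (standard frequentist
# calculation; not the article's Bayesian construction).  Stage 1 enrols
# n1 patients and stops for futility if responses <= r1; otherwise n - n1
# more are enrolled, and H0 is rejected if total responses exceed r.
from scipy.stats import binom

def error_rates(p0, p1, n1, r1, n, r):
    def reject_prob(p):
        total = 0.0
        for x1 in range(r1 + 1, n1 + 1):        # continue past stage 1
            need = r + 1 - x1                   # extra responses still required
            total += binom.pmf(x1, n1, p) * binom.sf(need - 1, n - n1, p)
        return total
    alpha = reject_prob(p0)                     # P(reject H0 | true rate p0)
    beta = 1 - reject_prob(p1)                  # P(fail to reject | true rate p1)
    return alpha, beta

# Simon's optimal design for p0 = 0.05, p1 = 0.25: r1/n1 = 0/9, r/n = 2/17.
print(error_rates(0.05, 0.25, 9, 0, 17, 2))
```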

Relevance:

10.00%

Publisher:

Abstract:

Yao, Begg, and Livingston (1996, Biometrics 52, 992-1001) considered the optimal group size for testing a series of potentially therapeutic agents in order to identify a promising one as soon as possible for given error rates. The number of patients to be tested with each agent was fixed as the group size. We consider a sequential design that allows early acceptance and rejection, and we provide an optimal strategy, derived using Markov decision processes, that minimizes the number of patients required. The minimization is carried out under constraints on the two types of error probabilities (false positive and false negative), with Lagrange multipliers corresponding to the cost parameters for the two types of errors. Numerical studies indicate that there can be a substantial reduction in the number of patients required.
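The Lagrangian formulation can be sketched as a small dynamic program. The code below (Python) is a generic illustration of the idea, not the paper's exact construction: for one agent tested on up to n_max patients, the two error types are priced with Lagrange multipliers and the program chooses accept, reject, or continue to minimize expected cost; all numeric values are assumptions:

```python
# Hedged sketch: minimize  E[patients] + lam_fp*P(false positive)
#                                      + lam_fn*P(false negative)
# by backward induction over the state (patients tested, responses seen).
from functools import lru_cache

p0, p1 = 0.1, 0.3            # response rates: uninteresting vs promising (assumed)
prior1 = 0.5                 # prior weight on the promising hypothesis (assumed)
lam_fp, lam_fn = 60.0, 60.0  # Lagrange multipliers pricing the two errors (assumed)
n_max = 50                   # maximum patients per agent (assumed)

def lik(p, n, s):
    return p**s * (1 - p)**(n - s)   # binomial coefficient cancels in the posterior

@lru_cache(maxsize=None)
def value(n, s):
    """Minimal expected remaining cost after s responses in n patients."""
    w1 = prior1 * lik(p1, n, s)
    w0 = (1 - prior1) * lik(p0, n, s)
    post1 = w1 / (w0 + w1)
    accept = lam_fp * (1 - post1)    # declare promising: pay for false positives
    reject = lam_fn * post1          # declare uninteresting: pay for false negatives
    if n == n_max:
        return min(accept, reject)
    pr = post1 * p1 + (1 - post1) * p0                 # predictive response prob.
    cont = 1 + pr * value(n + 1, s + 1) + (1 - pr) * value(n + 1, s)
    return min(accept, reject, cont)

print(value(0, 0))   # optimal expected cost from the start of testing
```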

Relevance:

10.00%

Publisher:

Abstract:

Adaptations of weighted rank regression to the accelerated failure time model for censored survival data have been successful in yielding asymptotically normal estimates and flexible weighting schemes that increase statistical efficiency. However, only for one simple weighting scheme (Gehan or Wilcoxon weights) are the estimating equations guaranteed to be monotone in the parameter components, and even in this case they are step functions, requiring the equivalent of linear programming for computation. The lack of smoothness makes standard error or covariance matrix estimation even more difficult. An induced smoothing technique has overcome these difficulties in various problems involving monotone but pure-jump estimating equations, including conventional rank regression. The present paper applies induced smoothing to Gehan-Wilcoxon weighted rank regression for the accelerated failure time model, in the more difficult case of survival times subject to censoring, where the inapplicability of permutation arguments necessitates a new method of estimating the null variance of the estimating functions. Smooth monotone parameter estimation and rapid, reliable standard error or covariance matrix estimation are obtained.
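The construction can be summarized in two displays. The sketch below uses notation assumed in the spirit of the induced-smoothing literature (e.g. Brown and Wang) rather than copied from this paper: the Gehan-weighted estimating function and its induced-smoothed counterpart.

```latex
% Gehan-weighted rank estimating function for the AFT model, with
% residuals e_i(\beta) = \log T_i - x_i^\top \beta and censoring
% indicators \delta_i (notation assumed, not quoted from the paper):
\[
  U_G(\beta) \;=\; \frac{1}{n}\sum_{i=1}^n \sum_{j=1}^n
    \delta_i\,(x_i - x_j)\, I\{e_j(\beta) \ge e_i(\beta)\},
\]
% a monotone step function of \beta.  Induced smoothing perturbs \beta by
% Z \sim N(0, \Gamma/n) and takes expectations, replacing the indicator
% by a normal c.d.f.:
\[
  \tilde U_G(\beta) \;=\; \frac{1}{n}\sum_{i=1}^n \sum_{j=1}^n
    \delta_i\,(x_i - x_j)\,
    \Phi\!\left(\frac{e_j(\beta) - e_i(\beta)}{r_{ij}}\right),
  \qquad
  r_{ij}^2 = \frac{(x_i - x_j)^\top \Gamma\,(x_i - x_j)}{n},
\]
% which is smooth and still monotone, so Newton-type iteration applies.
```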

Relevance:

10.00%

Publisher:

Abstract:

The method of generalized estimating equations (GEEs) has been criticized recently for a failure to protect against misspecification of working correlation models, which in some cases leads to loss of efficiency or infeasibility of solutions. However, the feasibility and efficiency of GEE methods can be enhanced considerably by using flexible families of working correlation models. We propose two ways of constructing unbiased estimating equations from general correlation models for irregularly timed repeated measures to supplement and enhance GEE. The supplementary estimating equations are obtained by differentiation of the Cholesky decomposition of the working correlation, or as score equations for a decoupled Gaussian pseudolikelihood. The estimating equations are solved with computational effort equivalent to that required for a first-order GEE. Full details and analytic expressions are developed for a generalized Markovian model that was evaluated through simulation. Large-sample "sandwich" standard errors for the working correlation parameter estimates are derived and shown to have good performance. The proposed estimating functions are further illustrated in an analysis of repeated measures of pulmonary function in children.
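For orientation, the display below (standard GEE notation, assumed rather than quoted from the paper) shows the first-order GEE together with one of the two proposed supplementary constructions, the score of a decoupled Gaussian pseudolikelihood for the correlation parameters:

```latex
% First-order GEE for the regression parameters: with mean \mu_i(\beta)
% and working covariance V_i = A_i^{1/2} R_i(\alpha) A_i^{1/2}, solve
\[
  \sum_{i=1}^n D_i^\top V_i^{-1}\,\{y_i - \mu_i(\beta)\} \;=\; 0,
  \qquad D_i = \partial \mu_i / \partial \beta^\top .
\]
% A supplementary equation for the correlation parameters \alpha can be
% taken as the score of the Gaussian pseudolikelihood, treating the
% residuals r_i = y_i - \mu_i(\beta) as normal with covariance V_i(\alpha):
\[
  \sum_{i=1}^n \frac{\partial}{\partial \alpha}
  \left\{ -\tfrac12 \log\det V_i(\alpha)
          - \tfrac12\, r_i^\top V_i(\alpha)^{-1} r_i \right\} \;=\; 0 .
\]
```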

Relevance:

10.00%

Publisher:

Abstract:

Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of factors such as the biological characteristics of the animals, aspects of the fleet dynamics, and changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the resulting standardised fishing effort or relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions from simpler statistical models, and the random-effects models yielded similar results. This is because the estimators are all consistent even if the correlation structure is misspecified, and the data set is very large. However, the standard errors from the different models differed, suggesting that different methods have different statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when the impact of vessel characteristics was offset at values assumed from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.
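The kind of comparison described can be sketched in a few lines. The code below (Python with statsmodels; the model formula, column names and data are synthetic illustrations, not the NPF data) fits the same catch-rate model by ordinary least squares and by GEE with an exchangeable within-vessel correlation, so that coefficient estimates and standard errors can be compared:

```python
# Hedged sketch: OLS vs GEE on a synthetic catch-effort data set with a
# shared random vessel effect inducing within-vessel correlation.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_vessel, n_year = 30, 10
df = pd.DataFrame({
    "vessel": np.repeat(np.arange(n_vessel), n_year),
    "year": np.tile(np.arange(n_year), n_vessel),
})
vessel_eff = rng.normal(0, 0.3, n_vessel)[df["vessel"].to_numpy()]
df["log_effort"] = rng.normal(5, 0.5, len(df))
df["log_catch"] = (0.1 * df["year"] + 0.8 * df["log_effort"]
                   + vessel_eff + rng.normal(0, 0.2, len(df)))

ols = smf.ols("log_catch ~ C(year) + log_effort", data=df).fit()
gee = smf.gee("log_catch ~ C(year) + log_effort", groups="vessel", data=df,
              cov_struct=sm.cov_struct.Exchangeable(),
              family=sm.families.Gaussian()).fit()

# Both estimators are consistent, so the coefficients agree closely; the
# standard errors differ, reflecting the efficiency point made above.
print(ols.params.round(3), gee.params.round(3))
print(ols.bse.round(3), gee.bse.round(3))
```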

Relevance:

10.00%

Publisher:

Abstract:

Interim analysis is important in a large clinical trial for ethical and cost considerations. Sometimes, an interim analysis needs to be performed earlier than planned. In that case, methods using stochastic curtailment are useful for examining the data for early stopping while controlling the inflation of type I and type II errors. We consider a three-arm randomized study of treatments to reduce perioperative blood loss following major surgery. Owing to slow accrual, an unplanned interim analysis was required by the study team to determine whether the study should be continued. We distinguish two cases: when all treatments are under direct comparison, and when one of the treatments is a control. We used simulations to study the operating characteristics of five different stochastic curtailment methods. We also considered the influence of the timing of the interim analyses on the type I error and power of the test. We found that the type I error and power can differ considerably between methods. The analysis for the perioperative blood loss trial was carried out at approximately a quarter of the planned sample size. We found little evidence that the active treatments are better than a placebo and recommended closure of the trial.
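One representative stochastic-curtailment quantity is conditional power under the Brownian-motion approximation. The display below is a standard formulation, with notation assumed rather than quoted from the paper:

```latex
% Conditional power under the Brownian-motion approximation: with
% information fraction t, interim statistic Z(t), and drift \theta
% (\theta = 0 under H_0, its design value under the alternative),
\[
  \mathrm{CP}(\theta)
  \;=\;
  1 - \Phi\!\left(
    \frac{z_{1-\alpha} - Z(t)\sqrt{t} - \theta\,(1-t)}{\sqrt{1-t}}
  \right).
\]
% A trial is curtailed for futility when CP under the design alternative
% falls below a threshold, with corresponding adjustments to preserve the
% type I and type II error rates.
```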

Relevance:

10.00%

Publisher:

Abstract:

Although subsampling is a common method for describing the composition of large and diverse trawl catches, the accuracy of these techniques is often unknown. We determined the sampling errors generated when estimating the percentage of the total number of species recorded in catches, as well as the abundance of each species, at each increase in the proportion of the catch sorted. We completely partitioned twenty prawn trawl catches from tropical northern Australia into subsamples of about 10 kg each. All subsamples were then sorted, and species numbers recorded. Catch weights ranged from 71 to 445 kg; the number of fish species per trawl ranged from 60 to 138, and the number of invertebrate species from 18 to 63. Almost 70% of the species recorded in catches were "rare" in subsamples (less than one individual per 10 kg subsample, or less than one in every 389 individuals). A matrix was used to show the increase in the total number of species recorded in each catch as the percentage of the sorted catch increased. Simulation modelling showed that sorting small subsamples (about 10% of catch weight) identified about 50% of the total number of species caught in a trawl. Larger subsamples (50% of catch weight on average) identified about 80% of the total species caught in a trawl. The accuracy of estimating the abundance of each species also increased with increasing subsample size. For the "rare" species, sampling error was around 80% after sorting 10% of catch weight and was just less than 50% after 40% of catch weight had been sorted. For the "abundant" species (five or more individuals per 10 kg subsample, or five or more in every 389 individuals), sampling error was around 25% after sorting 10% of catch weight, but was reduced to around 10% after 40% of catch weight had been sorted.
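The simulation idea can be re-created schematically. The sketch below (Python/NumPy; the catch is synthetic, with a long-tailed abundance distribution standing in for the real species mix, so the numbers are illustrative rather than the published results) partitions a catch into subsamples, sorts an increasing fraction, and records the share of species detected:

```python
# Hedged re-creation of the subsampling simulation on a synthetic catch.
import numpy as np

rng = np.random.default_rng(1)
n_species, n_subsamples = 100, 30
# Long-tailed abundances: a few abundant species, many rare ones.
abundance = np.sort(rng.pareto(1.0, n_species) + 1)[::-1]
catch = rng.choice(n_species, size=20_000, p=abundance / abundance.sum())
subsamples = np.array_split(rng.permutation(catch), n_subsamples)

for k in (3, 15, 30):                       # ~10%, 50%, 100% of the catch sorted
    sorted_part = np.concatenate(subsamples[:k])
    frac = len(np.unique(sorted_part)) / len(np.unique(catch))
    print(f"{k / n_subsamples:4.0%} sorted -> {frac:.0%} of species found")
```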

Relevance:

10.00%

Publisher:

Abstract:

The non-linear equations of motion of a rotating blade undergoing extensional and flapwise bending vibration are derived, including non-linearities up to O(ε³). The strain-displacement relationship derived is compared with expressions obtained by earlier investigators, and the errors and approximations made in some of them are brought out. The equations of motion are solved under the inextensionality condition to obtain the influence of amplitude on the fundamental flapwise natural frequency of the rotating blade. It is found that large finite amplitudes have a softening effect on the flapwise frequency, and that this influence becomes stronger at higher speeds of rotation.
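The reported amplitude effect has the familiar form of a leading-order perturbation result. The display below is schematic (the quadratic dependence is the standard outcome of such analyses; the coefficient is an assumption, not taken from the paper):

```latex
% Schematic amplitude-frequency relation for the fundamental flapwise
% mode: to leading order in the amplitude a,
\[
  \omega(a) \;\approx\; \omega_0 \left( 1 + \kappa\, a^2 \right),
\]
% with \kappa < 0 here (a softening non-linearity) and |\kappa| growing
% with the speed of rotation, consistent with the trend reported above.
```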

Relevance:

10.00%

Publisher:

Abstract:

Error estimates for the error reproducing kernel method (ERKM) are provided. The ERKM is a mesh-free functional approximation scheme [A. Shaw, D. Roy, A NURBS-based error reproducing kernel method with applications in solid mechanics, Computational Mechanics (2006), to appear (available online)], wherein a target function and its derivatives are first approximated via non-uniform rational B-spline (NURBS) basis functions. Errors in the NURBS approximation are then reproduced via a family of non-NURBS basis functions, constructed using a polynomial reproduction condition, and added to the NURBS approximation of the function obtained in the first step. In addition to the derivation of error estimates, convergence studies are undertaken for a couple of test boundary value problems with known exact solutions. The ERKM is next applied to a one-dimensional Burgers equation, where time evolution leads to a breakdown of the continuous solution and the appearance of a shock. Many available mesh-free schemes appear unable to capture this shock without numerical instability. However, given that any desired order of continuity is achievable through NURBS approximations, the ERKM can accurately approximate even functions with discontinuous derivatives. Moreover, owing to the variation-diminishing property of NURBS, it has advantages in representing sharp changes in gradients. This paper focuses on demonstrating this ability of the ERKM via numerical examples. Comparisons of some of the results with those from the standard form of the reproducing kernel particle method (RKPM) demonstrate the relative numerical advantages and accuracy of the ERKM.
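The two-step construction can be summarized compactly. The display below (notation assumed, not quoted from the paper) shows the NURBS approximation followed by the reproduced-error correction:

```latex
% Step 1: NURBS approximation of the target function u, with NURBS basis
% functions R_i and coefficients u_i (notation assumed):
\[
  u^{N}(x) \;=\; \sum_{i} R_i(x)\, u_i .
\]
% Step 2: the residual error e(x) = u(x) - u^N(x) is approximated by a
% family of non-NURBS basis functions \psi_j, built from a polynomial
% reproduction condition, and added back:
\[
  u^{\mathrm{ERKM}}(x) \;=\; u^{N}(x) \;+\; \sum_{j} \psi_j(x)\, e_j .
\]
```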

Relevance:

10.00%

Publisher:

Abstract:

Color displays used in image processing systems consist of a refresh memory buffer storing digital image data, which are converted into analog signals to display an image by driving the primary color channels (red, green, and blue) of a color television monitor. The color cathode ray tube (CRT) of the monitor is unable to reproduce colors exactly due to phosphor limitations, the exponential luminance response of the tube to the applied signal, and limitations imposed by the digital-to-analog conversion. In this paper we describe computer simulation studies (using the U*V*W* color space) carried out to measure these reproduction errors. Further, a procedure to correct the color reproduction error due to the exponential luminance response (gamma) of the picture tube is proposed, using a video lookup table and a higher-resolution digital-to-analog converter. On the basis of the computer simulation studies, the proposed gamma correction scheme is found to be effective and robust with respect to variations in the assumed value of gamma.
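The proposed correction is essentially a pre-distortion table. The sketch below (Python/NumPy; the gamma value and word lengths are illustrative assumptions) builds a video lookup table that maps 8-bit pixel codes through the inverse power law to a higher-resolution DAC code, in the spirit of the scheme described above:

```python
# Hedged sketch of look-up-table gamma correction: each 8-bit pixel code
# is pre-distorted by the inverse of the tube's power-law response, with
# the table output at a higher resolution than the input to limit
# quantization error in the dark region.  Values are assumptions.
import numpy as np

gamma = 2.2                     # assumed luminance response of the CRT
in_bits, out_bits = 8, 10       # higher-resolution DAC on the output side

levels_in = np.arange(2 ** in_bits)
lut = np.round(((levels_in / (2 ** in_bits - 1)) ** (1.0 / gamma))
               * (2 ** out_bits - 1)).astype(np.uint16)

pixel = 64                       # a mid-dark input code
print(lut[pixel])                # pre-corrected 10-bit code sent to the DAC
```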

Relevance:

10.00%

Publisher:

Abstract:

The fatty acid composition of groundnuts (Arachis hypogaea L.), commonly known as peanuts, is an important consideration when a new variety is being released. The composition affects nutrition and, importantly, the shelf-life of peanut products. To select suitable breeding material, it was necessary to develop a rapid, non-destructive and cost-efficient method; near infrared spectroscopy was chosen as that methodology. Calibrations were developed for two major fatty-acid components, oleic and linoleic acids, two minor components, palmitic and stearic acids, and total oil content. Partial least squares models indicated a high level of precision, with a squared multiple correlation coefficient greater than 0.90 for each constituent. Standard errors of prediction for oleic, linoleic, palmitic and stearic acids and total oil content were 6.4%, 4.5%, 0.8%, 0.9% and 1.3% respectively. The results demonstrated that reasonable calibrations could be developed to predict the oil composition and content of peanuts for a breeding programme.
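The calibration step can be illustrated schematically. The sketch below (Python with scikit-learn; the spectra and fatty-acid values are synthetic, so the fit statistics are illustrative, not the reported ones) fits a partial least squares model from NIR spectra to one constituent and reports a standard error of prediction and a squared correlation of the kind quoted above:

```python
# Hedged sketch of a PLS calibration from NIR spectra to one constituent.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_samples, n_wavelengths = 200, 700
spectra = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)  # smooth-ish
oleic = spectra[:, 350] * 0.05 + rng.normal(0, 0.5, n_samples) + 45   # synthetic truth

X_tr, X_te, y_tr, y_te = train_test_split(spectra, oleic, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()

sep = np.std(pred - y_te)                 # standard error of prediction
r2 = np.corrcoef(pred, y_te)[0, 1] ** 2   # squared correlation, cf. R^2 > 0.90
print(f"SEP = {sep:.2f}, r^2 = {r2:.2f}")
```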

Relevance:

10.00%

Publisher:

Abstract:

Suboptimal restraint use, particularly the incorrect use of restraints, is a significant and widespread problem among child vehicle occupants, and increases the risk of injury. Previous research has identified comfort as a potential factor influencing suboptimal restraint use. Both the real comfort experienced by the child and the parent's perception of the child's comfort are reported to influence the optimal use of restraints. Problems with real comfort may lead the child to misuse the restraint in an attempt to achieve better comfort, whilst parent-perceived discomfort has been reported as a driver of premature graduation and inappropriate restraint choice. However, this work has largely been qualitative. There has been no research that objectively studies either the association between real and parent-perceived comfort, or any association between comfort and suboptimal restraint use. One barrier to such studies is the absence of validated tools for quantifying real comfort in children.

We aimed to develop methods to examine both real and parent-perceived comfort, and to examine their effects on suboptimal restraint use. We conducted online parent surveys (n=470) to explore what drives parental perceptions of their child's comfort in restraint systems (study 1) and used data from field observation studies (n=497) to examine parent-perceived comfort and its relationship with observed restraint use (study 2). We developed methods to measure comfort in children in a laboratory setting (n=14), using video analysis to estimate a Discomfort Avoidance Behaviour (DAB) score, pressure mapping, and adapted survey tools, to differentiate between comfortable and induced-discomfort conditions (study 3). The DAB rate was then used to compare an integrated booster with an add-on booster (study 4).

Preliminary analysis of our recent online survey of Australian parents (study 1) indicates that 23% of parents report comfort as a consideration when deciding to change restraints. Logistic regression modelling of the data collected during the field observation study (study 2) revealed that parent-perceived discomfort was not significantly associated with premature graduation. Contrary to expectation, children of parents who reported that their child was comfortable were almost twice as likely to have been incorrectly restrained (p<0.01, 95% CI 1.24-2.77). In the laboratory study (study 3) we found that our adapted survey tools did not provide a reliable measurement of real comfort among children; however, our DAB score was able to differentiate between comfortable and induced-discomfort conditions and correlated well with pressure mapping. Preliminary results from the laboratory comparison study (study 4) indicate a positive correlation between DAB rate and use errors. In experiments conducted to date, we have seen a significantly higher DAB rate in the integrated booster than in the add-on booster (p<0.01). However, this needs to be confirmed in a naturalistic setting, and in further experiments that take the length of time under observation into account.

Our results suggest that while some parents report concern about their child's comfort, parent-reported comfort levels were not associated with restraint choice. If comfort is important for optimal restraint use, it is likely to be the real comfort of the child rather than that reported by the parent. The method we have developed for studying real comfort can be used in naturalistic studies involving child occupants to further understand this relationship. This work will be of interest to vehicle and child restraint manufacturers seeking to improve restraint design for young occupants, as well as to researchers and other stakeholders interested in reducing the incidence of restraint misuse among children.
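The study-2 analysis can be sketched in outline. The code below (Python with statsmodels; the observations are synthetic, with an assumed effect size, so the output only mimics the form of the reported result) fits a logistic regression of observed incorrect restraint use on parent-reported comfort and reports an odds ratio with a 95% confidence interval:

```python
# Hedged sketch of a logistic regression of incorrect restraint use on
# parent-reported comfort (synthetic data; effect size assumed).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 497
comfortable = rng.integers(0, 2, n)                   # parent reports child comfortable
logit = -0.8 + 0.6 * comfortable                      # assumed true effect (OR ~ 1.8)
incorrect = rng.random(n) < 1 / (1 + np.exp(-logit))  # observed incorrect use

X = sm.add_constant(comfortable.astype(float))
fit = sm.Logit(incorrect.astype(float), X).fit(disp=0)
or_, ci = np.exp(fit.params[1]), np.exp(fit.conf_int()[1])
print(f"OR = {or_:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```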

Relevance:

10.00%

Publisher:

Abstract:

Background: Prescribing is a complex task, requiring specific knowledge and skills and the execution of effective, context-specific clinical reasoning. Systematic reviews indicate that medical prescribing errors have a median rate of 7% [IQR 2%-14%] of medication orders [1-3]. For podiatrists pursuing prescribing rights, a clear need exists to ensure that practitioners develop a well-defined set of prescribing skills, which will contribute to competent, safe and appropriate practice.

Aim: To investigate the methods employed to teach and assess the principles of effective prescribing in the undergraduate podiatry program, and to compare and contrast these findings with those for four other non-medical professions that undertake prescribing after training at Queensland University of Technology.

Method: The NPS National Prescribing Competency Standards were employed as the prescribing standard. A curriculum mapping exercise was undertaken to determine whether the prescribing principles articulated in the competency standards were addressed by each profession.

Results: A range of methods is currently used to teach prescribing across the disciplines. Application of the prescribing competencies to the context of each profession appears to influence the teaching methods used. Most competencies were taught in a multimodal format, including interactive lectures, self-directed learning, tutorial sessions and clinical placement. In particular, clinical training was identified as the most consistent form of educating safe prescribers across all five disciplines. Assessment of prescribing competency used multiple techniques, including written and oral examinations, research tasks, case studies, objective structured clinical examination exercises and the assessment of clinical practice. Effective and reliable assessment of prescribing undertaken by students in diverse settings, for example in the clinical practice environment, remains challenging.

Conclusion: Recommendations were made to refine curricula and to promote efficient cross-discipline teaching by staff from the disciplines of podiatry, pharmacy, nurse practitioner, optometry and paramedic science. Students now experience a sophisticated level of multidisciplinary learning in the clinical setting, which integrates the expertise and skills of experienced prescribers with innovative information technology platforms (CCTV and live patient assessments). Further work is required to establish a practical, effective approach to the assessment of prescribing competence, especially between the university and clinical settings.