990 results for Optimal Testing
Abstract:
Purpose: Two diodes which do not require correction factors for small-field relative output measurements are designed and validated using an experimental methodology. This was achieved by adding an air layer above the active volume of the diode detectors, which canceled out the increase in response of the diodes in small fields relative to standard field sizes. Methods: Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable “air cap”. A set of output ratios ($OR_{\mathrm{Det}}^{f_{\mathrm{clin}}}$) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to $OR_{\mathrm{Det}}^{f_{\mathrm{clin}}}$ measured using an IBA stereotactic field diode (SFD). The output correction factor $k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}$ was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that $k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}$ was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) which is “correction-free” in small-field relative dosimetry. In addition, the feasibility of experimentally transferring $k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}$ values from the SFD to unknown diodes was tested by comparing the experimentally transferred $k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}$ values for unmodified PTWe and EDGEe diodes to Monte Carlo simulated values. Results: 1.0 mm of air was required to make the PTWe diode correction-free. This modified diode (PTWeair) produced output factors equivalent to those in water at all field sizes (5–50 mm). The optimal air thickness required for the EDGEe diode was found to be 0.6 mm. The modified diode (EDGEeair) produced output factors equivalent to those in water, except at field sizes of 8 and 10 mm, where it measured approximately 2% greater than the relative dose to water. The experimentally calculated $k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}$ for both the PTWe and the EDGEe diodes (without air) matched Monte Carlo simulated results, thus proving that it is feasible to transfer $k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}$ from one commercially available detector to another using experimental methods and the recommended experimental setup. Conclusions: It is possible to create a diode which does not require corrections for small-field output factor measurements. This has been performed and verified experimentally. The ability of a detector to be “correction-free” depends strongly on its design and composition. A non-water-equivalent detector can only be “correction-free” if competing perturbations of the beam cancel out at all field sizes. This should not be confused with true water equivalency of a detector.
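For readers unfamiliar with the notation, the small-field formalism behind these quantities relates the field output factor to the detector reading ratio through the output correction factor, and the transfer step described above follows from requiring both detectors to report the same output factor. A hedged sketch of those two relations in the standard notation (this is the general formalism, not equations quoted from the paper):

```latex
% Field output factor from a detector reading ratio (standard small-field formalism):
\[
  \Omega^{f_\mathrm{clin},f_\mathrm{msr}}_{Q_\mathrm{clin},Q_\mathrm{msr}}
  = \underbrace{\frac{M^{f_\mathrm{clin}}_{Q_\mathrm{clin}}}{M^{f_\mathrm{msr}}_{Q_\mathrm{msr}}}}_{OR^{f_\mathrm{clin}}_\mathrm{Det}}
    \; k^{f_\mathrm{clin},f_\mathrm{msr}}_{Q_\mathrm{clin},Q_\mathrm{msr}}
\]
% Since both detectors measure the same output factor, k can be transferred from a
% reference detector (here the SFD) to another detector (here the PTWe):
\[
  k^{f_\mathrm{clin},f_\mathrm{msr}}_{Q_\mathrm{clin},Q_\mathrm{msr}}(\mathrm{PTWe})
  = k^{f_\mathrm{clin},f_\mathrm{msr}}_{Q_\mathrm{clin},Q_\mathrm{msr}}(\mathrm{SFD})
    \times \frac{OR^{f_\mathrm{clin}}_\mathrm{SFD}}{OR^{f_\mathrm{clin}}_\mathrm{PTWe}}
\]
```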
Abstract:
Background: The aim of this study was to compare, through surface electromyographic (sEMG) recordings, the maximum voluntary contraction (MVC) performed by manual muscle test (MMT) on dry land and in water. Method: Sixteen healthy right-handed subjects (8 males and 8 females) participated in measurements of muscle activation of the right shoulder. The selected muscles were the cervical erector spinae, trapezius, pectoralis, anterior deltoid, middle deltoid, infraspinatus and latissimus dorsi. The order of the MVC test conditions (on land / in water) was randomized. Results: For each muscle, the MVC test was performed and measured through sEMG to determine differences in muscle activation between the two conditions. For all muscles except the latissimus dorsi, no significant differences were observed between land and water MVC values (p = 0.063–0.679), and the precision between MVC conditions was within 7–10% (%Diff) for the trapezius, anterior deltoid and middle deltoid. Conclusions: If the procedure for data collection is optimal, comparable MVC sEMG values can be achieved under MMT conditions on land and in water, and the integrity of the sEMG recordings was maintained during water immersion.
Abstract:
Yao, Begg, and Livingston (1996, Biometrics 52, 992-1001) considered the optimal group size for testing a series of potentially therapeutic agents to identify a promising one as soon as possible for given error rates. The number of patients to be tested with each agent was fixed as the group size. We consider a sequential design that allows early acceptance and rejection, and we provide an optimal strategy to minimize the sample sizes (patients) required using Markov decision processes. The minimization is under the constraints of the two types (false positive and false negative) of error probabilities, with the Lagrangian multipliers corresponding to the cost parameters for the two types of errors. Numerical studies indicate that there can be a substantial reduction in the number of patients required.
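The abstract does not spell out the Markov-decision-process optimization, but the inner evaluation such an optimization repeats is straightforward to illustrate: given candidate early-acceptance and early-rejection boundaries for a single agent, compute the two error probabilities and the expected number of patients. The sketch below is a hedged illustration of that evaluation step only; the boundaries and response rates are assumptions, not the authors' optimal design.

```python
# Hedged sketch: operating characteristics of one candidate sequential design for
# a single agent with early acceptance and rejection. Boundaries and response
# probabilities are illustrative assumptions, not the paper's optimized values.

def operating_characteristics(p, n_max, accept_at, reject_at):
    """Forward recursion over (patients treated, responses observed).

    p          : true response probability of the agent
    n_max      : maximum number of patients
    accept_at  : dict {n: min responses needed to accept (promising) after n patients}
    reject_at  : dict {n: max responses allowed to reject (not promising) after n patients}
    Returns (P(accept), expected number of patients).
    """
    prob = {0: 1.0}                     # prob[s] = P(still undecided with s responses)
    p_accept, expected_n = 0.0, 0.0
    for n in range(1, n_max + 1):
        new = {}
        for s, pr in prob.items():      # observe the n-th patient's outcome
            for resp, w in ((1, p), (0, 1.0 - p)):
                new[s + resp] = new.get(s + resp, 0.0) + pr * w
        prob = {}
        for s, pr in new.items():
            if s >= accept_at.get(n, n + 1):   # early (or final) acceptance
                p_accept += pr
                expected_n += pr * n
            elif s <= reject_at.get(n, -1):    # early (or final) rejection
                expected_n += pr * n
            else:
                prob[s] = pr                   # continue to the next patient
    for s, pr in prob.items():                 # any paths undecided at n_max: count as stopped there
        expected_n += pr * n_max
    return p_accept, expected_n

# Illustrative use: error rates under null (p0) and alternative (p1) response rates.
accept_at = {10: 4, 20: 7}              # assumed interim/final boundaries
reject_at = {10: 1, 20: 6}
p0, p1 = 0.10, 0.30
alpha, _ = operating_characteristics(p0, 20, accept_at, reject_at)   # false positive
power, en1 = operating_characteristics(p1, 20, accept_at, reject_at)
print(f"alpha={alpha:.3f}, beta={1 - power:.3f}, E[N | p1]={en1:.1f}")
```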
Abstract:
Several articles in this journal have studied optimal designs for testing a series of treatments to identify promising ones for further study. These designs formulate testing as an ongoing process until a promising treatment is identified. This formulation is considered to be more realistic but substantially increases the computational complexity. In this article, we show that these new designs, which control the error rates for a series of treatments, can be reformulated as conventional designs that control the error rates for each individual treatment. This reformulation leads to a more meaningful interpretation of the error rates and hence easier specification of the error rates in practice. The reformulation also allows us to use conventional designs from published tables or standard computer programs to design trials for a series of treatments. We illustrate these using a study in soft tissue sarcoma.
Abstract:
Magnetorheological dampers are intrinsically nonlinear devices, which makes the modeling and design of a suitable control algorithm an interesting and challenging task. To evaluate the potential of magnetorheological (MR) dampers in control applications and to take full advantage of their unique features, a mathematical model that accurately reproduces their dynamic behavior has to be developed, and then a proper control strategy has to be adopted that is implementable and can fully utilize their capabilities as a semi-active control device. The present paper focuses on both of these aspects. First, the paper reports the testing of a magnetorheological damper with a universal testing machine for a set of frequencies, amplitudes, and currents. A modified Bouc-Wen model considering the amplitude and input-current dependence of the damper parameters has been proposed. It has been shown that the damper response can be satisfactorily predicted with this model. Second, a backstepping-based nonlinear current-monitoring scheme for magnetorheological dampers, for the semi-active control of structures under earthquakes, has been developed. It provides stable nonlinear monitoring of the MR damper current directly based on system feedback, such that the change in damper current is gradual. Unlike other MR damper control techniques available in the literature, the main advantage of the proposed technique lies in its prediction of the current input directly from system feedback and its smooth update of the input current. Furthermore, while developing the proposed semi-active algorithm, the dynamics of the supplied and commanded current to the damper have been considered. The efficiency of the proposed technique has been demonstrated on a base-isolated three-story building under a set of seismic excitations. A comparison with the widely used clipped-optimal strategy is also presented.
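The abstract does not reproduce the model equations. As background, the standard Bouc-Wen hysteresis model commonly used for MR dampers is sketched below; the paper's modification makes these parameters depend on excitation amplitude and coil current, which is not reproduced here, and all parameter values are illustrative assumptions rather than the paper's fits.

```python
import numpy as np

# Hedged sketch: standard Bouc-Wen hysteresis model often used for MR dampers.
# Parameter values are illustrative only.

def bouc_wen_force(x, t, c0=1.0, k0=0.5, alpha=8.0,
                   A=100.0, beta=200.0, gamma=200.0, n=2):
    """Damper force for a displacement history x sampled on the uniform time grid t."""
    dt = t[1] - t[0]
    v = np.gradient(x, dt)                # velocity from the displacement history
    z = np.zeros_like(x)                  # evolutionary (hysteretic) variable
    for i in range(len(t) - 1):
        # Bouc-Wen evolution:  z' = A*v - beta*|v|*|z|^(n-1)*z - gamma*v*|z|^n
        zdot = (A * v[i]
                - beta * abs(v[i]) * abs(z[i]) ** (n - 1) * z[i]
                - gamma * v[i] * abs(z[i]) ** n)
        z[i + 1] = z[i] + zdot * dt       # explicit Euler step (sketch only)
    # Total force: viscous + stiffness + hysteretic contributions
    return c0 * v + k0 * x + alpha * z

# Illustrative use: sinusoidal displacement, as in a universal-testing-machine run.
t = np.linspace(0.0, 2.0, 4001)
x = 0.01 * np.sin(2 * np.pi * 2.0 * t)    # 10 mm amplitude, 2 Hz
F = bouc_wen_force(x, t)                  # plotting F against x shows the hysteresis loop
```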
Abstract:
We consider a visual search problem studied by Sripati and Olson in which the objective is to identify an oddball image embedded among multiple distractor images as quickly as possible. We model this visual search task as an active sequential hypothesis testing (ASHT) problem. Chernoff in 1959 proposed a policy whose expected delay to decision is asymptotically optimal in the regime of vanishing error probabilities. We first prove a stronger property on the moments of the delay until a decision under the same asymptotics. Applying this result to the visual search problem, we then propose a "neuronal metric" on the measured neuronal responses that captures the discriminability between images. From an empirical study we obtain a remarkable correlation (r = 0.90) between the proposed neuronal metric and the speed of discrimination between the images. Although this correlation is lower than that obtained with the L1 metric used by Sripati and Olson, the proposed metric has the advantage of being firmly grounded in formal decision theory.
Abstract:
This paper considers sequential hypothesis testing in a decentralized framework. We start with two simple decentralized sequential hypothesis testing algorithms, one of which is later proved to be asymptotically Bayes optimal. We also consider composite versions of decentralized sequential hypothesis testing. A novel nonparametric version of decentralized sequential hypothesis testing using universal source coding theory is developed. Finally, we design a simple decentralized multihypothesis sequential detection algorithm.
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
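The "noise filter" idea described above, attenuating each harmonic as a function of its signal-to-noise ratio before integrating to velocity, can be illustrated with a Wiener-type spectral gain. The sketch below is an illustration of that general idea, not the paper's actual processing routine; the noise power spectrum is assumed to be known or estimated separately, and the synthetic record is an assumption for the example.

```python
import numpy as np

# Hedged sketch: SNR-dependent spectral attenuation followed by frequency-domain
# integration of an accelerogram. Illustrative only; not the paper's routine.

def snr_weighted_filter(acc, noise_psd):
    """Attenuate each frequency bin of an accelerogram by SNR/(1+SNR)."""
    A = np.fft.rfft(acc)
    signal_psd = np.abs(A) ** 2
    snr = np.maximum(signal_psd - noise_psd, 0.0) / np.maximum(noise_psd, 1e-30)
    gain = snr / (1.0 + snr)                     # Wiener-type per-harmonic gain
    return np.fft.irfft(A * gain, n=len(acc))

def integrate_in_frequency(acc, dt):
    """Velocity from acceleration via division by (i*omega) in the frequency domain."""
    A = np.fft.rfft(acc)
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(acc), dt)
    V = np.zeros_like(A)
    V[1:] = A[1:] / (1j * omega[1:])             # zero-frequency term left at zero
    return np.fft.irfft(V, n=len(acc))

# Illustrative use on a synthetic record (decaying sinusoid plus white noise).
dt = 0.01
t = np.arange(0.0, 40.0, dt)
acc = np.exp(-0.1 * t) * np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.randn(t.size)
# Rough per-bin periodogram level of the added white noise (E[|X_k|^2] ~ N * sigma^2):
noise_psd = np.full(len(np.fft.rfftfreq(t.size, dt)), (0.05 ** 2) * t.size)
vel = integrate_in_frequency(snr_weighted_filter(acc, noise_psd), dt)
```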
Abstract:
Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models.
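As a note on the terminology, incorporating the variance of the movement cost as an added cost or an added value corresponds, in the standard risk-sensitive control literature, to an exponential-utility objective. A hedged sketch of that relation, where θ is the risk-sensitivity parameter from the general theory (not a value quoted from the abstract):

```latex
% Risk-sensitive objective (exponential utility), with J the stochastic movement cost
% and \theta the risk-sensitivity parameter:
\[
  J_\theta \;=\; \frac{1}{\theta}\,\log \mathbb{E}\!\left[e^{\theta J}\right]
  \;\approx\; \mathbb{E}[J] \;+\; \frac{\theta}{2}\,\mathrm{Var}[J]
  \qquad (\text{small } \theta),
\]
% so \theta > 0 penalizes cost variance (risk-averse), \theta < 0 rewards it
% (risk-seeking), and \theta \to 0 recovers the risk-neutral expected-cost objective.
```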
Abstract:
We describe a novel constitutive model of lung parenchyma, which can be used for continuum mechanics based predictive simulations. To develop this model, we experimentally determined the nonlinear material behavior of rat lung parenchyma. This was achieved via uni-axial tension tests on living precision-cut rat lung slices. The resulting force-displacement curves were then used as inputs for an inverse analysis. The Levenberg-Marquardt algorithm was utilized to optimize the material parameters of combinations and recombinations of established strain-energy density functions (SEFs). Comparing the best fits of the tested SEFs, we found $W_{\mathrm{par}} = 4.1\,\mathrm{kPa}\,(I_1 - 3)^2 + 20.7\,\mathrm{kPa}\,(I_1 - 3)^3 + 4.1\,\mathrm{kPa}\,(-2\ln J + J^2 - 1)$ to be the optimal constitutive model. This SEF consists of three summands: the first can be interpreted as the contribution of the elastin fibers and the ground substance, the second as the contribution of the collagen fibers, while the third controls the volumetric change. The presented approach will help to model the behavior of the pulmonary parenchyma and to quantify the strains and stresses during ventilation.
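A hedged sketch of the reported strain-energy density function and of the kind of least-squares coefficient fit the abstract describes is given below. The invariants I1 and J would come from the kinematics of the slice tests, and the authors' actual inverse analysis maps parameters to measured forces through a continuum model; here the fit is run on synthetic strain-energy samples purely for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Hedged sketch: evaluate the reported SEF and recover its coefficients from
# synthetic data with a Levenberg-Marquardt least-squares fit. Illustrative only.

def W_par(I1, J, c1=4.1e3, c2=20.7e3, c3=4.1e3):
    """Strain-energy density [Pa]; defaults are the reported coefficients (kPa -> Pa)."""
    return (c1 * (I1 - 3.0) ** 2
            + c2 * (I1 - 3.0) ** 3
            + c3 * (-2.0 * np.log(J) + J ** 2 - 1.0))

# Synthetic "measurements" over an assumed range of invariants (stand-in for the
# force-displacement data used in the actual inverse analysis):
I1 = np.linspace(3.0, 3.6, 50)
J = np.linspace(1.0, 1.2, 50)
W_meas = W_par(I1, J) * (1.0 + 0.02 * np.random.randn(I1.size))

def residuals(c):
    return W_par(I1, J, *c) - W_meas

fit = least_squares(residuals, x0=[1e3, 1e3, 1e3], method="lm")   # Levenberg-Marquardt
print(fit.x)   # recovered coefficients, roughly [4.1e3, 20.7e3, 4.1e3]
```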
Abstract:
1-D engine simulation models are widely used for the analysis and verification of air-path design concepts and for prediction of the resulting engine transient response. The latter often requires closed-loop control over the model to ensure operation within physical limits and tracking of reference signals. For this purpose, a particular implementation of Model Predictive Control (MPC) based on a corresponding Mean Value Engine Model (MVEM) is reported here. The MVEM is linearised on-line at each operating point to allow for the formulation of quadratic programming (QP) problems, which are solved as part of the proposed MPC algorithm. The MPC output is used to control a 1-D engine model. The closed-loop performance of such a system is benchmarked against the solution of a related optimal control problem (OCP). As an example, this study focuses on the transient response of a light-duty car Diesel engine. For the cases examined, the proposed controller implementation gives a more systematic procedure than other ad-hoc approaches that require considerable tuning effort. © 2012 IFAC.
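The core computation in such a scheme is a QP built from the model linearized at the current operating point. The sketch below shows a condensed formulation of that step for a generic linear model; the matrices A and B are illustrative placeholders (not an MVEM linearization), constraints are omitted, and the unconstrained minimizer stands in for a full QP solver.

```python
import numpy as np

# Hedged sketch: condensed QP behind one linear MPC step. Illustrative only.

def mpc_step(A, B, x0, x_ref, N=10, q=1.0, r=0.1):
    """Return the first control move of a horizon-N MPC problem for x+ = A x + B u."""
    nx, nu = B.shape
    # Prediction matrices:  X = Phi @ x0 + Gamma @ U
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Gamma = np.zeros((N * nx, N * nu))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = q * np.eye(N * nx)              # stage weights (diagonal, for brevity)
    Rbar = r * np.eye(N * nu)
    ref = np.tile(x_ref, N)
    # Quadratic cost in U; unconstrained minimizer solves (Gamma'Q Gamma + R) U = -Gamma'Q (Phi x0 - ref)
    H = Gamma.T @ Qbar @ Gamma + Rbar
    f = Gamma.T @ Qbar @ (Phi @ x0 - ref)
    U = np.linalg.solve(H, -f)
    return U[:nu]                          # receding horizon: apply only the first move

# Illustrative use with a placeholder 2-state, 1-input linearized model:
A = np.array([[0.95, 0.10], [0.0, 0.90]])
B = np.array([[0.0], [0.05]])
u0 = mpc_step(A, B, x0=np.array([0.0, 0.0]), x_ref=np.array([1.0, 0.0]))
```

With input and state constraints added, H and f define the QP passed to a solver at each sampling instant, after re-linearizing the MVEM at the new operating point.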
Abstract:
In linear cascade wind tunnel tests, a high level of pitchwise periodicity is desirable to reproduce the azimuthal periodicity in the stage of an axial compressor or turbine. Transonic tests in a cascade wind tunnel with open jet boundaries have been shown to suffer from spurious waves, reflected at the jet boundary, that compromise the flow periodicity in pitch. This problem can be tackled by placing at this boundary a slotted tailboard with a specific wall void ratio s and pitch angle a. The optimal value of the s-a pair depends on the test section geometry and on the tunnel running conditions. An inviscid two-dimensional numerical method has been developed to predict transonic linear cascade flows, with and without a tailboard, and quantify the nonperiodicity in the discharge. This method includes a new computational boundary condition to model the effects of the tailboard slots on the cascade interior flow. This method has been applied to a six-blade turbine nozzle cascade, transonically tested at the University of Leicester. The numerical results identified a specific slotted tailboard geometry, able to minimize the spurious reflected waves and regain some pitchwise flow periodicity. The wind tunnel open jet test section was redesigned accordingly. Pressure measurements at the cascade outlet and synchronous spark schlieren visualization of the test section, with and without the optimized slotted tailboard, have confirmed the gain in pitchwise periodicity predicted by the numerical model. Copyright © 2006 by ASME.
Abstract:
Invasive alien species (IAS) can cause substantial ecological impacts, and the role of temperature in mediating these impacts may become increasingly significant in a changing climate. Habitat conditions and physiological optima offer predictive information for IAS impacts in novel environments. Here, using meta-analysis and laboratory experiments, we tested the hypothesis that the impacts of IAS in the field are inversely correlated with the difference between their ambient and optimal temperatures. A meta-analysis of 29 studies of consumptive impacts of IAS in inland waters revealed that the impacts of fishes and crustaceans are higher at temperatures that more closely match their thermal growth optima. In particular, the maximum impact potential was constrained by increased differences between ambient and optimal temperatures, as indicated by the steeper slope of a quantile regression on the upper 25th percentile of impact data compared to that of a weighted linear regression on all data with measured variances. We complemented this study with an experimental analysis of the functional response, the relationship between predation rate and prey supply, of two invasive predators (the freshwater mysid shrimps Hemimysis anomala and Mysis diluviana) across relevant temperature gradients; both of these species have previously been found to exert strong community-level impacts that are corroborated by their functional responses to different prey items. The functional response experiments showed that maximum feeding rates of H. anomala and M. diluviana have distinct peaks near their respective thermal optima. Although variation in impacts may be caused by numerous abiotic or biotic habitat characteristics, both our analyses point to temperature as a key mediator of IAS impact levels in inland waters and suggest that IAS management should prioritize habitats in the invaded range that more closely match the thermal optima of targeted invaders.
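For readers unfamiliar with the term, a common model for the functional response mentioned above is Holling's Type II disc equation. The sketch below is purely illustrative of that relationship; it is not the authors' fitted model, and the Gaussian temperature dependence of the attack rate is an assumption made only for the example.

```python
import numpy as np

# Hedged sketch: Holling Type II functional response with an assumed
# temperature-dependent attack rate. Illustrative only.

def holling_type_ii(N, a, h):
    """Prey consumed per predator per unit time at prey density N.
    a: attack (search) rate, h: handling time per prey item."""
    return a * N / (1.0 + a * h * N)

def attack_rate(T, a_max, T_opt, width):
    """Assumed Gaussian temperature dependence peaking at the thermal optimum."""
    return a_max * np.exp(-((T - T_opt) / width) ** 2)

# Illustrative use: feeding rate across prey densities at two temperatures,
# taking 20 degrees C as the assumed thermal optimum.
N = np.linspace(0.0, 200.0, 50)
for T in (12.0, 20.0):
    rate = holling_type_ii(N, a=attack_rate(T, a_max=1.5, T_opt=20.0, width=6.0), h=0.02)
    print(T, rate.max())   # maximum feeding rate peaks near the assumed optimum
```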
Abstract:
There is interest in determining levels of Mycobacterium avium subsp. paratuberculosis (MAP) contamination in milk. The optimal sample preparation for raw cows' milk to ensure accurate enumeration of viable MAP by the peptide-mediated magnetic separation (PMS)-phage assay was determined. Results indicated that milk samples should be refrigerated at 4 °C after collection and MAP testing should commence within 24 h, or samples can be frozen at −70 °C for up to one month without loss of MAP viability. Use of Bronopol is not advised, as MAP viability is affected. The vast majority (>95%) of MAP in raw milk sedimented to the pellet upon centrifugation at 2500 × g for 15 min, so this milk fraction should be tested. De-clumping of MAP cells was most effectively achieved by ultrasonication of the resuspended milk pellet on ice in a sonicator bath at 37 kHz for 4 min in ‘Pulse’ mode.