844 results for "Optimal time delay"
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Recently, methods for computing D-optimal designs for population pharmacokinetic studies have become available. However, there are few publications that have prospectively evaluated the benefits of D-optimality in population or single-subject settings. This study compared a population optimal design with an empirical design for estimating the base pharmacokinetic model for enoxaparin in a stratified, randomized setting. The population pharmacokinetic D-optimal design for enoxaparin was estimated using the PFIM function (MATLAB version 6.0.0.88). The optimal design was based on a one-compartment model with lognormal between-subject variability and proportional residual variability, and consisted of a single design with three sampling windows (0-30 min, 1.5-5 h, and 11-12 h post-dose) for all patients. The empirical design consisted of three sample time windows per patient from a total of nine windows that collectively represented the entire dose interval. Each patient was assigned to have one blood sample taken from each of three different windows. Windows for blood sampling times were also provided for the optimal design. Ninety-six patients currently receiving enoxaparin therapy were recruited into the study. Patients were randomly assigned to either the optimal or the empirical sampling design, stratified by body mass index. The exact times of blood samples and doses were recorded. Analysis was undertaken using NONMEM (version 5). The empirical design supported a one-compartment linear model with additive residual error, while the optimal design supported a two-compartment linear model with additive residual error, as did the model derived from the full data set. A posterior predictive check was performed in which the models arising from the empirical and optimal designs were used to predict into the full data set. This revealed that the model derived from the optimal design was superior to the empirical design model in terms of precision and was similar to the model developed from the full data set. This study suggests that optimal design techniques may be useful, even when the optimized design is based on a model that is misspecified in terms of the structural and statistical models, and when the implementation of the optimally designed study deviates from the nominal design.
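A minimal sketch, assuming hypothetical parameter values (the study's actual estimates are not given here), of one simulated patient under the base model: a one-compartment model with first-order absorption, lognormal between-subject variability, proportional residual error, and one sample drawn from each of the three optimal-design windows. Python stands in for the MATLAB/PFIM and NONMEM tooling used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_compartment(t, dose, ka, cl, v):
    """Concentration-time profile: one compartment, first-order absorption."""
    ke = cl / v
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Hypothetical population parameters (not the study's estimates).
pop = {"ka": 0.5, "cl": 1.0, "v": 5.0}   # 1/h, L/h, L
omega = 0.3                              # SD of lognormal between-subject variability
sigma = 0.15                             # proportional residual error

# One sample drawn from each optimal-design window (hours post-dose).
windows = [(0.0, 0.5), (1.5, 5.0), (11.0, 12.0)]
times = np.array([rng.uniform(lo, hi) for lo, hi in windows])

# Lognormal between-subject variability on each parameter.
theta = {k: val * rng.lognormal(0.0, omega) for k, val in pop.items()}
conc = one_compartment(times, dose=40.0, **theta)

# Proportional residual error on the observations.
obs = conc * (1.0 + sigma * rng.standard_normal(conc.size))
for t, c in zip(times, obs):
    print(f"t = {t:5.2f} h, observed concentration = {c:.3f}")
```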
Abstract:
Background/Purpose: Several pull-through procedures are available for the surgical management of Hirschsprung's disease (HD) in children. The authors have adopted a laparoscopic approach since 1995, including the laparoscopic Swenson procedure (LSw), for both one-stage primary and two-stage secondary procedures. The aim of this study was to examine the role of LSw in children with HD in both primary and secondary procedures. Methods: From January 1995 to December 2001, 42 children with biopsy-proven HD underwent a laparoscopic pull-through procedure for HD. This group included 29 children who underwent LSw, a detailed analysis of which forms the basis of this report. Results: Sixteen children underwent a single-stage neonatal LSw; the median weight of this group at the time of surgery was 3.2 kg and the median age was 5 days. Secondary LSw was performed in the remaining 13 children, including 3 children with total colonic HD who underwent laparoscopic total colectomy and LSw. The median operating time was 105 minutes (range, 66 to 175 minutes). The median time to commence a full diet was 48 hours (range, 24 to 86 hours), and the median time to return to normal play and activity was 72 hours (range, 48 hours to 5 days). There was no difference in operating time between primary and secondary pull-through procedures. There were no intraoperative complications, and no patient required open conversion. Postoperative ileus was noted in 3 children and enterocolitis in 2. The median hospital stay was 4 days (range, 2 to 6 days). Follow-up ranged from 6 months to 7 years, with a median of 2.2 years. At follow-up, 2 children required a laparoscopic antegrade continence enema procedure. Satisfactory continence was noted in 15 of the 19 children who were older than 3 years at the time of last follow-up. Conclusions: LSw seems to be a suitable procedure for the laparoscopic management of HD in children. LSw is safe and effective for both primary and secondary pull-through procedures, with good short-term results.
Abstract:
Monitoring of marine reserves has traditionally focused on the task of rejecting the null hypothesis that marine reserves have no impact on the population and community structure of harvested populations. We consider the role of monitoring of marine reserves to gain information needed for management decisions. In particular, we use a decision-theoretic framework to answer the question: how long should we monitor the recovery of an over-fished stock to determine the fraction of that stock to reserve? This exposes a natural tension between the cost (in terms of time and money) of additional monitoring and the benefit of more accurately parameterizing a population model for the stock, which in turn leads to a better decision about the optimal size of the reserve with respect to harvesting. We found that the optimal monitoring time frame is rarely more than 5 years. A higher economic discount rate decreased the optimal monitoring time frame, making the expected benefit of more certainty about parameters in the system negligible compared with the expected gain from earlier exploitation.
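The trade-off described above can be made concrete with a toy calculation, a minimal sketch under invented numbers (not the paper's population model): the value of the reserve-size decision improves with monitoring, with diminishing returns, while both the decision payoff and the survey costs are discounted.

```python
import numpy as np

c = 1.0            # annual monitoring cost
delta = 0.05       # economic discount rate
v_perfect = 100.0  # value of deciding under perfect information
v_prior = 80.0     # value of deciding now, from the prior alone

def expected_value(T, tau=3.0):
    """Discounted net value of deciding after T years of monitoring.

    Information accrues with diminishing returns (time scale tau);
    the payoff is discounted, and monitoring costs accumulate.
    """
    info = 1.0 - np.exp(-T / tau)
    payoff = (v_prior + info * (v_perfect - v_prior)) * np.exp(-delta * T)
    costs = c * (1.0 - np.exp(-delta * T)) / delta  # integral of discounted cost
    return payoff - costs

horizons = np.arange(0, 21)
best = horizons[np.argmax([expected_value(T) for T in horizons])]
print(f"optimal monitoring horizon: {best} years")
```

Raising `delta` in this toy shortens the optimal horizon, matching the effect of the economic discount rate noted in the abstract.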
Abstract:
In this paper we develop an evolutionary kernel-based time-update algorithm to recursively estimate subset discrete lag models (including full-order models) with a forgetting factor and a constant term, using the exact-windowed case. The algorithm applies to causality detection when the true relationship occurs with a continuous or a random delay. We then demonstrate the use of the proposed evolutionary algorithm in a study of monthly mutual fund data from the 'CRSP Survivor-bias free US Mutual Fund Database'. The results show that the NAV is an influential player on the international stage of global bond and stock markets.
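The recursive core of such time-update schemes is recursive least squares with a forgetting factor. A minimal sketch of that standard building block on a simulated subset lag model follows; it is not the paper's evolutionary kernel-based variant, and the data are invented.

```python
import numpy as np

def rls_forgetting(X, y, lam=0.98, delta=100.0):
    """Recursive least squares with forgetting factor lam.

    X: (T, p) lagged regressors, y: (T,) targets.
    """
    T, p = X.shape
    theta = np.zeros(p)
    P = delta * np.eye(p)                       # initial inverse covariance
    for t in range(T):
        x = X[t]
        k = P @ x / (lam + x @ P @ x)           # gain vector
        theta = theta + k * (y[t] - x @ theta)  # prediction-error update
        P = (P - np.outer(k, x @ P)) / lam      # covariance time update
    return theta

# Toy subset lag model: y_t = 0.6 y_{t-1} - 0.2 y_{t-3} + noise (lag 2 absent).
rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(3, 500):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 3] + 0.1 * rng.standard_normal()
X = np.column_stack([y[2:-1], y[1:-2], y[:-3]])  # lags 1, 2, 3
print(rls_forgetting(X, y[3:]).round(2))         # roughly [0.6, 0.0, -0.2]
```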
Abstract:
Process optimisation and optimal control of batch and continuous drum granulation processes are studied in this paper. The main focus of the current research has been: (i) construction of optimisation- and control-relevant population balance models through the incorporation of moisture content, drum rotation rate and bed depth into the coalescence kernels; (ii) investigation of optimal operational conditions using constrained optimisation techniques; (iii) development of optimal control algorithms based on discretized population balance equations; and (iv) comprehensive simulation studies on optimal control of both batch and continuous granulation processes. The objective of steady-state optimisation is to minimise the recycle rate with minimum cost for continuous processes. It has been identified that the drum rotation rate, bed depth (material charge), and moisture content of solids are practical decision (design) parameters for system optimisation. The objective for the optimal control of batch granulation processes is to maximise the mass of product-sized particles in minimum time and with minimum binder consumption. The objective for the optimal control of the continuous process is to drive the process from one steady state to another in minimum time with minimum binder consumption, which is also known as the state-driving problem. It has been known for some time that the binder spray rate is the most effective control (manipulative) variable. Although other possible manipulative variables, such as feed flow-rate and additional powder flow-rate, have been investigated in the complete research project, only the single-input problem with the binder spray rate as the manipulative variable is addressed in this paper to demonstrate the methodology. Simulation results show that the proposed models are suitable for control and optimisation studies, and that the optimisation algorithms connected with either steady-state or dynamic models successfully determine optimal operational conditions and dynamic trajectories with good convergence properties. (c) 2005 Elsevier Ltd. All rights reserved.
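As a rough illustration of the kind of discretized population balance the control studies build on, here is a toy discrete coagulation system in which a hypothetical binder spray rate u scales a size-independent coalescence kernel; the paper's kernels additionally depend on moisture content, rotation rate and bed depth, and all numbers below, including the product-size band, are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

M = 40              # number of discrete size classes (mass i = 1..M)
beta0 = 1e-3        # base coalescence rate

def pbe_rhs(t, N, u):
    """Discrete Smoluchowski population balance; spray rate u scales the kernel."""
    beta = beta0 * (1.0 + u)
    dN = np.zeros_like(N)
    for i in range(M):
        # Births: coalescence of classes with masses summing to i + 1.
        birth = 0.5 * sum(beta * N[j] * N[i - 1 - j] for j in range(i))
        # Deaths: coalescence with anything (mass beyond M leaves the tracked range).
        death = beta * N[i] * N.sum()
        dN[i] = birth - death
    return dN

N0 = np.zeros(M)
N0[0] = 1e3                                   # start from fines only
sol = solve_ivp(pbe_rhs, (0.0, 10.0), N0, args=(0.5,), method="LSODA")
sizes = np.arange(1, M + 1)
product = (sizes >= 10) & (sizes <= 20)       # "product-sized" band (invented)
frac = (sizes[product] @ sol.y[product, -1]) / (sizes @ sol.y[:, -1])
print(f"product-sized mass fraction at t = 10: {frac:.2f}")
```

In an optimal control setting, u would become a time-varying input chosen to maximise the product-sized mass fraction, which is the shape of the batch objective described above.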
Abstract:
We prove upper and lower bounds relating the quantum gate complexity of a unitary operation, U, to the optimal control cost associated with the synthesis of U. These bounds apply to any optimal control problem and can be used to show that the quantum gate complexity is essentially equivalent to the optimal control cost for a wide range of problems, including time-optimal control and finding minimal distances on certain Riemannian, sub-Riemannian, and Finslerian manifolds. These results generalize those of [Nielsen, Dowling, Gu, and Doherty, Science 311, 1133 (2006)], which showed that the gate complexity can be related to distances on a Riemannian manifold.
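Schematically, and with the constants and exponents suppressed (the paper makes these precise; the notation here is illustrative, not the paper's), the two-sided relationship has the form

\[
c_1 \, d(I, U) \;\le\; G(U) \;\le\; p\bigl(n, \, d(I, U)\bigr),
\]

where \(G(U)\) is the minimal number of gates from a fixed universal set needed to synthesize \(U\) to a given accuracy, \(d(I, U)\) is the optimal control cost of steering the identity to \(U\), \(n\) is the number of qubits, \(c_1\) is a constant, and \(p\) is a polynomial; this is the sense in which gate complexity and control cost are essentially equivalent.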
Abstract:
The notion of being sure that you have completely eradicated an invasive species is fanciful because of imperfect detection and persistent seed banks. Eradication is commonly declared either on an ad hoc basis, on notions of seed bank longevity, or by setting arbitrary thresholds of 1% or 5% confidence that the species is not present. Rather than declaring eradication at some arbitrary level of confidence, we take an economic approach in which we stop looking when the expected costs outweigh the expected benefits. We develop theory that determines the number of years of surveys without detection required to minimize the net expected cost. Given that detection of a species is imperfect, the optimal stopping time is a trade-off between the cost of continued surveying and the cost of escape and damage if eradication is declared too soon. A simple rule of thumb compares well to the exact optimal solution obtained by stochastic dynamic programming. Application of the approach to the eradication programme for Helenium amarum reveals that the actual stopping time was a precautionary one given the ranges for each parameter.
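A stripped-down version of the stopping rule makes the trade-off explicit: after each survey without a detection, the probability that the species persists is updated by Bayes' rule, and the net expected cost of stopping after t years weighs accumulated survey costs against the expected cost of escape and damage. All numbers are invented, and a fixed-horizon scan stands in for the paper's stochastic dynamic programming.

```python
import numpy as np

p0 = 0.5         # prior probability the species is still present
d = 0.6          # per-survey detection probability (imperfect detection)
c_survey = 1.0   # cost of one year of surveying
c_escape = 50.0  # cost of damage and re-control if eradication is declared wrongly

def p_present(t):
    """Posterior that the species persists after t surveys without detection."""
    miss = (1.0 - d) ** t
    return p0 * miss / (p0 * miss + (1.0 - p0))

def net_expected_cost(t):
    return c_survey * t + p_present(t) * c_escape

years = np.arange(0, 30)
costs = [net_expected_cost(t) for t in years]
print("declare eradication after", years[int(np.argmin(costs))], "absent years")
```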
Abstract:
For leased equipment, the lessor incurs penalty costs for failures occurring over the lease period and for not rectifying such failures within a specified time limit. Preventive maintenance actions can reduce the penalty costs, but this is achieved at the expense of increased maintenance costs. The paper looks at a periodic preventive maintenance policy that achieves a trade-off between the penalty and maintenance costs. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
For leased equipment, the lessor carries out the maintenance of the equipment. Usually, the lease contract specifies the penalties for equipment failures and for repairs not being carried out within specified time limits. This implies that optimal preventive maintenance policies must take these penalty costs into account and trade them off properly against the cost of preventive maintenance actions. The costs associated with failures are high, as unplanned corrective maintenance actions are costly and violations of the lease contract terms incur penalties. The paper develops a model to determine the optimal parameters of a preventive maintenance policy that takes all these costs into account so as to minimize the total expected cost to the lessor for a new item lease. The parameters of the policy are (i) the number of preventive maintenance actions to be carried out over the lease period, (ii) the time instants for such actions, and (iii) the level of each action. (c) 2005 Elsevier B.V. All rights reserved.
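A toy version of this cost trade-off, a sketch under invented parameters rather than the paper's model: failures follow a Weibull law, each preventive maintenance action rejuvenates the item's effective age by a fixed amount (the "level" of action), and the lessor picks the number of equally spaced PM actions over the lease period that minimizes PM cost plus expected failure penalties.

```python
import numpy as np

L = 5.0               # lease period (years)
beta, eta = 2.5, 3.0  # Weibull shape and scale of the failure law
c_pm, c_f = 1.0, 10.0 # cost per PM action; expected cost (incl. penalty) per failure

def expected_failures(n, delta=0.5):
    """Expected failures under minimal repair with n equally spaced PM actions."""
    times = np.linspace(0.0, L, n + 2)  # endpoints 0 and L, n PM instants between
    total, age = 0.0, 0.0
    for t0, t1 in zip(times[:-1], times[1:]):
        a0, a1 = age, age + (t1 - t0)
        total += (a1 / eta) ** beta - (a0 / eta) ** beta  # Weibull cumulative hazard
        age = max(a1 - delta, 0.0)      # PM reduces the effective age by delta
    return total

def expected_cost(n):
    return c_pm * n + c_f * expected_failures(n)

best = min(range(0, 15), key=expected_cost)
print(f"optimal number of PM actions: {best}, expected cost: {expected_cost(best):.2f}")
```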
Abstract:
Pharmacodynamics (PD) is the study of the biochemical and physiological effects of drugs. This paper considers the construction of optimal designs for dose-ranging trials with multiple periods, where the outcome of the trial (the effect of the drug) is a binary response: the success or failure of a drug to bring about a particular change in the subject after a given amount of time. The carryover effect of each dose from one period to the next is assumed to be proportional to the direct effect. It is shown for a logistic regression model that the efficiency of the optimal parallel (single-period) or crossover (two-period) design is substantially greater than that of a balanced design. The optimal designs are also shown to be robust to misspecification of the parameter values. Finally, the parallel and crossover designs are combined to provide the experimenter with greater flexibility.
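For the single-period case the D-optimality comparison can be sketched directly: evaluate the Fisher information of a candidate design under the logistic model at guessed parameter values and compare determinants. The dose grid and parameter guesses below are invented, and the period and carryover terms of the paper's model are omitted.

```python
import numpy as np
from itertools import combinations_with_replacement

a, b = -2.0, 1.0                    # guessed logistic parameters: logit(p) = a + b*dose
doses = np.linspace(0.0, 4.0, 9)    # candidate dose levels

def fisher_info(design):
    """Per-observation Fisher information of the logistic model for a design."""
    info = np.zeros((2, 2))
    for d in design:
        p = 1.0 / (1.0 + np.exp(-(a + b * d)))
        x = np.array([1.0, d])
        info += p * (1.0 - p) * np.outer(x, x)
    return info / len(design)

# Best two-point design by enumeration, versus equal allocation to every dose.
best = max(combinations_with_replacement(doses, 2),
           key=lambda des: np.linalg.det(fisher_info(des)))
balanced = tuple(doses)
d_eff = (np.linalg.det(fisher_info(balanced)) /
         np.linalg.det(fisher_info(best))) ** (1 / 2)  # 2 parameters
print("D-optimal two-point design:", best)
print(f"D-efficiency of the balanced design: {d_eff:.2f}")
```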
Abstract:
Tissue Doppler (TD) assessment of dyssynchrony (DYS) is established in evaluation for biventricular pacing. Time to regional minimal volume by real-time 3D echo (3D) has been applied to DYS. 3D offers simultaneous assessment of all segments and may limit errors in localization of maximum delay due to off-axis images. We compared TD and 3D for assessment of DYS. 27 patients with ischaemic cardiomyopathy (aged 60±11 years, 85% male) underwent TD with generation of regional velocity curves. The interval between QRS onset and maximal systolic velocity (TTV) was measured in 6 basal and 6 mid-cavity segments. On the same day, 3D was performed and the data analysed offline with Q-Lab software (Philips, Andover, MA). Using 12 analogous regional time-volume curves, time to minimal volume (T3D) was calculated. The standard deviation (S.D.) between segments in TTV and T3D was calculated as a measure of DYS. In 7 patients it was not possible to measure T3D due to poor images. In the remaining 20, LV diastolic volume, systolic volume and EF were 128±35 ml, 68±23 ml and 46±13%, respectively. Mean TTV was less than mean T3D (150±33 ms versus 348±54 ms; p < 0.01). The intra-patient range was 20–210 ms for TTV and 0–410 ms for T3D. Of 9 patients (45%) with significant DYS (S.D. TTV > 32 ms), S.D. T3D was 69±37 ms, compared to 48±34 ms in those without DYS (p = ns). In DYS patients there was concordance of the most delayed segment in 4 (44%) cases. Therefore, different techniques for assessing DYS are not directly comparable. Specific cut-offs for DYS are needed for each technique.
Abstract:
We present a framework for calculating globally optimal parameters, within a given time frame, for on-line learning in multilayer neural networks. We demonstrate the capability of this method by computing optimal learning rates in typical learning scenarios. A similar treatment allows one to determine the relevance of related training algorithms based on modifications to the basic gradient descent rule as well as to compare different training methods.
Abstract:
A method for calculating the globally optimal learning rate in on-line gradient-descent training of multilayer neural networks is presented. The method is based on a variational approach which maximizes the decrease in generalization error over a given time frame. We demonstrate the method by computing optimal learning rates in typical learning scenarios. The method can also be employed when different learning rates are allowed for different parameter vectors as well as to determine the relevance of related training algorithms based on modifications to the basic gradient descent rule.
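For contrast with the globally optimal schedules described above, a minimal sketch of the greedy alternative on a toy linear student-teacher problem (entirely hypothetical setup): at each online step the learning rate is chosen by exact line search on the current example's squared error, rather than by optimizing the whole trajectory as the variational method does.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 20
w_star = rng.standard_normal(d)   # teacher weights
w = np.zeros(d)                   # student weights

def gen_error(w):
    """Proxy for generalization error: 0.5 * |w - w*|^2."""
    return 0.5 * np.sum((w - w_star) ** 2)

for t in range(200):
    x = rng.standard_normal(d) / np.sqrt(d)
    y = w_star @ x                # noise-free teacher output
    g = (w @ x - y) * x           # per-example gradient of 0.5*(w.x - y)^2
    # Greedy one-step optimal rate: minimizes ((w - eta*g).x - y)^2 over eta.
    eta = 1.0 / (x @ x)
    w = w - eta * g

print(f"generalization error after 200 online steps: {gen_error(w):.4f}")
```

The greedy rate zeroes the error on each incoming example, but it is only locally optimal per step; the variational approach instead maximizes the total decrease in generalization error over the full time frame.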