22 results for Optimal Stochastic Control
Abstract:
BACKGROUND Surgical site infections are the most common hospital-acquired infections among surgical patients. The administration of surgical antimicrobial prophylaxis reduces the risk of surgical site infections. The optimal timing of this procedure is still a matter of debate. While most studies suggest that it should be given as close to the incision time as possible, others conclude that this may be too late for optimal prevention of surgical site infections. A large observational study suggests that surgical antimicrobial prophylaxis should be administered 74 to 30 minutes before surgery. The aim of this article is to report the design and protocol of a randomized controlled trial investigating the optimal timing of surgical antimicrobial prophylaxis.
METHODS/DESIGN In this bi-center randomized controlled trial conducted at two tertiary referral centers in Switzerland, we plan to include 5,000 patients undergoing general, oncologic, vascular and orthopedic trauma procedures. Patients are randomized in a 1:1 ratio into two groups: one receiving surgical antimicrobial prophylaxis in the anesthesia room (75 to 30 minutes before incision) and the other receiving surgical antimicrobial prophylaxis in the operating room (less than 30 minutes before incision). We expect a significantly lower rate of surgical site infections with surgical antimicrobial prophylaxis administered more than 30 minutes before the scheduled incision. The primary outcome is the occurrence of surgical site infections during a 30-day follow-up period (one year with an implant in place). Assuming a 5% surgical site infection risk with administration of surgical antimicrobial prophylaxis in the operating room, the planned sample size has 80% power to detect a relative risk reduction of 33% for surgical site infections when administering surgical antimicrobial prophylaxis in the anesthesia room (with a two-sided type I error of 5%). We expect the study to be completed within three years.
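The stated power calculation can be reproduced with the standard normal-approximation sample-size formula for comparing two proportions. A minimal sketch (not from the protocol itself; the 3.35% intervention-arm rate simply follows from the assumed 33% relative risk reduction):

```python
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per arm for a two-sided two-proportion test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_a + z_b) ** 2 * var / (p1 - p2) ** 2
    return int(n) + 1  # round up to whole patients

p_control = 0.05                      # assumed SSI risk in the operating-room arm
p_intervention = 0.05 * (1 - 0.33)    # 33% relative risk reduction
n = n_per_group(p_control, p_intervention)
print(n, 2 * n)  # roughly 2,300 per arm, ~4,600 total -- consistent with the planned 5,000
```

The planned 5,000 leaves headroom over the ~4,600 minimum, which is the usual allowance for dropout and protocol deviations.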
DISCUSSION The results of this randomized controlled trial will have an important impact on current international guidelines for infection control strategies in the hospital. Moreover, the results of this randomized controlled trial are of significant interest for patient safety and healthcare economics.
TRIAL REGISTRATION This trial is registered on ClinicalTrials.gov under the identifier NCT01790529.
Abstract:
An imminent food crisis reinforces the need for novel strategies to increase crop yields worldwide. Effective control of pest insects should be part of such strategies, preferentially with reduced negative impact on the environment and optimal protection and utilization of existing biodiversity. Enhancing the presence and efficacy of native biological control agents could be one such strategy. Plant strengthener is a generic term for several commercially available compounds or mixtures of compounds that can be applied to cultivated plants in order to 'boost their vigour, resilience and performance'. Studies into the consequences of boosting plant resistance against pests and diseases on plant volatiles have found a surprising and dramatic increase in the plants' attractiveness to parasitic wasps. Here, we summarize the results from these studies and present new results from assays that illustrate the great potential of two commercially available resistance elicitors. We argue that plant strengtheners may currently be the best option to enhance the attractiveness of cultivated plants to biological control agents. Other options, such as the genetic manipulation of the release of specific volatiles, may offer future solutions, but in most systems we still lack fundamental knowledge of which key attractants should be targeted for this approach.
Abstract:
BACKGROUND Assessment of the proportion of patients with well controlled cardiovascular risk factors underestimates the proportion of patients receiving high quality of care. Evaluating whether physicians respond appropriately to poor risk factor control gives a different picture of quality of care. We assessed physician response to poor cardiovascular risk factor control, as well as markers of potential overtreatment, in Switzerland, a country with universal healthcare coverage but without systematic quality monitoring, annual report cards on quality of care or financial incentives to improve quality. METHODS We performed a retrospective cohort study of 1002 randomly selected patients aged 50-80 years from four university primary care settings in Switzerland. For hypertension, dyslipidemia and diabetes mellitus, we first measured proportions in control, then assessed therapy modifications among those in poor control. "Appropriate clinical action" was defined as a therapy modification, or a return to control without therapy modification, within 12 months among patients with baseline poor control. Potential overtreatment of these conditions was defined as intensive treatment among low-risk patients with optimal target values. RESULTS 20% of patients with hypertension, 41% with dyslipidemia and 36% with diabetes mellitus were in control at baseline. When appropriate clinical action in response to poor control was integrated into measuring quality of care, 52 to 55% had appropriate quality of care. Over 12 months, therapy was modified for 61% of patients with baseline poor control of hypertension, 33% for dyslipidemia, and 85% for diabetes mellitus. Increases in the number of drug classes (28-51%) and in drug doses (10-61%) were the most common therapy modifications. Patients with target organ damage and higher baseline values were more likely to have appropriate clinical action.
We found low rates of potential overtreatment, with 2% for hypertension, 3% for diabetes mellitus and 3-6% for dyslipidemia. CONCLUSIONS In primary care, evaluating whether physicians respond appropriately to poor risk factor control, in addition to assessing proportions in control, provides a broader view of the quality of care than measures of control alone. Such measures could be more clinically relevant and acceptable to physicians than simply reporting levels of control.
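The composite "appropriate clinical action" measure described above can be sketched as a simple classification over patient records. A minimal illustration (field names are hypothetical, not from the study):

```python
from dataclasses import dataclass

@dataclass
class Patient:
    in_control_baseline: bool
    therapy_modified_12m: bool   # therapy modification within 12 months
    in_control_12m: bool         # returned to control without modification

def appropriate_care(p: Patient) -> bool:
    """A patient counts as receiving appropriate care if in control at baseline,
    or -- when in poor control -- if therapy was modified or control was
    regained within 12 months."""
    if p.in_control_baseline:
        return True
    return p.therapy_modified_12m or p.in_control_12m

cohort = [
    Patient(True, False, False),   # controlled at baseline
    Patient(False, True, False),   # poor control, therapy modified: appropriate
    Patient(False, False, False),  # poor control, no action: inappropriate
]
rate = sum(appropriate_care(p) for p in cohort) / len(cohort)
print(rate)  # 2 of 3 patients received appropriate care
```

The key point the abstract makes is visible here: a control-only metric would score this toy cohort at 1/3, while the broader response-based metric scores it at 2/3.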
Abstract:
Let Y be a stochastic process on [0,1] satisfying dY(t) = n^{1/2} f(t) dt + dW(t), where n ≥ 1 is a given scale parameter ('sample size'), W is standard Brownian motion and f is an unknown function. Utilizing suitable multiscale tests, we construct confidence bands for f with guaranteed given coverage probability, assuming that f is isotonic or convex. These confidence bands are computationally feasible and shown to be asymptotically sharp optimal in an appropriate sense.
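The white-noise model above is straightforward to discretize for simulation. A minimal sketch under illustrative assumptions (f(t) = t², an isotonic and convex choice; grid size and n are arbitrary), exploiting that Y(1)/n^{1/2} estimates the integral of f with error of order n^{-1/2}:

```python
import math
import random

random.seed(1)
n = 10_000       # scale parameter ("sample size")
m = 1_000        # grid resolution
dt = 1.0 / m

def f(t: float) -> float:
    return t * t  # illustrative isotonic (and convex) test function

# Simulate increments dY = sqrt(n) * f(t) * dt + dW, with dW ~ N(0, dt)
Y = 0.0
for i in range(m):
    t = (i + 0.5) * dt
    Y += math.sqrt(n) * f(t) * dt + random.gauss(0.0, math.sqrt(dt))

estimate = Y / math.sqrt(n)  # estimates the integral of f over [0,1], here 1/3
print(estimate)
```

The noise term contributes W(1)/n^{1/2}, so with n = 10,000 the estimate deviates from 1/3 by roughly 0.01; the multiscale tests of the paper refine this global statement into local, shape-constrained confidence bands.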
Abstract:
Stochastic simulation is an important and practical technique for computing probabilities of rare events, like the payoff probability of a financial option, the probability that a queue exceeds a certain level, or the probability of ruin of the insurer's risk process. Rare events occur so infrequently that they cannot be reasonably recorded during a standard simulation procedure: specific simulation algorithms which counteract the rarity of the event to be simulated are required. An important algorithm in this context is based on changing the sampling distribution and is called importance sampling. Optimal Monte Carlo algorithms for computing rare event probabilities are either logarithmically efficient or possess bounded relative error.
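As a hedged illustration of the change-of-measure idea (the example is mine, not from the text): to estimate P(Z > 4) for standard normal Z, one can sample from N(4, 1) instead and reweight each sample by the likelihood ratio φ(x)/φ(x−4) = exp(8 − 4x), which concentrates samples exactly where the rare event occurs:

```python
import math
import random

random.seed(0)

def rare_prob_naive(n: int) -> float:
    """Standard Monte Carlo: almost never observes the event Z > 4."""
    return sum(random.gauss(0, 1) > 4 for _ in range(n)) / n

def rare_prob_is(n: int) -> float:
    """Importance sampling with proposal N(4, 1)."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(4, 1)            # shifted proposal puts mass in the rare region
        if x > 4:
            total += math.exp(8 - 4 * x)  # likelihood ratio: target density / proposal density
    return total / n

est = rare_prob_is(100_000)
print(est)  # close to the true value 1 - Phi(4) ~ 3.17e-5
```

With the naive estimator, 100,000 samples yield only about three hits on average, so the relative error is enormous; the tilted estimator has a small relative error with the same budget, which is the sense in which such algorithms are efficient.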
Abstract:
Introduction: According to the ecological view, coordination is established by virtue of social context. Affordances, thought of as situational opportunities to interact, are assumed to represent the guiding principles underlying decisions involved in interpersonal coordination. It is generally agreed that affordances are not an objective part of the (social) environment but that they depend on the constructive perception of the subjects involved. Theory and empirical data hold that cognitive operations enabling domain-specific efficacy beliefs are involved in the perception of affordances. The aim of the present study was to test the effects of these cognitive concepts on the subjective construction of local affordances and their influence on decision making in football. Methods: 71 football players (M = 24.3 years, SD = 3.3, 21% women) from different divisions participated in the study. Participants were presented with scenarios of offensive game situations. They were asked to take the perspective of the person on the ball and to indicate where they would pass the ball in each situation. The participants stated their decisions in two conditions with different game scores (1:0 vs. 0:1). The playing fields of all scenarios were then divided into ten zones. For each zone, participants were asked to rate their confidence in being able to pass the ball there (self-efficacy), the likelihood of the group staying in ball possession if the ball were passed into the zone (group-efficacy I), the likelihood of the ball being controlled safely by a team member (pass control / group-efficacy II), and whether a pass would establish a better initial position to attack the opponents' goal (offensive convenience). Answers were reported on visual analog scales ranging from 1 to 10. Data were analyzed by specifying generalized linear models for binomially distributed data (Mplus). Maximum likelihood with non-normality robust standard errors was chosen to estimate parameters.
Results: Analyses showed that zone- and domain-specific efficacy beliefs significantly affected passing decisions. Because of collinearity with self-efficacy and group-efficacy I, group-efficacy II was excluded from the models to ease interpretation of the results. Generally, zones with high values in the subjective ratings had a higher probability of being chosen as the passing destination (βself-efficacy = 0.133, p < .001, OR = 1.142; βgroup-efficacy I = 0.128, p < .001, OR = 1.137; βoffensive convenience = 0.057, p < .01, OR = 1.059). There were, however, characteristic differences between the two score conditions. While group-efficacy I was the only significant predictor in condition 1 (βgroup-efficacy I = 0.379, p < .001), only self-efficacy and offensive convenience contributed to passing decisions in condition 2 (βself-efficacy = 0.135, p < .01; βoffensive convenience = 0.120, p < .001). Discussion: The results indicate that subjectively distinct attributes projected onto playfield zones affect passing decisions. The study proposes a probabilistic alternative to Lewin's (1951) hodological and deterministic field theory and offers insight into how dimensions of the psychological landscape afford passing behavior. For players who are part of a team, this psychological landscape is constituted not only by probabilities referring to the potential and consequences of individual behavior, but also by those of the group system of which the individuals are part. Hence, in regulating action decisions in group settings, the informational basis extends to aspects referring to the group level. References: Lewin, K. (1951). In D. Cartwright (Ed.), Field theory in social sciences: Selected theoretical papers by Kurt Lewin. New York: Harper & Brothers.
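The reported odds ratios are simply the exponentiated logistic regression coefficients, which makes the figures above easy to verify:

```python
from math import exp

# Coefficients reported for the overall model above
betas = {
    "self-efficacy": 0.133,
    "group-efficacy I": 0.128,
    "offensive convenience": 0.057,
}

for name, b in betas.items():
    # OR = exp(beta): the multiplicative change in odds per unit increase in the rating
    print(f"{name}: OR = {exp(b):.3f}")
# self-efficacy: OR = 1.142
# group-efficacy I: OR = 1.137
# offensive convenience: OR = 1.059
```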
Abstract:
Foot-and-mouth disease (FMD) is a highly contagious disease that caused several large outbreaks in Europe in the last century. The last major outbreak in Switzerland took place in 1965/66; it affected more than 900 premises, and more than 50,000 animals were slaughtered. Large-scale emergency vaccination of the cattle and pig population was applied to control the epidemic. In recent years, many studies have used infectious disease models to assess the impact of different disease control measures, including models developed for diseases exotic to the specific region of interest. Often, the absence of real outbreak data makes validation of such models impossible. This study aimed to evaluate whether a spatial, stochastic simulation model (the Davis Animal Disease Simulation model) can predict the course of a Swiss FMD epidemic based on the available historic input data on population structure, contact rates, epidemiology of the virus, and quality of the vaccine. In addition, the potential outcome of the 1965/66 FMD epidemic without application of vaccination was investigated. Comparing the model outcomes to reality, only the largest 10% of the simulated outbreaks approximated the number of animals culled. Moreover, the simulation model greatly overestimated the number of culled premises. While the outbreak duration of the 1965/66 epidemic could not be well reproduced by the model, it was able to accurately estimate the size of the area infected. Without application of vaccination, the model predicted a much higher mean number of culled animals than with vaccination, demonstrating that vaccination was likely crucial in disease control for the Swiss FMD outbreak in 1965/66. The study demonstrated the feasibility of analyzing historical outbreak data with modern analytical tools. However, it also confirmed that epidemics predicted by even the most carefully parameterized model cannot integrate all eventualities of a real epidemic.
Therefore, decision makers need to be aware that infectious disease models are useful tools to support the decision-making process, but their results are not as valuable as real observations and should always be interpreted with caution.
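To illustrate the kind of stochastic machinery such simulation models build on, here is a minimal (non-spatial) chain-binomial SIR sketch over premises. Parameters and the premise count are purely illustrative and unrelated to the Davis Animal Disease Simulation model:

```python
import random

random.seed(42)

def chain_binomial_sir(n_premises: int = 900, beta: float = 0.3, gamma: float = 0.1) -> int:
    """One stochastic SIR run at premise level; returns the final outbreak size.
    Each susceptible premise is infected in a step with probability
    1 - (1 - beta/N)^I, i.e. it must escape every infectious premise."""
    S, I, R = n_premises - 1, 1, 0
    while I > 0:
        p_inf = 1 - (1 - beta / n_premises) ** I
        new_inf = sum(random.random() < p_inf for _ in range(S))
        new_rec = sum(random.random() < gamma for _ in range(I))
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
        assert S + I + R == n_premises  # conservation of premises
    return R

final_size = chain_binomial_sir()
print(final_size)  # one draw; repeated runs give the outbreak-size distribution
```

Because each run is random, such models are evaluated over many replicates, which is why the study above compares reality against the distribution of simulated outbreaks (e.g. its largest 10%) rather than against a single run.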