40 results for Optimal Linear Control
Abstract:
Certain strains of fluorescent pseudomonads are important biological components of agricultural soils that are suppressive to diseases caused by pathogenic fungi on crop plants. The biocontrol abilities of such strains depend essentially on aggressive root colonization, induction of systemic resistance in the plant, and the production of diffusible or volatile antifungal antibiotics. Evidence that these compounds are produced in situ is based on their chemical extraction from the rhizosphere and on the expression of antibiotic biosynthetic genes in the producer strains colonizing plant roots. Well-characterized antibiotics with biocontrol properties include phenazines, 2,4-diacetylphloroglucinol, pyoluteorin, pyrrolnitrin, lipopeptides, and hydrogen cyanide. In vitro, optimal production of these compounds occurs at high cell densities and during conditions of restricted growth, involving (i) a number of transcriptional regulators, which are mostly pathway-specific, and (ii) the GacS/GacA two-component system, which globally exerts a positive effect on the production of extracellular metabolites at a posttranscriptional level. Small untranslated RNAs have important roles in the GacS/GacA signal transduction pathway. One challenge in future biocontrol research involves development of new strategies to overcome the broad toxicity and lack of antifungal specificity displayed by most biocontrol antibiotics studied so far.
Abstract:
With six targeted agents approved (sorafenib, sunitinib, temsirolimus, bevacizumab [+interferon], everolimus and pazopanib), many patients with metastatic renal cell carcinoma (mRCC) will receive multiple therapies. However, the optimum sequencing approach has not been defined. A group of European experts reviewed available data and shared their clinical experience to compile an expert agreement on the sequential use of targeted agents in mRCC. To date, there are few prospective studies of sequential therapy. The mammalian target of rapamycin (mTOR) inhibitor everolimus was approved for use in patients who failed treatment with inhibitors of vascular endothelial growth factor (VEGF) and VEGF receptors (VEGFR) based on the results from a Phase III placebo-controlled study; however, until then, the only licensed agents across the spectrum of mRCC were VEGF(R) inhibitors (sorafenib, sunitinib and bevacizumab + interferon), and as such, a large body of evidence has accumulated regarding their use in sequence. Data show that sequential use of VEGF(R) inhibitors may be an effective treatment strategy to achieve prolonged clinical benefit. The optimal place of each targeted agent in the treatment sequence is still unclear, and data from large prospective studies are needed. The Phase III AXIS study of second-line sorafenib vs. axitinib (including post-VEGF(R) inhibitors) has completed, but the data are not yet published; other ongoing studies include the Phase III SWITCH study of sorafenib-sunitinib vs. sunitinib-sorafenib (NCT00732914); the Phase III 404 study of temsirolimus vs. sorafenib post-sunitinib (NCT00474786) and the Phase II RECORD 3 study of sunitinib-everolimus vs. everolimus-sunitinib (NCT00903175). Until additional data are available, consideration of patient response and tolerability to treatment may facilitate current decision-making regarding when to switch and which treatment to switch to in real-life clinical practice.
Abstract:
BACKGROUND: The up to 5% of patients who present to the emergency department (ED) four or more times within a 12-month period account for 21% of total ED visits. In this study we sought to characterize social and medical vulnerability factors of ED frequent users (FUs) and to explore whether these factors hold simultaneously. METHODS: We performed a case-control study at Lausanne University Hospital, Switzerland. Patients over 18 years presenting to the ED at least once within the study period (April 2008 to March 2009) were included. FUs were defined as patients with four or more ED visits within the previous 12 months. Outcome data were extracted from medical records of the first ED attendance within the study period. Outcomes included basic demographics and social variables, ED admission diagnosis, somatic and psychiatric days hospitalized over 12 months, and having a primary care physician. We calculated the percentage of FUs and non-FUs having at least one social and one medical vulnerability factor. The four chosen social factors were: unemployed and/or dependent on government welfare; institutionalized and/or without fixed residence; separated, divorced or widowed; and under guardianship. The four medical vulnerability factors were: ≥6 somatic days hospitalized, ≥1 psychiatric day hospitalized, ≥5 clinical departments used (all three factors measured over 12 months), and ED admission diagnosis of alcohol and/or drug abuse. Univariate and multivariate logistic regression analyses allowed comparison of two random samples of 354 FUs and 354 non-FUs (statistical power 0.9, alpha 0.05 for all outcomes except gender, country of birth, and insurance type). RESULTS: FUs accounted for 7.7% of ED patients and 24.9% of ED visits. Univariate logistic regression showed that FUs were older (mean age 49.8 vs. 45.2 yrs, p=0.003), more often separated and/or divorced (17.5% vs. 13.9%, p=0.029) or widowed (13.8% vs. 8.8%, p=0.029), and more often unemployed or dependent on government welfare (31.3% vs. 13.3%, p<0.001), compared to non-FUs. FUs accumulated more days hospitalized over 12 months (mean number of somatic days per patient 1.0 vs. 0.3, p<0.001; mean number of psychiatric days per patient 0.12 vs. 0.03, p<0.001). The two groups were similar regarding gender distribution (females 51.7% vs. 48.3%). The multivariate logistic regression model was based on the six most significant factors identified by univariate analysis. The model showed that FUs had more social problems, as they were more likely to be institutionalized or without fixed residence (OR 4.62; 95% CI, 1.65 to 12.93) and to be unemployed or dependent on government welfare (OR 2.03; 95% CI, 1.31 to 3.14) compared to non-FUs. FUs were more likely to need medical care, as indicated by involvement of ≥5 clinical departments over 12 months (OR 6.2; 95% CI, 3.74 to 10.15), having an ED admission diagnosis of substance abuse (OR 3.23; 95% CI, 1.23 to 8.46) and having a primary care physician (OR 1.70; 95% CI, 1.13 to 2.56); however, they were less likely to present with an admission diagnosis of injury (OR 0.64; 95% CI, 0.40 to 1.00) compared to non-FUs. FUs were more likely to combine at least one social with one medical vulnerability factor (38.4% vs. 12.1%, OR 7.74; 95% CI, 5.03 to 11.93). CONCLUSIONS: FUs were more likely than non-FUs to have social and medical vulnerability factors and to have multiple factors in combination.
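The adjusted odds ratios above come from a multivariate logistic regression; the sketch below shows, in general terms, how such estimates are typically obtained. It is not the study's code or data: the simulated data frame, column names and sample size are illustrative placeholders only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative, simulated data: one row per patient, with the outcome
# 'frequent_user' (1 = four or more ED visits in 12 months) and a few
# candidate vulnerability factors. Column names are placeholders.
rng = np.random.default_rng(0)
n = 708  # e.g. 354 FUs + 354 non-FUs
df = pd.DataFrame({
    "frequent_user":         rng.binomial(1, 0.5, n),
    "unemployed_or_welfare": rng.binomial(1, 0.2, n),
    "no_fixed_residence":    rng.binomial(1, 0.1, n),
    "age":                   rng.normal(48, 15, n),
})

# Multivariate logistic regression of frequent-user status on the factors.
X = sm.add_constant(df[["unemployed_or_welfare", "no_fixed_residence", "age"]])
fit = sm.Logit(df["frequent_user"], X).fit(disp=0)

# Exponentiated coefficients and confidence bounds give adjusted odds
# ratios with 95% CIs, the format reported in the abstract.
summary = pd.concat(
    [np.exp(fit.params).rename("OR"),
     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1)
print(summary)
```

With real data, an OR above 1 with a 95% CI excluding 1 (e.g. the reported OR 4.62; 95% CI, 1.65 to 12.93) indicates a factor significantly associated with frequent-user status after adjustment for the other covariates.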
Abstract:
BACKGROUND: Assessment of the proportion of patients with well controlled cardiovascular risk factors underestimates the proportion of patients receiving high-quality care. Evaluating whether physicians respond appropriately to poor risk factor control gives a different picture of quality of care. We assessed physician response to poor control of cardiovascular risk factors, as well as markers of potential overtreatment, in Switzerland, a country with universal healthcare coverage but without systematic quality monitoring, annual report cards on quality of care or financial incentives to improve quality. METHODS: We performed a retrospective cohort study of 1002 randomly selected patients aged 50-80 years from four university primary care settings in Switzerland. For hypertension, dyslipidemia and diabetes mellitus, we first measured proportions in control, then assessed therapy modifications among those in poor control. "Appropriate clinical action" was defined as a therapy modification, or a return to control without therapy modification, within 12 months among patients with baseline poor control. Potential overtreatment of these conditions was defined as intensive treatment among low-risk patients with optimal target values. RESULTS: 20% of patients with hypertension, 41% with dyslipidemia and 36% with diabetes mellitus were in control at baseline. When appropriate clinical action in response to poor control was integrated into the measurement of quality of care, 52 to 55% had appropriate quality of care. Over 12 months, therapy was modified in 61% of patients with baseline poor control for hypertension, 33% for dyslipidemia, and 85% for diabetes mellitus. Increases in the number of drug classes (28-51%) and in drug doses (10-61%) were the most common therapy modifications. Patients with target organ damage and higher baseline values were more likely to receive appropriate clinical action. We found low rates of potential overtreatment: 2% for hypertension, 3% for diabetes mellitus and 3-6% for dyslipidemia. CONCLUSIONS: In primary care, evaluating whether physicians respond appropriately to poor risk factor control, in addition to assessing proportions in control, provides a broader view of quality of care than relying solely on proportions in control. Such measures could be more clinically relevant and acceptable to physicians than simply reporting levels of control.
Abstract:
Even though patients who develop ischemic stroke despite taking antiplatelet drugs represent a considerable proportion of stroke hospital admissions, there is a paucity of data from investigational studies regarding the most suitable therapeutic intervention. There have been no clinical trials to test whether increasing the dose or switching antiplatelet agents reduces the risk for subsequent events. Certain issues have to be considered in patients managed for a first or recurrent stroke while receiving antiplatelet agents. Therapeutic failure may be due to poor adherence to treatment, associated co-morbid conditions, or diminished antiplatelet effects (resistance to treatment). A diagnostic work-up is warranted to identify the etiology and underlying mechanism of stroke, thereby guiding further management. Risk factors (including hypertension, dyslipidemia and diabetes) should be treated according to current guidelines. Aspirin or aspirin plus clopidogrel may be used in the acute and early phase of ischemic stroke, whereas in the long term, antiplatelet treatment should be continued with aspirin, aspirin/extended-release dipyridamole or clopidogrel monotherapy, taking into account tolerance, safety, adherence and cost issues. Secondary measures should also be implemented to educate patients about stroke and the importance of adherence to medication, and to promote behavioral modification relating to tobacco use, physical activity, alcohol consumption and diet to control excess weight.
Abstract:
Drug combinations can improve angiostatic cancer treatment efficacy and enable the reduction of side effects and drug resistance. Combining drugs is non-trivial due to the high number of possibilities. We applied a feedback system control (FSC) technique with a population-based stochastic search algorithm to navigate through the large parametric space of nine angiostatic drugs at four concentrations to identify optimal low-dose drug combinations. This involved an iterative cycle of in vitro testing of endothelial cell viability and algorithm-based analysis. The optimal synergistic drug combination, containing erlotinib, BEZ-235 and RAPTA-C, was reached in a small number of iterations. The final drug combinations showed enhanced endothelial cell specificity, synergistically inhibited endothelial cell proliferation (p < 0.001) but not migration, and induced apoptosis in an increased number of endothelial cells (p < 0.01). Successful translation of this drug combination was achieved in two preclinical in vivo tumor models. Tumor growth was inhibited synergistically and significantly (p < 0.05 and p < 0.01, respectively) using reduced drug doses as compared to optimal single-drug concentrations. Under the applied conditions, single-drug monotherapies had no or negligible activity in these models. We suggest that FSC can be used for rapid identification of effective, reduced-dose, multi-drug combinations for the treatment of cancer and other diseases.
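The abstract does not detail the search algorithm, so the following is only a minimal sketch of a population-based stochastic search over discrete concentration levels, using assumed genetic-algorithm mechanics; the toy viability function, parameter values and unnamed drug labels are illustrative placeholders, not the authors' FSC implementation.

```python
import random

# Hypothetical panel: nine angiostatic drugs, each at four discrete
# levels (0 = absent, 1-3 = increasing low doses). Only the first three
# names appear in the abstract; the rest are placeholders.
DRUGS = ["erlotinib", "BEZ-235", "RAPTA-C"] + [f"drug_{i}" for i in range(4, 10)]
N_LEVELS = 4

def toy_viability(combo):
    """Synthetic stand-in for the endothelial-cell viability readout,
    used only so the sketch runs end to end; lower is better here."""
    base = 1.0 - 0.03 * sum(combo)                       # mild additive effect
    synergy = 0.15 * min(combo[0], combo[1], combo[2])   # pretend 3-drug synergy
    return max(base - synergy, 0.0)

def mutate(combo, rate=0.2):
    # Randomly reassign some concentration levels.
    return tuple(random.randrange(N_LEVELS) if random.random() < rate else c
                 for c in combo)

def crossover(a, b):
    # Uniform crossover between two parent combinations.
    return tuple(random.choice(pair) for pair in zip(a, b))

def fsc_search(assay=toy_viability, pop_size=20, iterations=10):
    # Start from random low-dose combinations and refine iteratively.
    population = [tuple(random.randrange(N_LEVELS) for _ in DRUGS)
                  for _ in range(pop_size)]
    for _ in range(iterations):
        ranked = sorted(population, key=assay)   # lower viability ranks first
        parents = ranked[:pop_size // 2]         # keep the better half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=assay)

best = fsc_search()
print(dict(zip(DRUGS, best)), toy_viability(best))
```

In the actual FSC loop, each generation's candidates would be scored by a plate-based viability assay rather than a function call, which is why converging within a small number of iterations matters experimentally.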
Abstract:
The fight against doping in sports has been governed since 1999 by the World Anti-Doping Agency (WADA), an independent institution behind the implementation of the World Anti-Doping Code (Code). The intent of the Code is to protect clean athletes through the harmonization of anti-doping programs at the international level with special attention to detection, deterrence and prevention of doping.1 A new version of the Code came into force on January 1st 2015, introducing, among other improvements, longer periods of sanctioning for athletes (up to four years) and measures to strengthen the role of anti-doping investigations and intelligence. To ensure optimal harmonization, five International Standards covering different technical aspects of the Code are also currently in force: the List of Prohibited Substances and Methods (List), Testing and Investigations, Laboratories, Therapeutic Use Exemptions (TUE) and Protection of Privacy and Personal Information. Adherence to these standards is mandatory for all anti-doping stakeholders to be compliant with the Code. Among these documents, the eighth version of International Standard for Laboratories (ISL), which also came into effect on January 1st 2015, includes regulations for WADA and ISO/IEC 17025 accreditations and their application for urine and blood sample analysis by anti-doping laboratories.2 Specific requirements are also described in several Technical Documents or Guidelines in which various topics are highlighted such as the identification criteria for gas chromatography (GC) and liquid chromatography (LC) coupled to mass spectrometry (MS) techniques (IDCR), measurements and reporting of endogenous androgenic anabolic agents (EAAS) and analytical requirements for the Athlete Biological Passport (ABP).
Abstract:
This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross-modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales.
Abstract:
OBJECTIVE: To quantify the relation between body mass index (BMI) and endometrial cancer risk, and to describe the shape of such a relation. DESIGN: Pooled analysis of three hospital-based case-control studies. SETTING: Italy and Switzerland. POPULATION: A total of 1449 women with endometrial cancer and 3811 controls. METHODS: Multivariate odds ratios (OR) and 95% confidence intervals (95% CI) were obtained from logistic regression models. The shape of the relation was determined using a class of flexible regression models. MAIN OUTCOME MEASURE: The relation of BMI with endometrial cancer. RESULTS: Compared with women with BMI 18.5 to <25 kg/m², the odds ratio was 5.73 (95% CI 4.28-7.68) for women with a BMI ≥35 kg/m². The odds ratios were 1.10 (95% CI 1.09-1.12) and 1.63 (95% CI 1.52-1.75) respectively for an increment of BMI of 1 and 5 units. The relation was stronger in never-users of oral contraceptives (OR 3.35, 95% CI 2.78-4.03, for BMI ≥30 versus <25 kg/m²) than in users (OR 1.22, 95% CI 0.56-2.67), and in women with diabetes (OR 8.10, 95% CI 4.10-16.01, for BMI ≥30 versus <25 kg/m²) than in those without diabetes (OR 2.95, 95% CI 2.44-3.56). The relation was best fitted by a cubic model, although after the exclusion of the 5% upper and lower tails, it was best fitted by a linear model. CONCLUSIONS: The results of this study confirm a role of elevated BMI in the aetiology of endometrial cancer and suggest that the risk in obese women increases in a cubic nonlinear fashion. The relation was stronger in never-users of oral contraceptives and in women with diabetes. TWEETABLE ABSTRACT: Risk of endometrial cancer increases with elevated body weight in a cubic nonlinear fashion.
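As a quick consistency check (an inference from the reported figures, not a statement in the study): under the log-linear dose-response implicit in a logistic model with BMI entered continuously, the odds ratio for a 5-unit increment is the per-unit odds ratio raised to the fifth power,

```latex
\[
\mathrm{OR}_{5\ \text{units}} \approx \left(\mathrm{OR}_{1\ \text{unit}}\right)^{5} = 1.10^{5} \approx 1.61,
\]
```

which agrees with the reported 1.63 up to rounding of the per-unit estimate.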
Abstract:
The purpose of this study was to estimate the energy cost of linear (EC) and vertical displacement (ECvert), mechanical efficiency and main stride parameters during simulated ski mountaineering at different speeds and gradients, to identify an optimal speed and gradient that maximizes performance. 12 subjects roller skied on a treadmill at three different inclines (10, 17 and 24 %) at three different speeds (approximately 70, 80 and 85 % of estimated peak heart rate). Energy expenditure was calculated by indirect calorimetry, while biomechanical parameters were measured with an inertial sensor-based system. At 10 % there was no significant change with speed in EC, ECvert and mechanical efficiency. At 17 and 24 % the fastest speed was significantly more economical. There was a significant effect of gradient on EC, ECvert and mechanical efficiency. The most economical gradient was the steepest one. There was a significant increase of stride frequency with speed. At steep gradients only, relative thrust phase duration decreased significantly, while stride length increased significantly with speed. There was a significant effect of gradient on stride length (decrease with steepness) and relative thrust phase duration (increase with steepness). A combination of a decreased relative thrust phase duration with increased stride length and frequency decreases ECvert. To minimize the energy expenditure to reach the top of a mountain and to optimize performance, ski-mountaineers should choose a steep gradient (~24 %) and, provided they possess sufficient metabolic scope, combine it with a fast speed (~6 km·h⁻¹).
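The relation between the two cost measures is not spelled out in the abstract; a common convention in this literature (assumed here) is that the cost per vertical metre equals the cost per metre travelled along the slope divided by the vertical gain per metre travelled,

```latex
\[
EC_{\mathrm{vert}} = \frac{EC}{\sin\!\big(\arctan g\big)}, \qquad
\text{e.g. } g = 0.24 \;\Rightarrow\; \sin(\arctan 0.24) \approx 0.233,
\]
```

so at the steepest gradient roughly 4.3 m must be covered along the slope for each metre of vertical gain, which is why EC and ECvert can respond differently to changes in gradient.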