944 results for Constrained Optimal Control
Abstract:
BACKGROUND: Assessing the proportion of patients with well controlled cardiovascular risk factors underestimates the proportion of patients receiving high-quality care. Evaluating whether physicians respond appropriately to poor risk factor control gives a different picture of quality of care. We assessed physician response to poor control of cardiovascular risk factors, as well as markers of potential overtreatment, in Switzerland, a country with universal healthcare coverage but without systematic quality monitoring, annual report cards on quality of care, or financial incentives to improve quality. METHODS: We performed a retrospective cohort study of 1002 randomly selected patients aged 50-80 years from four university primary care settings in Switzerland. For hypertension, dyslipidemia and diabetes mellitus, we first measured the proportions in control, then assessed therapy modifications among those in poor control. "Appropriate clinical action" was defined as a therapy modification, or a return to control without therapy modification, within 12 months among patients with baseline poor control. Potential overtreatment of these conditions was defined as intensive treatment among low-risk patients with optimal target values. RESULTS: 20% of patients with hypertension, 41% with dyslipidemia and 36% with diabetes mellitus were in control at baseline. When appropriate clinical action in response to poor control was integrated into the measurement of quality of care, 52 to 55% of patients had appropriate quality of care. Over 12 months, therapy was modified for 61% of patients with baseline poor control of hypertension, 33% for dyslipidemia, and 85% for diabetes mellitus. Increases in the number of drug classes (28-51%) and in drug doses (10-61%) were the most common therapy modifications. Patients with target organ damage and higher baseline values were more likely to receive appropriate clinical action. We found low rates of potential overtreatment: 2% for hypertension, 3% for diabetes mellitus and 3-6% for dyslipidemia. CONCLUSIONS: In primary care, evaluating whether physicians respond appropriately to poor risk factor control, in addition to assessing the proportions in control, provides a broader view of the quality of care than relying solely on proportions in control. Such measures could be more clinically relevant and more acceptable to physicians than simply reporting levels of control.
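As an illustration of how such a combined quality measure could be computed from patient records, consider the following sketch; the record structure and field names are hypothetical, not the study's actual data model.

```python
# Minimal sketch of the "appropriate clinical action" quality measure
# described above. Field names and record structure are hypothetical.

def appropriate_quality(patients):
    """Share of patients with appropriate quality of care: in control at
    baseline, OR (if in poor control) therapy modified or back in control
    within 12 months."""
    ok = 0
    for p in patients:
        if p["in_control_baseline"]:
            ok += 1
        elif p["therapy_modified_12m"] or p["back_in_control_12m"]:
            ok += 1
    return ok / len(patients)

cohort = [
    {"in_control_baseline": True,  "therapy_modified_12m": False, "back_in_control_12m": False},
    {"in_control_baseline": False, "therapy_modified_12m": True,  "back_in_control_12m": False},
    {"in_control_baseline": False, "therapy_modified_12m": False, "back_in_control_12m": False},
]
print(f"appropriate quality of care: {appropriate_quality(cohort):.0%}")
```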
Abstract:
Even though patients who develop ischemic stroke despite taking antiplatelet drugs represent a considerable proportion of stroke hospital admissions, there is a paucity of data from investigational studies regarding the most suitable therapeutic intervention. There have been no clinical trials testing whether increasing the dose or switching antiplatelet agents reduces the risk of subsequent events. Certain issues have to be considered in patients managed for a first or recurrent stroke while receiving antiplatelet agents. Therapeutic failure may be due to poor adherence to treatment, associated co-morbid conditions, or diminished antiplatelet effects (resistance to treatment). A diagnostic work-up is warranted to identify the etiology and underlying mechanism of stroke, thereby guiding further management. Risk factors (including hypertension, dyslipidemia and diabetes) should be treated according to current guidelines. Aspirin or aspirin plus clopidogrel may be used in the acute and early phase of ischemic stroke, whereas in the long term, antiplatelet treatment should be continued with aspirin, aspirin/extended-release dipyridamole, or clopidogrel monotherapy, taking into account tolerance, safety, adherence, and cost issues. Secondary measures should also be implemented: educating patients about stroke and the importance of adherence to medication, and behavioral modification relating to tobacco use, physical activity, alcohol consumption, and diet to control excess weight.
Abstract:
The need for high performance, high precision, and energy savings in rotating machinery demands an alternative to traditional bearings. Because of their contactless operating principle, rotating machines employing active magnetic bearings (AMBs) provide many advantages over traditional ones. Advantages such as contamination-free operation, low maintenance costs, high rotational speeds, low parasitic losses, programmable stiffness and damping, and vibration isolation come at the expense of high cost and a complex technical solution. All these properties make AMBs appropriate primarily for specific and highly demanding applications. High-performance, high-precision control requires model-based control methods and accurate models of the flexible rotor. In turn, complex models lead to high-order controllers and a considerable computational burden. Fortunately, in the last few years, advances in signal processing devices have provided a new perspective on the real-time control of AMBs. The design and real-time digital implementation of high-order LQ controllers, with a focus on fast execution times, are the subjects of this work. In particular, control design and implementation in field programmable gate array (FPGA) circuits are investigated. The optimal design is guided by the physical constraints of the system when selecting the optimal weighting matrices. The plant model is complemented by augmenting it with appropriate disturbance models. Compensation of the force-field nonlinearities is proposed to decrease the uncertainty of the actuator. A disturbance-observer-based unbalance compensation for canceling the magnetic force vibrations, or the vibrations in the measured positions, is presented. The theoretical studies are verified by practical experiments on a custom-built laboratory test rig. The test rig uses a prototyping control platform developed in the scope of this work. In summary, this work takes a step toward an embedded single-chip FPGA-based controller for AMBs.
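As a minimal illustration of the LQ design step described above, the sketch below computes a continuous-time LQ state-feedback gain for a one-axis rigid-rotor AMB model; the plant parameters and weighting matrices are invented for the example, not taken from this work.

```python
# Minimal LQ state-feedback sketch for an AMB-like plant; the 1-DOF rigid
# rotor model and weights below are illustrative, not this thesis's model.
import numpy as np
from scipy.linalg import solve_continuous_are

m, ks, ki = 3.0, 4e5, 80.0         # mass [kg], negative stiffness [N/m], force gain [N/A]
A = np.array([[0.0, 1.0],
              [ks / m, 0.0]])       # unstable open loop: rotor pulled toward the stator
B = np.array([[0.0], [ki / m]])

Q = np.diag([1e6, 1.0])             # weight position error heavily (physical constraint: air gap)
R = np.array([[0.1]])               # penalize coil current

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # u = -K x
print("LQ gain K:", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```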
Abstract:
Drug combinations can improve the efficacy of angiostatic cancer treatment and enable the reduction of side effects and drug resistance. Combining drugs is non-trivial due to the high number of possibilities. We applied a feedback system control (FSC) technique with a population-based stochastic search algorithm to navigate through the large parametric space of nine angiostatic drugs at four concentrations to identify optimal low-dose drug combinations. This involved an iterative approach of in vitro testing of endothelial cell viability and algorithm-based analysis. The optimal synergistic drug combination, containing erlotinib, BEZ-235 and RAPTA-C, was reached in a small number of iterations. Final drug combinations showed enhanced endothelial cell specificity and synergistically inhibited proliferation (p < 0.001), but not migration, of endothelial cells, and drove increased numbers of endothelial cells to undergo apoptosis (p < 0.01). Successful translation of this drug combination was achieved in two preclinical in vivo tumor models. Tumor growth was inhibited synergistically and significantly (p < 0.05 and p < 0.01, respectively) using reduced drug doses as compared to optimal single-drug concentrations. Under the applied conditions, single-drug monotherapies had no or negligible activity in these models. We suggest that FSC can be used for the rapid identification of effective, reduced-dose, multi-drug combinations for the treatment of cancer and other diseases.
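A population-based stochastic search of the kind described can be sketched as follows; the synthetic response function stands in for the in vitro viability readout, and the algorithmic details are an assumption rather than the authors' exact FSC scheme.

```python
# Sketch of a population-based stochastic search over 9 drugs x 4 dose levels,
# in the spirit of the feedback system control (FSC) loop described above.
# The synthetic response function stands in for the in vitro viability assay.
import random

N_DRUGS, LEVELS = 9, 4

def response(combo):
    # Hypothetical stand-in for measured endothelial cell viability
    # (lower is better); a real FSC loop would query the experiment here.
    return sum((c - 1) ** 2 for c in combo) + random.gauss(0, 0.5)

def mutate(combo, rate=0.2):
    return tuple(random.randrange(LEVELS) if random.random() < rate else c
                 for c in combo)

population = [tuple(random.randrange(LEVELS) for _ in range(N_DRUGS))
              for _ in range(20)]
for iteration in range(15):                      # each iteration = one assay round
    scored = sorted(population, key=response)
    elite = scored[:5]                           # keep the best combinations
    population = elite + [mutate(random.choice(elite)) for _ in range(15)]

print("best low-dose combination found:", min(population, key=response))
```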
Abstract:
Motivated by the Chinese experience, we analyze a semi-open economy where the central bank has access to international capital markets but the private sector does not. This enables the central bank to choose an interest rate different from the international rate. We examine the optimal policy of the central bank by modelling it as a Ramsey planner who can choose the level of domestic public debt and of international reserves. The central bank can improve the savings opportunities of credit-constrained consumers, modelled as in Woodford (1990). We find that in the steady state it is optimal for the central bank to replicate the open economy, i.e., to issue debt financed by the accumulation of reserves so that the domestic interest rate equals the foreign rate. During the transition, however, a rapidly growing economy attains higher welfare without capital mobility, and the optimal interest rate differs from the international rate. We argue that the domestic interest rate should be temporarily above the international rate. We also find that capital controls can still help reach the first best when the planner has more fiscal instruments.
Abstract:
Background Chronic obstructive pulmonary disease (COPD) is a heterogeneous disease whose assessment and management have traditionally been based on the severity of airflow limitation (forced expiratory volume in 1 s (FEV1)). Yet, it is now clear that FEV1 alone cannot describe the complexity of the disease. In fact, the recently released Global Initiative for Chronic Obstructive Lung Disease (GOLD) 2011 revision has proposed a new combined assessment method using three variables (symptoms, airflow limitation and exacerbations). Methods Here, we go one step further and propose that in the near future physicians will need a "control panel" for the assessment and optimal management of individual patients with complex diseases, including COPD, that provides a path towards personalised medicine. Results We propose that such a "COPD control panel" should include at least three different domains of the disease: severity, activity and impact. Each of these domains presents information on different "elements" of the disease with potential prognostic value and/or specific therapeutic requirements. All this information can be easily incorporated into an "app" for daily use in clinical practice. Conclusion We recognise that this preliminary proposal needs debate, validation and evolution (e.g., including "omics" and molecular imaging information in the future), but we hope that it may stimulate debate and research in the field.
Abstract:
The fight against doping in sports has been governed since 1999 by the World Anti-Doping Agency (WADA), an independent institution behind the implementation of the World Anti-Doping Code (Code). The intent of the Code is to protect clean athletes through the harmonization of anti-doping programs at the international level, with special attention to the detection, deterrence and prevention of doping [1]. A new version of the Code came into force on January 1st, 2015, introducing, among other improvements, longer periods of sanctioning for athletes (up to four years) and measures to strengthen the role of anti-doping investigations and intelligence. To ensure optimal harmonization, five International Standards covering different technical aspects of the Code are also currently in force: the List of Prohibited Substances and Methods (List), Testing and Investigations, Laboratories, Therapeutic Use Exemptions (TUE), and Protection of Privacy and Personal Information. Adherence to these standards is mandatory for all anti-doping stakeholders to be compliant with the Code. Among these documents, the eighth version of the International Standard for Laboratories (ISL), which also came into effect on January 1st, 2015, includes regulations for WADA and ISO/IEC 17025 accreditations and their application to urine and blood sample analysis by anti-doping laboratories [2]. Specific requirements are also described in several Technical Documents or Guidelines, in which various topics are highlighted, such as the identification criteria for gas chromatography (GC) and liquid chromatography (LC) coupled to mass spectrometry (MS) techniques (IDCR), measurements and reporting of endogenous androgenic anabolic agents (EAAS), and analytical requirements for the Athlete Biological Passport (ABP).
Abstract:
The threats posed by global warming motivate different stakeholders to address and control them. This Master's thesis focuses on analyzing carbon trade permits in an optimization framework. The studied model determines the optimal emission and uncertainty levels that minimize the total cost. Research questions are formulated and answered using different optimization tools. The model is developed and calibrated using available consistent data in the area of carbon emission technology and control. Data and some basic modeling assumptions were extracted from reports and the existing literature. Data collected from the countries in the Kyoto treaty are used to estimate the cost functions. The theory and methods of constrained optimization are briefly presented. A two-level optimization problem (within and between the parties) is analyzed using several optimization methods. The combined cost optimization between the parties leads to a multivariate model and calls for advanced techniques; the Lagrangian method, Sequential Quadratic Programming, and the Differential Evolution (DE) algorithm are applied. The role of inherent measurement uncertainty in the monitoring of emissions is discussed, and we briefly investigate an approach in which emission uncertainty is described in a stochastic framework. MATLAB was used to provide visualizations, including the relationship between decision variables and objective function values. Interpretations in the context of carbon trading are briefly presented. Suggestions for future work are given in stochastic modeling, emission trading, and the coupled analysis of energy prices and carbon permits.
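A toy version of the cost minimization between parties, solved with Sequential Quadratic Programming as mentioned above (here via SciPy's SLSQP rather than MATLAB), might look as follows; the quadratic cost functions and emission cap are illustrative, not the thesis's calibrated ones.

```python
# Minimal two-party abatement-cost sketch under a joint emission cap, solved
# with an SQP method; cost parameters and the cap are made up for illustration.
import numpy as np
from scipy.optimize import minimize

def total_cost(e, a=(2.0, 5.0), baseline=(10.0, 8.0)):
    # Abatement cost grows quadratically with the cut below baseline emissions.
    return sum(ai * (bi - ei) ** 2 for ai, bi, ei in zip(a, baseline, e))

cap = 12.0
constraints = [{"type": "ineq", "fun": lambda e: cap - e.sum()}]  # e1 + e2 <= cap
bounds = [(0, 10), (0, 8)]

res = minimize(total_cost, x0=np.array([6.0, 6.0]), method="SLSQP",
               bounds=bounds, constraints=constraints)
print("optimal emissions per party:", res.x, "total cost:", res.fun)
```

At the optimum the cap binds and the parties' marginal abatement costs are equalized, which is the standard cost-effectiveness condition for permit trading.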
Abstract:
Broadcasting systems are networks where the transmission is received by several terminals. Generally, broadcast receivers are passive devices in the network, meaning that they do not interact with the transmitter. Providing a certain Quality of Service (QoS) for receivers in a heterogeneous reception environment with no feedback is not an easy task. Forward error control coding can be used for protection against transmission errors to enhance the QoS of broadcast services. For good performance in terrestrial wireless networks, diversity should be utilized; it is exploited by applying interleaving together with forward error correction codes. This dissertation studies the design and analysis of forward error control and control signaling for providing QoS in wireless broadcasting systems. Control signaling is used in broadcasting networks to give the receiver the information necessary to connect to the network itself and to receive the services being transmitted. Usually, control signaling is transmitted through a dedicated path in the system. Therefore, the relationship between the signaling and service data paths should be considered early in the design phase. Modeling and simulations are used in the case studies of this dissertation to study this relationship. The dissertation begins with a survey of the broadcasting environment and the mechanisms for providing QoS therein. Case studies then present the analysis and design of such mechanisms in real systems. The first case study analyzes the mechanisms for providing QoS at the DVB-H link layer, considering the signaling and service data paths and their relationship; in particular, the performance of different service data decoding mechanisms and optimal signaling transmission parameter selection are presented. The second case study investigates the design of the signaling and service data paths for the more modern DVB-T2 physical layer. Furthermore, by comparing the performance of the signaling and service data paths through simulations, configuration guidelines for DVB-T2 physical layer signaling are given. These guidelines can prove useful when configuring DVB-T2 transmission networks. Finally, recommendations for the design of data and signaling paths are given based on findings from the case studies. The requirements for the signaling design should be derived from the requirements for the main services; generally, the signaling requirements should be more demanding, as signaling is the enabler for service reception.
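To illustrate why interleaving helps forward error correction codes exploit diversity, the sketch below implements a simple block interleaver; the dimensions and burst length are arbitrary and not tied to the DVB-H or DVB-T2 configurations studied.

```python
# Minimal block-interleaver sketch showing how interleaving spreads a burst
# of channel errors across many FEC codewords.

def interleave(symbols, rows, cols):
    """Write row-wise into a rows x cols block, read column-wise."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(24))                   # four length-6 codewords, row by row
tx = interleave(data, rows=4, cols=6)
tx[5:9] = ["X"] * 4                      # burst error of 4 consecutive symbols
rx = deinterleave(tx, rows=4, cols=6)
print(rx)  # the burst is now spread out, so each codeword sees <= 1 erasure
```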
Abstract:
The energy reform taking place all over the world is driven by a common concern for the future of humankind on our shared planet. In order to keep the effects of global warming within certain limits, the use of fossil fuels must be reduced. The marginal costs of renewable energy sources (RES) are quite high, since they are a new technology. In order to encourage the integration of RES into the power grid and lower these marginal costs, subsidies were developed to make the use of RES more profitable. From the RES perspective, the current market was developed to favor conventional generation, which mainly uses fossil fuels. Intermittent generation, like wind power, is penalized in the electricity market since it is intermittent and thus difficult to control. Therefore, the need for regulation, and thus the regulation costs to the producer, differ depending on what kind of generation a market participant owns. This thesis studies whether a market participant who owns wind power can use the special characteristics of the Nord Pool electricity market to bridge the gap between conventional and intermittent generation simply by placing bids in the market. To this end, an optimal bid is introduced whose purpose is to minimize the regulation costs and thus lower the marginal costs of wind power. In order to run realistic simulations of Nord Pool, a wind power forecast model was created. The simulations were performed for the years 2009 and 2010 using real wind power data provided by Hyötytuuli, market data from Nord Pool, and wind forecast data provided by the Finnish Meteorological Institute. The optimal bid requires probability intervals, and therefore the methodology for creating the probability distributions is introduced in this thesis. At the end of the thesis, it is shown that optimal bidding improves the position of a wind power producer in the electricity market.
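The quantile form of such a regulation-cost-minimizing bid can be sketched as follows; this is the standard newsvendor-style result offered as an illustration, with made-up prices and a synthetic forecast distribution rather than the thesis's actual model.

```python
# Sketch of a quantile-type bid that minimizes expected regulation cost for
# a wind producer; the critical-ratio form and all numbers are illustrative
# assumptions, not the thesis's exact formulation.
import numpy as np

rng = np.random.default_rng(1)
forecast_samples = rng.normal(50.0, 12.0, 10_000).clip(0)  # MWh scenarios

c_short = 8.0   # cost per MWh of under-delivery (up-regulation premium)
c_long = 3.0    # cost per MWh of over-delivery (down-regulation discount)

# Bidding b: shortfall (b > actual) costs c_short per MWh, surplus costs
# c_long per MWh; expected cost is minimized where the forecast CDF equals
# c_long / (c_short + c_long).
critical_ratio = c_long / (c_short + c_long)
optimal_bid = np.quantile(forecast_samples, critical_ratio)
print(f"bid {optimal_bid:.1f} MWh at quantile {critical_ratio:.2f}")
```

Because under-delivery is costlier here, the optimal bid sits below the median of the forecast distribution.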
Abstract:
The assembly and maintenance of the International Thermonuclear Experimental Reactor (ITER) vacuum vessel (VV) is highly challenging, since the tasks performed by the robot involve welding, material handling, and machine cutting from inside the VV. The VV is made of stainless steel, which has poor machinability and tends to work harden very rapidly, and all the machining operations need to be carried out from inside the ITER VV. A general industrial robot cannot be used due to its poor stiffness in heavy-duty machining, which would cause many problems, such as poor surface quality, tool damage, and low accuracy. Therefore, one of the most suitable options is a lightweight mobile robot that is able to move around inside the VV and perform different machining tasks by changing cutting tools. Reducing the mass of the robot manipulators offers many advantages: reduced material costs, reduced power consumption, the possibility of using smaller actuators, and a higher payload-to-robot weight ratio. Offsetting these advantages, the lighter robot is more flexible, which makes it more difficult to control. To achieve good machining surface quality, the tracking of the end effector must be accurate, and an accurate model of the more flexible robot must be constructed. This thesis studies the dynamics and control of a 10-degree-of-freedom (DOF) redundant hybrid robot (a 4-DOF serial mechanism and a 6-DOF 6-UPS hexapod parallel mechanism), hydraulically driven with flexible rods, under the influence of machining forces. Firstly, the flexibility of the bodies is described using the floating frame of reference formulation (FFRF). A finite element model (FEM) provided the Craig-Bampton (CB) modes needed for the FFRF. A dynamic model of the system of six closed-loop mechanisms was assembled using the constrained Lagrange equations and the Lagrange multiplier method. Subsequently, the reaction forces between the parallel and serial parts were used to study the dynamics of the serial robot. A PID control based on position predictions was implemented independently to control the hydraulic cylinders of the robot. Secondly, in machining, to achieve greater end-effector trajectory tracking accuracy for surface quality, a robust control of the actuators for the flexible link has to be derived. This thesis investigates two schemes of intelligent control for the hydraulically driven parallel mechanism based on the dynamic model: (1) a fuzzy-PID self-tuning controller combining conventional PID control with fuzzy logic, and (2) adaptive neuro-fuzzy inference system-PID (ANFIS-PID) self-tuning of the gains of the PID controller; both are implemented independently to control each hydraulic cylinder of the parallel mechanism based on rod length predictions. The serial component of the hybrid robot can be analyzed using the equilibrium of reaction forces at the universal joint connections of the hexa-element. To achieve precise positional control of the end effector for maximum machining precision, the hydraulic cylinders should be controlled to hold the hexa-element. Thirdly, a finite element approach for multibody systems using the Special Euclidean group SE(3) framework is presented for a parallel mechanism with flexible piston rods under the influence of machining forces. The flexibility of the bodies is described using a nonlinear interpolation method with an exponential map.
The equations of motion take the form of a differential-algebraic equation on a Lie group, which is solved using a Lie group time integration scheme. The method relies on a local description of motions, so it provides a singularity-free formulation, and no parameterization of the nodal variables needs to be introduced. The flexible slider constraint is formulated on a Lie group and used for modeling a flexible rod sliding inside a cylinder. The dynamic model of the system of six closed-loop mechanisms was assembled using Hamilton's principle and the Lagrange multiplier method. A linearized hydraulic control system based on rod length predictions was implemented independently to control the hydraulic cylinders. The results of simulations demonstrating the behavior of the robot machine are presented for each case study. In conclusion, this thesis studies the dynamic analysis of a special hybrid (serial-parallel) robot for the above-mentioned ITER task and investigates different control algorithms that can significantly improve machining performance. These analyses and results provide valuable insight into the design and control of parallel robots with flexible rods.
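As an illustration of the per-cylinder position control loop described above, here is a minimal discrete PID sketch; the first-order actuator model and all gains are invented for the example and are not the thesis's identified hydraulic dynamics.

```python
# Minimal discrete PID sketch of per-cylinder position control; the plant
# stand-in is a crude first-order lag, not a real hydraulic cylinder model.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

dt, tau, gain = 0.001, 0.05, 1.0      # illustrative actuator time constant/gain
pid = PID(kp=40.0, ki=5.0, kd=0.5, dt=dt)
x = 0.0                               # rod position [m]
for step in range(2000):              # track a 10 mm rod-length command
    u = pid.update(0.010, x)
    x += dt * (gain * u - x) / tau    # first-order lag response
print(f"rod position after 2 s: {x * 1000:.2f} mm")
```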
Abstract:
The objectives of this study were to evaluate baby corn yield, green corn yield, and grain yield of corn cultivar BM 3061, with weed control achieved via a combination of hoeing and intercropping with gliricidia, and to determine how sample size influences the accuracy of weed growth evaluation. A randomized block design with ten replicates was used. The cultivar was submitted to the following treatments: A = hoeings at 20 and 40 days after corn sowing (DACS); B = hoeing at 20 DACS + gliricidia sowing after hoeing; C = gliricidia sowing together with corn sowing + hoeing at 40 DACS; D = gliricidia sowing together with corn sowing; and E = no hoeing. Gliricidia was sown at a density of 30 viable seeds m^-2. After harvesting the mature ears, the area of each plot was divided into eight sampling units of 1.2 m² each to evaluate weed growth (above-ground dry biomass). Treatment A provided the highest baby corn, green corn, and grain yields. Treatment B did not differ from treatment A in the yields of the three products, and was equivalent to treatment C for green corn yield, but was superior to C in baby corn weight and grain yield. Treatments D and E provided similar yields and were inferior to the other treatments. Treatment B is therefore promising. The relation between the coefficient of experimental variation (CV) and sample size (S) for evaluating the growth of the above-ground part of the weeds was given by the equation CV = 37.57 S^(-0.15), i.e., CV decreased as S increased. The optimal sample size indicated by this equation was 4.3 m².
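As a quick check of the reported relation, the fitted power curve can be evaluated and inverted as follows; the 20% target CV used for the inversion is an arbitrary illustration, not a value from the study.

```python
# Evaluating the fitted relation CV = 37.57 * S**-0.15 from the study, and
# inverting it for a target CV (the 20% target is an assumption).
for S in (1.2, 2.4, 4.3, 9.6):
    print(f"S = {S:4.1f} m^2  ->  CV = {37.57 * S ** -0.15:5.1f}%")

target_cv = 20.0
S_needed = (target_cv / 37.57) ** (1 / -0.15)
print(f"sample size for CV = {target_cv}%: {S_needed:.1f} m^2")
```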
Abstract:
This thesis investigated the modulation of dynamic contractile function and the energetics of work by posttetanic potentiation (PTP). Mechanical experiments were conducted in vitro using software-controlled protocols to stimulate and determine contractile function during ramp shortening, and muscles were frozen during parallel incubations for biochemical analysis. The central feature of this research was the comparison of fast hindlimb muscles from wildtype mice and from skeletal myosin light chain kinase knockout (skMLCK-/-) mice, which do not express the primary mechanism for PTP: myosin regulatory light chain (RLC) phosphorylation. In contrast to smooth and cardiac muscles, where RLC phosphorylation is indispensable, its precise physiological role in skeletal muscle is unclear. It was initially determined that tetanic potentiation was dependent on shortening speed, and this sensitivity of the PTP mechanism to muscle shortening extended the stimulation frequency domain over which PTP was manifest. Thus, the physiological utility of RLC phosphorylation in augmenting contractile function in vivo may be more extensive than previously considered. Subsequent experiments studied the contraction-type dependence of PTP and demonstrated that the enhancement of contractile function was dependent on force level. Surprisingly, in the absence of RLC phosphorylation, skMLCK-/- muscles exhibited significant concentric PTP; consequently, up to ~50% of the dynamic PTP response in wildtype muscle may be attributed to an alternate mechanism. When the interaction of PTP and the catchlike property (CLP) was examined, we determined that, unlike the acute augmentation of peak force by the CLP, RLC phosphorylation produced a longer-lasting enhancement of force and work in the potentiated state. Nevertheless, despite the apparent interference between these mechanisms, both offer physiological utility and may be complementary in achieving optimal contractile function in vivo. Finally, when the energetic implications of PTP were explored, we determined that during a brief period of repetitive concentric activation, total work performed was ~60% greater in wildtype than in skMLCK-/- muscles, but there was no genotype difference in high-energy phosphate consumption (HEPC) or economy (i.e., the HEPC-to-work ratio). In summary, this thesis provides novel insight into the modulatory effects of PTP and RLC phosphorylation, and through the observation of alternative mechanisms for PTP, it furthers our understanding of the history dependence of fast skeletal muscle function.
Abstract:
The purpose of this paper is to characterize the optimal time paths of production and water usage by an agricultural and an oil sector that have to share a limited water resource. We show that for any given water stock, if the oil stock is sufficiently large, it will become optimal to have a phase during which the agricultural sector is inactive. This may mean having an initial phase during which the two sectors are active, then a phase during which the water is reserved for the oil sector and the agricultural sector is inactive, followed by a phase during which both sectors are active again. The agricultural sector will always be active in the end as the oil stock is depleted and the demand for water from the oil sector decreases. In the case where agriculture is not constrained by the given natural inflow of water once there is no more oil, we show that oil extraction will always end with a phase during which oil production follows a pure Hotelling path, with the implicit price of oil net of extraction cost growing at the rate of interest. If the natural inflow of water does constitute a constraint for agriculture, then oil production never follows a pure Hotelling path, because its full marginal cost must always reflect not only the imputed rent on the finite oil stock, but also the positive opportunity cost of water.
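For reference, the pure Hotelling condition mentioned above can be written as follows; the notation is ours, not the paper's.

```latex
% Pure Hotelling path: the scarcity rent on oil, i.e., the price p(t) net of
% the constant unit extraction cost c, grows at the rate of interest r.
\[
  \frac{\dot p(t)}{p(t)-c}=r
  \quad\Longrightarrow\quad
  p(t)-c=\bigl(p(0)-c\bigr)e^{rt}
\]
```

When the water inflow constraint binds, the paper's point is that the full marginal cost of oil also carries the positive opportunity cost of water, so this pure exponential rent path no longer holds.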
Abstract:
Background. Case-control studies are widely used by epidemiologists to evaluate the impact of certain exposures on a particular disease. These exposures may be represented by several time-dependent variables, and new methods are needed to estimate their effects accurately. Indeed, logistic regression, the conventional method for analyzing case-control data, does not directly account for changes in covariate values over time. By contrast, survival analysis methods such as the Cox proportional hazards model can directly incorporate time-dependent covariates representing individual exposure histories. However, this requires careful handling of the risk sets because of the oversampling of cases, relative to controls, in case-control studies. As shown in a previous simulation study, the optimal definition of risk sets for the analysis of case-control data has yet to be elucidated, and it remains to be studied for time-dependent variables. Objective: The general objective is to propose and study new versions of the Cox model for estimating the impact of time-varying exposures in case-control studies, and to apply them to real case-control data on lung cancer and smoking. Methods. I identified new, potentially optimal risk set definitions (the Weighted Cox model and the Simple weighted Cox model), in which different weights were assigned to cases and controls in order to reflect the proportions of cases and non-cases in the source population. The properties of the exposure effect estimators were studied by simulation. Different aspects of exposure were generated (intensity, duration, cumulative exposure). The generated case-control data were then analyzed with different versions of the Cox model, including the old and new risk set definitions, as well as with conventional logistic regression for comparison. The different regression models were then applied to real case-control data on lung cancer. The estimates of the effects of different smoking variables obtained with the different methods were compared with each other and with the simulation results. Results. The simulation results show that the estimates from the proposed new weighted Cox models, especially those of the Weighted Cox model, are much less biased than the estimates from existing Cox models that simply include or exclude the future cases from each risk set. Moreover, the Weighted Cox model estimates were slightly, but systematically, less biased than those of logistic regression. The application to the real data shows larger differences between the estimates of logistic regression and the weighted Cox models for some time-dependent smoking variables. Conclusions. The results suggest that the proposed new weighted Cox model could be an attractive alternative to logistic regression for estimating the effects of time-dependent exposures in case-control studies.
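The weighting idea behind the proposed Weighted Cox model can be illustrated with survival software that supports sampling weights; the sketch below uses lifelines' weights_col with invented data and sampling fractions, and is only a loose analogue of the thesis's risk-set weighting, not its actual method.

```python
# Sketch of the weighting idea: cases and controls get weights reflecting
# the case/non-case proportions in the source population. Data and sampling
# fractions are invented; the thesis's risk-set construction is more involved.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time": [2.0, 3.5, 4.0, 5.0, 6.0, 7.5],
    "event": [1, 1, 1, 0, 0, 0],          # 1 = case, 0 = sampled control
    "exposure": [1, 1, 0, 0, 1, 0],
})

# Hypothetical sampling fractions: all cases, 1% of non-cases were sampled.
f_case, f_control = 1.0, 0.01
df["w"] = df["event"].map({1: 1.0 / f_case, 0: 1.0 / f_control})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event",
        weights_col="w", robust=True)     # robust SEs for sampling weights
cph.print_summary()
```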