894 results for Discrete-continuous optimal control problems
Abstract:
The problem of rats in our Hawaiian sugar cane fields has been with us for a long time. Early records tell of heavy damage at various times on all the islands where sugar cane is grown. Many methods were tried to control these rats. Trapping was once used as a control measure, a bounty was used for a time, and gangs of dogs were trained to catch the rats as the cane was harvested. Many kinds of baits and poisons were used. All of these methods were of some value as long as labor was cheap. Our present-day problem started when labor costs started to rise and the sugar industry shifted to long cropping. Until World War II cane was an annual crop. After the war it was shifted to a two-year crop, three years in some places. Depending on variety, location, and soil we raise 90 to 130 tons of sugar cane per acre, which produces 7 to 15 tons of sugar per acre for a two-year crop. This sugar brings about $135 per ton.

This tonnage of cane is a thick tangle of vegetation. The cane grows erect for almost a year; as it continues to grow it bends over at the base. This allows the stalk to rest on the ground or on other stalks of cane as it continues to grow. These stalks form a tangled mat of stalks and dead leaves that may be two feet thick at the time of harvest. At the same time the leafy growing portion of the stalk will be sticking up out of the mat of cane ten feet in the air. Some of these individual stalks may be 30 feet long and still growing at the time of harvest. All this makes it very hard to get through a cane field; crossing one is a long, prolonged stumble over and through the cane. It is in this mat of cane that our three species of rats live. Two species are familiar to most people in the pest control field: Rattus norvegicus and Rattus rattus. In the latter species we include both the black rat and the alexandrine rat; their habits seem to be the same in Hawaii. Our third rat is the Polynesian rat, Rattus exulans, locally called the Hawaiian rat. This is a small rat: the average length from head to tip of tail is nine inches and the average body weight is 65 grams. It has dark brownish fur like the alexandrine rat and a grey belly. It is found in Indonesia, on most of the islands of Oceania, and in New Zealand. All three rats live in our cane fields and the brushy and forested portions of our islands. The Norway and alexandrine rats are found in and around the villages and farms; the Polynesian rat is found only in the fields and waste areas.

The actual amount of cane the rats eat is small, but the destruction they cause is large. The rats gnaw through the rind of the cane stalk and eat the soft, juicy, sweet tissues inside. They will hollow out one to several nodes per stalk attacked. The effect on the cane stalk is like ring-barking a tree. After this attack the stalk above the chewed portion usually dies, and sometimes the lower portion too. If the rat does not eat through the stalk, the cane stalk can go on living and producing sugar at a reduced rate, but generally an injured stalk does not last long. Disease and souring organisms get into the injury and kill the stalk. And if this isn't enough, some insects are attracted to the injured stalk and will sometimes bore in and kill it. An injured stalk of cane doesn't have much of a chance. A rat may gnaw out only six inches of a 30-foot stalk and the whole stalk will die. If the rats destroyed only what they ate we could ignore them, but they cause the death of too much cane. This dead, dying, and souring cane causes several direct and indirect losses.

First we lose the sugar that the cane would have produced. We harvest all of our cane mechanically, so the dead and souring cane is hauled to the mill and ground together with our good cane, and the bad cane reduces the purity of the sugar juices we squeeze from the cane. Rats reduce our income and run up our overhead.
Abstract:
Background: This pilot study aimed to verify whether glycemic control can be achieved in type 2 diabetes patients after acute myocardial infarction (AMI) using insulin glargine (iGlar) associated with regular insulin (iReg), compared with the standard intensive care unit protocol, which uses continuous intravenous insulin delivery followed by NPH insulin and iReg (St. Care). Patients and Methods: Patients (n = 20) within 24 h of AMI were randomized to iGlar or St. Care. Therapy was guided exclusively by capillary blood glucose (CBG), but glucometric parameters were also analyzed by a blinded continuous glucose monitoring system (CGMS). Results: Mean glycemia was 141 +/- 39 mg/dL for St. Care and 132 +/- 42 mg/dL for iGlar by CBG, or 138 +/- 35 mg/dL for St. Care and 129 +/- 34 mg/dL for iGlar by CGMS. Percentage of time in range (80-180 mg/dL) by CGMS was 73 +/- 18% for iGlar and 77 +/- 11% for St. Care. No severe hypoglycemia (<= 40 mg/dL) was detected by CBG, but CGMS indicated 11 (St. Care) and seven (iGlar) excursions in four subjects from each group, mostly in sulfonylurea users (six of eight patients). Conclusions: This pilot study suggests that equivalent glycemic control without an increase in severe hypoglycemia may be achieved using iGlar with background iReg. Data were controlled by both CBG and CGMS measurements in a real-life setting to ensure reliability. Based on CGMS measurements, there were significant numbers of glycemic excursions outside the target range; however, these were not detected by CBG. In addition, the data indicate that previous use of sulfonylurea may be a major risk factor for severe hypoglycemia irrespective of the type of insulin treatment.
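For readers unfamiliar with the glucometric summaries quoted above (mean glycemia, percentage of time in the 80-180 mg/dL range), below is a minimal sketch of how such figures are computed from a series of glucose readings, assuming evenly spaced CGMS samples; the data and names are illustrative, not taken from the study.

```python
# Minimal sketch: glucometrics from evenly spaced CGMS readings (mg/dL).
# Assumes equal sampling intervals; values are illustrative, not study data.
from statistics import mean

def glucometrics(readings, low=80, high=180):
    """Return mean glycemia and % of time in the [low, high] target range."""
    in_range = sum(low <= g <= high for g in readings)
    return mean(readings), 100.0 * in_range / len(readings)

cgms = [95, 142, 201, 170, 88, 130, 115, 178]   # toy data
avg, tir = glucometrics(cgms)
print(f"mean glycemia: {avg:.0f} mg/dL, time in range: {tir:.0f}%")
```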
Abstract:
Purpose: There is no consensus on the optimal method for measuring delivered dialysis dose in patients with acute kidney injury (AKI). The use of direct dialysate-side quantification of dose, in preference to formal blood-based urea kinetic modeling and simplified blood urea nitrogen (BUN) methods, has been recommended for dose assessment in critically ill patients with AKI. We evaluate six different blood-side and dialysate-side methods for dose quantification. Methods: We examined data from 52 critically ill patients with AKI requiring dialysis. All patients were treated with pre-dilution CVVHDF and regional citrate anticoagulation. Delivered dose was calculated using blood-side and dialysate-side kinetics. Filter function was assessed during the entire course of therapy by calculating dialysis fluid urea nitrogen (FUN) to BUN ratios every 12 hours. Results: Median daily treatment time was 1,413 min (1,260-1,440). The median observed effluent rate per treatment was 2,355 mL/h (2,060-2,863) (p<0.001). Urea mass removal rate was 13.0 +/- 7.6 mg/min. Both EKR (r(2)=0.250; p<0.001) and K-D (r(2)=0.409; p<0.001) showed a good correlation with actual solute removal. EKR and K-D presented a decline in their values that was related to the decrease in filter function assessed by the FUN/BUN ratio. Conclusions: Effluent rate (mL/kg/h) can only empirically provide an estimate of dose in CRRT. For clinical practice, we recommend that the delivered dose be measured and expressed as K-D. EKR also constitutes a good method for dose comparisons over time and across modalities.
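As a rough sketch of the dialysate-side quantities discussed above, and assuming the usual definitions (urea mass removal rate as effluent flow times effluent urea nitrogen concentration, effective clearance as removal rate divided by blood concentration), the code below computes the removal rate, the FUN/BUN filter-function ratio, and a clearance figure; all values and names are illustrative, not patient data from the study.

```python
# Simplified sketch of dialysate-side dose quantification in CRRT.
# Assumed standard definitions: removal rate = effluent flow * FUN;
# effective clearance K = removal rate / BUN. Toy values only.

def urea_removal_rate(effluent_flow_ml_min, fun_mg_dl):
    """Urea nitrogen mass removal rate in mg/min."""
    return effluent_flow_ml_min * fun_mg_dl / 100.0  # mg/dL -> mg/mL

def filter_function(fun_mg_dl, bun_mg_dl):
    """FUN/BUN ratio; drifts below 1.0 as filter performance degrades."""
    return fun_mg_dl / bun_mg_dl

def clearance_ml_min(effluent_flow_ml_min, fun_mg_dl, bun_mg_dl):
    """Effective urea clearance: mass removal rate / blood concentration."""
    return urea_removal_rate(effluent_flow_ml_min, fun_mg_dl) / (bun_mg_dl / 100.0)

flow, fun, bun = 39.0, 45.0, 52.0   # mL/min, mg/dL, mg/dL (toy values)
print(f"removal: {urea_removal_rate(flow, fun):.1f} mg/min, "
      f"FUN/BUN: {filter_function(fun, bun):.2f}, "
      f"K: {clearance_ml_min(flow, fun, bun):.1f} mL/min")
```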
Abstract:
Recently, research has shown that the performance of metaheuristics can be affected by population initialization. Opposition-based Differential Evolution (ODE), Quasi-Oppositional Differential Evolution (QODE), and Uniform-Quasi-Opposition Differential Evolution (UQODE) are three state-of-the-art methods that improve the performance of the Differential Evolution algorithm through population initialization and different search strategies. In a different approach to achieve similar results, this paper presents a technique to discover promising regions in the continuous search space of an optimization problem. Using machine-learning techniques, the algorithm, named Smart Sampling (SS), finds regions with a high probability of containing a global optimum. A metaheuristic can then be initialized inside each region to find that optimum. SS and DE were combined (originating the SSDE algorithm) to evaluate our approach, and experiments were conducted on the same set of benchmark functions used by the ODE, QODE and UQODE authors. Results have shown that the total number of function evaluations required by DE to reach the global optimum can be significantly reduced and that the success rate improves if SS is employed first. Such results are also in consonance with results from the literature on the importance of an adequate starting population. Moreover, SS presents better efficacy in finding initial populations of superior quality when compared to the other three algorithms that employ oppositional learning. Finally, and most importantly, the SS performance in finding promising regions is independent of the metaheuristic with which SS is combined, making SS suitable for improving the performance of a large variety of optimization techniques. (C) 2012 Elsevier Inc. All rights reserved.
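The Smart Sampling step itself relies on machine-learning techniques that the abstract does not detail, so the sketch below is only a loose, hypothetical rendering of the two-phase idea: it replaces the learning step with a "keep the best fraction of uniform samples" rule, shrinks the bounds to the bounding box of those samples, and then runs SciPy's Differential Evolution inside the reduced region.

```python
# Loose sketch of "find a promising region, then run DE inside it".
# This is NOT the authors' Smart Sampling algorithm: the machine-learning
# step is replaced by a simple "keep the best fraction of samples" rule.
import numpy as np
from scipy.optimize import differential_evolution

def sphere(x):
    return float(np.sum(x ** 2))

def promising_region(f, bounds, n_samples=500, keep=0.1, rng=None):
    """Bounding box of the best `keep` fraction of uniform samples."""
    rng = rng or np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    pts = rng.uniform(lo, hi, size=(n_samples, len(bounds)))
    vals = np.apply_along_axis(f, 1, pts)
    best = pts[np.argsort(vals)[: max(2, int(keep * n_samples))]]
    return list(zip(best.min(axis=0), best.max(axis=0)))

bounds = [(-100.0, 100.0)] * 5
region = promising_region(sphere, bounds)          # phase 1: locate region
result = differential_evolution(sphere, region)    # phase 2: DE inside it
print(result.x, result.fun)
```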
Abstract:
Many engineering sectors are challenged by multi-objective optimization problems. Even if the idea behind these problems is simple and well established, implementing any procedure to solve them is not a trivial task. The use of evolutionary algorithms to find candidate solutions is widespread; usually they supply a discrete picture of the non-dominated solutions, a Pareto set. Although it is very interesting to know the non-dominated solutions, an additional criterion is needed to select the one solution to be deployed. To better support the design process, this paper presents a new method for solving non-linear multi-objective optimization problems by adding a control function that guides the optimization process over the Pareto set, which does not need to be found explicitly. The proposed methodology differs from the classical methods that combine the objective functions into a single scalar, and is based on a single run of non-linear single-objective optimizers.
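The paper's control-function mechanism is not specified in enough detail here to reproduce; for contrast, the sketch below illustrates the classical approach the authors say they depart from, in which the objectives are combined into a single scalar and each choice of weight requires its own single-objective run. The objectives and weights are hypothetical.

```python
# Classical weighted-sum scalarization, shown only as the baseline that the
# paper's control-function approach departs from (hypothetical objectives).
import numpy as np
from scipy.optimize import minimize

def f1(x):  # e.g. cost
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):  # e.g. performance penalty
    return x[0] ** 2 + (x[1] - 2.0) ** 2

for w in (0.2, 0.5, 0.8):   # one single-objective run per trade-off weight
    res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=np.zeros(2))
    print(f"w={w}: x={res.x.round(3)}, f1={f1(res.x):.3f}, f2={f2(res.x):.3f}")
```

For convex problems each weight yields one point of the Pareto set, which is why the classical route needs many runs, whereas the paper claims a single run suffices.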
Abstract:
Combinatorial Optimization is a branch of optimization that deals with problems where the set of feasible solutions is discrete. Routing is a well-studied branch of Combinatorial Optimization that concerns the process of deciding the best way of visiting the nodes (customers) in a network. Routing problems appear in many real-world applications, including transportation and telephone or electronic data networks. Over the years, many solution procedures have been introduced for different Routing problems. Some are based on exact approaches that solve the problems to optimality, and others are based on heuristic or metaheuristic search to find optimal or near-optimal solutions. There is also a less studied method, which combines heuristic and exact approaches to face different problems, including those in the Combinatorial Optimization area. The aim of this dissertation is to develop solution procedures based on the combination of heuristics and Integer Linear Programming (ILP) techniques for some important problems in Routing Optimization. In this approach, given an initial feasible solution to be possibly improved, the method follows a destroy-and-repair paradigm, where the given solution is randomly destroyed (i.e., customers are removed in a random way) and repaired by solving an ILP model, in an attempt to find a new improved solution.
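As a hypothetical illustration of this destroy-and-repair loop, the sketch below removes random customers from a tour and reinserts them, keeping improving solutions. The dissertation repairs by solving an ILP model; here a simple cheapest-insertion heuristic stands in for that step so the sketch stays self-contained.

```python
# Sketch of the destroy-and-repair paradigm on a TSP-like tour.
# The ILP repair step of the dissertation is replaced by cheapest insertion.
import math
import random

def length(tour, pts):
    """Total length of the closed tour over points `pts`."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def repair(tour, removed, pts):
    """Reinsert each removed customer at its cheapest position."""
    for c in removed:
        best_i = min(range(len(tour) + 1),
                     key=lambda i: length(tour[:i] + [c] + tour[i:], pts))
        tour = tour[:best_i] + [c] + tour[best_i:]
    return tour

def destroy_and_repair(pts, iters=200, k=3, seed=0):
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    for _ in range(iters):
        removed = rng.sample(tour, k)                 # destroy: drop k customers
        partial = [c for c in tour if c not in removed]
        candidate = repair(partial, removed, pts)     # repair: reinsert them
        if length(candidate, pts) < length(tour, pts):
            tour = candidate                          # accept only improvements
    return tour

pts = [(random.random(), random.random()) for _ in range(15)]
print(length(destroy_and_repair(pts), pts))
```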
Abstract:
Recently, an ever increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency, and low costs in design, realization, and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy boxed products such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often, this is the case in large scale systems, organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent to the automation of the machine cycles, the supervisory system is called to perform other main functions such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; directing the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time information on diagnostics, as a support for the maintenance operations of the machine. The kinds of facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.

What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives, and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different and usually very "unstructured" way. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability, and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been receiving this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability, and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery, and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.

This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function with the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis, and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey on the state of the software engineering paradigm applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, conclusive remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4, and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
Abstract:
In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the Energy-Transport models for semiconductors that are later simulated in 2D. In this class of models the flow of charged particles, that is, of negatively charged electrons and so-called holes, which are quasi-particles of positive charge, as well as their energy distributions, are described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. The continuous discretization of the normal fluxes is the most important property of this discretization from the user's perspective. It is proven that under certain assumptions on the triangulation the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators, and a comparison of different estimators is performed. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements. For a model problem we show how this method can deliver promising results even when standard error estimators fail completely to reduce the error in an iterative mesh refinement process.
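For orientation, the dual weighted residual method mentioned above rests, for a linear model problem a(u, v) = F(v) with Galerkin approximation u_h and output functional J, on the standard error representation below; this is the generic identity, not the specific mixed finite element estimator derived in the work.

```latex
% Generic dual weighted residual (DWR) error identity for a linear problem.
% Dual problem: find z with a(\varphi, z) = J(\varphi) for all \varphi.
% Galerkin orthogonality a(u - u_h, z_h) = 0 allows inserting any discrete z_h:
\begin{align*}
J(u) - J(u_h) &= a(u - u_h,\, z)
               = F(z - z_h) - a(u_h,\, z - z_h)
               \approx \sum_{K} \rho_K(u_h)\,\omega_K(z),
\end{align*}
% with element residuals \rho_K and dual weights \omega_K steering the
% adaptive mesh refinement toward accuracy in the output J.
```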
Abstract:
Hybrid vehicles (HV), comprising a conventional ICE-based powertrain and a secondary energy source to be converted into mechanical power as well, represent a well-established alternative for substantially reducing both the fuel consumption and the tailpipe emissions of passenger cars. Several HV architectures are either being studied or already available on the market, e.g. mechanical, electric, hydraulic and pneumatic hybrid vehicles. Among these, the Electric (HEV) and Mechanical (HSF-HV) parallel hybrid configurations are examined throughout this Thesis. To fully exploit the potential of HVs, the hybrid components to be installed must be properly chosen, while an effective Supervisory Control must be adopted to coordinate the way the different power sources are managed and how they interact. Real-time controllers can be derived starting from the optimal benchmark results obtained. However, the application of these powerful instruments requires a simplified and yet reliable and accurate model of the hybrid vehicle system. This can be a complex task, especially when the complexity of the system grows, as for the HSF-HV system assessed in this Thesis. The first task of the following dissertation is to establish the optimal modeling approach for an innovative and promising mechanical hybrid vehicle architecture. It will be shown how the chosen modeling paradigm can affect the quality of the solution and the amount of computational effort it requires, using an optimization technique based on Dynamic Programming. The second goal concerns the control of pollutant emissions in a parallel Diesel-HEV. The emissions level obtained under real-world driving conditions is substantially higher than the usual result obtained in a homologation cycle. For this reason, an on-line control strategy capable of guaranteeing the desired emissions level, while minimizing fuel consumption and avoiding excessive battery depletion, is the target of the corresponding section of the Thesis.
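As a toy illustration of the Dynamic Programming benchmark technique mentioned above (not the HSF-HV or HEV models of the Thesis), the sketch below discretizes battery state of charge, walks backward over a short drive cycle, and picks at each step the battery/engine power split minimizing fuel plus cost-to-go, with a terminal penalty enforcing charge sustenance. Every model and number is a placeholder.

```python
# Toy Dynamic Programming sketch for parallel-hybrid energy management.
# All models are illustrative placeholders, not validated vehicle models.
import numpy as np

P_dem = [12.0, 20.0, 8.0, 25.0, 15.0]       # power demand per 10 s step, kW (toy)
soc_grid = np.linspace(0.3, 0.8, 51)         # discretized battery state of charge
u_grid = np.linspace(-10.0, 10.0, 21)        # battery power, kW (+ = discharge)
dt_h, capacity_kwh = 10.0 / 3600.0, 1.5      # 10 s step in hours, battery size

def fuel(p_engine):                          # toy convex fuel model, g per step
    return 0.0 if p_engine <= 0 else 2.0 * p_engine + 0.05 * p_engine ** 2

cost = 1e4 * np.abs(soc_grid - 0.55)         # terminal penalty: charge-sustaining
for p in reversed(P_dem):                    # backward recursion over the cycle
    new_cost = np.full_like(cost, np.inf)
    for i, soc in enumerate(soc_grid):
        for u in u_grid:                     # engine covers the remaining demand
            soc_next = soc - u * dt_h / capacity_kwh
            if not soc_grid[0] <= soc_next <= soc_grid[-1]:
                continue                     # forbid leaving the SOC window
            j = int(np.abs(soc_grid - soc_next).argmin())
            new_cost[i] = min(new_cost[i], fuel(p - u) + cost[j])
    cost = new_cost

i0 = int(np.abs(soc_grid - 0.55).argmin())
print(f"minimal fuel cost-to-go from SOC 0.55: {cost[i0]:.1f} g")
```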
Abstract:
Management Control System (MCS) research is undergoing turbulent times. For a long time related to cybernetic instruments of management accounting only, MCS are increasingly seen as complex systems comprising not only formal accounting-driven instruments, but also informal mechanisms of control based on organizational culture. But not only have the means of MCS changed; researchers increasingly apply MCS to organizational goals other than strategy implementation.

Taking the question of "How do I design a well-performing MCS?" as a starting point, this dissertation aims at providing a comprehensive and integrated overview of the current state of MCS research. Opting for a definition of MCS that is broad in terms of means (all formal as well as informal MCS instruments) but focused in terms of objectives (behavioral control only), the dissertation contributes to MCS theory by a) developing an integrated (contingency) model of MCS, describing its contingencies as well as its subcomponents, b) refining the equifinality model of Gresov/Drazin (1997), and c) synthesizing research findings from contingency and configuration research concerning MCS, taking into account case studies on research topics such as ambidexterity, equifinality and time as a contingency.
Abstract:
AIMS/HYPOTHESIS: To assess the use of paediatric continuous subcutaneous insulin infusion (CSII) under real-life conditions by analysing data recorded for up to 90 days and relating them to outcome. METHODS: Pump programming data from patients aged 0-18 years treated with CSII in 30 centres from 16 European countries and Israel were recorded during routine clinical visits. HbA(1c) was measured centrally. RESULTS: A total of 1,041 patients (age: 11.8 +/- 4.2 years; diabetes duration: 6.0 +/- 3.6 years; average CSII duration: 2.0 +/- 1.3 years; HbA(1c): 8.0 +/- 1.3% [means +/- SD]) participated. Glycaemic control was better in preschool (n = 142; 7.5 +/- 0.9%) and pre-adolescent (6-11 years, n = 321; 7.7 +/- 1.0%) children than in adolescent patients (12-18 years, n = 578; 8.3 +/- 1.4%). There was a significant negative correlation between HbA(1c) and daily bolus number, but not between HbA(1c) and total daily insulin dose. The use of <6.7 daily boluses was a significant predictor of an HbA(1c) level >7.5%. The incidence of severe hypoglycaemia and ketoacidosis was 6.63 and 6.26 events per 100 patient-years, respectively. CONCLUSIONS/INTERPRETATION: This large paediatric survey of CSII shows that glycaemic targets can frequently be achieved, particularly in young children, and that the incidence of acute complications is low. Adequate substitution of basal and prandial insulin is associated with a better HbA(1c).