937 results for Distributed model predictive control
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state but are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields improvements in emissions and efficiency, is intended to improve rather than replace the manual calibration process.
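To make the dynamic-constraint idea concrete, here is a minimal sketch of a second-order linear constraint model: a commanded parameter trace is filtered through an assumed second-order lag so that an optimizer sees achievable rather than quasi-static trajectories. The integration scheme and all constants are illustrative, not values from the study.

```python
import numpy as np

# Second-order linear dynamic constraint model: commanded actuator values are
# passed through a second-order lag so that only physically achievable
# transients reach the emission/torque models. Parameters are illustrative.
def second_order_response(cmd, dt=0.1, wn=2.0, zeta=0.8):
    """cmd: commanded trace; wn: natural frequency (rad/s); zeta: damping."""
    y, ydot = cmd[0], 0.0
    out = np.empty_like(cmd, dtype=float)
    for i, u in enumerate(cmd):
        yddot = wn**2 * (u - y) - 2.0 * zeta * wn * ydot
        ydot += dt * yddot          # semi-implicit Euler integration
        y += dt * ydot
        out[i] = y
    return out

cmd = np.concatenate([np.full(20, 1.0), np.full(40, 1.8)])  # step in a commanded parameter
achieved = second_order_response(cmd)
print("commanded 1.8, achieved 1 s after the step:", round(achieved[30], 3))
```

The achieved trace lags and smooths the commanded step, which is exactly what keeps a transient-cycle search from exploiting steady-state-only feasibility.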
Abstract:
PURPOSE OF REVIEW: Predicting asthma episodes is notoriously difficult but has potentially significant consequences for the individual, as well as for healthcare services. The purpose of this review is to describe recent insights into the prediction of acute asthma episodes in relation to classical clinical, functional or inflammatory variables, as well as to present a new concept for evaluating asthma as a dynamically regulated homeokinetic system. RECENT FINDINGS: Risk prediction for asthma episodes or relapse has been attempted using clinical scoring systems, considerations of environmental factors and lung function, as well as inflammatory and immunological markers in induced sputum or exhaled air, and these are summarized here. We have recently proposed that newer mathematical methods derived from statistical physics may be used to understand the complexity of asthma as a homeokinetic, dynamic system consisting of a network comprising multiple components, and also to assess the risk for future asthma episodes based on fluctuation analysis of long time series of lung function. SUMMARY: Apart from the classical analysis of risk factors and functional parameters, this new approach may be used to assess asthma control and treatment effects in the individual as well as in future research trials.
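The review does not prescribe a particular algorithm, but detrended fluctuation analysis (DFA) is one common way to quantify long-range correlations in physiological time series. The sketch below runs DFA on simulated peak-flow data; the series, scales, and values are purely illustrative.

```python
import numpy as np

# Detrended fluctuation analysis (DFA), first order: a standard technique for
# fluctuation analysis of long time series such as serial lung-function readings.
def dfa(series, scales):
    y = np.cumsum(series - np.mean(series))          # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        resid = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)             # local linear detrend
            resid.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(resid)))            # fluctuation at scale s
    return np.array(F)

rng = np.random.default_rng(4)
pef = rng.normal(400, 30, size=2048)                 # simulated peak-flow series
scales = np.array([8, 16, 32, 64, 128])
alpha = np.polyfit(np.log(scales), np.log(dfa(pef, scales)), 1)[0]
print(f"DFA scaling exponent alpha = {alpha:.2f} (about 0.5 for uncorrelated data)")
```

A departure of the exponent from 0.5 indicates temporal correlations, which is the kind of signal such homeokinetic analyses look for.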
Abstract:
Numerous time series studies have provided strong evidence of an association between increased levels of ambient air pollution and increased levels of hospital admissions, typically at 0, 1, or 2 days after an air pollution episode. An important research aim is to extend existing statistical models so that a more detailed understanding of the time course of hospitalization after exposure to air pollution can be obtained. Information about this time course, combined with prior knowledge about biological mechanisms, could provide the basis for hypotheses concerning the mechanism by which air pollution causes disease. Previous studies have identified two important methodological questions: (1) How can we estimate the shape of the distributed lag between increased air pollution exposure and increased mortality or morbidity? and (2) How should we estimate the cumulative population health risk from short-term exposure to air pollution? Distributed lag models are appropriate tools for estimating air pollution health effects that may be spread over several days. However, estimation for distributed lag models in air pollution and health applications is hampered by the substantial noise in the data and the inherently weak signal that is the target of investigation. We introduce a hierarchical Bayesian distributed lag model that incorporates prior information about the time course of pollution effects and combines information across multiple locations. The model has a connection to penalized spline smoothing using a special type of penalty matrix. We apply the model to estimating the distributed lag between exposure to particulate matter air pollution and hospitalization for cardiovascular and respiratory disease using data from a large United States air pollution and hospitalization database of Medicare enrollees in 94 counties covering the years 1999-2002.
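As a hedged illustration of the distributed lag idea (not the paper's hierarchical Bayesian model), the sketch below fits a ridge-penalized distributed lag regression on simulated data; the penalty plays the role that prior information plays in the Bayesian formulation. All values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily pollution exposure and a "true" lag curve decaying over 7 days
T, L = 1000, 7
x = rng.gamma(shape=2.0, scale=5.0, size=T)          # daily pollution levels
true_lag = 0.04 * np.exp(-np.arange(L) / 2.0)        # effect spread over lags 0..6

# Lagged design matrix: column j holds the exposure j days earlier
X = np.column_stack([np.roll(x, j) for j in range(L)])[L:]
y = X @ true_lag + rng.normal(0.0, 1.0, size=T - L)  # noisy health outcome

# Ridge-penalized least squares: the penalty shrinks the lag curve, standing in
# for the smoothing prior in the paper's hierarchical Bayesian model
lam = 10.0
beta = np.linalg.solve(X.T @ X + lam * np.eye(L), X.T @ y)

print("estimated lag curve:", np.round(beta, 3))
print("cumulative effect over the week:", round(beta.sum(), 3))
```

The cumulative sum of the lag coefficients answers question (2) above: the total short-term risk accumulated over the lag window.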
Abstract:
Drug-induced respiratory depression is a common side effect of the agents used in anesthesia practice to provide analgesia and sedation. Depression of the ventilatory drive in the spontaneously breathing patient can lead to severe cardiorespiratory events, and it is considered a primary cause of morbidity. Reliable predictions of respiratory inhibition in the clinical setting would therefore provide a valuable means to improve the safety of drug delivery. Although multiple studies investigated the regulation of breathing in man both in the presence and absence of ventilatory depressant drugs, a unified description of respiratory pharmacodynamics is not available. This study proposes a mathematical model of human metabolism and cardiorespiratory regulation integrating several isolated physiological and pharmacological aspects of acute drug-induced ventilatory depression into a single theoretical framework. The description of respiratory regulation has a parsimonious yet comprehensive structure with substantial predictive capability. Simulations of the synergistic interaction of the hypercarbic and hypoxic respiratory drive and the global effect of drugs on the control of breathing are in good agreement with published experimental data. Besides providing clinically relevant predictions of respiratory depression, the model can also serve as a test bed to investigate issues of drug tolerability and dose finding/control under non-steady-state conditions.
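The paper's full metabolic and cardiorespiratory model is far richer, but a toy one-compartment CO2 balance with a drug-scaled ventilatory controller illustrates the basic feedback loop being modeled. All constants below are assumed round numbers, not the study's parameters.

```python
# Toy CO2 balance: ventilation is driven by CO2 above a threshold, and a drug
# fraction depresses the controller gain; a stand-in, not the paper's model.
def simulate(drug_effect=0.0, dt=0.05, t_end=600.0):
    """drug_effect in [0, 1): fractional depression of controller gain (assumed)."""
    v_store = 15.0   # effective CO2 storage volume, L (assumed)
    vco2 = 0.2       # metabolic CO2 production, L/min (assumed)
    gain = 2.5       # controller gain, L/min per mmHg (assumed)
    thresh = 38.0    # apneic threshold, mmHg (assumed)
    k = 0.001        # ventilation-to-CO2-clearance conversion (assumed)
    paco2 = 40.0
    for _ in range(int(t_end / dt)):
        ve = max(0.0, gain * (1.0 - drug_effect) * (paco2 - thresh))  # CO2 drive
        paco2 += dt * (vco2 - k * ve * paco2) / v_store               # mass balance
    return paco2, ve

for d in (0.0, 0.3, 0.6):
    p, v = simulate(drug_effect=d)
    print(f"drug effect {d:.1f}: PaCO2 ~ {p:.1f} mmHg, ventilation ~ {v:.1f} L/min")
```

Even this caricature reproduces the qualitative clinical picture: as the drug effect grows, ventilation falls and arterial CO2 rises to a new, higher equilibrium.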
Abstract:
In this dissertation, the problem of creating effective control algorithms for large-scale Adaptive Optics (AO) systems for the new generation of giant optical telescopes is addressed. The effectiveness of AO control algorithms is evaluated in several respects, such as computational complexity, compensation error rejection and robustness, i.e. reasonable insensitivity to system imperfections. The results of this research are summarized as follows:
1. Robustness study of the Sparse Minimum Variance Pseudo Open Loop Controller (POLC) for multi-conjugate adaptive optics (MCAO). An AO system model that accounts for various system errors has been developed and applied to check the stability and performance of the POLC algorithm, which is one of the most promising approaches for future AO systems control. It has been shown through numerous simulations that, despite the initial assumption that exact system knowledge is necessary for the POLC algorithm to work, it is highly robust against various system errors.
2. Predictive Kalman Filter (KF) and Minimum Variance (MV) control algorithms for MCAO. The limiting performance of the non-dynamic Minimum Variance and dynamic KF-based phase estimation algorithms for MCAO has been evaluated through Monte Carlo simulations. The validity of a simple near-Markov autoregressive phase dynamics model has been tested, and its adequate ability to predict the turbulence phase has been demonstrated for both single- and multi-conjugate AO. It has also been shown that the more complicated KF approach yields no performance improvement over the much simpler MV algorithm in the case of MCAO.
3. Sparse predictive Minimum Variance control algorithm for MCAO. A temporal prediction stage has been added to the non-dynamic MV control algorithm in such a way that no additional computational burden is introduced. It has been confirmed through simulations that the use of phase prediction makes it possible to significantly reduce the system sampling rate, and thus the overall computational complexity, while both keeping the system stable and effectively compensating for the measurement and control latencies.
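As a toy illustration of the near-Markov autoregressive prediction idea in items 2 and 3, the scalar sketch below compares holding a static minimum-variance estimate across several frames (emulating a reduced sampling rate) against propagating it forward with the AR(1) model. It is a one-dimensional stand-in, not the sparse MCAO reconstructor, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Near-Markov AR(1) model of the turbulence phase: x[t+1] = a*x[t] + w[t]
a, q, r, T, hold = 0.8, 1e-3, 1e-4, 50000, 5
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = a * x[t] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), size=T)    # noisy wavefront-sensor reads

sig2 = q / (1 - a**2)                          # stationary phase variance
k_mv = sig2 / (sig2 + r)                       # static minimum-variance gain

err_hold, err_pred = [], []
for t0 in range(0, T - hold, hold):            # one sensor read per `hold` frames
    est = k_mv * y[t0]
    for j in range(1, hold + 1):
        err_hold.append((x[t0 + j] - est) ** 2)         # quasi-static hold
        err_pred.append((x[t0 + j] - a**j * est) ** 2)  # AR(1) forward prediction

print(f"held-correction MSE: {np.mean(err_hold):.2e}")
print(f"predicted MSE:       {np.mean(err_pred):.2e}")
```

The prediction step is a single scalar multiply per frame, mirroring the claim that the temporal prediction stage adds essentially no computational burden.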
Abstract:
Research on rehabilitation has shown that appropriate and repetitive mechanical movements can help spinal cord injured individuals restore functional standing and walking. The objective of this work was to achieve appropriate and repetitive joint movements and an approximately normal gait through the PGO by replicating normal walking, and to minimize the energy consumption for both patients and the device. A model-based experimental investigative approach is presented in this dissertation. First, a human model was created in I-DEAS and human walking was simulated in Adams. The main feature of this model was the foot-ground contact model, which had distributed contact points along the foot and varied viscoelasticity. The model was validated by comparing simulated results of normal walking against measured ones from the literature. It was used to simulate walking with the current PGO to investigate the real causes of its poor function, even though the device produced joint movements close to those of normal walking. The direct cause was that only one leg moves at a time, which resulted in short step length and no clearance after toe-off; this cannot be solved by simply adding power at both hip joints. To find a better answer, a PGO mechanism model was used to investigate different walking mechanisms by locking or releasing selected joints. A trade-off between energy consumption, control complexity and standing position was found. Finally, a foot-release PGO virtual model was created and simulated, and the foot-release mechanism alone was developed into a prototype. Both the release mechanism and the design of the foot release were validated experimentally by adding the foot release to the current PGO. This demonstrated an advancement in improving the functional aspects of the current PGO even without a complete physical foot-release PGO model for comparison.
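A minimal sketch of a distributed viscoelastic foot-ground contact law of the kind described: each contact point along the foot produces a spring force plus a penetration-scaled damping force. The functional form and constants are assumptions for illustration, not the dissertation's identified parameters.

```python
import numpy as np

# Distributed viscoelastic contact: each point pushes back in proportion to its
# penetration depth and penetration rate; damping is scaled by penetration so
# the force is continuous at touchdown (a common Hunt-Crossley-style choice).
def contact_force(z_points, vz_points, k=3.0e4, c=300.0):
    """z_points: heights of contact points (m, negative = below ground);
    vz_points: their vertical velocities (m/s). Returns per-point normal forces (N)."""
    pen = np.maximum(0.0, -z_points)                     # penetration depth
    return k * pen + c * pen * np.maximum(0.0, -vz_points)  # no tensile force

# Heel strike: heel 2 mm into the ground and moving down, toe still airborne
z = np.array([-0.002, 0.0, 0.01, 0.03])                  # heel-to-toe point heights
vz = np.array([-0.2, -0.1, 0.0, 0.0])
print(contact_force(z, vz))                              # only the heel carries load
```

Distributing such points along the foot lets the simulated center of pressure travel from heel to toe during stance, which is what makes the gait model realistic.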
Abstract:
There is a need by engine manufacturers for computationally efficient and accurate predictive combustion modeling tools that can be integrated into engine simulation software for the assessment of combustion system hardware designs and the early development of engine calibrations. This thesis discusses the process of developing and validating, from experimental data, a combustion modeling tool for a gasoline direct-injection spark-ignition engine with variable valve timing, lift and duration valvetrain hardware. Data were correlated and regressed using accepted methods for calculating the turbulent flow and flame propagation characteristics of an internal combustion engine. A non-linear regression modeling method was utilized to develop a combustion model that determines the fuel mass burn rate at multiple points during the combustion process. The computational fluid dynamics software Converge© was used to simulate and correlate the 3-D combustion system, port and piston geometry to the turbulent flow development within the cylinder, in order to properly predict the experimentally measured turbulent flow parameters through the intake, compression and expansion processes. The engine simulation software GT-Power© was then used to determine the 1-D flow characteristics of the engine hardware being tested, and the regressed combustion modeling tool was correlated against experimental data to determine its accuracy. The results show that the combustion modeling tool accurately captures the trends in combustion sensitivity to turbulent flow, thermodynamic and internal residual effects with changes in intake and exhaust valve timing, lift and duration.
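The thesis regresses burn rates from engine data; as a generic illustration of the kind of empirical mass-fraction-burned curve such models produce, the sketch below evaluates a standard Wiebe function with assumed coefficients (not values fitted in the thesis).

```python
import numpy as np

# Wiebe function: a standard empirical form for mass fraction burned (MFB)
# versus crank angle; coefficients here are illustrative round numbers.
def wiebe_mfb(theta, theta_soc=-10.0, dur=40.0, a=5.0, m=2.0):
    """theta: crank angle (deg). theta_soc: start of combustion (deg);
    dur: burn duration (deg); a, m: efficiency and shape parameters."""
    x = np.clip((theta - theta_soc) / dur, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1))

for th in np.linspace(-20.0, 60.0, 9):
    print(f"{th:6.1f} deg  MFB = {wiebe_mfb(th):.3f}")
```

Differentiating such a curve with respect to crank angle gives the burn rate that a predictive combustion model must reproduce across valve timing, lift and duration settings.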
Abstract:
BACKGROUND: Reperfusion injury is insufficiently addressed in current clinical management of acute limb ischemia. Controlled reperfusion carries an enormous clinical potential and was tested in a new reality-driven rodent model. METHODS AND RESULTS: Acute hind-limb ischemia was induced in Wistar rats and maintained for 4 hours. Unlike previous tourniquet models, femoral vessels were surgically prepared to facilitate controlled reperfusion and to prevent venous stasis. Rats were randomized into an experimental group (n=7), in which limbs were selectively perfused with a cooled isotonic heparin solution at a limited flow rate before blood flow was restored, and a conventional group (n=7; uncontrolled blood reperfusion). Rats were killed 4 hours after blood reperfusion. Nonischemic limbs served as controls. Ischemia/reperfusion injury was significant in both groups; total wet-to-dry ratio was 159+/-44% of normal (P=0.016), whereas muscle viability and contraction force were reduced to 65+/-13% (P=0.016) and 45+/-34% (P=0.045), respectively. Controlled reperfusion, however, attenuated reperfusion injury significantly. Tissue edema was less pronounced (132+/-16% versus 185+/-42%; P=0.011), and muscle viability (74+/-11% versus 57+/-9%; P=0.004) and contraction force (68+/-40% versus 26+/-7%; P=0.045) were better preserved than after uncontrolled reperfusion. Moreover, subsequent blood circulation as assessed by laser Doppler recovered completely after controlled reperfusion but remained durably impaired after uncontrolled reperfusion (P=0.027). CONCLUSIONS: Reperfusion injury was significantly alleviated by basic modifications of the initial reperfusion period in a new in vivo model of acute limb ischemia. With this model, systematic optimization of the corresponding protocols may eventually translate into improved clinical management of acute limb ischemia.
Abstract:
This thesis presents strategies for the use of plug-in electric vehicles on smart grids and microgrids. MATLAB is used as the design tool for all models and simulations. First, a scenario is explored in which the dispatchable loads of electric vehicles are used to stabilize a microgrid with a high penetration of renewable power generation. Grid components for a microgrid with 50% photovoltaic solar production are sized through an optimization routine to maintain storage system, load and vehicle states over a 24-hour period. The finding of this portion is that dispatchable loads can be used to guard against unpredictable losses in renewable generation output. Second, distributed control strategies for charging electric vehicles using an agent-based approach on a smart grid are studied. The vehicles are treated as additional loads on top of a primary forecasted load and exchange information with the grid to make their charging decisions. Three lightweight control strategies and their effects on the power grid are presented. The findings are that charging behavior can be shaped, and peak loads on the grid reduced, through the use of distributed control strategies.
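A minimal sketch of one plausible lightweight agent rule (an assumption for illustration, not one of the thesis's three strategies): each vehicle charges only while the forecast base load plus the current fleet load stays under a broadcast cap, with random back-off so the fleet does not synchronize. Written in Python rather than MATLAB to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)

# 24-hour forecasted base load (MW) with an evening peak; values are illustrative
hours = np.arange(24)
base = 50.0 + 15.0 * np.exp(-0.5 * ((hours - 19) / 2.5) ** 2)

n_ev, p_charge, need = 200, 0.007, 6     # 7 kW chargers, 6 charge-hours per vehicle
remaining = np.full(n_ev, need)
ev_load = np.zeros(24)
threshold = 58.0                         # broadcast load cap (assumed grid signal)

for h in hours:
    for v in range(n_ev):
        # Agent rule: charge only while total load stays under the broadcast cap
        if (remaining[v] > 0
                and base[h] + ev_load[h] + p_charge <= threshold
                and rng.random() < 0.9):   # random back-off against synchronization
            ev_load[h] += p_charge
            remaining[v] -= 1

print(f"peak base load:                 {base.max():.1f} MW")
print(f"peak with coordinated charging: {(base + ev_load).max():.1f} MW")
print(f"unserved charging (veh-hours):  {int(remaining.sum())}")
```

The fleet's demand is pushed into off-peak hours, so the grid peak is unchanged even though every vehicle completes its charge.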
Abstract:
The Chair of Transportation and Warehousing at the University of Dortmund, together with its industrial partner, has developed and implemented a decentralized control system based on embedded technology and Internet standards. This innovative, highly flexible system uses autonomous software modules to control the flow of unit loads in real time. The system is integrated into the Chair's test facility, which consists of a wide range of conveying and sorting equipment. It is built for proof-of-concept purposes and will be used for further research in the fields of decentralized automation and embedded controls. This presentation describes the implementation of this decentralized control system.
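A toy sketch of the decentralized idea: each conveyor module holds only a local routing table and forwards unit loads one hop at a time, with no central controller. The class and names are invented for illustration and are not the Dortmund system's actual software interface.

```python
from dataclasses import dataclass, field

# Each autonomous module decides locally where to send a unit load next,
# consulting only its own routing table (illustrative, hypothetical API).
@dataclass
class Node:
    name: str
    routes: dict = field(default_factory=dict)   # destination -> next neighbor

    def route(self, load_dest: str) -> str:
        return self.routes.get(load_dest, "overflow")

merge = Node("merge", {"ship_a": "sorter", "ship_b": "sorter"})
sorter = Node("sorter", {"ship_a": "lane_1", "ship_b": "lane_2"})

hop = merge.route("ship_b")                      # one local decision per hop
print(hop, "->", sorter.route("ship_b"))
```

Because every decision is local, modules can be added or removed without reconfiguring a central controller, which is the flexibility such systems aim for.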
Evaluation of control and surveillance strategies for classical swine fever using a simulation model
Abstract:
Classical swine fever (CSF) outbreaks can cause enormous losses in naïve pig populations. How best to minimize the economic damage and the number of culled animals caused by CSF is therefore an important research area. The baseline CSF control strategy in the European Union and Switzerland consists of culling all animals in infected herds, movement restrictions for animals, material and people within a given distance of the infected herd, and epidemiological tracing of transmission contacts. Additional disease control measures such as pre-emptive culling or vaccination have been recommended based on the results from several simulation models; however, these models were parameterized for areas with high animal densities. The objective of this study was to explore whether pre-emptive culling and emergency vaccination should also be recommended in low- to moderate-density areas such as Switzerland. Additionally, we studied the influence of initial outbreak conditions on outbreak severity to improve the efficiency of disease prevention and surveillance. A spatial, stochastic, individual-animal-based simulation model using all registered Swiss pig premises in 2009 (n=9770) was implemented to quantify these relationships. The model simulates within-herd and between-herd transmission (direct and indirect contacts and local area spread). By varying the four parameters (a) control measures, (b) index herd type (breeding, fattening, weaning or mixed herd), (c) detection delay for secondary cases during an outbreak and (d) contact tracing probability, 112 distinct scenarios were simulated. To assess the impact of the scenarios on outbreak severity, daily transmission rates were compared between scenarios. Compared with the baseline strategy (stamping out and movement restrictions), vaccination and pre-emptive culling reduced neither outbreak size nor duration. Outbreaks starting in a herd with weaning piglets or fattening pigs caused higher losses in terms of the number of culled premises and lasted longer than those starting in the two other index herd types. Similarly, larger transmission rates were estimated for outbreaks starting in these index herd types. A longer detection delay resulted in more culled premises and longer outbreaks, and better transmission tracing increased the number of short outbreaks. Based on the simulation results, the baseline control strategies seem sufficient to control CSF in areas with low to moderate animal density. Early detection of outbreaks is crucial, and risk-based surveillance should focus on weaning piglet and fattening pig premises.
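For readers unfamiliar with this model class, here is a deliberately minimal stochastic, spatial, premises-level sketch showing the ingredients the abstract mentions: local-area spread, a detection delay, and stamping out of detected herds. It is not the Swiss model; every parameter is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Herds scattered on a unit square; daily local-area spread from infected herds,
# with culling ("stamping out") once a herd's infection is detected.
n, beta, radius, detect_delay, t_max = 500, 0.02, 0.08, 7, 120
xy = rng.random((n, 2))
state = np.zeros(n, dtype=int)          # 0 = susceptible, 1 = infected, 2 = culled
inf_day = np.full(n, -1)
state[0], inf_day[0] = 1, 0             # the index herd

for day in range(1, t_max):
    for i in np.flatnonzero(state == 1):
        d = np.hypot(*(xy - xy[i]).T)
        at_risk = (state == 0) & (d < radius)
        new = at_risk & (rng.random(n) < beta)     # stochastic local-area spread
        state[new], inf_day[new] = 1, day
        if day - inf_day[i] >= detect_delay:       # detection triggers culling
            state[i] = 2

print("culled premises:", int((state == 2).sum()),
      "| still infected at day", t_max, ":", int((state == 1).sum()))
```

Re-running with a longer `detect_delay` inflates the number of culled premises, the same qualitative sensitivity the study reports for detection delay.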
Abstract:
A historical prospective study was designed to assess the mean weight status of subjects who participated in a behavioral weight reduction program in 1983 and to determine whether there was an association between the dependent variable, weight change, and any of 31 independent variables after a 2-year follow-up period. Data were obtained by abstracting the subjects' records and from a follow-up questionnaire administered 2 years following program participation. Of the 1460 subjects who participated in the program, 509 (386 females and 123 males) completed and returned the questionnaire. Results showed that mean weight was significantly different (p < 0.001) between the baseline measurement and the measurement after the 2-year follow-up period. The mean weight loss of the group was 5.8 pounds: 10.7 pounds for males and 4.2 pounds for females. A total of 63.9% of the group, 69.9% of males and 61.9% of females, were still below their initial weight after the 2-year follow-up period. Sixteen of the 31 variables assessed using bivariate analyses were found to be significantly (p ≤ 0.05) associated with weight change after the 2-year follow-up period. These variables were then entered into a multivariate linear regression model. A total of 37.9% of the variance of the dependent variable, weight change, was accounted for by all 16 variables. Eight of these variables were found to be significantly (p ≤ 0.05) predictive of weight change in the stepwise multivariate process, accounting for 37.1% of the variance. These variables included two baseline variables (percent over ideal body weight at enrollment, and occupation) and six follow-up variables (feeling in control of eating habits, percent of body weight lost during treatment, frequency of weight measurement, physical activity, eating in response to emotions, and number of pounds of weight gain needed to resume a diet). It was concluded that greater emphasis should be placed on the six follow-up variables by clinicians involved in the treatment of obesity, and by the subjects themselves, to enhance their chances of success at long-term weight loss.