62 results for "Discrete-continuous optimal control problems"
Abstract:
This paper considers the optimal linear estimates recursion problem for discrete-time linear systems in its most general formulation. The system is allowed to be in descriptor form, rectangular, time-variant, and with correlated dynamical and measurement noises. We propose a new expression for the recursive filter equations, which presents an interestingly simple and symmetric structure. Convergence of the associated Riccati recursion and stability properties of the steady-state filter are provided. (C) 2010 Elsevier Ltd. All rights reserved.
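As an illustration of the Riccati recursion referred to above, here is a minimal sketch for the scalar, non-descriptor, uncorrelated-noise special case; the paper's general descriptor/rectangular formulation is not reproduced, and the parameter values are purely illustrative:

```python
def riccati_step(P, a, c, q, r):
    """One step of the filtering Riccati recursion for the scalar system
    x[k+1] = a*x[k] + w[k],  y[k] = c*x[k] + v[k],
    with Var(w) = q and Var(v) = r (noises uncorrelated in this sketch)."""
    K = a * P * c / (c * c * P + r)        # Kalman gain
    return a * P * a - K * c * P * a + q   # next predicted error variance

def iterate_riccati(P0, a, c, q, r, n=500):
    """Iterate the recursion; for a stable scalar system it converges
    to the steady-state error variance of the steady-state filter."""
    P = P0
    for _ in range(n):
        P = riccati_step(P, a, c, q, r)
    return P

P_inf = iterate_riccati(1.0, a=0.9, c=1.0, q=0.1, r=0.5)
```

At the fixed point, one more step leaves the variance unchanged, which is the convergence property the abstract refers to.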
Abstract:
This work deals with the problem of minimizing the waste of space that occurs in the rotational placement of a set of irregular two-dimensional items inside a two-dimensional container. The problem is approached with a heuristic based on Simulated Annealing (SA) with an adaptive neighborhood. The objective function is evaluated in a constructive approach, where the items are placed sequentially. The placement is governed by three types of parameters: the sequence of placement, the rotation angle, and the translation. The rotation and the translation of each polygon are cyclic continuous parameters, while the sequence of placement defines a combinatorial problem; thus, both cyclic continuous and discrete parameters must be controlled. The approaches described in the literature deal with only one type of parameter (sequence of placement or translation). In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate.
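A minimal sketch of an SA scheme over a mixed discrete/continuous solution of the kind described above, assuming a toy cost function; the adaptive rule used here (widen the continuous step on acceptance, narrow it on rejection) is one common choice and is not claimed to be the paper's exact sensitivity mechanism:

```python
import math
import random

def anneal(cost, seq, angles, steps=2000, t0=1.0, seed=0):
    """Toy SA over a mixed solution: a placement sequence (discrete,
    perturbed by swaps) and rotation angles (cyclic continuous,
    perturbed with an adaptive step width). Illustrative only."""
    rng = random.Random(seed)
    cur = cost(seq, angles)
    best_seq, best_ang, best = list(seq), list(angles), cur
    step = 0.5  # adaptive width for the angle perturbations
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # linear cooling
        s, a = list(seq), list(angles)
        if rng.random() < 0.5:                    # discrete move: swap
            i, j = rng.randrange(len(s)), rng.randrange(len(s))
            s[i], s[j] = s[j], s[i]
        else:                                     # cyclic continuous move
            i = rng.randrange(len(a))
            a[i] = (a[i] + rng.uniform(-step, step)) % (2 * math.pi)
        c = cost(s, a)
        if c < cur or rng.random() < math.exp((cur - c) / t):
            seq, angles, cur = s, a, c
            step = min(step * 1.05, math.pi)      # accepted: widen
        else:
            step = max(step * 0.95, 1e-3)         # rejected: narrow
        if cur < best:
            best_seq, best_ang, best = list(seq), list(angles), cur
    return best_seq, best_ang, best

def toy_cost(seq, angles):
    """Hypothetical cost: inversions in the sequence plus squared
    cyclic distance of each angle to zero."""
    inv = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
              if seq[i] > seq[j])
    return inv + sum(min(a, 2 * math.pi - a) ** 2 for a in angles)
```

The best solution found never exceeds the initial cost, since the best-so-far is tracked separately from the accepted state.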
Abstract:
The computational design of a composite whose constituents' properties change gradually within a unit cell can be successfully achieved by means of a material design method that combines topology optimization with homogenization. This is an iterative numerical method, which leads to changes in the composite material unit cell until the desired properties (or performance) are obtained. Such a method has been applied to several types of materials in the last few years. In this work, the objective is to extend the material design method to obtain functionally graded material architectures, i.e. materials that are graded at the local (e.g. microstructural) level. Consistent with this goal, a continuum distribution of the design variable inside the finite element domain is considered to represent a fully continuous material variation during the design process. Thus, the topology optimization naturally leads to a smoothly graded material system. To illustrate the theoretical and numerical approaches, numerical examples are provided. The homogenization method is verified by considering one-dimensional material gradation profiles for which analytical solutions for the effective elastic properties are available. The verification of the homogenization method is extended to two dimensions considering a trigonometric material gradation, and a material variation with discontinuous derivatives. These are also used as benchmark examples to verify the optimization method for functionally graded material cell design. Finally, the influence of material gradation on extreme materials is investigated, including materials with near-zero shear modulus and materials with negative Poisson's ratio.
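The one-dimensional verification mentioned above has a simple closed form: for a bar whose graded layers act in series, the effective modulus is the harmonic average of the gradation profile. A sketch under that assumption (not the paper's two-dimensional homogenization):

```python
def effective_modulus_1d(E, n=10000):
    """Effective Young's modulus of a 1-D bar whose modulus varies
    along its length as E(x), x in [0, 1], with layers in series:
    E_eff = 1 / integral_0^1 dx / E(x), here by the midpoint rule."""
    h = 1.0 / n
    compliance = sum(h / E((i + 0.5) * h) for i in range(n))
    return 1.0 / compliance
```

For a two-phase laminate with equal volume fractions and moduli E1 and E2, this reproduces the analytical value 2*E1*E2/(E1+E2).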
Abstract:
This paper studies a simplified methodology to integrate the real-time optimization (RTO) of a continuous system into the model predictive controller in the one-layer strategy. The gradient of the economic objective function is included in the cost function of the controller. Optimal conditions of the process at steady state are searched through the use of a rigorous non-linear process model, while the trajectory to be followed is predicted with the use of a linear dynamic model, obtained through a plant step test. The main advantage of the proposed strategy is that the resulting control/optimization problem can still be solved with a quadratic programming routine at each sampling step. Simulation results show that the proposed approach may be comparable to the strategy that solves the full economic optimization problem inside the MPC controller, where the resulting control problem becomes a non-linear programming problem with a much higher computational load. (C) 2010 Elsevier Ltd. All rights reserved.
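A hedged single-input sketch of the one-layer idea: the economic gradient enters the quadratic controller cost only as a linear term, so the problem remains a QP (here so small it has a closed-form stationary point). All symbols are illustrative, not the paper's notation:

```python
def one_layer_mpc_input(g, ysp, u_prev, q, r, w, grad_econ):
    """Single-input sketch: quadratic tracking cost augmented with a
    linear economic-gradient term,
        J(u) = q*(g*u - ysp)**2 + r*(u - u_prev)**2 + w*grad_econ*u,
    where g is the linear model gain and grad_econ is the gradient of
    the economic objective at the current operating point.
    Setting dJ/du = 0 gives the minimizer (still one QP per sample)."""
    return (2*q*g*ysp + 2*r*u_prev - w*grad_econ) / (2*q*g*g + 2*r)
```

Because the economic information appears only through its gradient, the cost stays quadratic in the input, which is the computational advantage the abstract highlights.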
Abstract:
Several MPC applications implement a control strategy in which some of the system outputs are controlled within specified ranges or zones, rather than at fixed set points [J.M. Maciejowski, Predictive Control with Constraints, Prentice Hall, New Jersey, 2002]. This means that these outputs will be treated as controlled variables only when the predicted future values lie outside the boundary of their corresponding zones. The zone control is usually implemented by selecting an appropriate weighting matrix for the output error in the control cost function. When an output prediction is inside its zone, the corresponding weight is zeroed, so that the controller ignores this output. When the output prediction lies outside the zone, the error weight is made equal to a specified value and the distance between the output prediction and the boundary of the zone is minimized. The main problem of this approach, as far as closed-loop stability is concerned, is that each time an output is switched from the status of non-controlled to the status of controlled, or vice versa, a different linear controller is activated. Thus, throughout the continuous operation of the process, the control system keeps switching from one controller to another. Even if a stabilizing control law is developed for each of the control configurations, switching among stable controllers does not necessarily produce a stable closed-loop system. Here, a stable MPC is developed for the zone control of open-loop stable systems. Focusing on the practical application of the proposed controller, it is assumed that in the control structure of the process system there is an upper optimization layer that defines optimal targets for the system inputs. The performance of the proposed strategy is illustrated by simulation of a subsystem of an industrial FCC system. (C) 2008 Elsevier Ltd. All rights reserved.
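The weight-zeroing mechanism described above can be sketched as follows; the function names are illustrative:

```python
def zone_error(y_pred, lo, hi):
    """Distance from a predicted output to its control zone [lo, hi]:
    zero inside the zone, otherwise signed distance to the nearest
    boundary (the quantity the controller minimizes)."""
    if y_pred < lo:
        return y_pred - lo   # negative: below the zone
    if y_pred > hi:
        return y_pred - hi   # positive: above the zone
    return 0.0

def zone_weight(y_pred, lo, hi, w):
    """Output-error weight used in the MPC cost: zeroed when the
    prediction is inside its zone, w otherwise. Toggling this weight
    is exactly the controller switching the abstract warns about."""
    return 0.0 if lo <= y_pred <= hi else w
```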
Abstract:
Background: Despite antihypertensive therapy, it is difficult to maintain optimal systemic blood pressure (BP) values in hypertensive patients (HPT). Exercise may reduce BP in untreated HPT. However, evidence regarding its effect under long-term antihypertensive therapy is lacking. Our purpose was to evaluate the acute effects of 40-minute continuous (CE) or interval exercise (IE) on cycle ergometers on BP in long-term treated HPT. Methods: Fifty-two treated HPT were randomized to CE (n=26) or IE (n=26) protocols. CE was performed at 60% of reserve heart rate (HR). IE alternated consecutively 2 min at 50% reserve HR with 1 min at 80%. Two 24-h ambulatory BP monitoring sessions were performed, after exercise (postexercise) or after a nonexercise control period (control), in random order. Results: CE reduced mean 24-h systolic (S) BP (2.6 +/- 6.6 mm Hg, p=0.05) and diastolic (D) BP (2.3 +/- 4.6, p=0.01), and nighttime SBP (4.8 +/- 6.4, p<0.001) and DBP (4.6 +/- 5.2 mm Hg, p=0.001). IE reduced 24-h SBP (2.8 +/- 6.5, p=0.03) and nighttime SBP (3.4 +/- 7.2, p=0.02), and tended to reduce nighttime DBP (p=0.06). Greater reductions occurred at higher BP levels. The percentage of normal ambulatory BP values increased after CE (24-h: 42% to 54%; daytime: 42% to 61%; nighttime: 61% to 69%) and IE (24-h: 31% to 46%; daytime: 54% to 61%; nighttime: 46% to 69%). Conclusion: CE and IE reduced ambulatory BP in treated HPT, increasing the number of patients reaching normal ambulatory BP values. These effects suggest that continuous and interval aerobic exercise may have a role in BP management in treated HPT. (c) 2008 Elsevier Ireland Ltd. All rights reserved.
Abstract:
Model trees are a particular case of decision trees employed to solve regression problems. They have the advantage of presenting an interpretable output, helping the end-user gain more confidence in the prediction and providing the basis for the end-user to form new insights about the data, confirming or rejecting hypotheses previously formed. Moreover, model trees present an acceptable level of predictive performance in comparison to most techniques used for solving regression problems. Since generating the optimal model tree is an NP-Complete problem, traditional model tree induction algorithms make use of a greedy top-down divide-and-conquer strategy, which may not converge to the globally optimal solution. In this paper, we propose a novel algorithm based on the evolutionary algorithms paradigm as an alternative heuristic for generating model trees, in order to improve convergence to globally near-optimal solutions. We call our new approach evolutionary model tree induction (E-Motion). We test its predictive performance using public UCI data sets, and we compare the results to traditional greedy regression/model tree induction algorithms, as well as to other evolutionary approaches. Results show that our method presents a good trade-off between predictive performance and model comprehensibility, which may be crucial in many machine learning applications. (C) 2010 Elsevier Inc. All rights reserved.
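For context, a minimal sketch of the greedy strategy that E-Motion is contrasted with: a single split threshold is chosen so that the two leaf-wise linear models minimize the summed squared error (the evolutionary method itself is not reproduced here, and the 1-D least-squares helper is illustrative):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (1-D); returns (a, b, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = 0.0 if sxx == 0 else sum((x - mx) * (y - my)
                                 for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def best_split(xs, ys):
    """Greedy search for one split: the threshold whose two leaf-wise
    linear models minimize the summed squared error. Returns
    (threshold, sse); threshold is None if no split beats one model."""
    best = (None, fit_line(xs, ys)[2])       # no-split baseline
    for t in sorted(set(xs))[1:]:
        left = [(x, y) for x, y in zip(xs, ys) if x < t]
        right = [(x, y) for x, y in zip(xs, ys) if x >= t]
        if len(left) < 2 or len(right) < 2:
            continue
        sse = fit_line(*zip(*left))[2] + fit_line(*zip(*right))[2]
        if sse < best[1]:
            best = (t, sse)
    return best
```

Applied recursively, this is the top-down divide-and-conquer induction whose local, irrevocable choices motivate the global evolutionary search.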
Abstract:
Attention deficit, impulsivity and hyperactivity are the cardinal features of attention deficit hyperactivity disorder (ADHD), but executive function (EF) disorders, such as problems with inhibitory control, working memory and reaction time, among other EFs, may underlie many of the disturbances associated with the disorder. OBJECTIVE: To examine reaction time in a computerized test in children with ADHD and normal controls. METHOD: Twenty-three boys (aged 9 to 12) with an ADHD diagnosis according to Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, 2000 (DSM-IV) clinical criteria, without comorbidities, with Intelligence Quotient (IQ) >89, and never treated with stimulants, together with fifteen age-matched normal controls, were investigated during performance on a voluntary attention psychophysical test. RESULTS: Children with ADHD showed higher reaction times than normal controls. CONCLUSION: A slower reaction time occurred in our patients with ADHD. These findings may be related to problems with the attentional system, which could not maintain an adequate capacity of perceptual input processes and/or motor output processes to respond consistently during continuous or repetitive activity.
Abstract:
The main goal of this paper is to establish some equivalence results on stability, recurrence, and ergodicity between a piecewise deterministic Markov process (PDMP) {X(t)} and an embedded discrete-time Markov chain {Theta(n)} generated by a Markov kernel G that can be explicitly characterized in terms of the three local characteristics of the PDMP, leading to tractable criterion results. First we establish some important results characterizing {Theta(n)} as a sampling of the PDMP {X(t)} and deriving a connection between the probability of the first return time to a set for the discrete-time Markov chains generated by G and the resolvent kernel R of the PDMP. From these results we obtain equivalence results regarding irreducibility, existence of sigma-finite invariant measures, and (positive) recurrence and (positive) Harris recurrence between {X(t)} and {Theta(n)}, generalizing the results of [F. Dufour and O. L. V. Costa, SIAM J. Control Optim., 37 (1999), pp. 1483-1502] in several directions. Sufficient conditions in terms of a modified Foster-Lyapunov criterion are also presented to ensure positive Harris recurrence and ergodicity of the PDMP. We illustrate the use of these conditions by showing the ergodicity of a capacity expansion model.
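For reference, the generic form of a modified Foster-Lyapunov drift criterion for a Markov kernel G is sketched below; this is the textbook statement, not the paper's exact condition:

```latex
% Generic (modified) Foster-Lyapunov drift condition for a kernel G:
% V \ge 0 measurable, f \ge 1, C a petite set, b < \infty.
GV(x) := \int G(x, \mathrm{d}y)\, V(y)
  \le V(x) - f(x) + b\,\mathbf{1}_C(x), \qquad x \in \mathsf{X}.
```

Under suitable irreducibility assumptions, a condition of this type for the embedded chain yields positive Harris recurrence, which equivalence results of the kind described above then transfer to the continuous-time process.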
Abstract:
Over the last couple of decades, many methods for synchronizing chaotic systems have been proposed with communications applications in view. Yet their performance has proved disappointing in the face of the nonideal character of usual channels linking transmitter and receiver, that is, due to both noise and signal propagation distortion. Here we consider a discrete-time master-slave system that synchronizes despite channel bandwidth limitations, and an allied communication system. Synchronization is achieved by introducing a digital filter that limits the spectral content of the feedback loop responsible for producing the transmitted signal. Copyright (C) 2009 Marcio Eisencraft et al.
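A minimal drive-response (master-slave) sketch in discrete time, using complete-replacement coupling on a Henon map; the band-limiting digital filter that is this paper's contribution is deliberately omitted, so this only illustrates the master-slave structure itself:

```python
def henon_step(x, y, a=1.4, b=0.3):
    """One iteration of the Henon map (master dynamics)."""
    return 1 - a * x * x + y, b * x

def synchronize(n=50):
    """Master transmits x[n]; the slave reconstructs the hidden state y
    from the received signal (complete-replacement coupling). With this
    coupling, the slave's y locks onto the master's from the first
    update; a band-limited channel, as in the paper, would make this
    harder. Returns the per-step synchronization errors."""
    xm, ym = 0.1, 0.0    # master state
    ys = 0.7             # slave's estimate of y (deliberately wrong start)
    errs = []
    for _ in range(n):
        xm_next, ym_next = henon_step(xm, ym)
        ys = 0.3 * xm              # slave update driven by received x
        errs.append(abs(ys - ym_next))
        xm, ym = xm_next, ym_next
    return errs
```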
Abstract:
Background: HIV-1-infected individuals who spontaneously control viral replication represent an example of successful containment of the AIDS virus. Understanding the anti-viral immune responses in these individuals may help in vaccine design. However, immune responses against HIV-1 are normally analyzed using HIV-1 consensus B 15-mers that overlap by 11 amino acids. Unfortunately, this method may underestimate the real breadth of the cellular immune responses against the autologous sequence of the infecting virus. Methodology and Principal Findings: Here we compared cellular immune responses against nef- and vif-encoded consensus B 15-mer peptides to responses against HLA class I-predicted minimal optimal epitopes from consensus B and autologous sequences in six patients who have controlled HIV-1 replication. Interestingly, our analysis revealed that three of our patients had broader cellular immune responses against HLA class I-predicted minimal optimal epitopes from either autologous viruses or from the HIV-1 consensus B sequence, when compared to responses against the 15-mer HIV-1 type B consensus peptides. Conclusion and Significance: This suggests that the cellular immune responses against HIV-1 in controller patients may be broader than we had previously anticipated.
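The 15-mer/11-overlap peptide sets mentioned above are generated by a sliding window whose step is the window length minus the overlap (here 4 residues); a sketch:

```python
def overlapping_peptides(seq, length=15, overlap=11):
    """Slide a window of `length` residues with the given overlap
    (step = length - overlap): the standard way consensus 15-mer sets
    overlapping by 11 amino acids are produced from a protein sequence."""
    step = length - overlap
    return [seq[i:i + length] for i in range(0, len(seq) - length + 1, step)]
```

Consecutive peptides then share exactly 11 residues, which is why short minimal optimal epitopes near window boundaries can be missed.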
Abstract:
Balance problems in hemiparetic patients after stroke can be caused by different impairments in the physiological systems involved in postural control, including sensory afferents, movement strategies, biomechanical constraints, cognitive processing, and perception of verticality. Balance impairments and disabilities must be appropriately addressed. This article reviews the most common balance abnormalities in hemiparetic patients with stroke and the main tools used to diagnose them.
Abstract:
Background: In areas with limited structure in place for microscopy diagnosis, rapid diagnostic tests (RDT) have been demonstrated to be effective. Method: The cost-effectiveness of the OptiMal (R) test and thick smear microscopy was estimated and compared. Data were collected in remote areas of 12 municipalities in the Brazilian Amazon. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, hospitalization records, primary data collected from the municipalities, and the scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon was from the start of fever until the diagnostic results were provided to the patient, and the temporal reference was the year 2006. The results were expressed in costs per adequately diagnosed case in 2006 U.S. dollars. Sensitivity analysis was performed considering key model parameters. Results: In the base-case scenario, considering 92% and 95% sensitivity of thick smear microscopy for Plasmodium falciparum and Plasmodium vivax, respectively, and 100% specificity for both species, thick smear microscopy is more costly and more effective, with an incremental cost estimated at US$ 549.9 per adequately diagnosed case. In the sensitivity analysis, when the sensitivity and specificity of microscopy for P. vivax were 0.90 and 0.98, respectively, and when its sensitivity for P. falciparum was 0.83, the RDT was more cost-effective than microscopy. Conclusion: Microscopy is more cost-effective than OptiMal (R) in these remote areas if high accuracy of microscopy is maintained in the field. The decision regarding the use of rapid tests for the diagnosis of malaria in these areas depends on current microscopy accuracy in the field.
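The incremental cost per adequately diagnosed case reported above is a standard incremental cost-effectiveness ratio; a sketch with illustrative numbers (not the study's data):

```python
def icer(cost_a, eff_a, cost_b, eff_b):
    """Incremental cost-effectiveness ratio of strategy A versus B:
    extra cost per extra unit of effect (here the effect would be
    adequately diagnosed cases)."""
    return (cost_a - cost_b) / (eff_a - eff_b)
```

A strategy that is more costly and more effective, like microscopy in the base case, is then judged by whether this ratio is acceptable to the payer.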
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with the already known Baldi-Chauvin algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves in simplified situations for these algorithms and compare their performance.
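The Kullback-Leibler divergence used as the generalization measure can be sketched, for discrete distributions, as:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions given as probability lists; requires q[i] > 0
    wherever p[i] > 0. Zero terms with p[i] = 0 are skipped, by the
    convention 0*log(0) = 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

It vanishes exactly when the learned distribution matches the true one, which is why decreasing KL divergence traces out a learning curve.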
Abstract:
We investigate the performance of a variant of Axelrod's model for dissemination of culture, the Adaptive Culture Heuristic (ACH), on solving an NP-Complete optimization problem, namely, the classification of binary input patterns of size F by a Boolean binary perceptron. In this heuristic, N agents, characterized by binary strings of length F which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents' strings (or cultures) become more similar to the low-cost strings of their neighbors, resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable F/N^(1/4), so that the number of agents must increase with the fourth power of the problem size, N proportional to F^4, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with F^6, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.
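The reported scaling can be restated arithmetically: if the success probability depends only on the reduced variable x = F/N^(1/4), then holding x fixed requires N = (F/x)^4 agents. A sketch:

```python
def agents_needed(F, x_star):
    """If success probability depends only on x = F / N**0.25, keeping
    x fixed at x_star requires N = (F / x_star)**4 agents; i.e. N must
    grow as the fourth power of the problem size F."""
    return (F / x_star) ** 4
```

Doubling F at fixed success probability thus multiplies the required number of agents by 16.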