891 results for optimal linear control design


Relevance: 40.00%

Abstract:

The purpose of this paper is to use the framework of Lie algebroids to study optimal control problems for affine connection control systems (ACCSs) on Lie groups. In this context, the equations for critical trajectories of the problem are geometrically characterized as a Hamiltonian vector field.
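
The abstract does not reproduce these equations; as a hedged reminder of the classical picture they generalize (Pontryagin-style necessary conditions for normal extremals of an unconstrained control system, not the Lie-algebroid formulation of the paper):

```latex
% For \dot{x} = f(x,u) with running cost L(x,u), define the control Hamiltonian
H(x, p, u) = \langle p, f(x, u) \rangle - L(x, u).
% Critical trajectories are integral curves of the Hamiltonian vector field of H,
% with the control eliminated through the stationarity condition:
\dot{x} = \frac{\partial H}{\partial p}, \qquad
\dot{p} = -\frac{\partial H}{\partial x}, \qquad
\frac{\partial H}{\partial u} = 0 .
```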

Relevance: 40.00%

Abstract:

The main goal of this paper is to extend the generalized variational problem of Herglotz type to the more general context of the Euclidean sphere S^n. Motivated by classical results on Euclidean spaces, we derive the generalized Euler-Lagrange equation for the corresponding variational problem defined on the Riemannian manifold S^n. Moreover, the problem is formulated from an optimal control point of view, and it is proved that the Euler-Lagrange equation can be obtained from the Hamiltonian equations. The geodesic problem on spheres is also highlighted as a particular case of the generalized Herglotz problem.
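
For orientation, the classical Euclidean Herglotz problem that the paper generalizes to S^n can be stated as follows; this is a textbook sketch, not the manifold version derived in the paper:

```latex
% Herglotz problem on [a,b]: extremize z(b), where z is defined implicitly by
\dot{z}(x) = L\big(x, y(x), y'(x), z(x)\big), \qquad z(a) = z_a .
% The generalized Euler--Lagrange equation satisfied by an extremal y is
\frac{\partial L}{\partial y}
 - \frac{d}{dx}\,\frac{\partial L}{\partial y'}
 + \frac{\partial L}{\partial z}\,\frac{\partial L}{\partial y'} = 0 ,
% which reduces to the classical Euler--Lagrange equation when L does not depend on z.
```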

Relevance: 40.00%

Abstract:

Highway bridges are of great value to a country because, in case of a natural disaster, they may serve as lifelines. Being vulnerable under significant seismic loads, different methods can be considered to design resistant highway bridges and to rehabilitate existing ones. In this study, base isolation is considered as one efficient method in this regard, which in some cases significantly reduces the seismic load effects on the structure. By reducing the ductility demand on the structure without a notable increase of strength, the structure is designed to remain elastic under seismic loads. The problem associated with isolated bridges, especially those with elastomeric bearings, can be their excessive displacements under service and seismic loads. This can defeat the purpose of using elastomeric bearings for small- to medium-span typical bridges, where expansion joints and clearances may result in a significant increase of initial and maintenance cost. Thus, supplementing the structure with dampers that provide some stiffness can serve as a solution, which in turn, however, may increase the structure's base shear. The main objective of this thesis is to provide a simplified method for the evaluation of optimal parameters for dampers in isolated bridges.

Firstly, through a parametric study, some directions are given for the use of simple isolation devices such as elastomeric bearings to rehabilitate existing bridges of high importance. Parameters such as the geometry of the bridge, code provisions and the type of soil on which the structure is constructed were introduced to a typical two-span bridge. It is concluded that the stiffness of the substructure, the soil type and special provisions in the code can determine whether base isolation is employed for retrofitting of bridges.

Secondly, based on the elastic response coefficient of isolated bridges, a simplified design method of dampers for seismically isolated regular highway bridges is presented. By setting objectives for the reduction of displacement and the variation of base shear, the required stiffness and damping of a hysteretic damper can be determined. Numerical analyses of a typical two-span bridge model were performed to verify the effectiveness of the method. The method was used to identify equivalent linear parameters and, subsequently, nonlinear parameters of the hysteretic damper for various designated scenarios of displacement and base shear requirements. Comparison of the results of the nonlinear numerical model without and with the damper showed that the method is sufficiently accurate.

Finally, an innovative and simple hysteretic steel damper was designed. Five specimens were fabricated from two steel grades and were tested together with a real-scale elastomeric isolator in the structural laboratory of the Université de Sherbrooke. The test procedure was to characterize the specimens by cyclic displacement-controlled tests and subsequently to test them by the real-time dynamic substructuring (RTDS) method. The test results were then used to establish a numerical model of the system, which went through nonlinear time-history analyses under several earthquakes. The outcomes of the experimental and numerical studies showed an acceptable conformity with the simplified method.
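
The thesis's own derivation is not reproduced in the abstract, but a minimal sketch of the standard equivalent-linearization step it refers to (effective stiffness and equivalent viscous damping of a bilinear hysteretic device, as used, e.g., in AASHTO-style isolation design procedures) might look like this; the bilinear parameters and the helper name below are illustrative assumptions:

```python
import math

def equivalent_linear_params(k1, k2, dy, dmax):
    """Equivalent linear properties of a bilinear hysteretic damper/isolator.

    k1   : initial (elastic) stiffness
    k2   : post-yield stiffness
    dy   : yield displacement
    dmax : peak cyclic displacement (dmax > dy assumed)
    Returns (k_eff, xi_eq): effective secant stiffness and
    equivalent viscous damping ratio.
    """
    fy = k1 * dy                       # yield force
    fmax = fy + k2 * (dmax - dy)       # force at peak displacement
    k_eff = fmax / dmax                # secant (effective) stiffness
    qd = fy - k2 * dy                  # characteristic strength (zero-displacement intercept)
    ed = 4.0 * qd * (dmax - dy)        # energy dissipated per full cycle
    es = 0.5 * k_eff * dmax ** 2       # elastic strain energy at dmax
    xi_eq = ed / (4.0 * math.pi * es)  # equivalent viscous damping ratio
    return k_eff, xi_eq

# Example: a stiff steel hysteretic damper supplementing an elastomeric bearing (values assumed)
k_eff, xi_eq = equivalent_linear_params(k1=20e6, k2=1e6, dy=0.005, dmax=0.05)
print(f"k_eff = {k_eff / 1e6:.2f} MN/m, xi_eq = {xi_eq:.1%}")
```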

Relevance: 40.00%

Abstract:

A fully coupled, non-linear effective stress finite difference (FD) model is built to examine recent counter-intuitive findings on the dependence of the pore water pressure ratio on foundation contact pressure. Two alternative design scenarios for a benchmark problem are explored and contrasted in light of construction emission rates using the EFFC-DFI methodology. A strain-hardening effective stress plasticity model is adopted to simulate the dynamic loading. A combination of input motions, contact pressure, initial vertical total pressure and distance to the foundation centreline is employed as model variables to further investigate how permanent and variable actions control the residual pore pressure ratio. The model is verified against the Ghosh and Madabhushi high-acceleration field test database. The outputs of this work are aimed at improving current computer-aided seismic foundation design, which relies on the ground's packing state and consistency. The results confirm that, on seismic excitation of shallow foundations, the likelihood of effective stress loss is greater at larger depths and across the free field. For the benchmark problem, adopting a shallow foundation system instead of a piled foundation resulted in a 75% lower emission rate, a marked proportion of which is attributable to reduced material and haulage carbon costs.
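
For reference, the pore water pressure ratio discussed above is conventionally the excess pore pressure normalized by the initial vertical effective stress; a minimal sketch of that bookkeeping follows, where the stress profile and all numbers are illustrative assumptions rather than values from the study:

```python
def pore_pressure_ratio(u_excess, sigma_v0_eff):
    """Excess pore pressure ratio r_u = delta_u / sigma'_v0 (r_u -> 1 implies liquefaction)."""
    return u_excess / sigma_v0_eff

# Illustrative profile: the initial vertical effective stress under a footing grows with depth
# and with the foundation contact pressure spread through the soil, so the same excess pore
# pressure gives a smaller r_u beneath the loaded footing than in the free field.
gamma_sat, gamma_w, depth = 19.0, 9.81, 4.0           # kN/m^3, kN/m^3, m (assumed)
sigma_v0_eff_free_field = (gamma_sat - gamma_w) * depth
contact_stress_increment = 40.0                        # kPa of spread contact pressure (assumed)
sigma_v0_eff_under_footing = sigma_v0_eff_free_field + contact_stress_increment

du = 30.0  # kPa of seismically generated excess pore pressure (assumed)
print("free field    r_u =", round(pore_pressure_ratio(du, sigma_v0_eff_free_field), 2))
print("under footing r_u =", round(pore_pressure_ratio(du, sigma_v0_eff_under_footing), 2))
```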

Relevance: 40.00%

Abstract:

In the context of active control of rotating machines, standard optimal controller methods enable a trade-off to be made between (weighted) mean-square vibrations and (weighted) mean-square currents injected into magnetic bearings. One shortcoming of such controllers is that no attention is paid to the voltages required. In practice, the available voltage imposes a strict limit on the maximum possible rate of change of control force (force slew rate). This paper removes this shortcoming of traditional optimal control.
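
A minimal sketch of the kind of trade-off described above, extended so that the control rate (a proxy for the required voltage, i.e., the force slew rate) is also penalized by augmenting the state with the current control input; the model matrices and weights are placeholders, and this is not the controller formulation of the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder plant: x_dot = A x + B u (e.g., rotor states driven by bearing currents/forces)
A = np.array([[0.0, 1.0], [-4.0, -0.2]])
B = np.array([[0.0], [1.0]])

# Augment the state with u and treat the control rate v = u_dot as the new input,
# so that a weight on v penalizes the force slew rate (hence the required voltage).
n, m = A.shape[0], B.shape[1]
A_aug = np.block([[A, B], [np.zeros((m, n)), np.zeros((m, m))]])
B_aug = np.vstack([np.zeros((n, m)), np.eye(m)])

Q_x = np.diag([10.0, 1.0])        # weight on mean-square vibration (illustrative)
R_u = 0.1 * np.eye(m)             # weight on mean-square current (illustrative)
Q_aug = np.block([[Q_x, np.zeros((n, m))], [np.zeros((m, n)), R_u]])
R_v = 1e-3 * np.eye(m)            # weight on the control rate, i.e., slew-rate penalty

P = solve_continuous_are(A_aug, B_aug, Q_aug, R_v)
K_aug = np.linalg.solve(R_v, B_aug.T @ P)   # v = -K_aug @ [x; u]
print("augmented LQR gain:", K_aug)
```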

Relevance: 40.00%

Abstract:

Chapter 1: Under the average common value function, we select almost uniquely the mechanism that gives the seller the largest portion of the true value in the worst situation, among all direct mechanisms that are feasible, ex-post implementable and individually rational.

Chapter 2: Strategy-proof, budget-balanced, anonymous, envy-free linear mechanisms assign p identical objects to n agents. The efficiency loss is the largest ratio of surplus loss to efficient surplus, over all profiles of non-negative valuations. The smallest efficiency loss is uniquely achieved by the following simple allocation rule: assign one object to each of the p−1 agents with the highest valuations, a large probability of an object to the agent with the pth highest valuation, and the remaining probability to the agent with the (p+1)th highest valuation. When "envy freeness" is replaced by the weaker condition of "voluntary participation", the optimal mechanism differs only when p is much less than n.

Chapter 3: One group is to be selected from a set of agents. Agents have preferences over the size of the group if they are selected, and preferences over size as well as over the "stand-outside" option are single-peaked. We take a mechanism design approach and search for group selection mechanisms that are efficient, strategy-proof and individually rational. Two classes of such mechanisms are presented. The proposing mechanism allows agents either to maintain or to shrink the group size following a fixed priority, and is characterized by group strategy-proofness. The voting mechanism enlarges the group size in each voting round, and achieves at least half of the maximum group size compatible with individual rationality.

Relevance: 40.00%

Abstract:

The coefficient diagram method is a controller design technique for linear time-invariant systems. The design procedure spans two domains: an algebraic one and a graphical one. The former is closely related to a conventional pole placement method, and the latter consists of a diagram whose plotted curves provide insight into the closed-loop control system's time response, stability and robustness. The controller structure has two degrees of freedom, and the design process leads both to a low-overshoot closed-loop time response and to good robustness against mismatches between the real system and the design model. This article presents an overview of this design method. To make the theoretical concepts more transparent, examples in Matlab® code are provided. The included code illustrates both the algebraic and the graphical nature of the coefficient diagram design method. © 2016, King Fahd University of Petroleum & Minerals.
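
The article's Matlab® examples are not reproduced here, but as a hedged illustration of the algebraic side of the method, the standard CDM quantities (stability indices γ_i and the equivalent time constant τ) can be computed from a characteristic polynomial's coefficients roughly as follows; Manabe's commonly quoted target values are noted only for comparison:

```python
def cdm_indices(a):
    """Stability indices and equivalent time constant used by the coefficient diagram method.

    a : characteristic polynomial coefficients [a0, a1, ..., an]
        (ascending powers of s), all assumed positive.
    Returns (gammas, tau) with gamma_i = a_i^2 / (a_{i+1} * a_{i-1}) for i = 1..n-1
    and tau = a1 / a0.
    """
    gammas = [a[i] ** 2 / (a[i + 1] * a[i - 1]) for i in range(1, len(a) - 1)]
    tau = a[1] / a[0]
    return gammas, tau

# Example: P(s) = s^4 + 8 s^3 + 32 s^2 + 80 s + 100, written in ascending powers of s.
gammas, tau = cdm_indices([100.0, 80.0, 32.0, 8.0, 1.0])
print("gamma_i =", [round(g, 2) for g in gammas], " tau =", tau)
# Manabe's often-cited recommended targets, for reference: gamma_1 = 2.5, gamma_i = 2 (i >= 2).
```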

Relevance: 40.00%

Abstract:

The aim of this thesis is to review and augment the theory and methods of optimal experimental design. In Chapter 1 the scene is set by considering the possible aims of an experimenter prior to an experiment, the statistical methods one might use to achieve those aims, and how experimental design might aid this procedure. It is indicated that, given a criterion for design, a priori optimal design will only be possible in certain instances and that, otherwise, some form of sequential procedure would seem to be indicated. In Chapter 2 an exact experimental design problem is formulated mathematically and compared with its continuous analogue. Motivation is provided for the solution of this continuous problem, and the remainder of the chapter concerns this problem. A necessary and sufficient condition for optimality of a design measure is given. Problems which might arise in testing this condition are discussed, in particular with respect to possible non-differentiability of the criterion function at the design being tested. Several examples are given of optimal designs which may be found analytically and which illustrate the points discussed earlier in the chapter. In Chapter 3 numerical methods for solving the continuous optimal design problem are reviewed. A new algorithm is presented, with illustrations of how it should be used in practice. It is shown that, for reasonably large sample sizes, continuously optimal designs may be approximated well by an exact design. In situations where this is not satisfactory, algorithms for improving such a design are reviewed. Chapter 4 consists of a discussion of sequentially designed experiments, with regard both to the underlying philosophies and to the application of the methods of statistical inference. In Chapter 5 we constructively criticise previous suggestions for fully sequential design procedures. Alternative suggestions are made, along with conjectures as to how these might improve performance. Chapter 6 presents a simulation study, the aim of which is to investigate the conjectures of Chapter 5; the results of this study provide empirical support for these conjectures. In Chapter 7 examples are analysed. These suggest aids to sequential experimentation by means of reducing the dimension of the design space and the possibility of experimenting semi-sequentially. Further examples are considered which stress the importance of the use of prior information in situations of this type. Finally, we consider the design of experiments when semi-sequential experimentation is mandatory because of the necessity of taking batches of observations at the same time. In Chapter 8 we look at some of the assumptions which have been made and indicate what may go wrong when these assumptions no longer hold.
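
The "necessary and sufficient condition for optimality of a design measure" mentioned above is, for D-optimality, the Kiefer-Wolfowitz general equivalence theorem: a design measure ξ* is D-optimal iff the standardized variance d(x, ξ*) = f(x)ᵀM(ξ*)⁻¹f(x) never exceeds p, the number of model parameters. A small numerical check of this condition (the quadratic-regression example is mine, not taken from the thesis):

```python
import numpy as np

def f(x):
    """Regression vector for the quadratic model E[y] = b0 + b1*x + b2*x^2."""
    return np.array([1.0, x, x * x])

def info_matrix(points, weights):
    """M(xi) = sum_i w_i f(x_i) f(x_i)^T for a (continuous) design measure xi."""
    return sum(w * np.outer(f(x), f(x)) for x, w in zip(points, weights))

# Candidate D-optimal design on [-1, 1]: equal mass 1/3 at -1, 0 and 1.
points, weights = [-1.0, 0.0, 1.0], [1 / 3, 1 / 3, 1 / 3]
M_inv = np.linalg.inv(info_matrix(points, weights))

# Equivalence theorem check: d(x, xi) = f(x)^T M^{-1} f(x) <= p (= 3) for all x in [-1, 1],
# with equality attained at the support points of the design.
xs = np.linspace(-1.0, 1.0, 2001)
d = np.array([f(x) @ M_inv @ f(x) for x in xs])
print("max_x d(x, xi) =", round(d.max(), 6), " (p = 3)")
```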

Relevance: 40.00%

Abstract:

The Ingold port adaptation of a free-beam NIR spectrometer is tailored for optimal bioprocess monitoring and control. The device shows an excellent signal-to-noise ratio owing to a large free aperture and therefore a large sample volume. This is seen particularly in the batch trajectories, which show high reproducibility. The robust and compact design withstands rough process environments as well as SIP/CIP cycles. Robust free-beam NIR process analyzers are indispensable tools within the PAT/QbD framework for real-time process monitoring and control. They enable multiparametric, non-invasive measurements of analyte concentrations and process trajectories. Free-beam NIR spectrometers are an ideal tool to define golden batches and process borders in the sense of QbD. Moreover, sophisticated data analysis, both quantitative and MSPC, leads directly to far better process understanding. Information can be provided online in easy-to-interpret graphs which allow the operator to make fast, knowledge-based decisions. This ultimately leads to higher stability in process operation, better performance and fewer failed batches.

Relevance: 40.00%

Abstract:

Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2015.

Relevance: 40.00%

Abstract:

The aims of this thesis were to evaluate types of wave channels and wave-generated currents, the effect of selected parameters on them, and to identify and compare the types of wave makers used in laboratory conditions. The study also covers the design and construction of a two-dimensional channel (flume) and wave maker for experiments on marine buoys, marine structures and energy conversion systems. The physical relation between the pump and pumping and the design of current generation in the flume were evaluated. The calculations for the steel structure and the glass side walls of the channel were carried out, together with the equations of the wave maker plate motion, the required motor power and the wave absorber (coastal slope). A servo motor was selected and applied to drive the wave maker plate, and a ball-screw linear actuator was used to convert rotary motion to linear motion and improve the movement mechanism. A programmable logic controller (PLC) was used to control the wave maker system. The thesis also reviews types of ocean energy and energy conversion systems. In another part of this research, wave energy systems, specifically the oscillating water column (OWC), are described, and one sample model was designed and tested in the hydraulic channel at the Sheikh Bahaii building of Azad University, Science and Research Branch. The dimensions of the designed flume were 16 × 1.98 × 0.57 m, and it can generate regular waves as well as irregular waves with small changes to the control system. The wave-making ability of the designed channel was evaluated, and the results showed that the design calculations for the flume were correct: the mean error between the measured results and the theoretical calculations was 7%, which is a good result for this situation. By evaluating the designed OWC model and modifying some parts of the system, a larger version of this model can be used for designing an energy conversion system. The results showed that the best chamber geometry at the exit of the system was an angle of zero degrees (0°) for the lower moving part, forty-five degrees (45°) for the front wall of the system, and a forward extension of the front wall kept at two times the wave height.
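
The wave maker plate equations themselves are not given in the abstract; as a hedged sketch of the standard linear theory such a design typically relies on, the piston-type wavemaker transfer function (Biésel, height-to-stroke ratio) together with the linear dispersion relation can be evaluated as follows. The water depth, wave period and target height below are illustrative, not the flume's actual operating values:

```python
import math

def wavenumber(T, h, g=9.81):
    """Solve the linear dispersion relation omega^2 = g k tanh(k h) for k by relaxed fixed-point iteration."""
    omega = 2.0 * math.pi / T
    k = omega ** 2 / g                                   # deep-water initial guess
    for _ in range(200):
        k = 0.5 * (k + omega ** 2 / (g * math.tanh(k * h)))
    return k

def piston_stroke(H, T, h):
    """Stroke S of a piston wavemaker needed for wave height H (linear theory):
    H / S = 2 (cosh(2 k h) - 1) / (sinh(2 k h) + 2 k h)."""
    k = wavenumber(T, h)
    ratio = 2.0 * (math.cosh(2 * k * h) - 1.0) / (math.sinh(2 * k * h) + 2 * k * h)
    return H / ratio

# Illustrative target: H = 0.10 m wave with period T = 1.5 s in h = 0.40 m of water.
print(f"required piston stroke ~ {piston_stroke(0.10, 1.5, 0.40):.3f} m")
```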

Relevance: 40.00%

Abstract:

Electrical neuromodulation of lumbar segments improves motor control after spinal cord injury in animal models and humans. However, the physiological principles underlying the effect of this intervention remain poorly understood, which has limited the therapeutic approach to continuous stimulation applied to restricted spinal cord locations. Here we developed stimulation protocols that reproduce the natural dynamics of motoneuron activation during locomotion. For this, we computed the spatiotemporal activation pattern of muscle synergies during locomotion in healthy rats. Computer simulations identified optimal electrode locations to target each synergy through the recruitment of proprioceptive feedback circuits. This framework steered the design of spatially selective spinal implants and real-time control software that modulate extensor and flexor synergies with precise temporal resolution. Spatiotemporal neuromodulation therapies improved gait quality, weight-bearing capacity, endurance and skilled locomotion in several rodent models of spinal cord injury. These new concepts are directly translatable to strategies to improve motor control in humans.

Relevance: 40.00%

Abstract:

Successful implementation of fault-tolerant quantum computation on a system of qubits places severe demands on the hardware used to control the many-qubit state. It is known that an accuracy threshold Pa exists such that, for such a computation to be able to continue for an unlimited number of steps, the error probability Pe of any quantum gate used must fall below it: Pe < Pa. Estimates of Pa vary widely, though Pa ∼ 10−4 has emerged as a challenging target for hardware designers. I present a theoretical framework based on neighboring optimal control that takes as input a good quantum gate and returns a new gate with better performance. I illustrate this approach by applying it to a universal set of quantum gates produced using non-adiabatic rapid passage. Performance improvements are substantial compared to the original (unimproved) gates, both for ideal and non-ideal controls. Under suitable conditions, all gate error probabilities fall by 1 to 4 orders of magnitude below the target threshold of 10−4.

After applying neighboring optimal control theory to improve the performance of quantum gates in a universal set, I further apply the general control theory in a two-step procedure for fault-tolerant logical state preparation, and I illustrate this procedure by preparing a logical Bell state fault-tolerantly. The two-step preparation procedure is as follows. Step 1 provides a one-shot procedure using neighboring optimal control theory to prepare a physical qubit state which is a high-fidelity approximation to the Bell state |β01⟩ = 1/√2(|01⟩ + |10⟩). I show that for ideal (non-ideal) control, an approximate |β01⟩ state can be prepared with error probability ϵ ∼ 10−6 (10−5) using one-shot local operations. Step 2 then takes a block of p pairs of physical qubits, each prepared in the |β01⟩ state using Step 1, and fault-tolerantly prepares the logical Bell state for the C4 quantum error detection code.
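
As a minimal numerical sketch of the figure of merit quoted above (the error probability of an approximate Bell-state preparation, taken here as one minus the state fidelity), where the imperfect state below is a made-up example rather than the thesis's controlled dynamics:

```python
import numpy as np

# Target resource: the Bell state |beta_01> = (|01> + |10>) / sqrt(2),
# written in the computational-basis ordering |00>, |01>, |10>, |11>.
beta01 = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2.0)

def preparation_error(psi, target=beta01):
    """Error probability of an approximate preparation: 1 - |<target|psi>|^2."""
    psi = psi / np.linalg.norm(psi)
    return 1.0 - abs(np.vdot(target, psi)) ** 2

# Made-up imperfect preparation: small leakage into |00> and |11> plus a small relative phase.
eps, phi = 1e-3, 0.002
psi_noisy = np.array([eps, 1.0, np.exp(1j * phi), eps]) / np.sqrt(2.0 + 2.0 * eps ** 2)
print(f"preparation error ~ {preparation_error(psi_noisy):.2e}")
```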

Relevance: 40.00%

Abstract:

As the semiconductor industry struggles to maintain its momentum along the path set by Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvements in electrical performance, 3D IC presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs with a primary focus on two areas: low-power 3D clock tree design, and reliability degradation modeling and management.

Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing the total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow which directly optimizes for clock power. In addition, we investigate the design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activity. Unlike the common assumption in 2D ICs that shutdown gates are cheap and thus can be applied at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies to produce the optimal allocation and placement of clock and control TSVs so that the clock power is minimized. We show that the proposed synthesis flow saves significant clock power while accounting for the available TSV placement area.

Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past. In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of both the electrical and the reliability properties, thus improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles or the application of ad-hoc fixes that lead to sub-optimal performance.

Vertical integration also enables stacking DRAM on top of a CPU, providing high bandwidth and short latency. However, non-uniform voltage fluctuation and local thermal hotspots in the CPU layers are coupled into the DRAM layers, causing a non-uniform bit-cell leakage (and thereby bit-flip) distribution. We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition during runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances the DRAM's resilience without sacrificing performance.

The proposed physical design methodologies should act as important building blocks for 3D ICs and push 3D ICs toward mainstream acceptance in the near future.
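
The clock-power objective referred to above is conventionally the dynamic switching power summed over clock wires, buffers and TSVs; a minimal sketch of that cost function is given below, where the capacitance, supply and frequency values are illustrative assumptions rather than the dissertation's technology parameters:

```python
def clock_dynamic_power(wire_caps_f, buffer_caps_f, tsv_caps_f, vdd=1.0, f_clk=1e9, alpha=1.0):
    """Dynamic clock power P = alpha * C_total * Vdd^2 * f.

    The clock net switches every cycle, so its activity factor alpha is ~1;
    C_total aggregates wire, buffer and TSV capacitances (all in farads).
    """
    c_total = sum(wire_caps_f) + sum(buffer_caps_f) + sum(tsv_caps_f)
    return alpha * c_total * vdd ** 2 * f_clk

# Illustrative tree: 200 wire segments, 40 buffers and 12 clock TSVs (all values assumed).
wires = [2e-15] * 200      # 2 fF per wire segment
buffers = [5e-15] * 40     # 5 fF input capacitance per buffer
tsvs = [20e-15] * 12       # 20 fF per through-silicon via
print(f"clock power ~ {clock_dynamic_power(wires, buffers, tsvs) * 1e3:.2f} mW")
```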