260 results for Optimisation


Relevance: 10.00%

Abstract:

In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions with the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to modelling uncertainty.) When only incomplete state information is available, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that would result if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question of the optimality of the CE principle. We show that CE is, in general, not optimal. We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty arising from output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near-optimal performance. We thus advocate this approach in real applications.
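
To make the CE idea concrete, the following minimal sketch pairs a standard Kalman filter with a clipped linear feedback law; the system matrices, noise covariances and gain are illustrative placeholders, not the book's actual RHC design.

```python
import numpy as np

# Minimal sketch of certainty equivalence (CE) control for a generic
# discrete-time linear system x+ = A x + B u + w, y = C x + v with the input
# saturated at |u| <= u_max.  All numbers below are assumptions for illustration.

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
Q_w, R_v = 0.01 * np.eye(2), 0.01 * np.eye(1)    # assumed noise covariances
K = np.array([[1.2, 0.8]])                        # assumed stabilising feedback gain
u_max = 1.0

rng = np.random.default_rng(0)
x, x_hat, P = np.array([1.0, 0.0]), np.zeros(2), np.eye(2)
for _ in range(50):
    # measurement of the current (true) state
    y = C @ x + np.sqrt(R_v) @ rng.standard_normal(1)
    # Kalman measurement update of the state estimate
    S = C @ P @ C.T + R_v
    L = P @ C.T @ np.linalg.inv(S)
    x_hat = x_hat + L @ (y - C @ x_hat)
    P = (np.eye(2) - L @ C) @ P
    # CE: apply the deterministic constrained law to the estimate
    u = np.clip(-K @ x_hat, -u_max, u_max)
    # time update of the estimate and propagation of the noisy plant
    x_hat = A @ x_hat + B @ u
    P = A @ P @ A.T + Q_w
    x = A @ x + B @ u + np.sqrt(np.diag(Q_w)) * rng.standard_normal(2)

print("final state estimate:", x_hat)
```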

Relevance: 10.00%

Abstract:

In this paper, we propose a framework for joint allocation and constrained control design of flight controllers for Unmanned Aircraft Systems (UAS). The actuator configuration is used to map the actuator constraint set into the space of the aircraft generalised forces. By constraining the demanded generalised forces, we ensure that the allocation problem is always feasible and can therefore be solved without constraints. This leads to an allocation problem that does not require on-line numerical optimisation. Furthermore, since the controller handles the constraints, there is no need to implement heuristics to inform the controller about actuator saturation. The latter is fundamental for avoiding Pilot Induced Oscillations (PIO) in remotely operated UAS due to the rate limits on the aircraft control surfaces.
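
A minimal sketch of the constraint-mapping idea follows; the effectiveness matrix, actuator limits and nominal force box are invented for illustration, and the conservative box computed here is only one possible way of guaranteeing feasibility of the unconstrained (pseudo-inverse) allocation.

```python
import numpy as np

# Sketch of mapping actuator limits into a generalised-force constraint box so
# that allocation can be solved without constraints.  All numbers are invented.

B = np.array([[1.0, 0.4, 0.0],        # hypothetical control-effectiveness matrix
              [0.0, 0.8, 1.1]])       # (generalised forces = B @ actuator inputs)
u_max = np.array([0.3, 0.5, 0.3])     # hypothetical actuator limits
B_pinv = np.linalg.pinv(B)

# If |tau| <= tau_max component-wise and |B_pinv| @ tau_max <= u_max, the
# pseudo-inverse allocation u = B_pinv @ tau stays inside the actuator limits.
tau_box = np.array([0.25, 0.30])                     # candidate force box
scale = np.min(u_max / (np.abs(B_pinv) @ tau_box))
tau_max = min(1.0, scale) * tau_box                  # box handed to the controller

def allocate(tau_demand):
    """Saturate the demand at controller level, then allocate with the pinv."""
    tau = np.clip(tau_demand, -tau_max, tau_max)
    return B_pinv @ tau                              # feasible by construction

print("force box:", tau_max, " actuator commands:", allocate(np.array([0.4, -0.2])))
```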

Relevance: 10.00%

Abstract:

As Unmanned Aircraft Systems (UAS) grow in complexity, and their level of autonomy increases, moving away from the concept of remotely piloted systems and towards autonomous systems, there is a need to further improve reliability and tolerance to faults. The traditional way to accommodate actuator faults is by using standard control allocation techniques as part of the flight control system. The allocation problem in the presence of faults often requires adding constraints that quantify the maximum capacity of the actuators, which in turn requires on-line numerical optimisation. In this paper, we propose a framework for a joint allocation and constrained control scheme via vector input scaling. The actuator configuration is used to map actuator constraints into the space of the aircraft generalised forces, which are the magnitudes demanded by the flight controller. Then, by constraining the output of the controller, we ensure that the allocation function always receives feasible demands. With the proposed framework, the allocation problem does not require numerical optimisation, and since the controller handles the constraints, there is no need to implement heuristics to inform the controller about actuator saturation.
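
The vector-scaling idea can be sketched as follows, with an invented effectiveness matrix and actuator limits: a single scaling factor shrinks the demanded generalised-force vector, preserving its direction, until the pseudo-inverse allocation respects the actuator limits.

```python
import numpy as np

# Sketch of vector input scaling with invented numbers: instead of saturating
# each axis of the demanded generalised force separately, the whole vector is
# shrunk by one factor so the allocated commands remain feasible.

B = np.array([[1.0, 0.4, 0.0],
              [0.0, 0.8, 1.1]])
u_max = np.array([0.3, 0.5, 0.3])
B_pinv = np.linalg.pinv(B)

def scale_and_allocate(tau_demand):
    """Uniformly scale the demand until the allocated commands are feasible."""
    u = B_pinv @ tau_demand
    overload = np.max(np.abs(u) / u_max)
    gamma = 1.0 if overload <= 1.0 else 1.0 / overload   # single factor in (0, 1]
    return gamma * u, gamma * tau_demand                 # commands, realised force

u_cmd, tau_realised = scale_and_allocate(np.array([0.6, -0.4]))
print("actuator commands:", u_cmd, " realised force:", tau_realised)
```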

Relevance: 10.00%

Abstract:

Introduction The dose to the skin surface is an important factor for many radiotherapy treatment techniques. It is known that TPS-predicted surface doses can be significantly different from actual ICRP skin doses as defined at a depth of 70 μm. A number of methods have been implemented for the accurate determination of surface dose, including the use of specific dosimeters such as TLDs and radiochromic film, as well as Monte Carlo calculations. Stereotactic radiosurgery involves delivering very high doses per treatment fraction using small X-ray fields. To date, there has been limited data on surface doses for these very small field sizes. The purpose of this work is to evaluate surface doses by both measurements and Monte Carlo calculations for very small field sizes. Methods All measurements were performed on a Novalis Tx linear accelerator which has a 6 MV SRS X-ray beam mode with a special thin flattening filter. Beam collimation was achieved by circular cones with apertures that gave field sizes ranging from 4 to 30 mm at the isocentre. The relative surface doses were measured using Gafchromic EBT3 film, which has the active layer at a depth similar to the ICRP skin dose depth. Monte Carlo calculations were performed using the BEAMnrc/EGSnrc Monte Carlo codes (V4 r225). The specifications of the linear accelerator, including the collimator, were provided by the manufacturer. Optimisation of the incident X-ray beam was achieved by an iterative adjustment of the energy, spatial distribution and radial spread of the incident electron beam striking the target. The energy cutoff parameters were PCUT = 0.01 MeV and ECUT = 0.700 MeV. Directional bremsstrahlung splitting was switched on for all BEAMnrc calculations. Relative surface doses were determined in a layer defined in a water phantom with the same thickness and depth as the active layer in the film. Results Measured surface doses using the EBT3 film varied between 13 and 16 % for the different cones, with an uncertainty of 3 %. Monte Carlo calculated surface doses agreed with the measured doses to better than 2 % for all the treatment cones. Discussion and conclusions This work has shown the consistency of surface dose measurements using EBT3 film with Monte Carlo predicted values, within the uncertainty of the measurements. As such, EBT3 film is recommended for in vivo surface dose measurements.

Relevance: 10.00%

Abstract:

This paper presents a framework for the design of a joint motion controller and a control allocation strategy for dynamic positioning of marine vehicles. The key aspects of the proposed design are a systematic approach to dealing with actuator saturation and to informing the motion controller about saturation. The proposed system uses a mapping that translates the actuator constraint sets into constraint sets at the motion-controller level. Hence, while the motion controller addresses the constraints, the control allocation algorithm can solve an unconstrained optimisation problem. The constrained control design is approached using a multivariable anti-windup strategy for strictly proper controllers, which is applicable to the implementation of PI- and PID-type motion controllers.
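
As a rough illustration (not the paper's multivariable design), the sketch below shows a scalar PI controller with back-calculation anti-windup operating against a generalised-force limit that is assumed to have been obtained from the actuator-to-controller constraint mapping; all gains and limits are placeholders.

```python
import numpy as np

# Simple scalar PI controller with back-calculation anti-windup; the force
# limit is assumed to come from the mapped actuator constraint set, and all
# numerical values are invented for illustration.

tau_max = 50.0                  # mapped generalised-force limit
kp, ki, kt = 20.0, 5.0, 2.0     # proportional, integral and tracking gains
dt = 0.01                       # sample time

class PIAntiWindup:
    def __init__(self):
        self.integral = 0.0

    def update(self, error):
        tau_unsat = kp * error + ki * self.integral
        tau = float(np.clip(tau_unsat, -tau_max, tau_max))
        # back-calculation: bleed the integrator while the output is saturated
        self.integral += dt * (error + kt * (tau - tau_unsat))
        return tau

controller = PIAntiWindup()
print([round(controller.update(5.0), 1) for _ in range(5)])
```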

Relevance: 10.00%

Abstract:

In this paper, we present fully Bayesian experimental designs for nonlinear mixed effects models, in which we develop simulation-based optimal design methods to search over both continuous and discrete design spaces. Although Bayesian inference has commonly been performed on nonlinear mixed effects models, there is a lack of research into performing Bayesian optimal design for nonlinear mixed effects models that require searches to be performed over several design variables. This is likely due to the fact that it is much more computationally intensive to perform optimal experimental design for nonlinear mixed effects models than it is to perform inference in the Bayesian framework. In this paper, the design problem is to determine the optimal number of subjects and samples per subject, as well as the (near) optimal urine sampling times for a population pharmacokinetic study in horses, so that the population pharmacokinetic parameters can be precisely estimated, subject to cost constraints. The optimal sampling strategies, in terms of the number of subjects and the number of samples per subject, were found to be substantially different between the examples considered in this work, which highlights the fact that the designs are rather problem-dependent and require optimisation using the methods presented in this paper.
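
The following sketch illustrates a simulation-based search over a discrete design space (number of subjects and samples per subject) under a cost cap; the random-effects model, costs and utility are deliberately toy quantities and do not reproduce the paper's population pharmacokinetic model.

```python
import numpy as np

# Toy simulation-based Bayesian design search over a discrete design space:
# choose the number of subjects n and samples per subject m, subject to a cost
# cap, to maximise a crude Monte Carlo estimate of expected utility.

rng = np.random.default_rng(0)
cost_cap, cost_subject, cost_sample = 60.0, 5.0, 1.0   # assumed costs

def expected_utility(n, m, draws=200):
    """Crude MC estimate of expected utility for a toy random-effects model."""
    util = 0.0
    for _ in range(draws):
        theta = rng.normal(1.0, 0.5)                    # prior draw, population mean
        b = rng.normal(0.0, 0.3, size=n)                # subject random effects
        y = theta + b[:, None] + rng.normal(0.0, 0.2, size=(n, m))
        # utility: negative variance proxy for the population-mean estimate
        util += -y.mean(axis=1).var(ddof=1) / n
    return util / draws

best = max(((n, m) for n in range(2, 13) for m in range(1, 9)
            if n * cost_subject + n * m * cost_sample <= cost_cap),
           key=lambda d: expected_utility(*d))
print("best design (subjects, samples per subject):", best)
```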

Relevance: 10.00%

Abstract:

Computational optimisation of clinically important electrocardiogram signal features, within a single heart beat, using a Markov chain Monte Carlo (MCMC) method is undertaken. A detailed, efficient, data-driven software implementation of an MCMC algorithm is presented. Initially, software parallelisation is explored, and it is shown that, despite the large amount of inter-dependency between model parameters, parallelisation is possible. In addition, an initial reconfigurable hardware approach is explored for future applicability to real-time computation on a portable ECG device under continuous, extended use.
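
A minimal Metropolis-Hastings sketch with independent chains run in parallel processes is shown below; the Gaussian log-target and all settings are stand-ins, not the ECG feature model or the data-driven implementation described in the paper.

```python
import numpy as np
from multiprocessing import Pool

# Minimal Metropolis-Hastings with independent chains in parallel processes.
# The log-target is a toy Gaussian; chain length and step size are illustrative.

def log_target(theta):
    return -0.5 * np.sum((theta - 1.0) ** 2)       # toy target: N(1, I)

def run_chain(seed, n_steps=5000, dim=3, step=0.5):
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    lp = log_target(theta)
    samples = np.empty((n_steps, dim))
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal(dim)
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:    # accept/reject step
            theta, lp = prop, lp_prop
        samples[i] = theta
    return samples

if __name__ == "__main__":
    with Pool(4) as pool:                          # one process per chain
        chains = pool.map(run_chain, [0, 1, 2, 3])
    print(np.mean([c[1000:].mean(axis=0) for c in chains], axis=0))
```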

Relevance: 10.00%

Abstract:

There is a wide variety of drivers for business process modelling initiatives, ranging from business evolution and process optimisation through compliance checking and process certification to process enactment. That, in turn, results in models that differ in content because they serve different purposes. In particular, processes are modelled at different abstraction levels and assume different perspectives. Vertical alignment of process models aims at handling these deviations. While the advantages of such an alignment for inter-model analysis and change propagation are beyond question, a number of challenges still have to be addressed. In this paper, we discuss three main challenges for vertical alignment in detail. Against this background, the potential application of techniques from the field of process integration is critically assessed. Based on this assessment, we identify specific research questions that guide the design of a framework for model alignment.

Relevance: 10.00%

Abstract:

These lecture notes describe the use and implementation of a framework in which mathematical as well as engineering optimisation problems can be analysed. The foundations of the framework and algorithms described, Hierarchical Asynchronous Parallel Evolutionary Algorithms (HAPEAs), lie in traditional evolution strategies and incorporate the concepts of multi-objective optimisation, hierarchical topology, asynchronous evaluation of candidate solutions, parallel computing and game strategies. In a step-by-step approach, the numerical implementation of EAs and HAPEAs for solving multi-criteria optimisation problems is presented, providing readers with the knowledge to reproduce this hands-on training in their own academic or industrial environment.
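
As a hands-on starting point, the sketch below implements the plain (mu + lambda) evolution strategy that such frameworks build upon; the sphere objective and parameter settings are placeholders, and the hierarchical, asynchronous, parallel and game-strategy layers of HAPEA are not shown.

```python
import numpy as np

# Plain (mu + lambda) evolution strategy on a toy single-objective problem.
# Objective and settings are illustrative; this is only the ES building block.

rng = np.random.default_rng(1)
mu, lam, dim, sigma = 10, 40, 5, 0.3

def objective(x):
    return np.sum(x ** 2)                        # sphere function (minimise)

parents = rng.normal(0.0, 2.0, size=(mu, dim))
for gen in range(100):
    # each offspring mutates a randomly chosen parent
    idx = rng.integers(0, mu, size=lam)
    offspring = parents[idx] + sigma * rng.standard_normal((lam, dim))
    # (mu + lambda) selection: keep the best mu of parents and offspring
    pool = np.vstack([parents, offspring])
    fitness = np.apply_along_axis(objective, 1, pool)
    parents = pool[np.argsort(fitness)[:mu]]

print("best fitness:", objective(parents[0]))
```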

Relevance: 10.00%

Abstract:

This thesis has developed an innovative technology, electrospraying, that allows biodegradable microparticles to deliver pharmaceuticals that aid bone regeneration. The establishment, characterisation and optimisation of the technique are a step forward in developing an affordable and safe alternative to the products currently used in the clinical setting for the treatment of musculoskeletal disorders. The researcher has also investigated electrospraying as a coating technique for biodegradable structures that are used to replace damaged tissues, in order to provide localised and efficient drug delivery at the site of the defect to help tissue reconstruction.

Relevance: 10.00%

Abstract:

Optimisation of Organic Rankine Cycles (ORCs) for binary cycle applications could play a major role in determining the competitiveness of low- to moderate-temperature renewable sources. An important aspect of the optimisation is to maximise the turbine output power for a given resource. This requires careful attention to the turbine design, notably through numerical simulations. Challenges in the numerical modelling of radial-inflow turbines using high-density working fluids still need to be addressed in order to improve the turbine design and better optimise ORCs. This paper presents preliminary 3D numerical simulations of a radial-inflow turbine working with high-density fluids in realistic geothermal ORCs. Following an extensive investigation of the operating conditions and a thermodynamic cycle analysis, the refrigerant R143a is chosen as the high-density working fluid. The 1D design of the candidate radial-inflow turbine is presented in detail. Furthermore, the commercially available software Ansys-CFX is used to perform the 3D CFD simulations for a number of operating conditions, including off-design conditions. The real-gas properties are obtained using the Peng-Robinson equation of state. The preliminary design created using the dedicated radial-inflow turbine software Concepts-Rital is discussed, and the 3D CFD results are presented and compared against the meanline analysis.
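
For reference, the Peng-Robinson equation of state used for the real-gas properties has the standard form below (written generically; R is the gas constant and T_c, P_c and ω are the critical temperature, critical pressure and acentric factor of the working fluid):

```latex
% Standard Peng-Robinson equation of state (general form, not ORC-specific)
P = \frac{R T}{v - b} - \frac{a\,\alpha(T)}{v^{2} + 2 b v - b^{2}},
\qquad a = 0.45724\,\frac{R^{2} T_{c}^{2}}{P_{c}},
\qquad b = 0.07780\,\frac{R T_{c}}{P_{c}},

\alpha(T) = \left[ 1 + \kappa \left( 1 - \sqrt{T/T_{c}} \right) \right]^{2},
\qquad \kappa = 0.37464 + 1.54226\,\omega - 0.26992\,\omega^{2}.
```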

Relevance: 10.00%

Abstract:

Bayesian experimental design is a fast-growing area of research with many real-world applications. As computational power has increased over the years, so has the development of simulation-based design methods, which involve a number of algorithms, such as Markov chain Monte Carlo, sequential Monte Carlo and approximate Bayesian methods, allowing more complex design problems to be solved. The Bayesian framework provides a unified approach for incorporating prior information and/or uncertainties regarding the statistical model with a utility function which describes the experimental aims. In this paper, we provide a general overview of the concepts involved in Bayesian experimental design, and focus on describing some of the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms that are used to search over the design space to find the Bayesian optimal design. We also discuss other computational strategies for further research in Bayesian optimal design.
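
In this setting, the standard criterion is the expected utility of a design d, which the simulation-based methods above estimate by Monte Carlo:

```latex
% Expected utility of a design d and its Monte Carlo estimate
U(d) = \int\!\!\int u(d, y, \theta)\, p(y \mid \theta, d)\, p(\theta)\, \mathrm{d}y\, \mathrm{d}\theta,
\qquad d^{*} = \arg\max_{d} U(d),
\qquad \hat{U}(d) = \frac{1}{M} \sum_{m=1}^{M} u\!\left(d, y^{(m)}, \theta^{(m)}\right),
```

with the pairs (θ^(m), y^(m)) drawn from the prior and the corresponding sampling distribution.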

Relevance: 10.00%

Abstract:

Extensive research has highlighted the positive and exponential relationship between vehicle speed and crash risk and severity. Speed enforcement policies and practices throughout the world have developed dramatically as new technology becomes available; however, speeding remains a pervasive problem internationally that significantly contributes to road trauma. This paper adopted a three-pronged approach to review speed enforcement policies and practices by: (i) describing and comparing policies and practices adopted in a cross-section of international jurisdictions; (ii) reviewing the available empirical evidence evaluating the effectiveness of various approaches; and (iii) providing recommendations for the optimisation of speed enforcement. The review shows that the enforcement strategies adopted in various countries differ both in the approaches used and in how they are applied. The literature review suggests strong and consistent evidence that police speed enforcement, in particular speed cameras, can be an effective tool for reducing vehicle speeds and subsequent traffic crashes. Drawing from this evidence, recommendations for best practice are proposed, including the specific instances in which various speed enforcement approaches typically produce the greatest road safety benefits and, perhaps most importantly, that speed enforcement programs must utilise a variety of strategies tailored to specific situations, rather than a one-size-fits-all approach.

Relevance: 10.00%

Abstract:

The work presented in this report aims to implement a cost-effective offline mission path planner for aerial inspection tasks of large linear infrastructures. Like most real-world optimisation problems, mission path planning involves a number of objectives which ideally should be minimised simultaneously. Understandably, the objectives of a practical optimisation problem conflict with each other, and the minimisation of one of them necessarily makes it impossible to fully minimise the others. This leads to the need to find a set of optimal solutions for the problem; once such a set of available options is produced, the mission planning problem is reduced to a decision-making problem for the mission specialists, who will choose the solution which best fits the requirements of the mission. The goal of this work is then to develop a Multi-Objective optimisation tool able to provide the mission specialists with a set of optimal solutions for the inspection task, amongst which the final trajectory will be chosen, given the environment data, the mission requirements and the definition of the objectives to minimise. All the possible optimal solutions of a Multi-Objective optimisation problem are said to form the Pareto-optimal front of the problem. For any of the Pareto-optimal solutions, it is impossible to improve one objective without worsening at least another one. Amongst a set of Pareto-optimal solutions, no solution is absolutely better than another, and the final choice must be a trade-off between the objectives of the problem. Multi-Objective Evolutionary Algorithms (MOEAs) are recognised as a convenient method for exploring the Pareto-optimal front of Multi-Objective optimisation problems. Their efficiency is due to their parallel, population-based architecture, which allows several optimal solutions to be found at a time.
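
The Pareto concepts used above can be made concrete with a short dominance check and non-dominated filter; the two random toy objectives stand in for the mission planner's actual cost functions.

```python
import numpy as np

# Pareto dominance and extraction of the non-dominated set, the core notion
# behind MOEAs.  The toy objective values below are random placeholders.

def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimised)."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(points):
    """Return the non-dominated subset of a collection of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

rng = np.random.default_rng(2)
candidates = rng.random((50, 2))          # 50 candidate trajectories, 2 objectives
front = pareto_front(list(candidates))
print(len(front), "non-dominated solutions")
```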

Relevance: 10.00%

Abstract:

Aim The assessment of treatment plans is an important component in the education of radiation therapists. The establishment of a grade for a plan is currently based on subjective assessment of a range of criteria. The automation of assessment could provide a number of advantages, including faster feedback, a reduced chance of human error, and simpler aggregation of past results. Method A collection of treatments planned by a cohort of 27 second-year radiation therapy students was selected for quantitative evaluation. Treatment sites included the bladder, cervix, larynx, parotid and prostate, although only the larynx plans had been assessed in detail. The plans were designed with the Pinnacle system and exported using the DICOM framework. Assessment criteria included beam arrangement optimisation, volume contouring, target dose coverage and homogeneity, and organ-at-risk sparing. The in-house Treatment and Dose Assessor (TADA) software [1] was evaluated for suitability in assisting with the quantitative assessment of these plans. Dose-volume data were exported in per-student and per-structure data tables, along with beam complexity metrics, dose-volume histograms, and reports on naming conventions. Results The treatment plans were exported and processed using TADA, with the processing of all 27 plans for each treatment site taking less than two minutes. Naming conventions were successfully checked against a reference protocol. Significant variations between student plans were found. Correlation with assessment feedback was established for the larynx plans. Conclusion The data generated could be used to inform the selection of future assessment criteria, monitor student development, and provide useful feedback to the students. The provision of objective, quantitative evaluations of plan quality would be a valuable addition not only to radiotherapy education programmes but also to staff development and, potentially, credentialing methods. New functionality within TADA developed for this work could be applied clinically to, for example, evaluate protocol compliance.
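
As an illustration of the kind of per-structure metric involved (not TADA's implementation), the sketch below computes a cumulative dose-volume histogram from a synthetic dose grid and structure mask.

```python
import numpy as np

# Cumulative dose-volume histogram (DVH) from a synthetic dose grid and a toy
# structure mask; purely illustrative, not the TADA software's implementation.

rng = np.random.default_rng(3)
dose = rng.gamma(shape=8.0, scale=5.0, size=(40, 40, 40))   # toy dose grid (Gy)
mask = np.zeros_like(dose, dtype=bool)
mask[15:25, 15:25, 15:25] = True                            # toy structure voxels

def cumulative_dvh(dose, mask, bin_width=0.5):
    """Percent of structure volume receiving at least each dose level."""
    d = dose[mask]
    levels = np.arange(0.0, d.max() + bin_width, bin_width)
    volume = np.array([(d >= level).mean() * 100.0 for level in levels])
    return levels, volume

levels, vol_pct = cumulative_dvh(dose, mask)
v20 = vol_pct[np.searchsorted(levels, 20.0)]   # % of volume receiving >= 20 Gy
print("V20 ≈", round(float(v20), 1), "%")
```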