914 results for "optimisation combinatoire"


Relevance: 10.00%

Abstract:

In this paper, we are concerned with the practical implementation of time-optimal numerical techniques on underwater vehicles. We briefly introduce the underwater vehicle model we consider and present the parameters for the test bed ODIN (Omni-Directional Intelligent Navigator). We then explain the numerical method used to obtain time-optimal trajectories with a structure suitable for implementation. We follow this with a discussion of the modifications to be made given the characteristics of ODIN. Finally, we illustrate our computations with some experimental results.
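
For context, the generic minimum-time optimal control problem that such numerical techniques address can be stated as follows (a standard textbook formulation, not the specific vehicle model used for ODIN):

\[ \min_{u(\cdot),\,T} \; T \quad \text{subject to} \quad \dot{x}(t) = f\big(x(t), u(t)\big), \quad x(0) = x_0, \quad x(T) = x_T, \quad u(t) \in U, \]

where \(x\) collects the vehicle's position, orientation and velocities, \(u\) the thruster inputs, and \(U\) the admissible (bounded) control set.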

Relevance: 10.00%

Abstract:

Hydraulic excavators are widely used in the mining industry owing to the large payload capabilities these machines can achieve. However, there are very few optimisation studies for producing efficient hydraulic excavator buckets. An efficient bucket avoids unnecessary weight, greatly influences the payload and improves the efficiency of hydraulic mining excavators. This paper presents a framework for the development of a scaled hydraulic excavator by examining the geometry and force relationships. A small hydraulic excavator was purchased and fitted with a boom scaled by a factor. Geometric and force relationships of the model were derived to assist computer instrumentation in retrieving the necessary variable inputs for bucket design.
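
As a rough illustration of the kind of scaling relationships involved (generic dimensional-analysis rules with made-up example values, not the specific relationships derived in the paper), a short Python sketch:

# Illustrative geometric scaling for a scaled excavator model: linear
# dimensions scale with the factor, areas with its square, volumes with
# its cube. These are assumed textbook relationships, not the paper's.
def scale_model(length_m, bucket_volume_m3, cylinder_force_n, factor):
    return {
        "length_m": length_m * factor,
        "bucket_volume_m3": bucket_volume_m3 * factor ** 3,   # payload capacity ~ volume
        "cylinder_force_n": cylinder_force_n * factor ** 2,   # force ~ pressure x piston area
    }

print(scale_model(length_m=10.0, bucket_volume_m3=30.0, cylinder_force_n=2.0e6, factor=0.1))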

Relevance: 10.00%

Abstract:

Segmentation of novel or dynamic objects in a scene, often referred to as background subtraction or foreground segmentation, is critical for robust high-level computer vision applications such as object tracking, object classification and recognition. However, automatic real-time segmentation for robotics still poses challenges, including global illumination changes, shadows, inter-reflections, colour similarity of foreground to background, and cluttered backgrounds. This paper introduces depth cues provided by structure from motion (SFM) for interactive segmentation to alleviate some of these challenges. Two prevailing interactive segmentation algorithms are compared: Lazysnapping [Li et al., 2004] and Grabcut [Rother et al., 2004], both based on graph-cut optimisation [Boykov and Jolly, 2001]. The algorithms are extended to include depth cues rather than colour only as in the original papers. Results show that interactive segmentation based on colour and depth cues improves segmentation performance, with lower error with respect to ground truth.
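
For reference, graph-cut segmentation of this kind minimises an energy of the general form below; one natural way to include depth (a sketch of the idea, not necessarily the exact formulation used in the paper) is an additional depth data term alongside the colour term:

\[ E(L) = \sum_{p} \big( \lambda_c\, D_c(L_p) + \lambda_d\, D_d(L_p) \big) \;+\; \sum_{(p,q)\in\mathcal{N}} V_{p,q}\,[L_p \neq L_q], \]

where \(L_p\) is the foreground/background label of pixel \(p\), \(D_c\) and \(D_d\) are colour and depth likelihood terms, \(V_{p,q}\) is the smoothness penalty between neighbouring pixels, and \(\lambda_c, \lambda_d\) weight the two cues.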

Relevance: 10.00%

Abstract:

Optimal scheduling of voltage regulators (VRs), fixed and switched capacitors and the voltage on the customer side of the transformer (VCT), along with the optimal allocation of VRs and capacitors, is performed using a hybrid optimisation method based on discrete particle swarm optimisation and a genetic algorithm. Direct optimisation of the tap position is not appropriate since, in general, the high voltage (HV) side voltage is not known; instead, the tap setting can be determined from the optimal VCT once the HV side voltage is known. The objective function is composed of the distribution line loss cost, the peak power loss cost, and the capacitors' and VRs' capital, operation and maintenance costs. The constraints are limits on bus voltage and feeder current, along with the VR taps: the bus voltage should be maintained within the standard level, the feeder current should not exceed the feeder-rated current, and the taps adjust the output voltage of the VRs to between 90 and 110% of their input voltages. For validation of the proposed method, the 18-bus IEEE system is used. The results are compared with prior publications to illustrate the benefit of the employed technique. The results also show that the lowest-cost planning for the voltage profile is achieved if a combination of capacitors, VRs and VCTs is considered.
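
Schematically (an illustrative form only; the exact cost terms and coefficients follow the paper), the planning problem has the structure:

\[ \min \; C_{\text{line loss}} + C_{\text{peak loss}} + \sum_{k \in \{\text{capacitors, VRs}\}} \big( C_k^{\text{capital}} + C_k^{\text{O\&M}} \big) \quad \text{s.t.} \quad V_{\min} \le V_i \le V_{\max}, \quad I_j \le I_j^{\text{rated}}, \quad 0.9\,V_{\text{in}}^{\text{VR}} \le V_{\text{out}}^{\text{VR}} \le 1.1\,V_{\text{in}}^{\text{VR}}. \]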

Relevance: 10.00%

Abstract:

Reliable infrastructure assets impact significantly on quality of life and provide a stable foundation for economic growth and competitiveness. Decisions about the way assets are managed are of utmost importance in achieving this. Timely renewal of infrastructure assets supports reliability and maximum utilisation of infrastructure and enables business and community to grow and prosper. This research initially examined a framework for asset management decisions and then focused on asset renewal optimisation and renewal engineering optimisation in depth. This study had four primary objectives.

The first was to develop a new Asset Management Decision Framework (AMDF) for identifying and classifying asset management decisions. The AMDF was developed by applying multi-criteria decision theory, classical management theory and life cycle management. The AMDF is an original and innovative contribution to asset management in that:
· it is the first framework to provide guidance for developing asset management decision criteria based on fundamental business objectives;
· it is the first framework to provide a decision context identification and analysis process for asset management decisions; and
· it is the only comprehensive listing of asset management decision types developed from first principles.

The second objective of this research was to develop a novel multi-attribute Asset Renewal Decision Model (ARDM) that takes account of financial, customer service, health and safety, environmental and socio-economic objectives. The unique feature of this ARDM is that it is the only model to optimise timing of asset renewal with respect to fundamental business objectives.

The third objective of this research was to develop a novel Renewal Engineering Decision Model (REDM) that uses multiple criteria to determine the optimal timing for renewal engineering. The unique features of this model are that:
· it is a novel extension to existing real options valuation models in that it uses overall utility rather than present value of cash flows to model engineering value; and
· it is the only REDM that optimises timing of renewal engineering with respect to fundamental business objectives.

The final objective was to develop and validate an Asset Renewal Engineering Philosophy (AREP) consisting of three principles of asset renewal engineering. The principles were validated using a novel application of real options theory. The AREP is the only renewal engineering philosophy in existence. The original contributions of this research are expected to enrich the body of knowledge in asset management through effectively addressing the need for an asset management decision framework, asset renewal and renewal engineering optimisation based on fundamental business objectives, and a novel renewal engineering philosophy.

Relevance: 10.00%

Abstract:

To obtain minimum-time or minimum-energy trajectories for robots, it is necessary to employ planning methods which adequately consider the platform's dynamic properties. A variety of sampling-based, graph-based and local receding-horizon optimisation methods have previously been proposed. These typically use simplified kinodynamic models to avoid the significant computational burden of solving this problem in a high-dimensional state space. In this paper, we investigate solutions from the class of pseudospectral optimisation methods, which have grown in favour amongst the optimal control community in recent years. These methods have high computational efficiency and rapid convergence properties. We present a practical application of such an approach to the robot path planning problem to provide a trajectory that considers the robot's dynamic properties. We extend the existing literature by augmenting the path constraints with sensed obstacles rather than predefined analytical functions, to enable real-world application.
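
As background, the pseudospectral transcription itself (stated here in its standard, generic form) approximates the state with Lagrange interpolating polynomials at a small number of collocation nodes and enforces the dynamics through a differentiation matrix:

\[ x(\tau) \approx \sum_{j=0}^{N} x(\tau_j)\, \ell_j(\tau), \qquad \dot{x}(\tau_k) \approx \sum_{j=0}^{N} D_{kj}\, x(\tau_j) = f\big(x(\tau_k), u(\tau_k)\big) \ \text{at the collocation nodes,} \]

which converts the continuous optimal control problem into a finite-dimensional nonlinear program that can be handled by standard NLP solvers.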

Relevance: 10.00%

Abstract:

A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require the data to be accurate, timely and lossless; because of random transmission delays and packet losses, the performance of a control system may deteriorate badly, and the system may even be rendered unstable. The main challenge of NCS design is to maintain and improve stable control performance of an NCS, and to achieve this, both communication and control methodologies have to be designed appropriately.

In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these networks suitable for control systems in industrial environments. From the networking perspective, communication protocols need to be designed to satisfy NCS communication requirements such as real-time communication and high-precision clock consistency. From the control perspective, methods to compensate for network-induced delays and packet losses are important for NCS design.

To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft-real-time control applications are modelled using a Markov chain in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern; the Markov chain model accurately captures the tradeoff between real-time performance and throughput. Furthermore, a cross-layer optimisation scheme featuring application-layer flow rate adaptation is designed to achieve the tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, which tackles simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
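
To make the compensation idea concrete, the following is a minimal, generic sketch of model-based delay compensation (an illustration only, with an assumed linear plant; it is not the thesis's specific predictive method): when the freshest measurement available at the controller is d samples old, the plant model is rolled forward over the d buffered control inputs to estimate the current state.

import numpy as np

# Assumed discrete-time plant x[k+1] = A x[k] + B u[k] (illustrative values only).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])

def predict_current_state(x_delayed, u_buffer):
    """Compensate for a d-step network delay by rolling the model forward
    from a d-step-old measurement using the controls applied since then."""
    x = x_delayed
    for u in u_buffer:          # u_buffer holds the last d applied control inputs
        x = A @ x + B @ u
    return x

# Example: the measurement is 3 samples old; a constant input was held meanwhile.
x_old = np.array([[1.0], [0.0]])
u_hist = [np.array([[0.2]])] * 3
print(predict_current_state(x_old, u_hist))

If a packet is lost entirely, the same rollout can simply be extended by one more step using the last input that was actually applied.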

Relevance: 10.00%

Abstract:

Estimating and predicting the degradation processes of engineering assets is crucial for reducing cost and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a good health state. A practical difficulty in condition-based maintenance (CBM) is that, in most situations, degradation indicators extracted from CM data can only partially reveal asset health states. Underestimating this uncertainty in the relationships between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using indicators that only partially reveal health states. However, existing state space models of asset degradation processes largely depend on assumptions such as discrete time, discrete state, linearity and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets.

This research proposes a Gamma-based state space model, free of the discrete time, discrete state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model under various maintenance strategies; optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies in MATLAB are performed, and case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry are also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately, and that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the gearbox accelerated life test better than linear and Gaussian state space models. Furthermore, both the simulation and case studies show that the prediction algorithm based on the Gamma-based state space model can accurately identify the mean value and confidence interval of asset remaining useful lives. The simulation study also shows that the proposed maintenance strategy optimisation method based on the POSMDP is more flexible than one that assumes a predetermined strategy structure and uses renewal theory, and that it obtains more cost-effective strategies than a recently published maintenance strategy optimisation method by optimising the next maintenance activity and the waiting time until the next maintenance activity simultaneously.
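
For reference, the gamma process that such a model typically builds on (stated generically; the thesis's state space formulation adds an observation equation for the partially revealing indicators on top of this) has independent, non-negative increments:

\[ X(t + \Delta t) - X(t) \sim \mathrm{Gamma}\big(\alpha\,\Delta t,\; \beta\big), \]

so the latent degradation path \(X(t)\) is non-decreasing, which suits monotonic, irreversible degradation without linear or Gaussian assumptions.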

Relevance: 10.00%

Abstract:

Any incident on a motorway can potentially be followed by secondary crashes. Rear-end crashes can also occur as a result of queue formation downstream of high-speed platoons. To decrease the occurrence of secondary and rear-end crashes, Variable Speed Limits (VSL) can be applied to protect the queue formed downstream. This paper focuses on fine-tuning the Queue Protection algorithm of VSL. Three performance indicators (activation time, deactivation time and number of false alarms) are selected to optimise the Queue Protection algorithm. A calibrated microscopic traffic simulation model of the Pacific Motorway in Brisbane is used for the optimisation. The performance of VSL during an incident and during heavy congestion, and the resulting benefits, are presented in the paper.
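
One simple way to combine the three indicators into a single tuning criterion (an illustrative weighted form, not necessarily the exact criterion used in the paper) is

\[ J = w_1\, t_{\text{activation}} + w_2\, t_{\text{deactivation}} + w_3\, n_{\text{false alarms}}, \]

minimised over the Queue Protection algorithm's threshold parameters, with the weights expressing the relative importance of fast activation, timely deactivation and few false alarms.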

Relevance: 10.00%

Abstract:

There are many applications in aeronautics where strong couplings exist between disciplines. One practical example is Unmanned Aerial Vehicle (UAV) automation, where there is strong coupling between operational constraints, aerodynamics, vehicle dynamics, mission and path planning. UAV path planning can be done either online or offline. Online path-planning optimisation on UAVs with high-performance computation is not yet at the level of its ground-based offline counterpart, mainly because of the volume, power and weight limitations of the UAV; some small UAVs do not have the computational power needed for certain optimisation and path-planning tasks. In this paper, we describe an optimisation method which can be applied to Multi-disciplinary Design Optimisation problems and UAV path-planning problems. Hardware-based design optimisation techniques are used. The power and physical limitations of the UAV, which may not be a problem for PC-based solutions, can be addressed by utilising a Field Programmable Gate Array (FPGA) as an algorithm accelerator. The inevitable latency produced by the iterative process of an Evolutionary Algorithm (EA) is concealed by exploiting the parallelism within the dataflow paradigm of the EA on an FPGA architecture. Results compare software PC-based solutions and hardware-based solutions for benchmark mathematical problems as well as a simple real-world engineering problem. The results also indicate the practicality of the method, which can be used for more complex single- and multi-objective coupled problems in aeronautical applications.
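
The latency-hiding idea can be illustrated in software terms (a simplified Python sketch using thread-level concurrency; on an FPGA the same effect is obtained through pipelined dataflow rather than threads, and this is not the paper's implementation):

from concurrent.futures import ThreadPoolExecutor
import random

def fitness(candidate):
    # Placeholder objective; in a UAV path-planning EA this would be an
    # expensive trajectory evaluation (e.g. a call to an external simulator)
    # whose latency can be overlapped across candidates.
    return sum((x - 0.5) ** 2 for x in candidate)

def evaluate_population(population, workers=8):
    """Dispatch all fitness evaluations concurrently instead of one by one."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))

population = [[random.random() for _ in range(4)] for _ in range(32)]
print(min(evaluate_population(population)))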

Relevance: 10.00%

Abstract:

A number of Game Strategies (GS) have been developed in past decades. They have been used in the fields of economics, engineering, computer science and biology due to their efficiency in solving design optimization problems. In addition, research in multi-objective (MO) and multidisciplinary design optimization (MDO) has focused on developing robust and efficient optimization methods to produce a set of high-quality solutions at low computational cost. In this paper, two optimization techniques are considered: the first uses multi-fidelity hierarchical Pareto optimality; the second uses the combination of two Game Strategies, Nash-equilibrium and Pareto optimality. The paper shows how Game Strategies can be hybridised and coupled to Multi-Objective Evolutionary Algorithms (MOEA) to accelerate convergence and to produce a set of high-quality solutions. Numerical results obtained from both optimization methods are compared in terms of computational expense and model quality. The benefits of using Hybrid-Game Strategies are clearly demonstrated.
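
For completeness, the two underlying notions can be stated briefly (standard definitions): a design x Pareto-dominates a design y when

\[ f_i(x) \le f_i(y) \ \ \forall i \quad \text{and} \quad f_j(x) < f_j(y) \ \text{for at least one } j, \]

while a Nash game splits the design variables among players, each optimising its own objective with the other players' variables held fixed; at the Nash equilibrium no player can improve its objective unilaterally.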

Relevance: 10.00%

Abstract:

The use of adaptive wing/aerofoil designs is being considered, as they are promising techniques in aeronautics/aerospace: they can reduce aircraft emissions and improve the aerodynamic performance of manned or unmanned aircraft. This paper investigates robust design and optimization for one type of adaptive technique: an active flow control bump at transonic flow conditions on a natural laminar flow aerofoil. The concept of using a shock control bump is to control the supersonic flow on the suction/pressure side of a natural laminar flow aerofoil, delaying shock occurrence (weakening its strength) or boundary-layer separation. Such an active flow control technique reduces total drag at transonic speeds through a reduction of wave drag. The location of boundary-layer transition can influence the position and structure of the supersonic shock on the suction/pressure side of the aerofoil, and the boundary-layer transition position is treated as an uncertain design parameter because of factors such as surface contamination or surface erosion. This paper studies shock-control-bump shape design optimization using robust evolutionary algorithms with uncertainty in the boundary-layer transition location. The optimization method is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. Two test cases are conducted: the first assumes the boundary-layer transition position is at 45% of chord from the leading edge, and the second considers robust design optimization of the shock control bump under variability in the boundary-layer transition position. The numerical results show that the optimization method, coupled with the uncertainty design techniques, produces Pareto optimal shock-control-bump shapes which have low sensitivity and high aerodynamic performance while achieving a significant total drag reduction.
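
A common way to pose such a robust objective (a generic formulation under the stated uncertainty, not necessarily the one used here) is to optimise statistics of the drag over the uncertain transition location \(x_{tr}\):

\[ \min_{\text{bump shape}} \; \Big( \mathbb{E}_{x_{tr}}\big[C_D\big],\; \sigma_{x_{tr}}\big[C_D\big] \Big), \]

i.e. minimise both the mean drag and its variability across the range of possible boundary-layer transition positions, which is what "low sensitivity and high aerodynamic performance" expresses.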

Relevance: 10.00%

Abstract:

This study investigates the application of two advanced optimization methods for solving the active flow control (AFC) device shape design problem and compares their optimization efficiency in terms of computational cost and design quality. The first optimization method uses a hierarchical asynchronous parallel multi-objective evolutionary algorithm, and the second uses an evolutionary algorithm hybridised with Nash-Game strategies (Hybrid-Game). Both optimization methods are based on a canonical evolution strategy and incorporate the concepts of parallel computing and asynchronous evaluation. One type of AFC device, the shock control bump (SCB), is considered and applied to a natural laminar flow (NLF) aerofoil. The SCB is used to decelerate the supersonic flow on the suction/pressure side of a transonic aerofoil, which delays shock occurrence. Such an active flow control technique reduces total drag at transonic speeds, which is of special interest for commercial aircraft. Numerical results show that the Hybrid-Game helps an EA to accelerate the optimization process. From a practical point of view, applying an SCB on both the suction and pressure sides significantly reduces transonic total drag and improves the lift-to-drag (L/D) value when compared to the baseline design.

Relevance: 10.00%

Abstract:

Ocean processes are dynamic, complex, and occur on multiple spatial and temporal scales. To obtain a synoptic view of such processes, ocean scientists collect data over long time periods. Historically, measurements were continually provided by fixed sensors, e.g., moorings, or gathered from ships. Recently, an increase in the utilization of autonomous underwater vehicles has enabled a more dynamic data acquisition approach. However, we still do not utilize the full capabilities of these vehicles. Here we present algorithms that produce persistent monitoring missions for underwater vehicles by balancing path following accuracy and sampling resolution for a given region of interest, which addresses a pressing need among ocean scientists to efficiently and effectively collect high-value data. More specifically, this paper proposes a path planning algorithm and a speed control algorithm for underwater gliders, which together give informative trajectories for the glider to persistently monitor a patch of ocean. We optimize a cost function that blends two competing factors: maximize the information value along the path, while minimizing deviation from the planned path due to ocean currents. Speed is controlled along the planned path by adjusting the pitch angle of the underwater glider, so that higher resolution samples are collected in areas of higher information value. The resulting paths are closed circuits that can be repeatedly traversed to collect long-term ocean data in dynamic environments. The algorithms were tested during sea trials on an underwater glider operating off the coast of southern California, as well as in Monterey Bay, California. The experimental results show significant improvements in data resolution and path reliability compared to previously executed sampling paths used in the respective regions.
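
The blended cost described above can be written generically as (an illustrative form; the weighting and exact terms follow the paper)

\[ J(\mathcal{P}) = \alpha \sum_{p \in \mathcal{P}} I(p) \;-\; (1 - \alpha) \sum_{p \in \mathcal{P}} \delta(p), \]

where \(I(p)\) is the information value at waypoint \(p\), \(\delta(p)\) the expected deviation from the planned path due to ocean currents, and \(\alpha \in [0,1]\) trades the two factors off; the planner searches for the closed circuit that maximises \(J\).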

Relevance: 10.00%

Abstract:

Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions to discriminate between them, selecting the optimal forecasting model is clearly challenging. The aim of this thesis is to thoroughly investigate how effective commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that they can all identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies: QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from both daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions' ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
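
For reference, the two statistical loss functions considered take the standard multivariate forms (with \(H_t\) the forecast covariance matrix and \(\hat{\Sigma}_t\) the volatility proxy):

\[ \mathrm{MSE}_t = \big\| \hat{\Sigma}_t - H_t \big\|_F^2, \qquad \mathrm{QLIKE}_t = \log\big|H_t\big| + \mathrm{tr}\big(H_t^{-1}\,\hat{\Sigma}_t\big). \]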