270 results for particle swarm optimisation
Abstract:
Estimating and predicting degradation processes of engineering assets is crucial for reducing costs and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a good health state. A practical difficulty in condition-based maintenance (CBM) is that, in most situations, degradation indicators extracted from CM data can only partially reveal asset health states. Underestimating this uncertainty in the relationships between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using such partially revealing indicators. However, existing state space models that describe asset degradation processes largely depend on assumptions such as discrete time, discrete state, linearity and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are inconsistent with the nonlinear and irreversible degradation processes of most engineering assets. This research proposes a Gamma-based state space model, free of the discrete time, discrete state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives.
In addition, this research develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model under various maintenance strategies. Optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies in MATLAB are performed; case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry are also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. The results also show that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the gearbox accelerated life test better than linear and Gaussian state space models. Furthermore, both the simulation and case studies show that the prediction algorithm based on the Gamma-based state space model can accurately identify the mean value and confidence interval of asset remaining useful lives. In addition, the simulation study shows that the proposed POSMDP-based maintenance strategy optimisation method is more flexible than one that assumes a predetermined strategy structure and uses renewal theory. Moreover, the simulation study also shows that, by simultaneously optimising the next maintenance activity and the waiting time until it, the proposed method can obtain more cost-effective strategies than a recently published maintenance strategy optimisation method.
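As a minimal illustration of the Gamma-process idea behind the model above, the sketch below simulates monotonically increasing degradation paths and estimates remaining useful life as a mean first-passage time. All parameter values (shape rate, scale, threshold) are assumptions for demonstration, not values from this research:

```python
import random

# Monte Carlo sketch of a Gamma-process degradation model (hypothetical
# parameters): increments over each time step follow a Gamma(SHAPE*DT, SCALE)
# law, so simulated paths are monotonically increasing, matching
# irreversible wear.
SHAPE, SCALE, DT = 0.5, 2.0, 1.0   # assumed shape rate, scale and time step
THRESHOLD = 30.0                   # assumed failure threshold on the indicator

def simulate_rul(current_level, n_paths=2000, seed=42):
    """Estimate remaining useful life as the mean first-passage time of
    simulated degradation paths over the failure threshold."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        level, t = current_level, 0.0
        while level < THRESHOLD:
            level += rng.gammavariate(SHAPE * DT, SCALE)  # non-negative jump
            t += DT
        total += t
    return total / n_paths
```

Because every increment is non-negative, such a model never predicts self-healing, which is one reason a Gamma-based model can fit monotone degradation data better than a linear-Gaussian one.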
Abstract:
Spectrum sensing optimisation techniques maximise the efficiency of spectrum sensing while satisfying a number of constraints. Many optimisation models consider the possibility of the primary user changing activity state during the secondary user's transmission period. However, most ignore the possibility of activity change during the sensing period. The observed primary user signal during sensing can exhibit a duty cycle which has been shown to severely degrade detection performance. This paper shows that (a) the probability of state change during sensing cannot be neglected and (b) the true detection performance obtained when incorporating the duty cycle of the primary user signal can deviate significantly from the results expected with the assumption of no such duty cycle.
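A back-of-envelope check of point (a) can be made under an assumed two-state primary-user model with exponentially distributed holding times (the figures below are illustrative, not taken from the paper): the probability of at least one activity change within a sensing window of length tau is 1 − exp(−tau/T).

```python
import math

# Probability of >= 1 primary-user state change during a sensing window of
# length tau, assuming exponential ON/OFF holding times with mean T.
def p_state_change(tau, mean_holding):
    return 1.0 - math.exp(-tau / mean_holding)

# Even a 1 ms sensing window against a 20 ms mean holding time gives a
# change probability of about 4.9%, which is hard to justify neglecting.
p = p_state_change(0.001, 0.020)
```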
Abstract:
Fractures of long bones are sometimes treated using various types of fracture fixation devices, including internal plate fixators. These are specialised plates used to bridge the fracture gap(s) whilst anatomically aligning the bone fragments. The plate is secured in position by screws. The aim of such a device is to support and promote the natural healing of the bone. When using an internal fixation device, the clinician must decide upon many parameters: for example, the type of plate and where to position it, and how many screws to use and where to position them. While a number of experimental and computational studies concerning the configuration of screws have been reported in the literature, there is still inadequate information available concerning the influence of screw configuration on fracture healing. Because screw configuration influences the amount of flexibility at the fracture site, it has a direct influence on the fracture healing process. Therefore, it is important that the chosen screw configuration does not inhibit the healing process. In addition to its impact on the fracture healing process, screw configuration plays an important role in the distribution of stresses in the plate due to the applied loads. A plate that experiences high stresses is prone to early failure. Hence, the screw configuration used should not encourage the occurrence of high stresses. This project develops a computational program in the Fortran programming language to perform mathematical optimisation that determines the screw configuration of an internal fixation device within constraints on interfragmentary movement while minimising the corresponding stress in the plate. Thus, the optimal solution suggests the positioning and number of screws that satisfy the predefined constraints on interfragmentary movement.
For a set of screw configurations, the interfragmentary displacement and the stress occurring in the plate were calculated by the Finite Element Method. The screw configurations were changed iteratively, and each time the corresponding interfragmentary displacements were compared with the predefined constraints. Additionally, the corresponding stress was compared with the previously calculated stress value to determine whether there was a reduction. These processes were continued until an optimal solution was achieved. The optimisation program has been shown to successfully predict the optimal screw configuration in two cases. The first case was a simplified bone construct, for which the screw configuration solution was comparable with those recommended in the biomechanical literature. The second case was a femoral construct, for which the resultant screw configuration was shown to be similar to those used in clinical cases. The optimisation method and program developed in this study have shown potential for further investigations, with improvements to the optimisation criteria and the efficiency of the program.
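The iterative loop described above can be sketched as a simple hill climb, with the Finite Element solve replaced by a crude stand-in function (the study itself computed interfragmentary displacement and plate stress by FEM in Fortran; every numeric value and the stand-in response below are assumptions for demonstration only):

```python
import random

N_HOLES = 10                       # assumed number of screw holes in the plate
DISP_MIN, DISP_MAX = 0.2, 1.0      # assumed interfragmentary-movement bounds (mm)

def mock_fem(config):
    """Crude stand-in for the FEM solve: screws near the fracture gap (plate
    centre) stiffen the construct, reducing displacement but, in this toy
    response, raising plate stress."""
    stiffness = sum(1.0 / (1.0 + abs(i - (N_HOLES - 1) / 2))
                    for i, screw in enumerate(config) if screw)
    disp = 2.0 / (1.0 + stiffness)          # displacement falls with stiffness
    stress = 40.0 + 25.0 * stiffness        # load funnelled through centre screws
    return disp, stress

def optimise(iters=500, seed=1):
    """Toggle one screw at a time; keep changes that stay within the
    displacement constraints and reduce plate stress."""
    rng = random.Random(seed)
    best = [1] * N_HOLES
    _, best_stress = mock_fem(best)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(N_HOLES)] ^= 1   # add or remove one screw
        if sum(cand) < 4:                   # keep a minimum number of screws
            continue
        disp, stress = mock_fem(cand)
        if DISP_MIN <= disp <= DISP_MAX and stress < best_stress:
            best, best_stress = cand, stress
    return best, mock_fem(best)[0], best_stress
```

The accept rule mirrors the description: a candidate is kept only if its interfragmentary displacement satisfies the predefined constraints and its stress is lower than the best found so far.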
Abstract:
A basic understanding of the relationships between rainfall intensity, duration of rainfall and the amount of suspended particles in stormwater runoff generated from road surfaces has been gained mainly from past washoff experiments using rainfall simulators. Simulated rainfall was generally applied at constant intensities, whereas rainfall temporal patterns during actual storms are typically highly variable. This paper discusses a rationale for the application of the constant-intensity washoff concepts to actual storm event runoff. The rationale is tested using suspended particle load data collected at a road site located in Toowoomba, Australia. Agreement between the washoff concepts and measured data is most consistent for intermediate-duration storms (duration <5 h and >1 h). Particle loads resulting from these storm events increase linearly with average rainfall intensity. Above a threshold intensity, there is evidence to suggest a constant or plateau particle load is reached. The inclusion of a peak discharge factor (maximum 6 min rainfall intensity) enhances the ability to predict particle loads.
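The linear-then-plateau behaviour described above can be written as a one-line piecewise model (the slope and threshold below are illustrative placeholders, not values fitted to the Toowoomba data):

```python
# Particle load rises linearly with average rainfall intensity until a
# threshold, beyond which a supply-limited plateau load is reached.
def particle_load(avg_intensity_mm_h, slope=1.8, threshold_mm_h=12.0):
    """Suspended particle load (arbitrary units) for a storm event."""
    return slope * min(avg_intensity_mm_h, threshold_mm_h)
```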
Abstract:
In this paper we investigate the heuristic construction of bijective s-boxes that satisfy a wide range of cryptographic criteria, including algebraic complexity, high nonlinearity and low autocorrelation, and that have none of the known weaknesses, including linear structures, fixed points and linear redundancy. We demonstrate that power mappings can be evolved (by iterated mutation operators alone) to generate bijective s-boxes with the best known tradeoffs among the considered criteria. The s-boxes found are suitable for use directly in modern encryption algorithms.
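The "iterated mutation" idea can be sketched on a toy 4-bit bijective s-box: swapping two table entries preserves bijectivity, and swaps are kept whenever they do not lower the nonlinearity. (The paper evolves larger power mappings against several criteria at once; this sketch shows only the nonlinearity part, with assumed sizes and iteration counts.)

```python
import random

N = 4                                            # toy s-box width in bits
PARITY = [bin(v).count("1") & 1 for v in range(1 << N)]

def nonlinearity(sbox):
    """2^(N-1) - max|Walsh|/2 over all nonzero output components."""
    worst = 0
    for b in range(1, 1 << N):                   # nonzero output masks
        for a in range(1 << N):                  # input masks
            w = sum(1 if PARITY[a & x] == PARITY[b & s] else -1
                    for x, s in enumerate(sbox))
            worst = max(worst, abs(w))
    return (1 << (N - 1)) - worst // 2

def evolve(iters=800, seed=7):
    rng = random.Random(seed)
    sbox = list(range(1 << N))                   # identity map: nonlinearity 0
    best = nonlinearity(sbox)
    for _ in range(iters):
        i, j = rng.randrange(1 << N), rng.randrange(1 << N)
        sbox[i], sbox[j] = sbox[j], sbox[i]      # swap keeps the box bijective
        nl = nonlinearity(sbox)
        if nl >= best:
            best = nl
        else:
            sbox[i], sbox[j] = sbox[j], sbox[i]  # undo worsening swaps
    return sbox, best
```

Mutation by transposition is what makes the search safe: the candidate can never stop being a permutation, so the bijectivity criterion needs no separate check.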
Abstract:
Single particle analysis (SPA) coupled with high-resolution electron cryo-microscopy is emerging as a powerful technique for the structure determination of membrane protein complexes and soluble macromolecular assemblies. Current estimates suggest that ∼10⁴–10⁵ particle projections are required to attain a 3 Å resolution 3D reconstruction (symmetry dependent). Selecting this number of molecular projections differing in size, shape and symmetry is a rate-limiting step for the automation of 3D image reconstruction. Here, we present SwarmPS, a feature-rich, GUI-based software package to manage large-scale, semi-automated particle picking projects. The software provides cross-correlation and edge-detection algorithms. Algorithm-specific parameters are transparently and automatically determined through user interaction with the image, rather than by trial and error. Other features include multiple image handling (∼10²), local and global particle selection options, interactive image freezing, automatic particle centering, and full manual override to correct false positives and negatives. SwarmPS is user-friendly, flexible, extensible, fast, and capable of exporting boxed-out projection images, or particle coordinates, compatible with downstream image processing suites.
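The cross-correlation picking idea can be illustrated in miniature: slide a small template over an image and report the offset with the highest normalised cross-correlation. This is a pure-Python toy (the data and sizes are invented), not code from SwarmPS, which would operate on full micrographs with far more efficient correlation:

```python
def ncc(patch, template):
    """Normalised cross-correlation of two equal-length flat pixel lists."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = sum((p - mp) ** 2 for p in patch) ** 0.5
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    return num / (dp * dt) if dp and dt else 0.0

def pick(image, template):
    """Return the (row, col) offset with the highest NCC score."""
    th, tw = len(template), len(template[0])
    flat_t = [v for row in template for v in row]
    best, best_pos = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [image[r + i][c + j] for i in range(th) for j in range(tw)]
            score = ncc(patch, flat_t)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```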
Abstract:
There is worldwide interest in reducing aircraft emissions. The difficulty of reducing emissions, including water vapour, carbon dioxide (CO2) and oxides of nitrogen (NOx), stems mainly from the fact that a commercial aircraft is usually designed for a particular optimal cruise altitude but may be requested or required to operate at different altitudes and speeds to achieve a desired or commanded flight plan, resulting in increased emissions. This is a multi-disciplinary problem with multiple trade-offs, such as optimising engine efficiency and minimising fuel burn and emissions while maintaining aircraft separation and air safety. This project presents the coupling of an advanced optimisation technique with mathematical models and algorithms for aircraft emission reduction through flight optimisation. Numerical results show that the method is able to capture a set of useful trade-offs between aircraft range and NOx, and between mission fuel consumption and NOx. In addition, alternative cruise operating conditions, including Mach number and altitude, that produce minimum NOx and CO2 (minimum mission fuel weight) are suggested.
Abstract:
This paper investigates the field programmable gate array (FPGA) approach for multi-objective and multi-disciplinary design optimisation (MDO) problems. One class of optimisation methods that has been well studied and established for large and complex problems, such as those inherent in MDO, is multi-objective evolutionary algorithms (MOEAs). The MOEA nondominated sorting genetic algorithm II (NSGA-II) is implemented in hardware on an FPGA chip. Applying the FPGA-based NSGA-II to multi-objective test problem suites has verified the effectiveness of the designed implementation. Results show that NSGA-II on the FPGA performs three orders of magnitude better than the PC-based counterpart.
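The sorting step at the heart of NSGA-II, and the kind of operation an FPGA implementation accelerates, can be sketched as follows for minimisation of every objective (this is the naive repeated-scan version, assuming distinct objective vectors; NSGA-II proper uses a bookkeeping scheme to avoid recomputing dominance):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points):
    """Group objective vectors into Pareto fronts, best front first."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```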
Abstract:
Computational Intelligence Systems (CIS) are a class of advanced software. CIS hold an important position in solving single-objective, reverse/inverse and multi-objective design problems in engineering. This paper hybridises a CIS for optimisation with the concept of Nash equilibrium as an optimisation pre-conditioner to accelerate the optimisation process. The hybridised CIS (Hybrid Intelligence System), coupled to a Finite Element Analysis (FEA) tool and a Computer Aided Design (CAD) system, GiD, is applied to solve an inverse engineering design problem: the reconstruction of High Lift Systems (HLS). Numerical results obtained by the hybridised CIS are compared to the results obtained by the original CIS. The benefits of using the concept of Nash equilibrium are clearly demonstrated in terms of solution accuracy and optimisation efficiency.
Abstract:
There are many applications in aeronautical/aerospace engineering where some values of the design parameters cannot be provided or determined accurately. These values can be related to the geometry (wingspan, length, angles) and/or to operational flight conditions that vary due to the presence of uncertain parameters (Mach number, angle of attack, air density and temperature, etc.). These uncertain design parameters cannot be ignored in engineering design and must be taken into account in the optimisation task to produce more realistic and reliable solutions. In this paper, a robust/uncertainty design method with statistical constraints is introduced to produce a set of reliable solutions which have high performance and low sensitivity. The robust design concept, coupled with Multi-Objective Evolutionary Algorithms (MOEAs), is defined by applying two statistical sampling formulas, the mean and variance/standard deviation, to the optimisation fitness/objective functions. The methodology is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. It is implemented for two practical Unmanned Aerial System (UAS) design problems; the first case considers robust multi-objective (single-disciplinary: aerodynamics) design optimisation and the second considers robust multidisciplinary (aero-structures) design optimisation. Numerical results show that the solutions obtained by the robust design method with statistical constraints have more reliable performance and sensitivity in both aerodynamics and structures when compared to the baseline design.
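The two statistical sampling formulas mentioned above can be sketched as a robust fitness evaluation: score a candidate design by the mean and standard deviation of its objective over sampled operating conditions. The design dictionary, the quadratic drag response and every number below are hypothetical stand-ins, not the paper's UAS models:

```python
import math
import random

def robust_fitness(design, mach_nominal=0.78, mach_sigma=0.02,
                   n_samples=400, seed=3):
    """Mean and standard deviation of a toy drag objective under sampled
    Mach-number uncertainty; both values become objectives to minimise."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n_samples):
        mach = rng.gauss(mach_nominal, mach_sigma)       # sampled condition
        drag = design["drag0"] + design["sens"] * (mach - mach_nominal) ** 2
        vals.append(drag)
    mean = sum(vals) / n_samples
    var = sum((v - mean) ** 2 for v in vals) / n_samples
    return mean, math.sqrt(var)
```

Minimising the standard deviation alongside the mean is what steers the search toward designs whose performance degrades little when the operating conditions drift.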