357 results for Proximal Point Algorithm

at Queensland University of Technology - ePrints Archive


Relevance:

80.00%

Publisher:

Abstract:

With the progressive exhaustion of fossil energy and growing awareness of environmental protection, more attention is being paid to electric vehicles (EVs). Inappropriate siting and sizing of EV charging stations can hinder the development of EVs, the layout of the city traffic network, and the convenience of EV drivers, and can increase network losses and degrade voltage profiles at some nodes. Against this background, the optimal sites of EV charging stations are first identified by a two-step screening method that takes into account environmental factors and the service radius of the charging stations. A mathematical model for the optimal sizing of EV charging stations is then developed, with the minimization of the total cost of the planned charging stations as the objective function, and solved by a modified primal-dual interior point algorithm (MPDIPA). Finally, simulation results on the IEEE 123-node test feeder demonstrate that the developed model and method can not only produce a reasonable planning scheme for EV charging stations, but also reduce network losses and improve the voltage profile.
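
As a rough illustration of the interior-point idea behind MPDIPA (the paper's modified primal-dual algorithm is considerably more sophisticated), the sketch below minimizes a toy linear station-sizing cost under coverage constraints by following the log-barrier central path; all coefficients are invented for the example.

```python
import numpy as np

# Toy sizing problem: continuous charger counts x at three candidate
# sites, minimising cost c.x subject to A x >= b (zone coverage plus
# non-negativity). A barrier-method sketch of the interior-point idea;
# the paper's MPDIPA is a primal-dual method on a richer cost model.
c = np.array([4.0, 3.0, 5.0])                  # per-site cost (invented)
A = np.array([[1.0, 1.0, 0.0],                 # zone 1 served by sites 1, 2
              [0.0, 1.0, 1.0],                 # zone 2 served by sites 2, 3
              [1.0, 0.0, 0.0],                 # x >= 0, one row per site
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
b = np.array([10.0, 8.0, 0.0, 0.0, 0.0])

def barrier_solve(x, mu=10.0, outer=25, inner=200, lr=1e-2):
    """Minimise c.x - mu * sum(log(A x - b)), shrinking mu toward 0."""
    for _ in range(outer):
        for _ in range(inner):
            slack = A @ x - b
            grad = c - mu * A.T @ (1.0 / slack)    # barrier gradient
            x_new = x - lr * grad
            if np.all(A @ x_new - b > 0):          # keep strictly interior
                x = x_new
        mu *= 0.5                                  # approach the LP optimum
    return x

print(barrier_solve(np.array([8.0, 8.0, 8.0])))    # ~[0, 10, 0] expected
```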

Relevance:

80.00%

Publisher:

Abstract:

A newly developed middle-frequency (2 MHz) inductively coupled plasma (ICP) source with an internal oscillating current is used to treat the surfaces of biodegradable food packaging. The initially hydrophilic packaging becomes hydrophobic after ICP plasma processing. Investigation of the optical emission from hydrocarbon radicals in the Ar/CH4 plasma helps to explain the hydrophobicity of the treated surfaces. © 2008 IEEE.

Relevance:

40.00%

Publisher:

Abstract:

In this study, the authors propose a novel video stabilisation algorithm for mobile platforms with moving objects in the scene. The quality of videos obtained from mobile platforms, such as unmanned airborne vehicles, suffers from jitter caused by several factors. To remove this undesired jitter, accurate estimation of global motion is essential. However, it is difficult to estimate global motion accurately from mobile platforms because of increased estimation errors and noise, and large moving objects in the scene add further estimation error. Currently, only very few motion estimation algorithms have been developed for video collected from mobile platforms, and this paper shows that these algorithms fail when there are large moving objects in the scene. In this study, a theoretical proof is provided that the use of delta optical flow can improve the robustness of video stabilisation in the presence of large moving objects. The authors also propose using sorted arrays of local motions and the selection of feature points to separate outliers from inliers. The proposed algorithm is tested over six video sequences (one from a fixed platform, four from mobile platforms, and one synthetic), three of which contain large moving objects. Experiments show that the proposed algorithm performs well on all of these sequences.
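
A minimal sketch of the core robust-estimation idea, assuming OpenCV: per-feature local motions are tracked, and a componentwise median over the motion vectors rejects outliers contributed by large moving objects. This is a simplified stand-in for the paper's sorted-array inlier selection and delta optical flow analysis, not the authors' algorithm.

```python
import cv2
import numpy as np

def global_translation(prev_gray, curr_gray):
    """Estimate a robust global translation between two frames.

    The median over per-feature motions discards outlier vectors from
    large moving objects, keeping the platform's own jitter motion.
    """
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None:
        return np.zeros(2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    flows = (nxt - pts).reshape(-1, 2)[status.ravel() == 1]
    if len(flows) == 0:
        return np.zeros(2)
    return np.median(flows, axis=0)       # componentwise median of motions

def stabilise(frame, motion):
    """Shift the frame opposite to the estimated jitter."""
    m = np.float32([[1, 0, -motion[0]], [0, 1, -motion[1]]])
    return cv2.warpAffine(frame, m, (frame.shape[1], frame.shape[0]))
```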

Relevance:

30.00%

Publisher:

Abstract:

Web service composition is an important problem in web service based systems: how to build a new value-added web service from existing web services. A web service may have many implementations, all with the same functionality but potentially different QoS values. A significant research problem in web service composition is therefore how to select an implementation for each web service such that the composite web service gives the best overall performance; this is the so-called optimal web service selection problem. There may be mutual constraints between web service implementations. Sometimes, when an implementation is selected for one web service, a particular implementation of another web service must also be selected; this is the so-called dependency constraint. Sometimes, when an implementation of one web service is selected, a set of implementations of another web service must be excluded from the composition; this is the so-called conflict constraint. The optimal web service selection is thus a typical constrained combinatorial optimization problem from the computational point of view. This paper proposes a new hybrid genetic algorithm for the optimal web service selection problem. The hybrid genetic algorithm has been implemented and evaluated, and the results show that it outperforms two existing genetic algorithms when the number of web services and the number of constraints are large.
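
The abstract does not detail the hybrid algorithm itself, so the sketch below shows only the basic shape of a genetic algorithm for this selection problem, handling dependency and conflict constraints with fitness penalties (a common technique, not necessarily the paper's); the QoS values and constraints are invented.

```python
import random

N_SERVICES, N_IMPLS = 5, 4
# Hypothetical QoS score for implementation j of service i (higher is better).
qos = [[random.random() for _ in range(N_IMPLS)] for _ in range(N_SERVICES)]
# Dependency: choosing impl 0 of service 0 forces impl 1 of service 1.
deps = [((0, 0), (1, 1))]
# Conflict: impl 2 of service 2 excludes impl 3 of service 3.
confs = [((2, 2), (3, 3))]

def fitness(chrom):
    score = sum(qos[i][chrom[i]] for i in range(N_SERVICES))
    for (s1, i1), (s2, i2) in deps:       # penalise broken dependencies
        if chrom[s1] == i1 and chrom[s2] != i2:
            score -= 10.0
    for (s1, i1), (s2, i2) in confs:      # penalise violated conflicts
        if chrom[s1] == i1 and chrom[s2] == i2:
            score -= 10.0
    return score

def evolve(pop_size=50, gens=100, pm=0.1):
    pop = [[random.randrange(N_IMPLS) for _ in range(N_SERVICES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SERVICES)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:               # point mutation
                child[random.randrange(N_SERVICES)] = random.randrange(N_IMPLS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # one implementation index per service
```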

Relevance:

30.00%

Publisher:

Abstract:

Railway service is now a major means of transportation in many countries around the world. With increasing populations and expanding commercial and industrial activities, high-quality railway service is highly desirable. Train service usually varies with population activities throughout the day, so train coordination and service regulation are expected to meet daily passenger demand. Dwell time control at stations and a fixed coasting point in each inter-station run are the current practices for regulating train service in most metro railway systems, but they do not always allow flexible and efficient train control and operation. When certain compromises on the train schedule are acceptable, particularly at off-peak hours, coast control is an economical way to balance run-time and energy consumption in railway operation. The capability to identify the starting point for coasting according to current traffic conditions provides the necessary flexibility for train operation. This paper presents an application of genetic algorithms (GA) to search for the appropriate coasting point(s) and investigates possible improvements in the fitness of genes. Single and multiple coasting point control with a simple GA are developed, and the corresponding train movement is examined. Further, a hierarchical genetic algorithm (HGA) is introduced to identify the number of coasting points required by the traffic conditions, and Minimum-Allele-Reserve-Keeper (MARK) is adopted as a genetic operator to achieve fitter solutions.
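
A minimal single-coasting-point sketch under invented train dynamics: each gene is the position at which powering stops, a drive-coast-brake simulation scores the run-time/energy trade-off, and a simple GA searches the position. The paper's GA, HGA and MARK operator are far more elaborate than this.

```python
import random

def simulate(coast_at, length=2000.0, dt=0.1, accel=0.8,
             coast_dec=0.05, brake_dec=1.0, v_max=20.0):
    """Drive-coast-brake run; returns (run time s, traction energy J/kg)."""
    x = v = t = energy = 0.0
    while x < length and t < 600.0:
        if x >= length - v * v / (2 * brake_dec):
            a = -brake_dec                       # brake into the platform
        elif x < coast_at:
            a = accel if v < v_max else 0.0      # powered phase
            energy += a * v * dt                 # traction work per unit mass
        else:
            a = -coast_dec                       # coasting: resistance only
        v = max(v + a * dt, 0.0)
        if v == 0.0 and x < length:
            return 600.0, energy                 # stalled: infeasible gene
        x += v * dt
        t += dt
    return t, energy

def fitness(coast_at, target_time=160.0, w=0.1):
    t, e = simulate(coast_at)
    return -(abs(t - target_time) + w * e)       # run-time vs energy

def ga(pop_size=30, gens=60):
    pop = [random.uniform(200.0, 2000.0) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]                          # keep the fittest genes
        pop = elite + [min(max(random.gauss(random.choice(elite), 50.0),
                               0.0), 2000.0)
                       for _ in range(pop_size - 10)]
    return max(pop, key=fitness)

print(ga())   # metres from the departure station at which to start coasting
```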

Relevance:

30.00%

Publisher:

Abstract:

Balancing the provision of a high quality of service against a tight operating budget is one of the biggest challenges for most metro railway operators around the world. Conventionally, one possible way for the operator to adjust the timetable is to alter the stop time at stations, provided that other system constraints, such as the characteristics of the traction equipment, are not violated. Yet this is not an effective, flexible or economical method, because the run-time of a train cannot be extended without limit and a balance between run-time and energy consumption has to be maintained. Modifying or installing a new signalling system not only increases capital cost but also disrupts normal train service. Therefore, to procure a more effective, flexible and economical means of improving the quality of service, optimisation of train performance through coasting point identification has become attractive and popular. However, identifying the necessary starting points for coasting under current service conditions is no simple task, because train movement is governed by a large number of factors, most of which are non-linear and inter-dependent. This paper presents an application of genetic algorithms (GA) to search for the appropriate coasting points and investigates possible improvements in computation time and in the fitness of genes.

Relevance:

30.00%

Publisher:

Abstract:

Circuit-breakers (CBs) experience electrical stresses when restrikes occur during capacitor bank operation. These stresses arise from the overvoltages across the CB, the interrupting currents and the rate of rise of recovery voltage (RRRV), and they also depend on the type of system grounding and on the dielectric strength curve used. This study demonstrates a restrike waveform predictive model for an SF6 CB that considers both grounded and ungrounded systems, compares the computational accuracy obtained with the cold withstand and hot recovery dielectric strength curves, and includes point-on-wave (POW) recommendations for assessing how the CB's remaining life can be extended. The stresses on an SF6 CB in a typical 400 kV system were simulated and the results are presented. The simulated restrike waveforms, with features identified by the wavelet transform, can be used to develop a restrike diagnostic algorithm that locates a substation with breaker restrikes. The study found that the hot withstand dielectric strength curve yields lower magnitudes than the cold withstand curve in the restrike simulations. Computational accuracy improved with the hot withstand dielectric strength curve, and POW-controlled switching can extend the life of an SF6 CB.
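
As a small illustration of using the wavelet transform to pick out restrike-like features, the sketch below (assuming the PyWavelets package and an entirely synthetic waveform) flags a fast-fronted burst through the finest-scale detail coefficients; the paper's diagnostic development is of course more involved than this.

```python
import numpy as np
import pywt

# Synthetic voltage waveform: a 50 Hz fundamental with an injected
# high-frequency burst standing in for a restrike transient.
fs = 100_000                                   # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
v = np.sin(2 * np.pi * 50 * t)
v[5000:5200] += 0.4 * np.sin(2 * np.pi * 30_000 * t[:200])

# Fine-scale DWT detail coefficients respond to the fast-fronted edge
# while remaining small for the power-frequency component.
coeffs = pywt.wavedec(v, 'db4', level=5)
d1 = np.abs(coeffs[-1])                        # finest detail band
threshold = 5 * np.median(d1) / 0.6745         # robust noise-floor estimate
hits = np.where(d1 > threshold)[0]
if hits.size:
    print(f"restrike-like transient near t = {hits[0] * 2 / fs:.4f} s")
```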

Relevance:

30.00%

Publisher:

Abstract:

Recently, the application of the quasi-steady-state approximation (QSSA) to the stochastic simulation algorithm (SSA) was suggested for speeding up stochastic simulations of chemical systems that involve both relatively fast and slow chemical reactions [Rao and Arkin, J. Chem. Phys. 118, 4999 (2003)], and further work has led to the nested and slow-scale SSA. Improved numerical efficiency is obtained by respecting the vastly different time scales characterizing the system and then advancing only the slow reactions exactly, based on a suitable approximation to the fast reactions. We considerably extend these works by applying the QSSA to numerical methods for the direct solution of the chemical master equation (CME), in particular to the finite state projection algorithm [Munsky and Khammash, J. Chem. Phys. 124, 044104 (2006)], in conjunction with Krylov methods. In addition, we point out some important connections to the literature on the (deterministic) total QSSA (tQSSA) and place the stochastic analogue of the QSSA within the more general framework of aggregation of Markov processes. We demonstrate the new methods on four examples: Michaelis–Menten enzyme kinetics, double phosphorylation, the Goldbeter–Koshland switch, and the mitogen-activated protein kinase cascade. Overall, we report dramatic improvements from applying the tQSSA to the CME solver.
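
For context, here is a minimal sketch of the unreduced SSA (Gillespie's direct method) for the Michaelis–Menten system; the QSSA-based approaches above gain their speed by avoiding the explicit simulation of every fast binding/unbinding event that this loop performs. Rate constants and copy numbers are illustrative only.

```python
import numpy as np

# Direct-method SSA for Michaelis-Menten kinetics: E + S <-> ES -> E + P.
k1, k2, k3 = 1e-3, 1e-2, 0.1
state = np.array([100, 1000, 0, 0])      # copy numbers [E, S, ES, P]
stoich = np.array([[-1, -1, +1, 0],      # binding (fast)
                   [+1, +1, -1, 0],      # unbinding (fast)
                   [+1,  0, -1, +1]])    # catalysis (slow)
rng = np.random.default_rng(0)

t, t_end = 0.0, 50.0
while t < t_end:
    e, s, es, p = state
    props = np.array([k1 * e * s, k2 * es, k3 * es])   # propensities
    total = props.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)        # time to the next reaction
    j = rng.choice(3, p=props / total)       # which reaction fires
    state = state + stoich[j]
print(t, state)
```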

Relevance:

30.00%

Publisher:

Abstract:

The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient relies on 3D models of long bones with accurate geometric representation. 3D data are usually available from computed tomography (CT) scans of human cadavers, which generally represent the over-60 age group. Thus, although half of the seriously injured population is aged 30 or below, virtually no data exist from these younger age groups to inform the design of implants that optimally fit them, so relevant bone data from these age groups are required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of long bones; however, to use MRI effectively for 3D reconstruction of human long bones, further validation using whole long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that many researchers are not trained to perform, so an accurate but relatively simple segmentation method is needed for CT and MRI data. Furthermore, some limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by higher-field 3T MRI; however, the signal-to-noise ratio (SNR) gain at the bone-soft tissue interface has not been quantified in the literature. Because MRI scanning of long bones takes a long time, the acquired images are also prone to motion artefacts from random movements of the subject's limbs. One observed artefact is a step artefact, believed to arise from random movements of the volunteer during a scan, which must be corrected before the models can be used for implant design.

The first aim of this study was therefore to investigate two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmenting MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for the reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve on the poor articular contrast and long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques.

The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed with a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated by the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of the 2D images and combining them to generate the 3D model. Models generated by these methods were compared with a reference standard generated from mechanical contact scans of the denuded bone. The second aim was addressed using CT and MRI scans of five ovine femora segmented with the multilevel threshold method, and a surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare 1.5T and 3T MRI, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer; the images obtained with identical protocols were compared by means of the SNR and the contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. To correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and corrected using an alignment method based on the iterative closest point (ICP) algorithm.

The study demonstrated that the multilevel threshold approach, combined with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm, a statistically significant difference between the two methods. In comparison, the Canny edge detection method produced an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm, compared with 0.18 mm for CT-based models; this difference was not statistically significant. 3T MRI improved the contrast at the bone-muscle interface in most anatomical regions of the femora and tibiae, potentially reducing the inaccuracies caused by the poor contrast of the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, with errors of 0.32 ± 0.02 mm against the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representation and is therefore a potential alternative to the current gold standard, CT imaging.
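
A minimal per-slice sketch of the two simple segmentation routes compared above, assuming NumPy, SciPy and scikit-image; Otsu's method stands in here for the study's threshold selection method, which the abstract does not specify.

```python
import numpy as np
from scipy import ndimage
from skimage import feature, filters, measure

def segment_slice(img, method="threshold"):
    """Return a binary bone mask for one CT/MRI slice."""
    if method == "threshold":
        level = filters.threshold_otsu(img)    # stand-in threshold selection
        return img > level
    edges = feature.canny(img, sigma=2.0)      # bone contour edges
    return ndimage.binary_fill_holes(edges)    # fill between the contours

def bone_surface(slices, method="threshold"):
    """Stack per-slice masks (multilevel thresholding would recompute the
    level per proximal/diaphyseal/distal block) and extract the surface."""
    mask = np.stack([segment_slice(s, method) for s in slices])
    verts, faces, _, _ = measure.marching_cubes(mask.astype(float), 0.5)
    return verts, faces       # surface to compare against the reference
```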

Relevance:

30.00%

Publisher:

Abstract:

A composite SaaS (Software as a Service) is software comprising several software components and data components. The composite SaaS placement problem is to determine where each of the components should be deployed in a cloud computing environment such that the performance of the composite SaaS is optimal. From the computational point of view, the composite SaaS placement problem is a large-scale combinatorial optimization problem, and an Iterative Cooperative Co-evolutionary Genetic Algorithm (ICCGA) was previously proposed for it. The ICCGA can find solutions of reasonable quality, but its computation time is noticeably long. Aiming to improve the computation time, we propose an unsynchronized Parallel Cooperative Co-evolutionary Genetic Algorithm (PCCGA) in this paper. Experimental results show that the PCCGA not only has a shorter computation time, but also generates better-quality solutions than the ICCGA.
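
A compact sketch of the cooperative co-evolution pattern underlying both algorithms, under an invented per-component cost model: each subpopulation evolves the placement of one component, and an individual is scored by combining it with the current best representative of every other subpopulation. The PCCGA essentially runs the per-component loop in parallel without synchronisation barriers.

```python
import random

# Invented cost model: cost[c][m] is the cost of placing component c on
# machine m (the paper's SaaS performance model is much richer).
N_COMPONENTS, N_MACHINES, POP = 6, 4, 20
cost = [[random.random() for _ in range(N_MACHINES)]
        for _ in range(N_COMPONENTS)]

def total_cost(placement):
    return sum(cost[c][m] for c, m in enumerate(placement))

subpops = [[random.randrange(N_MACHINES) for _ in range(POP)]
           for _ in range(N_COMPONENTS)]
best = [sp[0] for sp in subpops]             # current collaborators

for generation in range(100):
    for c in range(N_COMPONENTS):            # evolve one component at a time
        def collab_cost(gene):
            trial = best[:]                  # plug gene into best-of-rest
            trial[c] = gene
            return total_cost(trial)
        subpops[c].sort(key=collab_cost)
        survivors = subpops[c][:POP // 2]    # truncation selection
        subpops[c] = survivors + [random.randrange(N_MACHINES)
                                  for _ in range(POP - len(survivors))]
        best[c] = subpops[c][0]              # update the representative

print(best, total_cost(best))
```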

Relevance:

30.00%

Publisher:

Abstract:

The primary objective of this study is to develop a robust queue estimation algorithm for motorway on-ramps. Real-time queue information is the most vital input for dynamic queue management, which can treat long queues on metered on-ramps in a more sophisticated manner. The proposed algorithm is developed within the Kalman filter framework. The fundamental conservation model is used to project the system state (the queue size) from the flow-in and flow-out measurements, and the projection is then updated through the measurement equation using time occupancies from mid-link and link-entrance loop detectors. The study also proposes a novel single point correction method, which resets the estimated system state to eliminate the counting errors that accumulate over time. In the performance evaluation, the proposed algorithm demonstrated accurate and reliable performance and consistently outperformed the benchmark Single Occupancy Kalman filter (SOKF) method. The improvements over SOKF are 62% and 63% on average in terms of estimation accuracy (MAE) and reliability (RMSE), respectively. The benefit of the algorithm's innovative concepts is well justified by the improved estimation performance in congested ramp traffic conditions, where long queues may significantly compromise the benchmark algorithm's performance.
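
A scalar Kalman-filter sketch following the structure described above: the conservation model drives the prediction, an occupancy-derived queue measurement drives the update, and a simple reset stands in for the single point correction. The noise variances and the occupancy-to-queue conversion are assumptions, not the paper's calibrated values.

```python
def estimate_queue(flow_in, flow_out, occ_queue, q_var=4.0, r_var=25.0):
    """flow_in/flow_out: vehicles per interval from ramp detectors;
    occ_queue: queue size (veh) inferred from mid-link and link-entrance
    time occupancies. Returns the estimated queue per interval."""
    q_hat, p = 0.0, 1.0
    estimates = []
    for f_in, f_out, z in zip(flow_in, flow_out, occ_queue):
        # Predict: vehicle conservation on the ramp (counting errors
        # accumulate here, which the correction step must remove).
        q_hat += f_in - f_out
        p += q_var
        # Update: blend in the occupancy-based measurement.
        k = p / (p + r_var)                # Kalman gain
        q_hat += k * (z - q_hat)
        p *= (1.0 - k)
        # Single-point-correction stand-in: an empty detector implies an
        # empty queue, so hard-reset the accumulated drift.
        if z == 0:
            q_hat = 0.0
        q_hat = max(q_hat, 0.0)
        estimates.append(q_hat)
    return estimates
```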

Relevance:

30.00%

Publisher:

Abstract:

Several approaches to active noise control (ANC) systems have been introduced in the literature. Since the FxLMS algorithm appears to be the best choice for the controller filter, researchers tend to improve the performance of ANC systems by enhancing and modifying this algorithm. This paper proposes a new version of the FxLMS algorithm. In many ANC applications, an online secondary path modelling method that uses white noise as a training signal is required to ensure convergence of the system. This paper also proposes a new approach to online secondary path modelling in feedforward ANC systems: the proposed algorithm stops injecting the white noise at the optimum point and reactivates the injection during operation, if needed, to maintain the performance of the system. The combination of the new FxLMS variant and the avoidance of continuous white noise injection makes the system more practical and improves its noise attenuation performance. Comparative simulation results indicate the effectiveness of the proposed approach.
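
For reference, a baseline feedforward FxLMS loop (not the paper's modified version): the reference signal is filtered through the secondary-path model before the LMS update. For brevity this sketch assumes an offline secondary-path model, exactly the assumption that the paper's online modelling scheme removes; all paths and the step size are invented.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
N, L = 20000, 32
x = rng.standard_normal(N)                 # reference noise signal
p = rng.standard_normal(48) * 0.1          # primary path (unknown to the ANC)
s = np.array([0.0, 0.8, 0.3, 0.1])         # secondary (speaker-to-mic) path
s_hat = s                                  # ideal secondary-path model

d = lfilter(p, [1.0], x)                   # disturbance at the error mic
x_f = lfilter(s_hat, [1.0], x)             # filtered-x signal
w = np.zeros(L)                            # adaptive controller taps
xbuf, xfbuf = np.zeros(L), np.zeros(L)
ybuf = np.zeros(len(s))
mu, err = 5e-4, np.zeros(N)

for n in range(N):
    xbuf = np.r_[x[n], xbuf[:-1]]          # controller tap-delay line
    xfbuf = np.r_[x_f[n], xfbuf[:-1]]      # filtered-x tap-delay line
    ybuf = np.r_[w @ xbuf, ybuf[:-1]]      # anti-noise through sec. path
    err[n] = d[n] + s @ ybuf               # residual noise at the mic
    w -= mu * err[n] * xfbuf               # FxLMS gradient step

print(np.mean(err[:1000]**2), np.mean(err[-1000:]**2))  # power should shrink
```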

Relevance:

30.00%

Publisher:

Abstract:

Timely and comprehensive scene segmentation is often a critical step for many high-level mobile robotic tasks. This paper examines a projected-area-based neighbourhood lookup approach, motivated by the need for faster unsupervised segmentation of dense 3D point clouds. The proposed algorithm exploits the projection geometry of a depth camera to find nearest neighbours in time independent of the input data size. Points near depth discontinuities are also detected to reinforce object boundaries in the clustering process. The search method is evaluated on both indoor and outdoor dense depth images and demonstrates significant improvements in speed and precision compared with the commonly used Fast Library for Approximate Nearest Neighbors (FLANN) [Muja and Lowe, 2009].
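
A sketch of the projection-geometry idea: for an organised depth image, a 3D point's nearest neighbours are simply the adjacent pixels, found in constant time regardless of cloud size (unlike a k-d tree or FLANN search), with a depth-jump test marking object boundaries. The camera intrinsics below are placeholders, not values from the paper.

```python
import numpy as np

fx = fy = 525.0            # placeholder depth-camera intrinsics
cx, cy = 319.5, 239.5

def backproject(depth):
    """Organised depth image (H, W) in metres -> (H, W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack((x, y, depth))

def neighbours(depth, r, c, max_jump=0.05):
    """O(1) 4-neighbourhood lookup; a large depth jump marks a depth
    discontinuity (object boundary), so that pixel is not a neighbour."""
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < depth.shape[0] and 0 <= cc < depth.shape[1] \
           and abs(depth[rr, cc] - depth[r, c]) < max_jump:
            out.append((rr, cc))
    return out
```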

Relevance:

30.00%

Publisher:

Abstract:

The placement of the mappers and reducers on machines directly affects the performance and cost of MapReduce computation in cloud computing. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. In this paper we therefore propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it against several other heuristics on solution quality and computation time, using a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics. We also verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement it generates for a benchmark problem with a conventional mapper/reducer placement; the computation using our placement is much cheaper while still meeting the computation deadline.
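
The abstract does not specify the heuristic itself; since the problem generalises bin packing, the sketch below shows the classic first-fit-decreasing heuristic on the underlying packing structure, with mapper/reducer tasks as items and machine capacity as the bin size (all numbers invented).

```python
def first_fit_decreasing(task_sizes, capacity):
    """Assign each task to the first machine with room; returns the list
    of task sizes placed on each machine."""
    machines = []
    for size in sorted(task_sizes, reverse=True):   # largest tasks first
        for m in machines:
            if sum(m) + size <= capacity:
                m.append(size)
                break
        else:
            machines.append([size])                  # open a new machine
    return machines

tasks = [0.6, 0.2, 0.8, 0.3, 0.5, 0.4, 0.7]   # normalised resource demands
print(first_fit_decreasing(tasks, capacity=1.0))
```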