13 results for Gravitational Search Algorithm
in Aston University Research Archive
Abstract:
Non-orthogonal multiple access (NOMA) is emerging as a promising multiple access technology for fifth-generation cellular networks to address the fast-growing mobile data traffic. It applies superposition coding at the transmitters, allowing simultaneous allocation of the same frequency resource to multiple intra-cell users. Successive interference cancellation is used at the receivers to cancel intra-cell interference. User pairing and power allocation (UPPA) is a key design aspect of NOMA. Existing UPPA algorithms are mainly based on exhaustive search methods with extensive computational complexity, which can severely affect NOMA performance. A fast proportional fairness (PF) scheduling based UPPA algorithm is proposed to address the problem. The novel idea is to form user pairs around the users with the highest PF metrics, with pre-configured fixed power allocation. System-level simulation results show that the proposed algorithm is significantly faster than the existing exhaustive search algorithm (seven times faster for the scenario with 20 users) with negligible throughput loss.
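As a rough illustration of the pairing idea, the sketch below forms pairs around the users with the highest PF metrics and applies a pre-configured fixed power split; the rates, average throughputs and partner-selection rule are hypothetical simplifications rather than the exact algorithm evaluated in the paper.

```python
import numpy as np

def pf_pairing(inst_rate, avg_tput, power_strong=0.2, power_weak=0.8):
    """Greedy user pairing sketch: anchor pairs on the users with the highest
    proportional-fairness (PF) metric, then attach the unpaired user whose
    rate differs most, using a fixed power split (illustrative only)."""
    pf = inst_rate / np.maximum(avg_tput, 1e-9)        # PF metric per user
    order = np.argsort(pf)[::-1]                       # highest PF first
    unpaired = set(range(len(pf)))
    pairs = []
    for u in order:
        if u not in unpaired:
            continue
        unpaired.remove(u)
        if not unpaired:
            pairs.append((u, None, 1.0, 0.0))          # leftover user transmits alone
            break
        # partner = remaining user with the largest rate gap to the anchor
        partner = max(unpaired, key=lambda v: abs(inst_rate[u] - inst_rate[v]))
        unpaired.remove(partner)
        pairs.append((u, partner, power_strong, power_weak))
    return pairs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    inst = rng.exponential(5.0, size=20)    # hypothetical achievable rates
    avg = rng.exponential(3.0, size=20)     # hypothetical average throughputs
    for p in pf_pairing(inst, avg):
        print(p)
```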
Abstract:
This work follows a feasibility study (187) which suggested that a process for purifying wet-process phosphoric acid by solvent extraction should be economically viable. The work was divided into two main areas: (i) chemical and physical measurements on the three-phase system, with or without impurities; (ii) process simulation and optimization. The objective was to test the process technically and economically and to optimise the choice of solvent. The chemical equilibria and distribution curves for the system water - phosphoric acid - solvent have been determined for the solvents n-amyl alcohol, tri-n-butyl phosphate, di-isopropyl ether and methyl isobutyl ketone. Both pure phosphoric acid and acid containing known amounts of naturally occurring impurities (FePO4, AlPO4, Ca3(PO4)2 and Mg3(PO4)2) were examined. The hydrodynamic characteristics of the systems were also studied. The experimental results obtained for drop size distribution were compared with those obtainable from Hinze's equation (32) and it was found that they deviated by an amount related to the turbulence. A comprehensive literature survey on the purification of wet-process phosphoric acid by organic solvents has been made. The literature regarding solvent extraction fundamentals and equipment, and optimization methods for the envisaged process, was also reviewed. A modified form of the Kremser-Brown and Souders equation to calculate the number of contact stages was derived. The modification takes into account the special nature of the phosphoric acid distribution curves in the studied systems. The process flow-sheet was developed and simulated. Powell's direct search optimization method was selected, in conjunction with the linear search algorithm of Davies, Swann and Campey. The objective function was defined as the total annual manufacturing cost and the program was employed to find the optimum operating conditions for any one of the chosen solvents. The final results demonstrated the following order of feasibility for purifying wet-process acid: di-isopropyl ether, methyl isobutyl ketone, n-amyl alcohol and tri-n-butyl phosphate.
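The flow-sheet optimisation described above couples Powell's direct-search method with a line-search routine. As a minimal sketch of the outer idea only, the snippet below applies SciPy's Powell implementation to a placeholder annual-cost function; the cost model, variables and numbers are invented for illustration and do not reflect the actual process economics.

```python
from scipy.optimize import minimize

def annual_cost(x):
    """Placeholder total-annual-cost model (hypothetical, for illustration only):
    x[0] = solvent-to-feed ratio, x[1] = number of wash stages (relaxed to a real)."""
    ratio = max(x[0], 0.1)
    stages = max(x[1], 0.5)
    capital = 1.5e5 * stages + 4.0e4 * ratio      # more stages / more solvent raise capital cost
    operating = 3.0e5 / (ratio * stages)          # placeholder: operating cost falls as both increase
    return capital + operating

# Powell's method is derivative-free, matching the direct-search approach
# mentioned in the abstract; the real objective, variables and constraints differ.
result = minimize(annual_cost, x0=[2.0, 4.0], method="Powell")
print(result.x, result.fun)
```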
Abstract:
The reliability of printed circuit board assemblies under dynamic environments, such as those found onboard airplanes, ships and land vehicles, is receiving more attention. This research analyses the dynamic characteristics of a printed circuit board (PCB) supported by edge retainers and plug-in connectors. By modelling the wedge retainer and connector as simply supported boundary conditions with appropriate rotational spring stiffnesses along their respective edges, with the aid of finite element codes, natural frequencies for the board that agree well with experimental natural frequencies are obtained. For a PCB supported by two opposite wedge retainers and a plug-in connector, with its remaining edge free of any restraint, it is found that these real supports behave somewhere between the simply supported and clamped boundary conditions and provide a percentage fixity 39.5% above the classical simply supported case. By using an eigensensitivity method, the rotational stiffnesses representing the boundary supports of the PCB can be updated effectively so that the model represents the dynamics of the PCB accurately. The result shows that the percentage error in the fundamental frequency of the PCB finite element model is substantially reduced from 22.3% to 1.3%. The procedure demonstrates the effectiveness of using only the vibration test frequencies as reference data when the mode shapes of the original untuned model are almost identical to the referenced modes/experimental data. When only modal frequencies are used in model improvement, the analysis is very much simplified. Furthermore, the time taken to obtain the experimental data is substantially reduced as the experimental mode shapes are not required. In addition, this thesis advocates a relatively simple method for determining the support locations that maximise the fundamental frequency of vibrating structures. The technique is simple and does not require any optimisation or sequential search algorithm in the analysis. The key to the procedure is to position the necessary supports so as to eliminate the lower modes of the original configuration. This is accomplished by introducing point supports along the nodal lines of the highest possible mode of the original configuration, so that all the lower modes are eliminated by the introduction of the new or extra supports to the structure. It also proposes inspecting the average driving point residues along the nodal lines of vibrating plates to find the optimal locations of the supports. Numerical examples are provided to demonstrate its validity. When applied to the PCB supported on three sides by two wedge retainers and a connector, it is found that the single point constraint that yields the maximum fundamental frequency is located at the mid-point of the nodal line, namely node 39. This point support has the effect of increasing the structure's fundamental frequency from 68.4 Hz to 146.9 Hz, or 115% higher.
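The frequency-only model updating step can be pictured with a toy example: below, a single boundary stiffness of a two-degree-of-freedom spring-mass model is updated by Newton iteration on a finite-difference frequency sensitivity until the model matches a "measured" fundamental frequency. The model, values and update rule are illustrative stand-ins for the eigensensitivity updating of the PCB finite element model.

```python
import numpy as np

def fundamental_freq(k_boundary, k_internal=5.0e4, m=0.2):
    """First natural frequency (Hz) of a toy 2-DOF spring-mass chain.
    k_boundary plays the role of the boundary/rotational stiffness being
    updated; the model is illustrative, not a PCB finite element model."""
    K = np.array([[k_boundary + k_internal, -k_internal],
                  [-k_internal,              k_internal]])
    M = np.diag([m, m])
    eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sqrt(np.min(eigvals.real)) / (2.0 * np.pi)

def update_stiffness(f_measured, k0=1.0e4, tol=1e-6, max_iter=50):
    """Frequency-only model updating: adjust the boundary stiffness with a
    Newton step based on a finite-difference frequency sensitivity df/dk."""
    k = k0
    for _ in range(max_iter):
        f = fundamental_freq(k)
        if abs(f - f_measured) < tol:
            break
        dk = max(1e-3 * k, 1.0)
        sens = (fundamental_freq(k + dk) - f) / dk      # finite-difference df/dk
        k -= (f - f_measured) / sens
        k = max(k, 1.0)                                  # keep the stiffness physical
    return k

if __name__ == "__main__":
    k_est = update_stiffness(f_measured=60.0)            # hypothetical test frequency
    print(k_est, fundamental_freq(k_est))
```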
Abstract:
This thesis presents research within empirical financial economics with a focus on liquidity and portfolio optimisation, in particular full-scale optimisation (FSO), in the stock market. The discussion of liquidity is focused on measurement issues, including TAQ data processing and the measurement of systematic liquidity factors. Furthermore, a framework for treating the two topics in combination is provided. The liquidity part of the thesis gives a conceptual background to liquidity and discusses several different approaches to liquidity measurement. It contributes to liquidity measurement by providing detailed guidelines on the data processing needed to apply TAQ data to liquidity research. The main focus, however, is the derivation of systematic liquidity factors. The principal component approach to systematic liquidity measurement is refined by the introduction of moving and expanding estimation windows, allowing for time-varying liquidity co-variances between stocks. Under several liquidity specifications, this improves the ability to explain stock liquidity and returns, compared with static-window PCA and market-average approximations of systematic liquidity. The highest ability to explain stock returns is obtained when using inventory cost as the liquidity measure and a moving-window PCA as the systematic liquidity derivation technique. Systematic factors of this setting also have a strong ability to explain cross-sectional liquidity variation. Portfolio optimisation in the FSO framework is tested in two empirical studies. These contribute to the assessment of FSO by expanding its applicability to stock indexes and individual stocks, by considering a wide selection of utility function specifications, and by showing explicitly how the full-scale optimum can be identified using either grid search or the heuristic search algorithm of differential evolution. The studies show that, relative to mean-variance portfolios, FSO performs well in these settings and that the computational expense can be mitigated dramatically by applying differential evolution.
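The full-scale optimisation step can be sketched as follows: differential evolution searches for portfolio weights that maximise an in-sample expected power (CRRA) utility over simulated return data. The utility specification, data and bounds are placeholders, not those examined in the thesis.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, size=(1000, 5))    # hypothetical daily returns, 5 assets

def neg_expected_utility(w, gamma=5.0):
    """Negative in-sample expected power (CRRA) utility of portfolio returns.
    Weights are normalised inside so the optimiser can search the unit box."""
    w = np.abs(w)
    w = w / w.sum() if w.sum() > 0 else np.full_like(w, 1.0 / len(w))
    gross = 1.0 + returns @ w                          # gross portfolio returns
    utility = (gross ** (1.0 - gamma) - 1.0) / (1.0 - gamma)
    return -utility.mean()

result = differential_evolution(neg_expected_utility,
                                bounds=[(0.0, 1.0)] * returns.shape[1],
                                seed=42, tol=1e-8)
weights = np.abs(result.x) / np.abs(result.x).sum()
print(np.round(weights, 3))
```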
Abstract:
This thesis presents a large-scale numerical investigation of heterogeneous terrestrial optical communications systems and the upgrade of fourth-generation terrestrial core-to-metro legacy interconnects to fifth-generation transmission system technologies. Retrofitting (without changing the infrastructure) is considered for commercial applications. ROADMs are crucial enabling components for future core network developments; however, their re-routing ability means signals can be switched mid-link onto sub-optimally configured paths, which raises new challenges in network management. System performance is determined by a trade-off between nonlinear impairments and noise, where the nonlinear signal distortions depend critically on the deployed dispersion maps. This thesis presents a comprehensive numerical investigation into the implementation of phase-modulated signals in transparent, reconfigurable, wavelength division multiplexed terrestrial heterogeneous fibre-optic networks. A key issue during system upgrades is whether differential phase encoded modulation formats are compatible with the cost-optimised dispersion schemes employed in current 10 Gb/s systems. We explore how robust transmission is to inevitable variations in the dispersion mapping and how large the margins are when suboptimal dispersion management is applied. We show that a DPSK transmission system is not drastically affected by reconfiguration from periodic dispersion management to lumped dispersion mapping. A novel DPSK dispersion map optimisation methodology is also presented which drastically reduces the optimisation parameter space and the number of ways to deploy dispersion maps; this alleviates the strenuous computing requirements of optimisation calculations. The thesis thus provides a very efficient and robust way to identify high-performing lumped dispersion compensating schemes for use in heterogeneous RZ-DPSK terrestrial meshed networks with ROADMs. A modified search algorithm which further reduces the number of configuration combinations is also presented. Finally, the results of an investigation of the feasibility of detouring signals locally in multi-path heterogeneous ring networks are presented.
Abstract:
With the eye-catching advances in sensing technologies, smart water networks have been attracting immense research interest in recent years. One of the most overarching tasks in smart water network management is the reduction of water loss (such as leaks and bursts in a pipe network). In this paper, we propose an efficient scheme to position water loss events based on the water network topology. The state-of-the-art approach to this problem, however, utilizes only limited topology information of the water network, that is, only a single shortest path between two sensor locations. Consequently, the accuracy of positioning water loss events is still less than desirable. To resolve this problem, our scheme consists of two key ingredients. First, we design a novel graph topology-based measure, which can recursively quantify the "average distances" for all pairs of sensor locations simultaneously in a water network. This measure substantially improves the accuracy of our positioning strategy by capturing the entire water network topology between every two sensor locations, yet without any sacrifice of computational efficiency. Then, we devise an efficient search algorithm that combines the "average distances" with the differences in the arrival times of the pressure variations detected at sensor locations. Experimental evaluations on a real-world test bed (WaterWiSe@SG) demonstrate that our proposed positioning scheme can identify water loss events more accurately than the best-known competitor.
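The abstract does not spell out the "average distance" measure, so the sketch below substitutes effective resistance (computed from the graph Laplacian pseudo-inverse), a standard all-paths-aware distance, and combines it with arrival-time differences at sensor nodes to rank candidate leak locations; the network, wave speed and sensor readings are hypothetical and the scoring rule is a simplification, not the paper's algorithm.

```python
import numpy as np
import networkx as nx

def resistance_distance_matrix(G):
    """All-pairs effective resistance from the Laplacian pseudo-inverse.
    Used here as a stand-in 'average distance' that reflects every path
    between two nodes, not only a single shortest path."""
    nodes = list(G.nodes)
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    R = d[:, None] + d[None, :] - Lp - Lp.T
    return nodes, R

def locate_leak(G, sensors, arrival_times, wave_speed=1.0):
    """Score every node by how well its distance differences to each sensor
    pair explain the observed arrival-time differences; lowest score wins."""
    nodes, R = resistance_distance_matrix(G)
    idx = {n: i for i, n in enumerate(nodes)}
    best, best_score = None, float("inf")
    for n in nodes:
        score = 0.0
        for a in sensors:
            for b in sensors:
                if a >= b:
                    continue
                pred = (R[idx[n], idx[a]] - R[idx[n], idx[b]]) / wave_speed
                score += (pred - (arrival_times[a] - arrival_times[b])) ** 2
        if score < best_score:
            best, best_score = n, score
    return best

if __name__ == "__main__":
    G = nx.grid_2d_graph(4, 4)                   # toy pipe network
    sensors = [(0, 0), (3, 0), (0, 3), (3, 3)]
    arrival_times = {(0, 0): 2.1, (3, 0): 1.2, (0, 3): 2.9, (3, 3): 2.4}  # hypothetical
    print(locate_leak(G, sensors, arrival_times))
```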
Abstract:
A chip shooter machine for electronic component assembly has a movable feeder carrier, a movable X–Y table carrying a printed circuit board (PCB), and a rotary turret with multiple assembly heads. This paper presents a hybrid genetic algorithm (HGA) to optimize the sequence of component placements and the arrangement of component types to feeders simultaneously for a chip shooter machine, that is, the component scheduling problem. The objective of the problem is to minimize the total assembly time. The GA developed in the paper hybridizes different search heuristics including the nearest-neighbor heuristic, the 2-opt heuristic, and an iterated swap procedure, which is a new improvement heuristic. Compared with the results obtained by other researchers, the performance of the HGA is superior in terms of assembly time. Scope and purpose: When assembling surface mount components on a PCB, it is necessary to obtain the optimal sequence of component placements and the best arrangement of component types to feeders simultaneously in order to minimize the total assembly time. Since it is very difficult to obtain the optimal solution, a GA hybridized with several search heuristics is developed. The type of machine being studied is the chip shooter machine. This paper compares the algorithm with a simple GA and shows that its performance is superior in terms of the total assembly time.
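Two of the local heuristics hybridised into the GA can be sketched directly on a placement sequence: a nearest-neighbour construction followed by 2-opt improvement over hypothetical pad coordinates. The GA machinery and the iterated swap procedure itself are omitted.

```python
import numpy as np

def nearest_neighbour_tour(points):
    """Construction heuristic: always place the closest unvisited component next."""
    remaining = list(range(1, len(points)))
    tour = [0]
    while remaining:
        last = points[tour[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(points[i] - last))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

def tour_length(points, tour):
    return sum(np.linalg.norm(points[tour[i]] - points[tour[i - 1]])
               for i in range(1, len(tour)))

def two_opt(points, tour):
    """Improvement heuristic: reverse segments while that shortens the placement path."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(points, candidate) < tour_length(points, tour):
                    tour, improved = candidate, True
    return tour

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pads = rng.uniform(0, 100, size=(15, 2))          # hypothetical placement positions
    seq = two_opt(pads, nearest_neighbour_tour(pads))
    print(seq, round(tour_length(pads, seq), 1))
```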
Abstract:
This paper presents a hybrid genetic algorithm to optimize the sequence of component placements on a printed circuit board and the arrangement of component types to feeders simultaneously for a pick-and-place machine with multiple stationary feeders, a fixed board table and a movable placement head. The objective of the problem is to minimize the total travelling distance, or the travelling time, of the placement head. The genetic algorithm developed in the paper hybridizes different search heuristics including the nearest neighbor heuristic, the 2-opt heuristic, and an iterated swap procedure, which is a new improvement heuristic. Compared with the results obtained by other researchers, the performance of the hybrid genetic algorithm is superior in terms of the distance travelled by the placement head.
Abstract:
Background: Despite initial concerns about the sensitivity of the proposed diagnostic criteria for DSM-5 Autism Spectrum Disorder (ASD; e.g. Gibbs et al., 2012; McPartland et al., 2012), evidence is growing that the DSM-5 criteria provide an inclusive description with both good sensitivity and specificity (e.g. Frazier et al., 2012; Kent, Carrington et al., 2013). The capacity of the criteria to provide high levels of sensitivity and specificity comparable with DSM-IV-TR, however, relies on careful measurement to ensure that appropriate items from diagnostic instruments map onto the new DSM-5 descriptions. Objectives: To use an existing DSM-5 diagnostic algorithm (Kent, Carrington et al., 2013) to identify a set of 'essential' behaviors sufficient to make a reliable and accurate diagnosis of DSM-5 Autism Spectrum Disorder (ASD) across age and ability level. Methods: Specific behaviors were identified and tested from the recently published DSM-5 algorithm for the Diagnostic Interview for Social and Communication Disorders (DISCO). Analyses were run on existing DISCO datasets, with a total participant sample size of 335. Three studies provided step-by-step development towards the identification of a minimum set of items. Study 1 identified the most highly discriminating items (p<.0001). Study 2 used a lower selection threshold than Study 1 (p<.05) to facilitate better representation of the full DSM-5 ASD profile. Study 3 included additional items previously reported as significantly more frequent in individuals with higher ability. The discriminant validity of all three item sets was tested using receiver operating characteristic curves. Finally, sensitivity across age and ability was investigated in a subset of individuals with ASD (n=190). Results: Study 1 identified an item set (14 items) with good discriminant validity, but which predominantly measured social-communication behaviors (11/14). The Study 2 item set (48 items) better represented the DSM-5 ASD profile and had good discriminant validity, but lacked sensitivity for individuals with higher ability. The final Study 3 adjusted item set (54 items) improved sensitivity for individuals with higher ability, and performance was comparable to the published DISCO DSM-5 algorithm. Conclusions: This work represents a first attempt to derive a reduced set of behaviors for DSM-5 directly from an existing standardized ASD developmental history interview. Further work involving existing ASD diagnostic tools with community-based and well-characterized research samples will be required to replicate these findings and exploit their potential to contribute to a more efficient and focused ASD diagnostic process.
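The ROC-based check of discriminant validity has the general form sketched below, here on simulated item-level data with scikit-learn; the endorsement probabilities, group sizes and simple sum score are placeholders, not the DISCO DSM-5 algorithm.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n_items = 54                                      # size of the adjusted item set
asd = rng.binomial(1, 0.6, size=(190, n_items))   # simulated item endorsements, ASD group
non_asd = rng.binomial(1, 0.25, size=(145, n_items))  # simulated comparison group

scores = np.concatenate([asd.sum(axis=1), non_asd.sum(axis=1)])   # simple sum score
labels = np.concatenate([np.ones(len(asd)), np.zeros(len(non_asd))])

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
# sensitivity/specificity at the threshold closest to the upper-left corner
best = np.argmin(np.hypot(fpr, 1 - tpr))
print(f"AUC={auc:.2f}, sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```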
Abstract:
The objective of this study was to identify a set of 'essential' behaviours sufficient for a diagnosis of DSM-5 Autism Spectrum Disorder (ASD). Highly discriminating, 'essential' behaviours were identified from the published DSM-5 algorithm developed for the Diagnostic Interview for Social and Communication Disorders (DISCO). Study 1 identified a reduced item set (48 items) with good predictive validity (as measured using receiver operating characteristic curves) that represented all symptom sub-domains described in the DSM-5 ASD criteria but lacked sensitivity for individuals with higher ability. An adjusted essential item set (54 items; Study 2) had good sensitivity when applied to individuals with higher ability, and performance was comparable to the published full DISCO DSM-5 algorithm. Investigation at the item level revealed that the most highly discriminating items predominantly measured social-communication behaviours. This work represents a first attempt to derive a reduced set of behaviours for DSM-5 directly from an existing standardised ASD developmental history interview and has implications for the use of the DSM-5 criteria in clinical and research practice. © 2014 The Authors.
Abstract:
This work contributes to the development of search engines that self-adapt their size in response to fluctuations in workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. In this paper, we focus on the problem of regrouping the metric-space search index when the number of virtual machines used to run the search engine is modified to reflect changes in workload. We propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. We tested its performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud, while calibrating the results to compensate for the performance fluctuations of the platform. Our experiments show that, compared with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining similar search performance.
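A minimal sketch of the incremental regrouping idea: when the number of virtual machines changes, keep as many index groups as possible where they are and move only the excess to the least-loaded machines. The data structures are hypothetical; the published algorithm operates on a metric-space index and is considerably more involved.

```python
from collections import defaultdict

def regroup(assignment, new_vm_count):
    """assignment: dict group_id -> vm_id with the current placement.
    Returns a new placement over VMs 0..new_vm_count-1 that moves as few
    groups as possible while keeping machine loads balanced."""
    target = -(-len(assignment) // new_vm_count)      # ceil: max groups per VM
    per_vm = defaultdict(list)
    for g, vm in assignment.items():
        per_vm[vm].append(g)

    new_assignment, overflow = {}, []
    for vm in range(new_vm_count):
        keep = per_vm.get(vm, [])[:target]            # groups that stay put
        for g in keep:
            new_assignment[g] = vm
        overflow.extend(per_vm.get(vm, [])[target:])  # excess groups must move
    for vm, gs in per_vm.items():                     # groups on removed VMs
        if vm >= new_vm_count:
            overflow.extend(gs)

    # place moved groups on the least-loaded machines
    loads = {vm: 0 for vm in range(new_vm_count)}
    for vm in new_assignment.values():
        loads[vm] += 1
    for g in overflow:
        vm = min(loads, key=loads.get)
        new_assignment[g] = vm
        loads[vm] += 1
    return new_assignment

if __name__ == "__main__":
    current = {g: g % 4 for g in range(12)}           # 12 index groups on 4 VMs
    print(regroup(current, 3))                        # shrink to 3 VMs
    print(regroup(current, 6))                        # grow to 6 VMs
```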
Abstract:
This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computer resources to or from the engine. Our solution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in the light of changes in the query workload, the ideal number of processors p to keep active in the search engine at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined how the groups of search data are distributed across the processors. ROP is the problem of redistributing this data onto the processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, when we compare our NGP solution with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining a similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size of the search engine. When combined, these algorithms give an adaptive algorithm that is able to adjust the search engine size under a variable workload.
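The CNP component can be pictured as a deterministic resizing rule of the kind sketched below: choose the number of active processors from the observed query rate and a per-processor capacity, with a hysteresis band so the engine does not oscillate and trigger constant regrouping. The capacity figure and thresholds are invented for illustration and are not the thesis's actual rule.

```python
def choose_processor_count(query_rate, current_p, capacity_per_proc=500.0,
                           target_util=0.7, hysteresis=0.15, p_min=1, p_max=64):
    """Return the number of processors to keep active given the observed
    query rate (queries/s). Resize only when utilisation drifts outside the
    hysteresis band around the target, to avoid constant regrouping."""
    utilisation = query_rate / (current_p * capacity_per_proc)
    if abs(utilisation - target_util) <= hysteresis:
        return current_p                                   # load close enough: no change
    desired = max(1, round(query_rate / (capacity_per_proc * target_util)))
    return min(max(desired, p_min), p_max)

if __name__ == "__main__":
    p = 4
    for rate in [1200, 1500, 2600, 900, 400]:              # hypothetical workload trace
        p = choose_processor_count(rate, p)
        print(rate, "->", p, "processors")
```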