38 results for Infeasible solution space search

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

100.00%

Publisher:

Abstract:

Previous papers have noted the difficulty in obtaining neural models which are stable under simulation when trained using prediction-error-based methods. Here the differences between series-parallel and parallel identification structures for training neural models are investigated. The effect of the error surface shape on training convergence and simulation performance is analysed using a standard algorithm operating in both training modes. A combined series-parallel/parallel training scheme is proposed, aiming to provide a more effective means of obtaining accurate neural simulation models. Simulation examples show the combined scheme is advantageous in circumstances where the solution space is known or suspected to be complex. (c) 2006 Elsevier B.V. All rights reserved.
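
As a rough illustration of the distinction drawn in this abstract, the sketch below contrasts the series-parallel (one-step-ahead) and parallel (free-run) residuals for a toy single-hidden-layer NARX-type model; the plant, network size and parameter values are illustrative assumptions, not taken from the paper. A combined scheme in the spirit of the abstract would minimise the first set of residuals before switching to the second.

```python
import numpy as np

rng = np.random.default_rng(0)

def net(params, x):
    """One-hidden-layer network: x -> scalar prediction."""
    W1, b1, w2, b2 = params
    return np.tanh(x @ W1 + b1) @ w2 + b2

def series_parallel_error(params, u, y):
    """One-step-ahead (series-parallel) residuals: past outputs are measured."""
    x = np.column_stack([y[:-1], u[:-1]])
    return y[1:] - net(params, x)

def parallel_error(params, u, y):
    """Free-run (parallel) residuals: past outputs come from the model itself."""
    y_hat = np.empty_like(y)
    y_hat[0] = y[0]
    for t in range(1, len(y)):
        y_hat[t] = net(params, np.array([y_hat[t - 1], u[t - 1]]))
    return y - y_hat

# Synthetic first-order nonlinear plant and a random (untrained) parameter set.
u = rng.uniform(-1, 1, 200)
y = np.zeros(201)
for t in range(200):
    y[t + 1] = 0.8 * np.tanh(y[t]) + 0.4 * u[t]
y = y[1:]

nh = 5
params = (rng.normal(size=(2, nh)) * 0.5, np.zeros(nh), rng.normal(size=nh) * 0.5, 0.0)

# A combined scheme in the spirit of the abstract would minimise the
# series-parallel error first (benign error surface), then switch to the
# parallel error to refine simulation accuracy.
print("series-parallel SSE:", np.sum(series_parallel_error(params, u, y) ** 2))
print("parallel SSE:      ", np.sum(parallel_error(params, u, y) ** 2))
```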

Relevance:

100.00%

Publisher:

Abstract:

Clean and renewable energy generation and supply have drawn much attention worldwide in recent years, and proton exchange membrane (PEM) fuel cells and solar cells are among the most popular technologies. Accurately modeling PEM fuel cells as well as solar cells is critical in their applications, and this involves the identification and optimization of model parameters. This is, however, challenging due to the highly nonlinear and complex nature of the models. In particular, for PEM fuel cells the model has to be optimized under different operating conditions, making the solution space extremely complex. In this paper, an improved and simplified teaching-learning-based optimization algorithm (STLBO) is proposed to identify and optimize parameters for these two types of cell models. This is achieved by introducing an elite strategy to improve the quality of the population, and a local search is employed to further enhance the performance of the global best solution. To improve the diversity of the local search, a chaotic map is also introduced. Compared with the basic TLBO, the structure of the proposed algorithm is much simplified and the searching ability is significantly enhanced. The performance of the proposed STLBO is first tested and verified on two low-dimensional decomposable problems and twelve large-scale benchmark functions, and then on the parameter identification of PEM fuel cell and solar cell models. Intensive experimental simulations show that the proposed STLBO exhibits excellent performance in terms of accuracy and speed, in comparison with results reported in the literature.
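
A minimal sketch of the ingredients named in the abstract (an elite strategy, a local search around the global best, and a chaotic map for diversity) is given below; the update rules, population size and the sphere test objective are assumptions for illustration, not the authors' exact STLBO formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                                      # stand-in benchmark objective
    return float(np.sum(x ** 2))

def stlbo_like(f, dim=10, pop=30, iters=200, lo=-5.0, hi=5.0, n_elite=3):
    """Simplified TLBO-style search with an elite strategy and a chaotic
    local search around the best solution (a sketch of the ingredients
    named in the abstract, not the authors' exact update rules)."""
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.array([f(x) for x in X])
    c = 0.7                                         # logistic-map state
    for _ in range(iters):
        best = X[np.argmin(fit)]
        mean = X.mean(axis=0)
        # Teacher phase: move learners towards the teacher (current best).
        TF = rng.integers(1, 3)                     # teaching factor in {1, 2}
        X_new = np.clip(X + rng.random((pop, dim)) * (best - TF * mean), lo, hi)
        fit_new = np.array([f(x) for x in X_new])
        improve = fit_new < fit
        X[improve], fit[improve] = X_new[improve], fit_new[improve]
        # Elite strategy: replace the worst learners by copies of the elites.
        order = np.argsort(fit)
        X[order[-n_elite:]] = X[order[:n_elite]]
        fit[order[-n_elite:]] = fit[order[:n_elite]]
        # Chaotic local search around the global best.
        c = 4.0 * c * (1.0 - c)                     # logistic map for diversity
        cand = np.clip(best + (2 * c - 1) * 0.1 * (hi - lo) * rng.random(dim), lo, hi)
        fc = f(cand)
        if fc < fit.min():
            worst = np.argmax(fit)
            X[worst], fit[worst] = cand, fc
    return X[np.argmin(fit)], fit.min()

x_best, f_best = stlbo_like(sphere)
print("best objective found:", f_best)
```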

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a novel approach based on the use of evolutionary agents for epipolar geometry estimation. In contrast to conventional nonlinear optimization methods, the proposed technique employs each agent to denote a minimal subset used to compute the fundamental matrix, and considers the data set of correspondences as a 1D cellular environment in which the agents inhabit and evolve. The agents execute evolutionary behaviours and evolve autonomously in a vast solution space to reach the optimal (or near-optimal) result. Three further techniques are then proposed to improve the searching ability and computational efficiency of the original agents. A subset template enables agents to collaborate more efficiently with each other and to inherit accurate information from the whole agent set. The competitive evolutionary agent (CEA) and finite multiple evolutionary agent (FMEA) variants apply a better evolutionary strategy or decision rule, focusing on different aspects of the evolutionary process. Experimental results with both synthetic data and real images show that the proposed agent-based approaches perform better than other typical methods in terms of accuracy and speed, and are more robust to noise and outliers.
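
The following sketch illustrates the agent idea in simplified form: each agent holds a minimal 8-correspondence subset, the fundamental matrix is obtained with an unnormalised 8-point fit, and agents mutate one member at a time, keeping changes that raise the inlier count. The mutation rule, tolerance and synthetic two-view data are assumptions for illustration, not the CEA/FMEA strategies of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def eight_point(x1, x2):
    """Unnormalised 8-point estimate of the fundamental matrix from >= 8
    correspondences given as homogeneous (N, 3) arrays."""
    A = np.column_stack([x2[:, 0:1] * x1, x2[:, 1:2] * x1, x1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt          # enforce rank 2

def inliers(F, x1, x2, tol=1e-2):
    """Count correspondences with a small algebraic epipolar residual."""
    return int(np.sum(np.abs(np.einsum('ni,ij,nj->n', x2, F, x1)) < tol))

def agent_search(x1, x2, n_agents=30, iters=100):
    """Each agent is a minimal 8-correspondence subset; agents mutate one
    index at a time and keep the change when the inlier count improves."""
    n = len(x1)
    agents = [rng.choice(n, 8, replace=False) for _ in range(n_agents)]
    scores = [inliers(eight_point(x1[a], x2[a]), x1, x2) for a in agents]
    for _ in range(iters):
        for k, a in enumerate(agents):
            cand = a.copy()
            cand[rng.integers(8)] = rng.integers(n)      # mutate one member
            if len(set(cand)) < 8:
                continue
            s = inliers(eight_point(x1[cand], x2[cand]), x1, x2)
            if s > scores[k]:
                agents[k], scores[k] = cand, s
    best = int(np.argmax(scores))
    return eight_point(x1[agents[best]], x2[agents[best]]), scores[best]

# Tiny synthetic demo: two canonical cameras, 60 true matches plus 20 outliers.
t = np.array([1.0, 0.2, 0.0])
X = rng.uniform(-1, 1, (60, 3)) + np.array([0, 0, 4.0])  # points in front of the cameras
x1 = X / X[:, 2:3]
x2 = X + t
x2 = x2 / x2[:, 2:3]
out1 = np.column_stack([rng.uniform(-1, 1, (20, 2)), np.ones(20)])
out2 = np.column_stack([rng.uniform(-1, 1, (20, 2)), np.ones(20)])
x1, x2 = np.vstack([x1, out1]), np.vstack([x2, out2])
F_est, n_in = agent_search(x1, x2)
print("inliers found by the best agent:", n_in, "of", len(x1))
```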

Relevance:

100.00%

Publisher:

Abstract:

This paper investigates the learning of a wide class of single-hidden-layer feedforward neural networks (SLFNs) with two sets of adjustable parameters, i.e., the nonlinear parameters in the hidden nodes and the linear output weights. The main objective is both to speed up the convergence of second-order learning algorithms such as Levenberg-Marquardt (LM) and to improve the network performance. This is achieved here by reducing the dimension of the solution space and by introducing a new Jacobian matrix. Unlike conventional supervised learning methods, which optimize these two sets of parameters simultaneously, the linear output weights are first converted into dependent parameters, thereby removing the need for their explicit computation. Consequently, the neural network (NN) learning is performed over a solution space of reduced dimension. A new Jacobian matrix is then proposed for use with the popular second-order learning methods in order to achieve a more accurate approximation of the cost function. The efficacy of the proposed method is shown through an analysis of the computational complexity and by presenting simulation results from four different examples.
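
A minimal sketch of the reduced-dimension idea follows: for any setting of the nonlinear hidden-node parameters, the optimal linear output weights are the least-squares solution, so only the nonlinear parameters are searched explicitly. The toy data, network size and the use of scipy's Levenberg-Marquardt routine with a finite-difference Jacobian (rather than the Jacobian proposed in the paper) are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

# Toy regression data and a single-hidden-layer network with nh tanh nodes.
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)
nh = 6

def hidden(theta, X):
    """Hidden-layer output matrix H for nonlinear parameters theta."""
    W = theta[:nh].reshape(1, nh)         # input weights
    b = theta[nh:]                        # hidden biases
    return np.tanh(X @ W + b)

def reduced_residuals(theta):
    """Residuals with the linear output weights treated as dependent
    parameters: for any theta, the optimal weights are the least-squares
    solution w = pinv(H) y, so only theta is searched explicitly."""
    H = hidden(theta, X)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return y - H @ w

theta0 = rng.normal(size=2 * nh)
# Levenberg-Marquardt-style fit over the reduced parameter space
# (scipy's finite-difference Jacobian stands in for an analytic one).
sol = least_squares(reduced_residuals, theta0, method="lm")
print("final SSE:", np.sum(sol.fun ** 2))
```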

Relevance:

100.00%

Publisher:

Abstract:

Many scientific applications are programmed using hybrid programming models that use both message passing and shared memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared memory or message passing, in isolation. The potential solution space, and thus the challenge, increases substantially when optimizing hybrid models, since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, improved energy efficiency in hybrid parallel applications on large-scale systems is increasingly needed. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74 percent on average and up to 13.8 percent) with some performance gain (up to 7.5 percent) or negligible performance loss.
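
As a hedged illustration of the overall idea, the sketch below fits simple linear predictors of time and power to hypothetical profiling samples and then picks the (threads, frequency) configuration with the lowest predicted energy, subject to a performance-loss bound; the sample values, predictor form and 5% slowdown constraint are assumptions, not the paper's statistical models.

```python
import numpy as np

# Hypothetical profiling samples: (threads, frequency GHz) -> (time s, power W).
# These numbers are illustrative only; the paper derives its predictors from
# statistical analysis of real NPB-MZ / ASC Sequoia runs.
samples = np.array([
    #  thr   GHz   time    power
    [   4,  2.0,  120.0,   95.0],
    [   4,  2.6,  100.0,  120.0],
    [   8,  2.0,   70.0,  150.0],
    [   8,  2.6,   60.0,  190.0],
    [  16,  2.0,   55.0,  230.0],
    [  16,  2.6,   50.0,  290.0],
])

def fit_linear(features, target):
    """Least-squares linear predictor with an intercept term."""
    A = np.column_stack([features, np.ones(len(features))])
    coeff, *_ = np.linalg.lstsq(A, target, rcond=None)
    return lambda f: np.column_stack([f, np.ones(len(f))]) @ coeff

predict_time = fit_linear(samples[:, :2], samples[:, 2])
predict_power = fit_linear(samples[:, :2], samples[:, 3])

# Candidate (DCT, DVFS) configurations to consider for the next run.
candidates = np.array([[t, f] for t in (4, 8, 16) for f in (1.8, 2.0, 2.3, 2.6)])
t_pred = predict_time(candidates)
p_pred = predict_power(candidates)
energy = t_pred * p_pred

# Keep configurations whose predicted slowdown vs. the fastest is under 5%.
ok = t_pred <= 1.05 * t_pred.min()
best = candidates[ok][np.argmin(energy[ok])]
print("chosen (threads, GHz):", best, "predicted energy (J):", energy[ok].min())
```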

Relevance:

100.00%

Publisher:

Abstract:

This paper considers the optimal design of fabricated steel beams for long-span portal frames. The design optimisation takes into account ultimate as well as serviceability limit states, adopting deflection limits recommended by the Steel Construction Institute (SCI). Results for three benchmark frames demonstrate the efficiency of the optimisation methodology. A genetic algorithm (GA) was used to optimise the dimensions of the plates used for the columns, rafters and haunches. Discrete decision variables were adopted for the thickness of the steel plates and continuous variables for the breadth and depth of the plates. Strategies were developed to enhance the performance of the GA including solution space reduction and a hybrid initial population half of which is derived using Latin hypercube sampling. The results show that the proposed GA-based optimisation model generates optimal and near-optimal solutions consistently. A parametric study is then conducted on frames of different spans. A significant variation in weight between fabricated and conventional hot-rolled steel portal frames is shown; for a 50 m span frame, a 14–19% saving in weight was achieved. Furthermore, since Universal Beam sections in the UK come from a discrete section library, the results could also provide overall dimensions of other beams that could be more efficient for portal frames. Eurocode 3 was used for illustrative purposes; any alternative code of practice may be used.
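
The hybrid initial population strategy mentioned above can be sketched as follows: half of the population is drawn from a Latin hypercube sample over the (discrete thickness, continuous breadth and depth) design variables and half is purely random. The plate dimensions, bounds and population size below are illustrative assumptions, not the paper's benchmark values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Decision variables for one fabricated plate (illustrative bounds only):
# plate thickness from a discrete list (mm), breadth and depth continuous (mm).
THICKNESS = np.array([6, 8, 10, 12, 15, 20, 25])           # discrete choices
B_RANGE, D_RANGE = (150.0, 450.0), (400.0, 1200.0)         # continuous ranges

def latin_hypercube(n, dims):
    """Basic Latin hypercube sample of n points in [0, 1)^dims."""
    return (np.argsort(rng.random((n, dims)), axis=0) + rng.random((n, dims))) / n

def decode(u):
    """Map unit-cube samples to (thickness, breadth, depth) designs."""
    t_idx = np.minimum((u[:, 0] * len(THICKNESS)).astype(int), len(THICKNESS) - 1)
    b = B_RANGE[0] + u[:, 1] * (B_RANGE[1] - B_RANGE[0])
    d = D_RANGE[0] + u[:, 2] * (D_RANGE[1] - D_RANGE[0])
    return np.column_stack([THICKNESS[t_idx], b, d])

def hybrid_initial_population(pop_size=40):
    """Half the initial GA population from Latin hypercube sampling (good
    coverage of the reduced solution space), half purely random."""
    half = pop_size // 2
    lhs_part = decode(latin_hypercube(half, 3))
    rnd_part = decode(rng.random((pop_size - half, 3)))
    return np.vstack([lhs_part, rnd_part])

print(hybrid_initial_population()[:5])
```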

Relevance:

40.00%

Publisher:

Abstract:

Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services for different users based on their quality requirements is an increasingly pressing issue. For this, routers need the capability to distinguish and isolate traffic belonging to different flows. This ability to determine the flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification due to their non-deterministic performance. Although CAMs are favoured by technology vendors due to their deterministic, high lookup rates, they suffer from high power dissipation and high silicon cost. This paper provides a new algorithmic-architectural solution for packet classification that mixes CAMs with algorithms based on multi-level cutting of the classification space into smaller spaces. The proposed solution exploits the geometrical distribution of rules in the classification space. It provides the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
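
A minimal sketch of the cutting idea is shown below: a two-field classification space is cut recursively into smaller regions, and each leaf keeps a small rule list of the kind that would be placed in a CAM block for a deterministic final lookup. The two-field rules, cut heuristic and leaf size are illustrative assumptions and do not reproduce the paper's architecture.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Rule:
    src: Tuple[int, int]          # inclusive range on field 1
    dst: Tuple[int, int]          # inclusive range on field 2
    action: str

def matches(rule: Rule, pkt: Tuple[int, int]) -> bool:
    return rule.src[0] <= pkt[0] <= rule.src[1] and rule.dst[0] <= pkt[1] <= rule.dst[1]

@dataclass
class Node:
    box: Tuple[Tuple[int, int], Tuple[int, int]]   # region of classification space
    rules: List[Rule]
    children: Optional[List["Node"]] = None
    axis: int = 0

def build(box, rules, leaf_size=2, depth=0):
    """Recursively cut the classification space; a leaf keeps a small rule
    list that would be placed in a CAM block for a deterministic lookup."""
    node = Node(box, rules)
    if len(rules) <= leaf_size or depth >= 8:
        return node
    axis = depth % 2                               # alternate the cut dimension
    lo, hi = box[axis]
    mid = (lo + hi) // 2
    node.axis, node.children = axis, []
    for sub in ((lo, mid), (mid + 1, hi)):
        sub_box = (sub, box[1]) if axis == 0 else (box[0], sub)
        sub_rules = [r for r in rules
                     if (r.src if axis == 0 else r.dst)[0] <= sub[1]
                     and (r.src if axis == 0 else r.dst)[1] >= sub[0]]
        node.children.append(build(sub_box, sub_rules, leaf_size, depth + 1))
    return node

def classify(node: Node, pkt: Tuple[int, int]) -> str:
    while node.children is not None:
        lo, hi = node.box[node.axis]
        mid = (lo + hi) // 2
        node = node.children[0 if pkt[node.axis] <= mid else 1]
    for r in node.rules:                           # small CAM-style search
        if matches(r, pkt):
            return r.action
    return "default"

rules = [Rule((0, 63), (0, 255), "deny"),
         Rule((64, 127), (0, 127), "permit"),
         Rule((0, 255), (128, 255), "log")]
tree = build(((0, 255), (0, 255)), rules)
print(classify(tree, (70, 100)), classify(tree, (10, 200)))
```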

Relevance:

40.00%

Publisher:

Abstract:

A technique for automatic exploration of the genetic search region through fuzzy coding (Sharma and Irwin, 2003) has been proposed. Fuzzy coding (FC) provides the value of a variable on the basis of the optimum number of selected fuzzy sets and their effectiveness in terms of degree-of-membership. It is an indirect encoding method and has been shown to perform better than other conventional binary, Gray and floating-point encoding methods. However, the static range of the membership functions is a major problem in fuzzy coding, resulting in longer times to arrive at an optimum solution in large or complicated search spaces. This paper proposes a new algorithm, called fuzzy coding with a dynamic range (FCDR), which dynamically allocates the range of the variables to evolve an effective search region, thereby achieving faster convergence. Results are presented for two benchmark optimisation problems, and also for a case study involving neural identification of a highly non-linear pH neutralisation process from experimental data. It is shown that dynamic exploration of the genetic search region is effective for parameter optimisation in problems where the search space is complicated.
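
A schematic sketch of the fuzzy-coding idea with a dynamic-range step is given below: a decoded variable value is the membership-weighted average of the active fuzzy-set centres, and the centres are then re-spanned around the population's statistics so that the search region follows the population. The specific update rule and parameters are assumptions; the abstract does not spell out the FCDR formulae.

```python
import numpy as np

rng = np.random.default_rng(5)

def decode(memberships, centres):
    """Fuzzy-coded value: membership-weighted average of the active set centres."""
    m = np.clip(memberships, 0.0, 1.0)
    return float(m @ centres / (m.sum() + 1e-12))

def update_range(decoded_values, n_sets=5, spread=2.0):
    """Dynamic-range step: re-centre the fuzzy sets on the population mean and
    scale their span by the population spread (a schematic stand-in for the
    FCDR update, which the abstract does not spell out)."""
    mu, sd = np.mean(decoded_values), np.std(decoded_values) + 1e-12
    return np.linspace(mu - spread * sd, mu + spread * sd, n_sets)

# One variable, five fuzzy sets initially spanning a wide static range.
centres = np.linspace(-10.0, 10.0, 5)
population = rng.random((20, 5))                      # membership chromosomes
for generation in range(3):
    values = np.array([decode(ind, centres) for ind in population])
    centres = update_range(values)                    # search region follows the population
    print(f"gen {generation}: centres span [{centres[0]:.2f}, {centres[-1]:.2f}]")
```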

Relevance:

30.00%

Publisher:

Abstract:

A preliminary search for stars that may have formed coevally with the apparently young halo B-type star PHL 346 has been performed with the 2dF multifibre spectrograph on the Anglo-Australian Telescope (AAT). Candidates were selected for spectroscopy from APM scans of B and R Schmidt plates centred on PHL 346. A total of 476 stars of spectral type A or F were found; radial velocity estimates and more accurate spectral type assignments narrowed the number of possible coeval candidates to 6 A-type and 14 F-type stars. A statistical analysis of these results using a comparison with a control field suggests that the number of A-type or F-type candidate stars around PHL 346 is not unexpected, and that they need not be associated with PHL 346. A number of ways to improve the project are suggested.

Relevance:

30.00%

Publisher:

Abstract:

19 B-type stars, selected from the Palomar-Green Survey, have been observed at infrared wavelengths to search for possible infrared excesses, as part of an ongoing programme to investigate the nature of early-type stars at high Galactic latitudes. The resulting infrared fluxes, along with Stromgren photometry, are compared with theoretical flux profiles to determine whether any of the targets show evidence of circumstellar material, which may be indicative of post-main-sequence evolution. Eighteen of the targets have flux distributions in good agreement with theoretical predictions. However, one star, PG 2120+062, shows a small near-infrared excess, which may be due either to a cool companion of spectral type F5-F7, or to circumstellar material, indicating that it may be an evolved object such as a post-asymptotic giant branch star, in the transition region between the asymptotic giant branch and planetary nebula phase, with the infrared excess due to recent mass loss during giant branch evolution.

Relevance:

30.00%

Publisher:

Abstract:

Results of a search for periodic changes in the intensity of the 530.3 nm line emitted by selected structures of the solar corona, in the frequency range 1-10 Hz, are presented. A set of 12 728 images of a section of the solar corona extending from near the north pole to the south-west was taken simultaneously in the 530.3 nm ("green") line and in white light with the Solar Eclipse Coronal Imaging System (SECIS) during the 143-second-long totality of the 1999 August 11 solar eclipse observed in Shabla, Bulgaria. The time resolution of the collected data is better than 0.05 s and the pixel size is approximately 4 arcsec. Using classical Fourier spectral analysis tools, we investigated temporal changes of the local 530.3 nm coronal line brightness in the frequency range 1-10 Hz at thousands of points within the field of view. Various photometric and instrumental effects have been considered extensively. We did not find any indisputable, statistically significant evidence of periodicities at any of the investigated points (at significance level alpha = 0.05).
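
The kind of test described here can be sketched with a classical periodogram and a white-noise false-alarm threshold, as below; the synthetic light curve, normalisation and false-alarm formula are standard textbook choices and assumptions, not necessarily the exact SECIS analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in for one coronal pixel's 530.3 nm intensity time series:
# 143 s of data at 0.05 s cadence, here pure noise (no injected oscillation).
dt, duration = 0.05, 143.0
t = np.arange(0.0, duration, dt)
flux = 1000.0 + rng.normal(0.0, 5.0, t.size)

# Classical periodogram, normalised so white-noise powers are ~Exp(1).
flux_detrended = flux - flux.mean()
power = np.abs(np.fft.rfft(flux_detrended)) ** 2 / (t.size * flux_detrended.var())
freq = np.fft.rfftfreq(t.size, dt)

# Restrict to the 1-10 Hz band searched by SECIS.
band = (freq >= 1.0) & (freq <= 10.0)
z_max = power[band].max()
n_indep = band.sum()                        # approx. number of independent frequencies

# False-alarm probability of the strongest peak under a white-noise null.
fap = 1.0 - (1.0 - np.exp(-z_max)) ** n_indep
print(f"strongest 1-10 Hz peak: z = {z_max:.2f}, FAP = {fap:.3f}")
print("significant at alpha = 0.05:", fap < 0.05)
```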

Relevance:

30.00%

Publisher:

Abstract:

A standard problem within universities is that of teaching space allocation, which can be thought of as the assignment of rooms and times to various teaching activities. The focus is usually on courses that are expected to fit into one room. However, it can also happen that a course will need to be broken up, or ‘split’, into multiple sections. A lecture might be too large to fit into any one room. Another common example is that of seminars or tutorials. Although hundreds of students may be enrolled on a course, it is often subdivided into particular types and sizes of events dependent on the pedagogic requirements of that particular course. Typically, decisions as to how to split courses need to be made within the context of limited space requirements. Institutions do not have an unlimited number of teaching rooms, and need to use effectively those that they do have. The efficiency of space usage is usually measured by the overall ‘utilisation’, which is basically the fraction of the available seat-hours that are actually used. A multi-objective optimisation problem naturally arises, with a trade-off between satisfying splitting preferences, increasing utilisation, and satisfying other constraints such as those based on event location and timetabling conflicts. In this paper, we explore such trade-offs. The explorations themselves are based on a local search method that attempts to optimise the space utilisation by means of a ‘dynamic splitting’ strategy. The local moves are designed to improve utilisation and satisfy the other constraints, but are also allowed to split, and un-split, courses so as to simultaneously meet the splitting objectives.
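
The utilisation measure and a simplified dynamic-splitting move can be sketched as follows; the one-slot greedy allocation, room sizes and splitting rule are illustrative assumptions rather than the paper's local search.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Room:
    seats: int
    hours: int          # available teaching hours per week

@dataclass
class Event:
    students: int
    hours: int

def utilisation(allocated: List[tuple], rooms: List[Room]) -> float:
    """Fraction of available seat-hours actually used: used / supplied."""
    used = sum(ev.students * ev.hours for ev, _ in allocated)
    supplied = sum(r.seats * r.hours for r in rooms)
    return used / supplied

def split_if_needed(event: Event, rooms: List[Room]) -> List[Event]:
    """Dynamic-splitting move (simplified): if no room can seat the event,
    break it into equal sections small enough for the largest room."""
    largest = max(r.seats for r in rooms)
    if event.students <= largest:
        return [event]
    n_sections = -(-event.students // largest)         # ceiling division
    size = -(-event.students // n_sections)
    return [Event(size, event.hours) for _ in range(n_sections)]

rooms = [Room(seats=120, hours=40), Room(seats=40, hours=40)]
lecture = Event(students=300, hours=2)
sections = split_if_needed(lecture, rooms)
allocated = [(s, rooms[0]) for s in sections]          # naive: all into the big room
print(len(sections), "sections, utilisation =", round(utilisation(allocated, rooms), 3))
```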

Relevance:

30.00%

Publisher:

Abstract:

A new search-space-updating technique for genetic algorithms is proposed for continuous optimisation problems. Rather than gradually reducing the search space during the evolution process with a fixed reduction rate set ‘a priori’, the upper and lower boundaries for each variable in the objective function are dynamically adjusted based on its distribution statistics. To test its effectiveness, the technique is applied to a number of benchmark optimisation problems in comparison with three other techniques, namely the genetic algorithms with parameter space size adjustment (GAPSSA) technique [A.B. Djurišic, Elite genetic algorithms with adaptive mutations for solving continuous optimization problems – application to modeling of the optical constants of solids, Optics Communications 151 (1998) 147–159], the successive zooming genetic algorithm (SZGA) [Y. Kwon, S. Kwon, S. Jin, J. Kim, Convergence enhanced genetic algorithm with successive zooming method for solving continuous optimization problems, Computers and Structures 81 (2003) 1715–1725] and a simple GA. The tests show that for well-posed problems, existing search space updating techniques perform well in terms of convergence speed and solution precision; however, for some ill-posed problems these techniques are statistically inferior to a simple GA. All the tests show that the proposed new search space update technique is statistically superior to its counterparts.
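
A minimal sketch of a distribution-statistics-based bound update inside a simple GA loop is given below; the mean plus/minus k standard deviations rule, the toy GA operators and the sphere objective are assumptions for illustration, since the abstract does not give the exact statistic used.

```python
import numpy as np

rng = np.random.default_rng(7)

def update_bounds(population, lo, hi, k=3.0):
    """Adjust each variable's bounds from the population's distribution
    statistics (mean +/- k standard deviations, clipped to the original box);
    the exact statistic used in the paper may differ - this is a sketch."""
    mu = population.mean(axis=0)
    sd = population.std(axis=0)
    return np.maximum(lo, mu - k * sd), np.minimum(hi, mu + k * sd)

def simple_ga_step(population, fitness, lo, hi, mut=0.1):
    """Tournament selection plus Gaussian mutation, clipped to current bounds."""
    n, d = population.shape
    idx = rng.integers(n, size=(n, 2))
    winners = np.where((fitness[idx[:, 0]] < fitness[idx[:, 1]])[:, None],
                       population[idx[:, 0]], population[idx[:, 1]])
    children = winners + mut * (hi - lo) * rng.normal(size=(n, d))
    return np.clip(children, lo, hi)

def sphere(x):
    return np.sum(x ** 2, axis=1)

orig_lo, orig_hi = np.full(5, -10.0), np.full(5, 10.0)
lo, hi = orig_lo.copy(), orig_hi.copy()
pop = rng.uniform(lo, hi, (30, 5))
for gen in range(50):
    pop = simple_ga_step(pop, sphere(pop), lo, hi)
    lo, hi = update_bounds(pop, orig_lo, orig_hi)     # search space follows the population
print("best objective:", sphere(pop).min())
print("final per-variable range width:", np.round(hi - lo, 2))
```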

Relevance:

30.00%

Publisher:

Abstract:

Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services for different users based on their quality requirements is an increasingly pressing issue. For this, routers need the capability to distinguish and isolate traffic belonging to different flows. This ability to determine the flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification due to their nondeterministic performance. Although content addressable memories (CAMs) are favoured by technology vendors due to their deterministic, high lookup rates, they suffer from high power consumption and high silicon cost. This paper provides a new algorithmic-architectural solution for packet classification that mixes CAMs with algorithms based on multilevel cutting of the classification space into smaller spaces. The proposed solution exploits the geometrical distribution of rules in the classification space. It provides the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.

Relevance:

30.00%

Publisher:

Abstract:

Universities planning the provision of space for their teaching requirements need to do so in a fashion that reduces capital and maintenance costs whilst still providing a high-quality level of service. Space plans should aim to provide sufficient capacity without incurring excessive costs due to over-capacity. A simple measure used to estimate over-provision is utilisation. Essentially, the utilisation is the fraction of seats that are used in practice, or the ratio of demand to supply. However, studies usually find that utilisation is low, often only 20–40%, and this is suggestive of significant over-capacity.

Our previous work has provided methods to improve such space planning. They identify a critical level of utilisation as the highest level that can be achieved whilst still reliably satisfying the demand for places to allocate teaching events. In this paper, we extend this body of work to incorporate the notions of event-types and space-types. Teaching events have multiple ‘event-types’, such as lecture, tutorial, workshop, etc., and there are generally corresponding space-types. Matching the type of an event to a room of a corresponding space-type is generally desirable. However, realistically, allocation happens in a mixed space-type environment where teaching events of a given type are allocated to rooms of another space-type; e.g., tutorials will borrow lecture theatres or workshop rooms.

We propose a model and methodology to quantify the effects of space-type mixing, and establish methods to search for better space-type profiles, where the term “space-type profile” refers to the relative numbers of each type of space. We give evidence that these methods have the potential to improve utilisation levels. Hence, the contribution of this paper is twofold. Firstly, we present informative studies of the effects of space-type mixing on utilisation and critical utilisations. Secondly, we present straightforward though novel methods to determine better space-type profiles, and give an example in which the resulting profiles are indeed significantly improved.
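
A rough Monte-Carlo sketch of the critical-utilisation idea follows: for a given room profile, random demand is generated at increasing utilisation levels and the highest level that can still be allocated reliably is reported. The demand model, greedy allocation, reliability threshold and the two room profiles are assumptions, and the sketch treats a profile simply as a mix of room sizes rather than modelling event-type/space-type matching explicitly.

```python
import numpy as np

rng = np.random.default_rng(8)

def fits(event_sizes, room_seats):
    """Greedy check: assign the largest remaining event to the smallest room
    that still seats it (one event per room, a single time slot)."""
    rooms = sorted(room_seats)
    for size in sorted(event_sizes, reverse=True):
        for i, seats in enumerate(rooms):
            if seats >= size:
                del rooms[i]
                break
        else:
            return False
    return True

def reliability(room_seats, target_util, trials=500, mean_size=40):
    """Fraction of random demand profiles at a given utilisation that can be
    allocated; demand sizes are drawn from an assumed Poisson-like model."""
    supply = sum(room_seats)
    ok = 0
    for _ in range(trials):
        sizes = []
        while sum(sizes) < target_util * supply:
            sizes.append(int(rng.poisson(mean_size)) + 1)
        ok += fits(sizes, room_seats)
    return ok / trials

def critical_utilisation(room_seats, threshold=0.95):
    """Highest utilisation still allocated reliably (coarse 5% grid search)."""
    levels = np.arange(0.20, 1.0, 0.05)
    reliable = [u for u in levels if reliability(room_seats, u) >= threshold]
    return max(reliable) if reliable else 0.0

# Two illustrative room profiles with equal total capacity (720 seats).
profile_a = [240, 180, 90, 60, 60, 30, 30, 30]        # fewer, larger rooms
profile_b = [120] * 3 + [60] * 4 + [30] * 4           # more, mid-sized rooms
print("critical utilisation A:", critical_utilisation(profile_a))
print("critical utilisation B:", critical_utilisation(profile_b))
```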