925 results for Search-based algorithms


Relevance: 90.00%

Abstract:

This proposal shows that ACO systems can be applied to requirements selection problems in incremental software development, with the aim of obtaining better results than those produced by expert judgment alone. The ACO systems are to be evaluated through a comparative analysis with greedy and simulated annealing algorithms, performing experiments on several problem instances.

Relevance: 90.00%

Abstract:

The selection of a set of requirements from among all the requirements previously defined by customers is an important process, repeated at the beginning of each development step when an incremental or agile software development approach is adopted. The set of selected requirements will be developed during the current iteration. This selection problem can be reformulated as a search problem, allowing its treatment with metaheuristic optimization techniques. This paper studies how to apply Ant Colony Optimization algorithms to select requirements. First, we formally describe the problem, extending an earlier version of its formulation, and introduce a method based on Ant Colony System to find a variety of efficient solutions. The performance achieved by the Ant Colony System is compared with that of the Greedy Randomized Adaptive Search Procedure and the Non-dominated Sorting Genetic Algorithm, by means of computational experiments carried out on two instances of the problem constructed from data provided by experts.
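
A minimal sketch of how such an Ant Colony System could be applied to a simplified requirements-selection instance (maximise customer satisfaction under a cost bound) is shown below; the instance data and parameter values are illustrative assumptions, not the formulation or settings used in the paper.

```python
import random

# Illustrative instance: requirement costs, satisfaction values, and a cost bound.
# These numbers are made up for the sketch; they do not come from the paper.
costs = [10, 6, 8, 4, 12, 3, 7, 9]
value = [15, 8, 11, 5, 14, 4, 9, 10]
budget = 30

n = len(costs)
pheromone = [1.0] * n
ALPHA, BETA, RHO, Q0 = 1.0, 2.0, 0.1, 0.9   # ACS-style parameters (assumed values)

def construct_solution():
    """One ant builds a feasible subset of requirements."""
    selected, spent = set(), 0
    while True:
        feasible = [r for r in range(n) if r not in selected and spent + costs[r] <= budget]
        if not feasible:
            return selected
        # Attractiveness = pheromone^alpha * (value/cost)^beta
        scores = {r: (pheromone[r] ** ALPHA) * ((value[r] / costs[r]) ** BETA)
                  for r in feasible}
        if random.random() < Q0:                  # exploitation
            r = max(scores, key=scores.get)
        else:                                     # biased exploration (roulette wheel)
            pick, acc = random.uniform(0, sum(scores.values())), 0.0
            for r, s in scores.items():
                acc += s
                if acc >= pick:
                    break
        selected.add(r)
        spent += costs[r]

best, best_val = set(), 0
for _ in range(200):
    sol = construct_solution()
    val = sum(value[r] for r in sol)
    if val > best_val:
        best, best_val = sol, val
    # Global pheromone update reinforcing the best-so-far solution
    for r in range(n):
        pheromone[r] *= (1 - RHO)
        if r in best:
            pheromone[r] += RHO * best_val / 100.0

print("selected requirements:", sorted(best), "satisfaction:", best_val)
```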

Relevance: 90.00%

Abstract:

This paper is concerned with the hybridization of two graph coloring heuristics (Saturation Degree and Largest Degree), and their application within a hyper-heuristic for exam timetabling problems. Hyper-heuristics can be seen as algorithms which intelligently select appropriate algorithms/heuristics for solving a problem. We developed a Tabu Search based hyper-heuristic to search for lists of graph heuristics for solving problems, and investigated the heuristic lists found by employing knowledge discovery techniques. Two hybrid approaches (involving Saturation Degree and Largest Degree), including one which employs Case Based Reasoning, are presented and discussed. Both the Tabu Search based hyper-heuristic and the hybrid approaches are tested on random and real-world exam timetabling problems. Experimental results are comparable with the best state-of-the-art approaches (as measured against established benchmark problems). The results also demonstrate an increased level of generality in our approach.
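
A minimal sketch of the two low-level graph colouring heuristics combined here, assuming a toy conflict graph: Largest Degree gives a static ordering, while Saturation Degree reorders exams dynamically by the number of distinct timeslots already used by their conflicting exams. The Tabu Search hyper-heuristic layer itself is not reproduced.

```python
# Toy conflict graph: exam -> exams sharing at least one student (assumed data).
conflicts = {
    "E1": {"E2", "E3"}, "E2": {"E1", "E3", "E4"},
    "E3": {"E1", "E2"}, "E4": {"E2"},
}

def largest_degree_order(graph):
    """Static ordering: most-conflicting exams are scheduled first."""
    return sorted(graph, key=lambda e: len(graph[e]), reverse=True)

def saturation_degree_schedule(graph, n_slots):
    """Dynamic ordering: always place the exam whose neighbours already
    occupy the most distinct timeslots (ties broken by degree)."""
    slot_of, unplaced = {}, set(graph)
    while unplaced:
        def saturation(e):
            return len({slot_of[nb] for nb in graph[e] if nb in slot_of})
        exam = max(unplaced, key=lambda e: (saturation(e), len(graph[e])))
        used = {slot_of[nb] for nb in graph[exam] if nb in slot_of}
        free = [s for s in range(n_slots) if s not in used]
        slot_of[exam] = free[0] if free else n_slots   # overflow slot if clashing
        unplaced.remove(exam)
    return slot_of

print(largest_degree_order(conflicts))
print(saturation_degree_schedule(conflicts, n_slots=3))
```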

Relevance: 80.00%

Abstract:

Estimating and predicting the degradation processes of engineering assets is crucial for reducing costs and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided while an asset is still in an acceptable health state. A practical difficulty in condition-based maintenance (CBM) is that, in most situations, degradation indicators extracted from CM data can only partially reveal asset health states. Underestimating this uncertainty in the relationship between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using indicators that only partially reveal health states. However, existing state space models of asset degradation processes largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This research proposes a Gamma-based state space model, without the discrete time, discrete state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research also develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model under various maintenance strategies. Optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies are performed in MATLAB; case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry are also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. They also show that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the accelerated life test of the gearbox better than linear and Gaussian state space models. Furthermore, both the simulation and case studies show that the prediction algorithm based on the Gamma-based state space model can identify the mean value and confidence interval of asset remaining useful lives accurately. In addition, the simulation study shows that the proposed maintenance strategy optimisation method based on the POSMDP is more flexible than one that assumes a predetermined strategy structure and uses renewal theory. Moreover, the simulation study also shows that the proposed maintenance optimisation method can obtain more cost-effective strategies than a recently published maintenance strategy optimisation method, by optimising the next maintenance activity and the waiting time until the next maintenance activity simultaneously.
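
A minimal sketch of the underlying idea, assuming illustrative parameter values: a monotone gamma-process degradation state is observed through a noisy indicator, and remaining useful life is estimated by Monte Carlo simulation of future increments. The paper's EM parameter estimation and POSMDP optimisation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed model parameters (illustrative, not the paper's estimates)
SHAPE_PER_UNIT_TIME = 0.8     # gamma increment shape per unit time
SCALE = 0.5                   # gamma increment scale
OBS_NOISE = 0.3               # std of the indirect indicator's noise
FAIL_LEVEL = 10.0             # failure threshold on the hidden state

def simulate_path(T, dt=1.0):
    """Hidden monotone degradation x_t and its noisy indicator y_t."""
    x, xs, ys = 0.0, [], []
    for _ in range(int(T / dt)):
        x += rng.gamma(SHAPE_PER_UNIT_TIME * dt, SCALE)    # non-negative increment
        xs.append(x)
        ys.append(x + rng.normal(0.0, OBS_NOISE))          # partial observation
    return np.array(xs), np.array(ys)

def rul_monte_carlo(x_now, n_sims=5000, dt=1.0, horizon=200):
    """Estimate remaining useful life by simulating future gamma increments."""
    lives = []
    for _ in range(n_sims):
        x, t = x_now, 0.0
        while x < FAIL_LEVEL and t < horizon:
            x += rng.gamma(SHAPE_PER_UNIT_TIME * dt, SCALE)
            t += dt
        lives.append(t)
    lives = np.array(lives)
    return lives.mean(), np.percentile(lives, [5, 95])

xs, ys = simulate_path(T=15)
mean_rul, ci = rul_monte_carlo(x_now=xs[-1])
print(f"current state {xs[-1]:.2f}, mean RUL {mean_rul:.1f}, 90% interval {ci}")
```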

Relevance: 80.00%

Abstract:

Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, or the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable a more precise assessment of asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. The performance of these algorithms is evaluated using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. In this application, the new state space model fits the data better than a state space model with linear and Gaussian assumptions.
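
A minimal bootstrap particle filter sketch of this estimation idea, assuming gamma-distributed increments of the hidden direct indicator (e.g. a crack depth) and Gaussian observation noise on the indirect indicator; the parameter values are illustrative and the paper's actual algorithms are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed, illustrative parameters (not the paper's)
INC_SHAPE, INC_SCALE = 0.6, 0.2      # gamma increments of the hidden crack depth
OBS_STD = 0.4                        # noise linking indirect to direct indicator
N_PARTICLES = 2000

def particle_filter(observations):
    """Bootstrap particle filter: propagate with gamma increments,
    weight by the Gaussian likelihood of each indirect observation."""
    particles = np.zeros(N_PARTICLES)
    estimates = []
    for y in observations:
        particles += rng.gamma(INC_SHAPE, INC_SCALE, size=N_PARTICLES)   # predict
        w = np.exp(-0.5 * ((y - particles) / OBS_STD) ** 2) + 1e-12      # update
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=w)             # resample
        particles = particles[idx]
    return estimates

# Synthetic ground truth and noisy indirect observations
true_depth = np.cumsum(rng.gamma(INC_SHAPE, INC_SCALE, size=30))
obs = true_depth + rng.normal(0.0, OBS_STD, size=30)
est = particle_filter(obs)
print("last true depth %.2f, filtered estimate %.2f" % (true_depth[-1], est[-1]))
```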

Relevance: 80.00%

Abstract:

We address the problem of constructing randomized online algorithms for the Metrical Task Systems (MTS) problem on a metric δ against an oblivious adversary. Restricting our attention to the class of “work-based” algorithms, we provide a framework for designing algorithms that uses the technique of regularization. For the case when δ is a uniform metric, we exhibit two algorithms that arise from this framework and prove a bound on the competitive ratio of each. We show that the second of these algorithms is ln n + O(log log n) competitive, which matches the current state of the art for the uniform MTS problem.
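
To give the flavour of a “work-based” randomized algorithm on a uniform metric, the sketch below keeps a cumulative cost (work) value per state and re-samples the state from an exponentially weighted distribution over those values, in the spirit of entropic regularization; the learning rate and cost sequence are assumptions, and this is not the ln n + O(log log n) algorithm of the paper.

```python
import math
import random

random.seed(0)

n = 8                         # number of states in the uniform metric (assumed)
eta = 1.0                     # regularization / learning-rate parameter (assumed)
work = [0.0] * n              # cumulative service cost per state
state = random.randrange(n)
total_cost = 0.0

def sample_state(work):
    """Sample a state with probability proportional to exp(-eta * work)."""
    weights = [math.exp(-eta * w) for w in work]
    pick, acc = random.uniform(0, sum(weights)), 0.0
    for i, wt in enumerate(weights):
        acc += wt
        if acc >= pick:
            return i
    return n - 1

for t in range(100):
    task = [random.random() for _ in range(n)]   # cost vector of task t (made up)
    total_cost += task[state]                    # pay the service cost in our state
    work = [w + c for w, c in zip(work, task)]   # accumulate work per state
    new_state = sample_state(work)
    if new_state != state:
        total_cost += 1.0                        # unit movement cost (uniform metric)
        state = new_state

print("online cost:", round(total_cost, 2),
      "  best single state in hindsight:", round(min(work), 2))
```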

Relevance: 80.00%

Abstract:

The mining environment presents a challenging prospect for stereo vision. Our objective is to produce a stereo vision sensor suited to close-range scenes consisting mostly of rocks. This sensor should produce a dense depth map within real-time constraints. Speed and robustness are of foremost importance for this application. This paper compares a number of stereo matching algorithms in terms of robustness and suitability to fast implementation. These include traditional area-based algorithms, and algorithms based on non-parametric transforms, notably the rank and census transforms. Our experimental results show that the rank and census transforms are robust with respect to radiometric distortion and introduce less computational complexity than conventional area-based matching techniques.
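
A minimal sketch of census-transform matching, assuming a toy synthetic image pair: each pixel is encoded by comparing it with its neighbours, and the matching cost is the Hamming distance between the resulting bit strings. The rank transform and the real-time pipeline are not shown.

```python
import numpy as np

def census_transform(img, win=3):
    """Encode each pixel as a bit string of brightness comparisons
    with its neighbours inside a win x win window."""
    r = win // 2
    out = np.zeros(img.shape, dtype=np.uint32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << 1) | (shifted < img).astype(np.uint32)
    return out

# Toy image pair: the "right" image is the left one shifted by 4 pixels,
# so the correct disparity at interior pixels is 4.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(32, 32), dtype=np.uint8)
right = np.roll(left, -4, axis=1)

cl, cr = census_transform(left), census_transform(right)
y, x = 16, 20
# Matching cost = Hamming distance between census codes.
costs = [bin(int(cl[y, x]) ^ int(cr[y, x - d])).count("1") for d in range(8)]
print("costs per disparity:", costs, "-> best disparity:", int(np.argmin(costs)))
```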

Relevance: 80.00%

Abstract:

Vacuum circuit breaker (VCB) overvoltage failure, and the catastrophic failures it causes during shunt reactor switching, have been analyzed through computer simulations of multiple reignitions with statistical VCB models found in the literature. However, a systematic review (SR) related to multiple reignitions with a statistical VCB model does not yet exist. Therefore, this paper aims to analyze and explore multiple reignitions with a statistical VCB model. It examines the salient points, research gaps and limitations of the multiple reignition phenomenon to assist future investigations following the SR search. Based on the SR results, seven issues and two approaches to enhance the current statistical VCB model are identified. These results will be useful as input for improving computer modeling accuracy, as well as for the development of a reignition switch model with point-on-wave controlled switching for condition monitoring.

Relevance: 80.00%

Abstract:

Digital learning has come a long way from the days of simple 'if-then' queries. It is now enabled by countless innovations that support knowledge sharing, openness, flexibility, and independent inquiry. Set against an evolutionary context, this study investigated innovations that directly support human inquiry. Specifically, it identified five activities that together are defined as the 'why dimension' – asking, learning, understanding, knowing, and explaining why. The findings highlight deficiencies in mainstream search-based approaches to inquiry, which tend to privilege the retrieval of information as distinct from explanation. Instrumental to sense-making, the 'why dimension' provides a conceptual framework for the development of 'sense-making technologies'.

Relevance: 80.00%

Abstract:

We present a new algorithm for continuation of limit cycles of autonomous systems as a system parameter is varied. The algorithm works in phase space with an ordered set of points on the limit cycle, along with spline interpolation. Currently popular algorithms in bifurcation analysis packages compute time-domain approximations of limit cycles using either shooting or collocation. The present approach seems useful for continuation near saddle homoclinic points, where it encounters a corner while time-domain methods essentially encounter a discontinuity (a relatively short period of rapid variation). Other phase space-based algorithms use rescaled arclength in place of time, but subsequently resemble the time-domain methods. Compared to these, we introduce additional freedom through a variable stretching of arclength based on local curvature, using an auxiliary index-based variable. Several numerical examples are presented. Comparisons with results from the popular package MATCONT are favorable close to saddle homoclinic points.
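
A minimal sketch of the curvature-based stretching idea, assuming a toy closed curve: points are redistributed along the curve so that high-curvature regions (such as a developing corner near a homoclinic point) receive more points. The spline machinery and the continuation algorithm itself are not reproduced.

```python
import numpy as np

def curvature_weighted_resample(x, y, n_out, power=0.5):
    """Redistribute points on a closed curve so that high-curvature
    regions receive proportionally more points."""
    xc, yc = np.append(x, x[0]), np.append(y, y[0])          # close the curve
    dx, dy = np.gradient(xc), np.gradient(yc)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5   # local curvature
    ds = np.hypot(np.diff(xc), np.diff(yc))                        # segment lengths
    # Stretched "arclength": weight each segment by (1 + curvature)^power
    w = ds * (1.0 + 0.5 * (kappa[:-1] + kappa[1:])) ** power
    s = np.concatenate(([0.0], np.cumsum(w)))
    s_new = np.linspace(0.0, s[-1], n_out, endpoint=False)
    return np.interp(s_new, s, xc), np.interp(s_new, s, yc)

# Toy closed curve with a sharper region (assumed for illustration)
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x, y = np.cos(t), 0.2 * np.sin(t) ** 3 + 0.3 * np.sin(t)
xr, yr = curvature_weighted_resample(x, y, n_out=100)
gaps = np.hypot(np.diff(xr), np.diff(yr))
print("resampled", len(xr), "points; spacing ratio max/min =", round(gaps.max() / gaps.min(), 2))
```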

Relevance: 80.00%

Abstract:

A lack of information on protein-protein interactions at the host-pathogen interface is impeding the understanding of the pathogenesis process. A recently developed homology search-based method for predicting protein-protein interactions is applied to the gastric pathogen Helicobacter pylori to predict in vitro interactions between H. pylori proteins and human proteins. Many of the predicted interactions could potentially occur between the pathogen and its human host during pathogenesis, as we focused mainly on the H. pylori proteins that have a transmembrane region or are encoded in the pathogenicity island, and those known to be secreted into the human host. By applying the homology search approach to the protein-protein interaction databases DIP and iPfam, we could predict in vitro interactions between a total of 623 H. pylori proteins and 6559 human proteins. The predicted interactions involve 549 hypothetical proteins of as yet unknown function encoded in the H. pylori genome and 13 experimentally verified secreted proteins. We identified 833 interactions involving the extracellular domains of transmembrane proteins of H. pylori. Structural analysis of some examples reveals that the predicted interactions are consistent with the structural compatibility of the binding partners. Examples of interactions with discernible biological relevance are discussed.
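
A minimal sketch of the homology-transfer (interolog) rule behind such predictions, assuming invented identifiers and toy data: if a pathogen protein and a host protein are each homologous to two template proteins known to interact in a database such as DIP or iPfam, the pathogen-host pair is predicted to interact.

```python
# Toy homology maps and template interactions; all identifiers are invented
# placeholders, not entries from DIP or iPfam.
pathogen_homologs = {          # H. pylori protein -> homologous template proteins
    "HP_p1": {"tmplA"},
    "HP_p2": {"tmplB"},
}
host_homologs = {              # human protein -> homologous template proteins
    "HUM_h1": {"tmplC"},
    "HUM_h2": {"tmplD"},
}
template_interactions = {      # known interactions between template proteins
    ("tmplA", "tmplC"),
    ("tmplB", "tmplD"),
}

def predict_interologs(pathogen_map, host_map, templates):
    """Transfer template interactions across homology relationships."""
    predictions = set()
    for p, p_templates in pathogen_map.items():
        for h, h_templates in host_map.items():
            for a in p_templates:
                for b in h_templates:
                    if (a, b) in templates or (b, a) in templates:
                        predictions.add((p, h))
    return predictions

print(predict_interologs(pathogen_homologs, host_homologs, template_interactions))
```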

Relevance: 80.00%

Abstract:

Extraction of text areas from document images with complex content and layout is a challenging task. A few texture-based techniques have already been proposed for extracting such text blocks. Most such techniques are computationally expensive and hence far from being realizable in real time. In this work, we propose a modification to two of the existing texture-based techniques to reduce the computation. This is accomplished with Harris corner detectors. The efficiency of these two texture-based algorithms, one based on Gabor filters and the other on the log-polar wavelet signature, is compared. A combination of Gabor feature based texture classification, performed on a smaller set of Harris corner detected points, is observed to deliver both accuracy and efficiency.
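
A minimal sketch of the proposed speed-up, assuming OpenCV is available and using an artificial test image: Harris corners select a small set of candidate points, and a Gabor filter bank is evaluated only at those points to give texture features for a text/non-text classifier (the classifier itself is not shown).

```python
import cv2
import numpy as np

def candidate_text_points(gray, max_points=500):
    """Harris corner detection: text regions are corner-dense, so evaluating
    texture features only at corner points reduces the computation."""
    response = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)   # blockSize, ksize, k
    order = np.argsort(response, axis=None)[::-1][:max_points]
    ys, xs = np.unravel_index(order, response.shape)
    return list(zip(ys, xs))

def gabor_features(gray, points, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Gabor filter-bank responses sampled only at the candidate points."""
    feats = []
    for theta in thetas:
        kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)  # sigma, lambda, gamma
        resp = cv2.filter2D(np.float32(gray), cv2.CV_32F, kern)
        feats.append([abs(resp[y, x]) for (y, x) in points])
    return np.array(feats).T            # one feature vector per candidate point

# Artificial test image standing in for a document page.
gray = np.zeros((64, 256), dtype=np.uint8)
cv2.putText(gray, "sample text block", (5, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.0, 255, 2)
pts = candidate_text_points(gray, max_points=200)
X = gabor_features(gray, pts)           # features for a text/non-text classifier
print("feature matrix:", X.shape)
```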

Relevance: 80.00%

Abstract:

In this paper, we are interested in high spectral efficiency multicode CDMA systems with a large number of users employing single/multiple transmit antennas and higher-order modulation. In particular, we consider a local neighborhood search based multiuser detection algorithm which offers very good performance at low complexity, suited for systems with a large number of users employing M-QAM/M-PSK. We apply the algorithm to the chip matched filter output vector. We demonstrate near-single-user (SU) performance of the algorithm in CDMA systems with a large number of users using 4-QAM/16-QAM/64-QAM/8-PSK on AWGN, frequency-flat, and frequency-selective fading channels. We further show that the algorithm performs very well in multicode multiple-input multiple-output (MIMO) CDMA systems as well, outperforming other linear detectors and interference cancellers reported in the literature for such systems. The per-symbol complexity of the search algorithm is O(K^2 n_t^2 n_c^2 M), where K is the number of users, n_t the number of transmit antennas at each user, n_c the number of spreading codes multiplexed on each transmit antenna, and M the modulation alphabet size, making the algorithm attractive for multiuser detection in large-dimension multicode MIMO-CDMA systems with M-QAM.
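
A minimal sketch of a local neighborhood search detector of this kind, assuming a small random channel matrix and a 4-QAM alphabet: starting from a matched-filter estimate, it repeatedly moves to the one-symbol-changed neighbour that most reduces the maximum-likelihood cost ||y - Hx||^2 until no improvement is possible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small illustrative system y = Hx + n with a 4-QAM alphabet (assumed sizes).
K = 8
alphabet = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K))) / np.sqrt(2)
x_true = rng.choice(alphabet, size=K)
y = H @ x_true + 0.05 * (rng.normal(size=K) + 1j * rng.normal(size=K))

def cost(x):
    return np.linalg.norm(y - H @ x) ** 2

def neighborhood_search(x0):
    """Greedy local search over the one-symbol-changed neighbourhood."""
    x = x0.copy()
    improved = True
    while improved:
        improved = False
        best_c = cost(x)
        for k in range(K):                   # try changing each position...
            for s in alphabet:               # ...to every other alphabet symbol
                if s == x[k]:
                    continue
                trial = x.copy()
                trial[k] = s
                c = cost(trial)
                if c < best_c:               # remember the best improving move
                    best_c, best_move, improved = c, (k, s), True
        if improved:
            k, s = best_move
            x[k] = s
    return x

# Initial estimate: per-symbol hard decision on the matched-filter output.
mf = H.conj().T @ y
x0 = alphabet[np.argmin(np.abs(mf[:, None] - alphabet[None, :]), axis=1)]
x_hat = neighborhood_search(x0)
print("symbol errors:", int(np.sum(x_hat != x_true)))
```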

Relevance: 80.00%

Abstract:

Single-carrier frequency division multiple access (SC-FDMA) has become a popular alternative to orthogonal frequency division multiple access (OFDMA) for multiuser communication on the uplink. This is mainly due to the low peak-to-average power ratio (PAPR) of SC-FDMA compared to that of OFDMA. Long-Term Evolution (LTE) uses SC-FDMA on the uplink to exploit this PAPR advantage and reduce transmit power amplifier backoff in user terminals. In this paper, we show that SC-FDMA can be beneficially used for multiuser communication on the downlink as well. We present SC-FDMA transmit and receive signaling architectures for multiuser communication on the downlink. The benefits of using SC-FDMA on the downlink are that SC-FDMA can achieve (i) significantly better bit error rate (BER) performance at the user terminal compared to OFDMA, and (ii) improved PAPR compared to OFDMA, which reduces base station (BS) power amplifier backoff (making BSs greener). The SC-FDMA receiver needs to perform joint equalization, which can be carried out using low-complexity equalization techniques. For this, we present a local neighborhood search based equalization algorithm for SC-FDMA. This algorithm is very attractive in both complexity and performance. We present simulation results that establish the PAPR and BER performance advantages of SC-FDMA over OFDMA in the multiuser SISO/MIMO downlink as well as in the large-scale multiuser MISO downlink with tens to hundreds of antennas at the BS.
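
The PAPR contrast at the heart of this argument can be illustrated with a short sketch that builds OFDMA and SC-FDMA (DFT-spread OFDM) symbols from the same QPSK data and compares their peak-to-average power ratios; the subcarrier counts and oversampling are assumptions, and the neighborhood search equalizer itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 64        # subcarriers allocated to one user (assumed)
N_FFT = 512   # total IFFT size; also provides oversampling for PAPR measurement

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def qpsk(n):
    bits = rng.integers(0, 2, size=(n, 2))
    return ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

def ofdma_symbol(data):
    """Map QPSK symbols directly onto subcarriers, then IFFT."""
    X = np.zeros(N_FFT, dtype=complex)
    X[:M] = data
    return np.fft.ifft(X)

def scfdma_symbol(data):
    """DFT-spread the QPSK symbols before subcarrier mapping (SC-FDMA)."""
    X = np.zeros(N_FFT, dtype=complex)
    X[:M] = np.fft.fft(data) / np.sqrt(M)    # DFT precoding
    return np.fft.ifft(X)

ofdma = [papr_db(ofdma_symbol(qpsk(M))) for _ in range(2000)]
scfdma = [papr_db(scfdma_symbol(qpsk(M))) for _ in range(2000)]
print("99th-percentile PAPR  OFDMA: %.1f dB   SC-FDMA: %.1f dB"
      % (np.percentile(ofdma, 99), np.percentile(scfdma, 99)))
```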

Relevance: 80.00%

Abstract:

Quantum Computing is a relatively modern field which simulates quantum computation conditions. Moreover, it can be used to estimate which quasiparticles would endure better in a quantum environment. Topological Quantum Computing (TQC) is an approach for reducing the quantum decoherence problem, which is responsible for the appearance of errors in the representation of information. This project tackles specific instances of TQC problems using MOEAs (multi-objective optimization evolutionary algorithms). A MOEA is a type of algorithm which optimizes two or more objectives of a problem simultaneously, using a population-based approach. We have implemented MOEAs that use probabilistic procedures found in EDAs (Estimation of Distribution Algorithms), since EDAs have generally found better solutions than ordinary EAs (evolutionary algorithms), even though they are more costly. Both EDAs and MOEAs are population-based algorithms. The objective of this project was to use a multi-objective approach to find good solutions for several instances of a TQC problem. In particular, the objectives considered in the project were the approximation error and the length of a solution. The tool we used to solve the instances of the problem was the multi-objective framework PISA. Because PISA has little documentation available, we had to reverse-engineer the framework to understand its modules and the way they communicate with each other. Once its operation was understood, we began working on a module dedicated to the braid problem. Finally, we subjected this module to an exhaustive experimentation phase and collected the results.
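
A minimal sketch of the bi-objective view taken here, assuming made-up candidate values: each candidate braid is scored by approximation error and braid length, and only the non-dominated (Pareto) set is kept. Neither PISA nor the EDA-based variation operators are reproduced.

```python
import random

random.seed(0)

# Made-up candidate solutions: (approximation error, braid length).
# In the real problem these would be evaluated braid words.
candidates = [(random.uniform(0.0, 1.0), random.randint(5, 60)) for _ in range(30)]

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and differs from b."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(points):
    """Keep only the non-dominated (error, length) pairs."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

for err, length in sorted(pareto_front(candidates)):
    print(f"error={err:.3f}  braid length={length}")
```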