872 results for direct search optimization algorithm
Abstract:
This paper elaborates the routing of a cable cycle through the available routes in a building in order to link a set of devices in the most reasonable way. Despite the similarities to other NP-hard routing problems, the goal is not only to minimize the cost (the length of the cycle) but also to increase the reliability of the path (in case of a cable cut), which is assessed by a risk factor. Since there is often a trade-off between the risk and length factors, a criterion for ranking candidates and deciding on the most reasonable solution is defined. A set of techniques is proposed to perform an efficient and exact search among candidates. A novel graph is introduced to reduce the search space and steer the search toward feasible and desirable solutions. Moreover, an admissible heuristic length estimate helps with the early detection of partial cycles that lead to unreasonable solutions. The results show that the method provides solutions that are both technically and financially reasonable. Furthermore, it is demonstrated that the proposed techniques are very efficient in reducing the computational time of the search to a reasonable amount.
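As an illustration of the kind of pruned exact search the abstract describes, the sketch below enumerates candidate cycles on a route graph, discards partial cycles whose admissible length lower bound already rules out a better solution, and ranks complete cycles by a combined length/risk criterion. The graph encoding, the `ALPHA` weighting and the `lower_bound`/`risk_of` callables are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: pruned cycle search with an admissible length bound.
import heapq
import itertools
import math

ALPHA = 0.5   # assumed weight between length and risk in the ranking criterion

def rank(length, risk):
    """Hypothetical ranking criterion trading cycle length off against risk."""
    return ALPHA * length + (1.0 - ALPHA) * risk

def best_cycle(graph, devices, start, lower_bound, risk_of):
    """graph[u] -> {v: edge_length}; lower_bound(node, remaining) must never
    overestimate the length still needed to visit `remaining` and close the cycle."""
    best_value, best_path = math.inf, None
    counter = itertools.count()                 # tie-breaker so the heap never compares paths
    heap = [(0.0, next(counter), [start], frozenset(devices) - {start})]
    while heap:
        length, _, path, remaining = heapq.heappop(heap)
        node = path[-1]
        if not remaining and len(path) > 1 and start in graph[node]:
            total = length + graph[node][start]          # close the cycle
            value = rank(total, risk_of(path + [start]))
            if value < best_value:
                best_value, best_path = value, path + [start]
        for nxt, w in graph[node].items():
            if nxt in path:
                continue                        # simple cycles only in this sketch
            new_len = length + w
            # Admissible pruning: with a non-negative risk factor, no completion of
            # this partial cycle can rank better than the incumbent solution.
            if ALPHA * (new_len + lower_bound(nxt, remaining)) >= best_value:
                continue
            heapq.heappush(heap, (new_len, next(counter),
                                  path + [nxt], remaining - {nxt}))
    return best_value, best_path
```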
Abstract:
The goal of the Evidence Algorithm research programme is the development of an open system for automated proving that is able to accumulate mathematical knowledge and to prove theorems in the context of a self-contained mathematical text. By now, the first version of such a system, called the System for Automated Deduction (SAD), has been implemented in software. The SAD system possesses the following main features: mathematical texts are formalized using a specific formal language that is close to the natural language of mathematical publications; the proof search is based on special sequent-type calculi formalizing a natural reasoning style, such as the application of definitions and auxiliary propositions. These calculi also admit the separation of equality handling from deduction, which makes it possible to integrate logical reasoning with symbolic calculation.
Abstract:
This study contributes a rigorous diagnostic assessment of state-of-the-art multiobjective evolutionary algorithms (MOEAs) and highlights key advances that the water resources field can exploit to better discover the critical tradeoffs constraining our systems. This study provides the most comprehensive diagnostic assessment of MOEAs for water resources to date, exploiting more than 100,000 MOEA runs and trillions of design evaluations. The diagnostic assessment measures the effectiveness, efficiency, reliability, and controllability of ten benchmark MOEAs for a representative suite of water resources applications addressing rainfall-runoff calibration, long-term groundwater monitoring (LTM), and risk-based water supply portfolio planning. The suite of problems encompasses a range of challenging problem properties including (1) many-objective formulations with 4 or more objectives, (2) multi-modality (or false optima), (3) nonlinearity, (4) discreteness, (5) severe constraints, (6) stochastic objectives, and (7) non-separability (also called epistasis). The applications are representative of the dominant problem classes that have shaped the history of MOEAs in water resources and that will be dominant foci in the future. Recommendations are provided for which modern MOEAs should serve as tools and benchmarks in the future water resources literature.
Abstract:
This paper describes the formulation of a Multi-objective Pipe Smoothing Genetic Algorithm (MOPSGA) and its application to the least-cost water distribution network design problem. Evolutionary Algorithms have been widely utilised for the optimisation of both theoretical and real-world non-linear optimisation problems, including water system design and maintenance problems. In this work we present a pipe-smoothing-based approach to the creation and mutation of chromosomes which utilises engineering expertise with a view to increasing the performance of the algorithm whilst promoting engineering feasibility within the population of solutions. MOPSGA is based upon the standard Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and incorporates a modified population initialiser and mutation operator which directly target elements of a network with the aim of increasing network smoothness (in terms of the progression from one diameter to the next) using network element awareness and an elementary heuristic. The pipe smoothing heuristic used in this algorithm is based upon a fundamental principle employed by water system engineers when designing water distribution pipe networks: the diameter of any pipe is never greater than the sum of the diameters of the pipes directly upstream, resulting in a transition from large to small diameters from the source to the extremities of the network. MOPSGA is assessed on a number of water distribution network benchmarks from the literature, including some real-world-based, large-scale systems. The performance of MOPSGA is directly compared to that of NSGA-II with regard to solution quality, engineering feasibility (network smoothness) and computational efficiency. MOPSGA is shown to promote both engineering and hydraulic feasibility whilst attaining good infrastructure costs compared to NSGA-II.
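A minimal sketch of the smoothing rule quoted above (a pipe's diameter never greater than the sum of the diameters of the pipes directly upstream) is given below; the violation check and the simple capping repair are illustrative assumptions, not the actual MOPSGA initialiser or mutation operator.

```python
# Illustrative sketch only: checking and repairing the pipe-smoothing rule.
def smoothness_violations(diameters, upstream):
    """diameters: {pipe_id: diameter}; upstream: {pipe_id: [ids of pipes feeding it]}."""
    violations = []
    for pipe, dia in diameters.items():
        feeders = upstream.get(pipe, [])
        if not feeders:
            continue                      # pipes at the source have no smoothing constraint
        limit = sum(diameters[u] for u in feeders)
        if dia > limit:
            violations.append((pipe, dia, limit))
    return violations

def repair(diameters, upstream, available_sizes):
    """Cap each violating pipe to the largest available size within its limit."""
    fixed = dict(diameters)
    for pipe, _, limit in smoothness_violations(fixed, upstream):
        candidates = [s for s in available_sizes if s <= limit]
        if candidates:
            fixed[pipe] = max(candidates)
    return fixed

# Example: pipe "p3" (400 mm) is fed only by "p1" (150 mm) and "p2" (200 mm),
# so it breaks the rule and is capped to the largest catalogue size <= 350 mm.
sizes = [100, 150, 200, 250, 300, 350, 400]
net = {"p1": 150, "p2": 200, "p3": 400}
ups = {"p3": ["p1", "p2"]}
print(smoothness_violations(net, ups))    # [('p3', 400, 350)]
print(repair(net, ups, sizes)["p3"])      # 350
```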
Abstract:
Lawrance (1991) has shown, through the estimation of consumption Euler equations, that subjective rates of impatience (time preference) in the U.S. are three to five percentage points higher for households with lower average labor incomes than for those with higher labor income. From a theoretical perspective, the sign of this correlation in a job-search model seems at first to be undetermined, since more impatient workers tend to accept wage offers that less impatient workers would not, thereby remaining less time unemployed. The main result of this paper is showing that, regardless of the existence of effects of opposite sign, and independently of the particular specifications of the givens of the model, less impatient workers always end up, in the long run, with a higher average income. The result is based on the (unique) invariant Markov distribution of wages associated with the dynamic optimization problem solved by the consumers. An example is provided to illustrate the method.
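The sketch below gives a toy numeric illustration of the paper's central object: the invariant distribution of the wage Markov chain induced by a reservation-wage policy, and the long-run average income it implies. The offer distribution, separation rate and reservation wages are made-up values, not taken from the paper.

```python
# Illustrative sketch only: invariant wage distribution under a reservation-wage rule.
import numpy as np

wages = np.array([0.0, 1.0, 2.0, 3.0])      # state 0 = unemployed, income 0
offer_prob = np.array([0.5, 0.3, 0.2])      # prob. of being offered wage 1, 2, 3 (assumed)
separation = 0.1                            # per-period job loss probability (assumed)

def transition_matrix(reservation):
    """Worker accepts any offer >= reservation; employed workers keep the wage
    unless separated back into unemployment."""
    P = np.zeros((4, 4))
    accept = wages[1:] >= reservation
    P[0, 1:] = offer_prob * accept           # unemployed -> accepted wage
    P[0, 0] = 1.0 - P[0, 1:].sum()           # offer rejected, stay unemployed
    for j in range(1, 4):
        P[j, j] = 1.0 - separation
        P[j, 0] = separation
    return P

def invariant(P):
    """Left eigenvector of P for the unit eigenvalue, normalised to a distribution."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

for reservation in (1.0, 2.0):               # more impatient vs. less impatient worker
    pi = invariant(transition_matrix(reservation))
    print(reservation, float(pi @ wages))     # long-run average income
```

With these made-up numbers the higher reservation wage yields the higher long-run average income, in line with the paper's qualitative result.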
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for the reduction of design complexity, but brings new challenges for the test of the final circuit. The access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that need to be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and integration of the test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This broader scope makes an efficient, yet fine-grained, search possible in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for the test of the cores embedded into the systems that use this communication platform. A power-aware test scheduling algorithm aiming at exploiting the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated considering a number of system configurations, such as different positions of the cores in the network, power consumption constraints and the number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, while area and pin overhead are greatly reduced. In this manuscript, the main problems in the test of core-based systems are first identified and the current solutions are discussed. The problems being tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC'02 SoC Test Benchmarks, and further compared to other test planning methods from the literature. This comparison confirms the efficiency of the proposed methods.
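As a rough illustration of power-constrained test scheduling of the kind discussed above, the sketch below packs core tests greedily into parallel sessions so that the summed test power never exceeds a cap; the core data, the power limit and the greedy rule are illustrative assumptions, not the thesis' actual algorithms.

```python
# Illustrative sketch only: greedy power-constrained test scheduling.
def schedule(cores, power_limit):
    """cores: list of (name, test_time, test_power), each with test_power <= power_limit.
    Returns (sessions, total test time)."""
    pending = sorted(cores, key=lambda c: c[1], reverse=True)  # longest tests first
    sessions, makespan = [], 0.0
    while pending:
        session, used_power = [], 0.0
        for core in list(pending):
            _, t, p = core
            if used_power + p <= power_limit:     # fits under the power cap
                session.append(core)
                used_power += p
                pending.remove(core)
        sessions.append(session)
        makespan += max(t for _, t, _ in session)  # session ends with its longest test
    return sessions, makespan

cores = [("c1", 120, 40), ("c2", 80, 35), ("c3", 60, 20), ("c4", 30, 25)]  # made-up cores
sessions, makespan = schedule(cores, power_limit=70)
for s in sessions:
    print([c[0] for c in s])
print("test time:", makespan)
```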
Abstract:
This paper investigates the impact of foreign direct investment on the productivity performance of domestic firms in Portugal. The data comprise nine manufacturing sectors for the period 1992-95. Relative to previous studies, the model specification is improved by taking into consideration several aspects: the influence of the "technological gap" on spillover diffusion and the choice of its most appropriate interval; sectoral variation in the coefficients of the spillover effect; identification of constant, idiosyncratic sectoral factors by means of a fixed effects model; and the search for inter-sectoral positive spillover effects. The relationship between domestic firms' productivity and foreign presence is positive only if a proper technology differential between the foreign and domestic producers exists and the sectoral characteristics are favourable. In broad terms, spillover diffusion is associated with modern industries in which the foreign-owned establishments have a clear, but not too sharp, edge over the domestic ones. Besides, other specific sectoral influences can be pertinent, agglomerative location factors being one example.
Abstract:
We investigate the impact of foreign direct investment on the productivity of domestic firms, using sectoral data for Portugal. An improved analysis takes into account the most appropriate interval for the technological gap between foreign and domestic firms. Sectoral variation of spillovers, idiosyncratic sectoral factors and the search for inter-sectoral effects provide new insights into the subject. Significant spillovers require a proper technology differential between the foreign and domestic producers and favourable sectoral characteristics. Broadly, they occur in modern industries in which foreign firms have a clear, but not too sharp, edge over the domestic ones. Agglomeration effects are also identified as pertinent specific influences.
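A minimal sketch of this type of spillover specification is given below: domestic productivity regressed on foreign presence with sector fixed effects, where the foreign-presence effect is allowed to differ inside a chosen technological-gap band. The synthetic data, band limits and variable names are illustrative assumptions, not the study's actual estimation.

```python
# Illustrative sketch only: fixed-effects spillover regression with a gap band.
import numpy as np

rng = np.random.default_rng(0)
n_sectors, n_years = 9, 4
sector = np.repeat(np.arange(n_sectors), n_years)
foreign_share = rng.uniform(0.0, 0.6, n_sectors * n_years)
tech_gap = rng.uniform(0.5, 2.0, n_sectors * n_years)           # foreign/domestic productivity
in_band = ((tech_gap > 1.1) & (tech_gap < 1.6)).astype(float)   # "clear but not too sharp" edge
# Synthetic outcome: spillovers only materialise inside the band.
productivity = 1.0 + 0.8 * foreign_share * in_band + rng.normal(0, 0.1, sector.size)

# Design matrix: sector dummies (fixed effects) + foreign share + interaction term.
dummies = (sector[:, None] == np.arange(n_sectors)).astype(float)
X = np.column_stack([dummies, foreign_share, foreign_share * in_band])
beta, *_ = np.linalg.lstsq(X, productivity, rcond=None)
print("spillover outside band:", round(beta[-2], 3))
print("extra spillover inside band:", round(beta[-1], 3))
```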
Abstract:
In the last decade mobile wireless communications have witnessed explosive growth in user penetration rates and widespread deployment around the globe. It is expected that this tendency will continue with the convergence of fixed wired Internet networks with mobile ones and with the evolution to the full IP architecture paradigm. Mobile wireless communications will therefore be of paramount importance in the development of the information society of the near future. A research topic of particular relevance in telecommunications nowadays is the design and implementation of 4th generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies in a core network fully compliant with the Internet Protocol (all-IP paradigm). Such networks will sustain the stringent quality of service (QoS) requirements and the high data rates expected from the type of multimedia applications to become available in the near future. The approach followed in the design and implementation of current-generation mobile wireless networks (2G and 3G) has been the stratification of the architecture into a communication protocol model composed of a set of layers, each of which encompasses a set of functionalities. In such a layered protocol model, communication is only allowed between adjacent layers and through specific service interface points. This modular concept eases the implementation of new functionalities, as the behaviour of each layer in the protocol stack is not affected by the others. However, the fact that lower layers in the protocol stack do not utilize information available from upper layers, and vice versa, degrades the achievable performance. This is particularly relevant if multiple antenna systems, in a MIMO (Multiple Input Multiple Output) configuration, are implemented. MIMO schemes introduce another degree of freedom for radio resource allocation: the space domain. Contrary to the time and frequency domains, radio resources mapped into the spatial domain cannot be assumed to be completely orthogonal, due to the interference resulting from users transmitting in the same frequency sub-channel and/or time slots but in different spatial beams. Therefore, the availability of information regarding the state of radio resources, from lower to upper layers, is of fundamental importance in pursuing the levels of QoS expected by those multimedia applications. In order to match application requirements and the constraints of the mobile radio channel, in the last few years researchers have proposed a new paradigm for the layered communications architecture: the cross-layer design framework. In general terms, the cross-layer design paradigm refers to a protocol design in which the dependence between protocol layers is actively exploited, by breaking the stringent rules which restrict communication to adjacent layers in the original reference model and allowing direct interaction among different layers of the stack. Efficient management of the set of available radio resources demands the implementation of efficient, low-complexity packet schedulers which prioritize users' transmissions according to inputs provided by lower as well as upper layers of the protocol stack, fully compliant with the cross-layer design paradigm.
Specifically, efficiently designed packet schedulers for 4G networks should maximize the available capacity by taking into account the limitations imposed by the mobile radio channel while complying with the set of QoS requirements from the application layer. The IEEE 802.16e standard, also known as Mobile WiMAX, seems to comply with the specifications of 4G mobile networks. Its scalable architecture, low-cost implementation and high data throughput enable efficient data multiplexing and low data latency, attributes essential for broadband data services. Also, the connection-oriented approach of its medium access layer is fully compliant with the quality of service demands of such applications. Therefore, Mobile WiMAX seems to be a promising candidate for 4G mobile wireless networks. This thesis proposes the investigation, design and implementation of packet scheduling algorithms for the efficient management of the set of available radio resources in the time, frequency and spatial domains of Mobile WiMAX networks. The proposed algorithms combine input metrics from the physical layer and QoS requirements from upper layers, according to the cross-layer design paradigm. The proposed schedulers are evaluated by means of system-level simulations, conducted in a system-level simulation platform implementing the physical and medium access control layers of the IEEE 802.16e standard.
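As an illustration of a cross-layer scheduling metric of the kind described above, the sketch below combines a physical-layer input (instantaneous achievable rate per sub-channel) with upper-layer QoS state (head-of-line delay against a deadline) to pick a user per sub-channel. The metric, weights and data are illustrative assumptions, not the schedulers proposed in the thesis.

```python
# Illustrative sketch only: a cross-layer priority metric for sub-channel assignment.
def priority(rate, avg_rate, hol_delay, deadline, qos_weight=1.0):
    """Proportional-fair term (physical layer) scaled by delay urgency (upper layers)."""
    fairness = rate / max(avg_rate, 1e-9)
    urgency = 1.0 + qos_weight * (hol_delay / deadline)
    return fairness * urgency

def assign_subchannels(users, n_subchannels):
    """users: {uid: dict(rates=[per-subchannel rate], avg_rate, hol_delay, deadline)}.
    Greedily give each sub-channel to the user with the highest priority on it."""
    allocation = {}
    for sc in range(n_subchannels):
        allocation[sc] = max(users, key=lambda u: priority(users[u]["rates"][sc],
                                                           users[u]["avg_rate"],
                                                           users[u]["hol_delay"],
                                                           users[u]["deadline"]))
    return allocation

users = {   # made-up connections: a delay-sensitive flow and a high-rate flow
    "voip":  dict(rates=[1.2, 0.8], avg_rate=1.0, hol_delay=18.0, deadline=20.0),
    "video": dict(rates=[2.5, 2.0], avg_rate=2.2, hol_delay=5.0,  deadline=100.0),
}
print(assign_subchannels(users, n_subchannels=2))
```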
Abstract:
FERNANDES, Fabiano A. N. et al. Optimization of osmotic dehydration of papaya followed by air-drying. Food Research International, v. 39, p. 492-498, 2006.
Abstract:
In this dissertation, the theoretical principles governing molecular modeling were applied to the electronic characterization of the oligopeptide α3 and its variants (5Q, 7Q)-α3, as well as to the quantum description of the interaction between the aminoglycoside hygromycin B and the 30S subunit of the bacterial ribosome. In the first study, the linear, neutral dipeptides which make up the mentioned oligopeptides were modeled and then optimized towards structures of lower potential energy and appropriate dihedral angles. Three successive geometry optimization processes, based on classical Newtonian mechanics, semi-empirical methods and density functional theory (DFT), explored the energy landscape of each dipeptide during the search for ideal minimum-energy structures. Finally, the best conformers were characterized in terms of their electrostatic potential, ionization energy (amino acids), frontier molecular orbitals and hopping term. From the hopping terms described in this study, it was possible in subsequent studies to characterize the charge transport properties of these peptide models. The envisioned application is a new biosensor technology capable of diagnosing amyloid diseases, which are related to the accumulation of misfolded proteins, based on the conductivity displayed by the patient's proteins. In a second step of this dissertation, a quantum molecular modeling study of the interaction energy between a ribosomal aminoglycoside antibiotic and its receptor was carried out. Hygromycin B (hygB) is an aminoglycoside antibiotic that affects ribosomal translocation by direct interaction with the small subunit of the bacterial ribosome (30S), specifically with nucleotides in helix 44 of the 16S ribosomal RNA (16S rRNA). Due to the strong electrostatic character of this binding, an energetic investigation of the binding mechanism of this complex was proposed using different values of the dielectric constant (ε = 0, 4, 10, 20 and 40), which have been widely used to study the electrostatic properties of biomolecules. To this end, increasing radii centered on the hygB centroid were measured from the 30S-hygB crystal structure (1HNZ.pdb), and the individual interaction energy of each enclosed nucleotide was determined by quantum calculations using the molecular fractionation with conjugate caps (MFCC) strategy. It was observed that larger dielectric constants attenuate the individual interaction energies, allowing the convergence state to be reached quickly. However, only for ε = 40 does the total binding energy of the drug-receptor interaction stabilize, at r = 18 Å, which provides an appropriate binding pocket because it encompasses the main residues that interact most strongly with hygB - C1403, C1404, G1405, A1493, G1494, U1495, U1498 and C1496. Thus, a dielectric constant of ≈ 40 is ideal for the treatment of systems with many electrical charges. By comparing the individual binding energies of the 16S rRNA nucleotides with the experimental tests that determine the minimum inhibitory concentration (MIC) of hygB, it is believed that the residues with high binding energies generate bacterial resistance to the drug when mutated. By the same reasoning, since residues with low interaction energies do not effectively influence the affinity of hygB for its binding site, there is no loss of effectiveness when they are replaced.
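The sketch below illustrates the radial energy-summation idea described above: per-nucleotide interaction energies (such as an MFCC-style calculation would produce) are accumulated within growing radii around the ligand centroid until the total stops changing, defining the converged binding pocket. The energies, distances and tolerance are made-up illustrative values, not results from the dissertation.

```python
# Illustrative sketch only: radial accumulation of per-residue interaction energies.
def converged_pocket(residue_energies, radii, tol=0.5):
    """residue_energies: {residue: (distance_to_centroid, energy in kcal/mol)}.
    Returns the first radius at which the accumulated energy changes by < tol."""
    previous = None
    for r in sorted(radii):
        total = sum(e for d, e in residue_energies.values() if d <= r)
        if previous is not None and abs(total - previous) < tol:
            return r, total
        previous = total
    return max(radii), previous

energies = {   # distance (Å), interaction energy (kcal/mol) -- illustrative values only
    "C1403": (4.0, -9.1), "C1404": (4.5, -8.2), "G1405": (5.1, -7.6),
    "A1493": (6.0, -5.9), "G1494": (6.4, -6.3), "U1495": (7.2, -4.8),
    "C1496": (8.0, -3.1), "U1498": (9.5, -2.4), "A1500": (14.0, -0.2),
    "G1520": (17.5, -0.1),
}
radius, energy = converged_pocket(energies, radii=range(4, 21, 2))
print(f"binding pocket converges at r = {radius} Å, total = {energy:.1f} kcal/mol")
```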
Abstract:
Interdisciplinarity is the answer in the search for solutions to complex problems, since it brings together interfaces of knowledge while respecting their borders. This paradigm is essential in the author's work, and the present research is based on this perspective: it used interdisciplinary working groups formed by professionals from Computer Science, Social Communication, Architecture and Urbanism, Pedagogy, Psychopedagogy, Nutritional Science, Endocrinology, Occupational Therapy and Nursing; an 8-year-old child, daughter of one of the professionals, was also part of the group. This thesis aims to present the course of the investigation, analyzing the interdisciplinary action of Occupational Therapy and Nutrition in promoting the learning of nutritional concepts through educative-nutritional games in order to prevent childhood obesity in an educational context. The research was analytical, interventionist and quasi-experimental. It took place in a public school in Fortaleza, Ceará, Brazil, between August and December 2004. A non-probabilistic convenience sample of 200 children born between 1994 and 1996 was selected. To analyze the results, triangulation combining quantitative and qualitative approaches was used. Data collection took place through games specially produced for this research: video games, board games, memory games, puzzles, scramble, word searches and interactive basics. Semi-structured interviews, direct and structured observations and focus groups were also carried out. The educative-nutritional games proved effective in the learning process, leading to a change of attitude towards eating choices. The games gave similar results for the compared variables (preferences, experience and attitudes, the latter observed through the games) and for the compared categories (the possibility of learning by playing, fantasy in the learning process, learning nutritional education concepts, and the need for help in the learning process, i.e. mediation). It was shown that educative-nutritional games can be used to teach nutritional concepts through an interdisciplinary action of Occupational Therapy and Nutrition in schools, and that the simultaneous application of these games optimizes the children's learning process. The need for further studies on the adaptation of tools used in children's nutritional education, supported by interdisciplinary action, should be emphasized, since no single discipline, working in a fragmented way, can respond to complex problems and help change reality with effectiveness and resolution.
Abstract:
Combinatorial Optimization is a fundamental area for companies seeking competitive advantages in the various productive sectors, and the Asymmetric Travelling Salesman Problem, classified as one of the most important problems in this area for belonging to the NP-hard class and for having several practical applications, has attracted growing interest from researchers in the development of ever more efficient metaheuristics to assist in its resolution. One such case is that of Memetic Algorithms, evolutionary algorithms that combine genetic operators with a local search procedure. This work explores the technique of Viral Infection in a Memetic Algorithm, where the infection replaces the mutation operator in order to obtain fast evolution or extinction of species (KANOH et al., 1996), providing a way to accelerate and improve the solution. To this end, four variants of Viral Infection applied to the Memetic Algorithm were developed for the resolution of the Asymmetric Travelling Salesman Problem, in which the agent and the virus undergo a symbiosis process that favoured obtaining a hybrid evolutionary algorithm that is computationally viable.
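A simplified sketch of a memetic loop with a viral-infection operator for the asymmetric TSP is given below: a short sub-tour (the "virus") extracted from a good tour is transcribed into a host tour in place of mutation, followed by a simple improvement pass. The operators shown are illustrative simplifications, not the four variants developed in the work.

```python
# Illustrative sketch only: memetic algorithm with a viral-infection operator for the ATSP.
import random

def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def infect(host, virus):
    """Insert the virus sub-tour into the host, removing the duplicated cities."""
    remainder = [c for c in host if c not in virus]
    cut = random.randrange(len(remainder) + 1)
    return remainder[:cut] + list(virus) + remainder[cut:]

def local_search(tour, dist):
    """One pass of segment reversal, accepting only improving moves."""
    best = tour[:]
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            cand = best[:i] + best[i:j][::-1] + best[j:]
            if tour_cost(cand, dist) < tour_cost(best, dist):
                best = cand
    return best

def memetic_atsp(dist, pop_size=20, generations=50, virus_len=3):
    cities = list(range(len(dist)))
    pop = [random.sample(cities, len(cities)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_cost(t, dist))
        elite = pop[: pop_size // 2]
        children = []
        for parent in elite:
            donor = random.choice(elite)
            start = random.randrange(len(donor) - virus_len)
            virus = donor[start:start + virus_len]        # virus extracted from a good tour
            children.append(local_search(infect(parent, virus), dist))
        pop = elite + children
    return min(pop, key=lambda t: tour_cost(t, dist))
```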
Abstract:
Optimization techniques known as metaheuristics have achieved success in the resolution of many problems classified as NP-hard. These methods use non-deterministic approaches that find very good solutions but do not guarantee the determination of the global optimum. Beyond the inherent difficulties related to the complexity that characterizes optimization problems, metaheuristics still face the exploration/exploitation dilemma, which consists of choosing between a greedy search and a wider exploration of the solution space. One way to guide such algorithms during the search for better solutions is to supply them with more knowledge of the problem through an intelligent agent, able to recognize promising regions and to identify when they should diversify the direction of the search. Accordingly, this work proposes the use of a Reinforcement Learning technique - the Q-learning algorithm - as an exploration/exploitation strategy for the GRASP (Greedy Randomized Adaptive Search Procedure) and Genetic Algorithm metaheuristics. The GRASP metaheuristic uses Q-learning instead of the traditional greedy-random algorithm in the construction phase. This replacement aims to improve the quality of the initial solutions used in the local search phase of GRASP, and also provides the metaheuristic with an adaptive memory mechanism that allows the reuse of good previous decisions and avoids the repetition of bad ones. In the Genetic Algorithm, the Q-learning algorithm was used to generate an initial population of high fitness and, after a determined number of generations in which the diversity rate of the population falls below a certain limit L, it was also applied to supply one of the parents used in the genetic crossover operator. Another significant change in the hybrid genetic algorithm is the proposal of a mutually interactive cooperation process between the genetic operators and the Q-learning algorithm. In this interactive/cooperative process, the Q-learning algorithm receives an additional update to the matrix of Q-values based on the current best solution of the Genetic Algorithm. The computational experiments presented in this thesis compare the results obtained with traditional versions of the GRASP metaheuristic and the Genetic Algorithm with those obtained using the proposed hybrid methods. Both algorithms were applied successfully to the symmetric Traveling Salesman Problem, which was modeled as a Markov decision process.
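The sketch below illustrates the hybrid construction idea described above: a GRASP-style loop whose constructive phase picks the next TSP city from a Q-table updated by Q-learning, instead of the usual greedy-randomised choice. The parameters, the reward (negative edge length) and the omitted local-search phase are illustrative assumptions, not the thesis' implementation.

```python
# Illustrative sketch only: Q-learning-driven construction phase inside a GRASP loop.
import random

def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def q_construct(dist, Q, epsilon=0.2, alpha=0.1, gamma=0.9):
    """Build one tour, updating the Q-table as it goes (state = current city)."""
    n = len(dist)
    tour = [random.randrange(n)]
    unvisited = set(range(n)) - {tour[0]}
    while unvisited:
        s = tour[-1]
        if random.random() < epsilon:                        # explore
            a = random.choice(list(unvisited))
        else:                                                # exploit learned values
            a = max(unvisited, key=lambda c: Q[s][c])
        reward = -dist[s][a]                                 # shorter edges are rewarded
        future = max((Q[a][c] for c in unvisited if c != a), default=0.0)
        Q[s][a] += alpha * (reward + gamma * future - Q[s][a])   # Q-learning update
        tour.append(a)
        unvisited.remove(a)
    return tour

def grasp_with_qlearning(dist, iterations=100):
    n = len(dist)
    Q = [[0.0] * n for _ in range(n)]
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        tour = q_construct(dist, Q)        # Q-learning replaces the greedy-random builder
        # ... the GRASP local-search phase would refine `tour` here ...
        cost = tour_cost(tour, dist)
        if cost < best_cost:
            best, best_cost = tour, cost
    return best, best_cost
```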
Abstract:
Combinatorial optimization problems have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that these problems cannot be solved in polynomial time. Initially, these solutions were focused on heuristics; currently, metaheuristics are used more for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of what is called an "Operon" heuristic for the construction of the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, mainly using statistical methodology - Cluster Analysis and Principal Component Analysis; and the utilization of statistical analyses that are adequate for evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains to promote an "intelligent" search in the space of solutions. The Traveling Salesman Problem (TSP) is the intended application, based on a transgenetic algorithm known as ProtoG. A strategy is also proposed for the renewal of part of the chromosome population, triggered by adopting a minimum limit on the coefficient of variation of the fitness function of the individuals, calculated over the population. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms and a Simulated Annealing algorithm. Three performance analyses of these algorithms are proposed. The first is accomplished through Logistic Regression, based on the probability of the algorithm under test finding an optimal solution for a TSP instance. The second is accomplished through Survival Analysis, based on the probability distribution of the execution time observed until an optimal solution is achieved. The third is accomplished by means of a non-parametric Analysis of Variance, considering the Percent Error of the Solution (PES), the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted on sixty-one instances of the Euclidean TSP with sizes of up to 1,655 cities. The first two experiments deal with the adjustment of four parameters used in the ProtoG algorithm in an attempt to improve its performance. The last four were undertaken to evaluate the performance of ProtoG in comparison to the three algorithms adopted. For these sixty-one instances, it was concluded on the grounds of statistical tests that there is evidence that ProtoG performs better than these three algorithms in fifty instances. In addition, for the thirty-six instances considered in the last three trials, in which the performance of the algorithms was evaluated through PES, the average PES obtained with ProtoG was less than 1% in almost half of these instances, reaching its greatest average for one instance of 1,173 cities, with an average PES equal to 3.52%. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since it is not rare to find average PES values greater than 10% reported in the literature for instances of this size.
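For reference, the sketch below computes the Percent Error of the Solution (PES) as defined above: the percentage by which a found solution exceeds the best known solution for the instance. The run costs and the best-known value are illustrative numbers only.

```python
# Illustrative sketch only: the PES measure used in the performance analyses.
def percent_error(found_cost, best_known_cost):
    return 100.0 * (found_cost - best_known_cost) / best_known_cost

runs = [42890, 43110, 42615]          # tour costs found over several runs (made-up values)
best_known = 42080                    # best solution available in the literature (made-up)
pes_values = [percent_error(c, best_known) for c in runs]
print([round(p, 2) for p in pes_values],
      "average PES:", round(sum(pes_values) / len(pes_values), 2))
```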