995 results for Multi-constraints
Abstract:
We present a novel kinetic multi-layer model for gas-particle interactions in aerosols and clouds (KM-GAP) that explicitly treats all steps of mass transport and chemical reaction of semi-volatile species partitioning between the gas phase, particle surface and particle bulk. KM-GAP is based on the PRA model framework (Pöschl-Rudich-Ammann, 2007) and includes gas-phase diffusion, reversible adsorption, surface reactions, bulk diffusion and reaction, as well as condensation, evaporation and heat transfer. The size change of atmospheric particles and the temporal evolution and spatial profile of the concentration of individual chemical species can be modeled along with gas uptake and accommodation coefficients. Depending on the complexity of the investigated system and the computational constraints, unlimited numbers of semi-volatile species, chemical reactions and physical processes can be treated, and the model should help to bridge gaps in the understanding and quantification of multiphase chemistry and microphysics in atmospheric aerosols and clouds. In this study we demonstrate how KM-GAP can be used to analyze, interpret and design experimental investigations of changes in particle size and chemical composition in response to condensation, evaporation and chemical reaction. For the condensational growth of water droplets, our kinetic model results provide a direct link between laboratory observations and molecular dynamics simulations, confirming that the accommodation coefficient of water at 270 K is close to unity (Winkler et al., 2006). Literature data on the evaporation of dioctyl phthalate as a function of particle size and time can be reproduced, and the model results suggest that changes in experimental conditions such as aerosol particle concentration and chamber geometry may influence the evaporation kinetics and can be optimized for efficient probing of specific physical effects and parameters. With regard to oxidative aging of organic aerosol particles, we illustrate how the formation and evaporation of volatile reaction products like nonanal can cause a decrease in the size of oleic acid particles exposed to ozone.
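As a rough illustration of the flux bookkeeping that kinetic multi-layer models of this kind perform, a minimal sketch with one gas reservoir, one sorption layer and a few bulk layers is given below. This is not the published KM-GAP code: all rate coefficients are invented placeholders, and gas-phase diffusion, heat transfer and particle size change are omitted.

```python
# Minimal sketch of a kinetic multi-layer mass balance: gas phase,
# sorption layer and bulk layers coupled by first-order transport.
# All rate coefficients below are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp

N_BULK = 5       # number of bulk layers (assumed)
k_ads = 1e-2     # gas -> sorption layer (1/s), assumed
k_des = 1e-3     # sorption layer -> gas (1/s), assumed
k_sb = 5e-3      # sorption layer -> first bulk layer (1/s), assumed
k_bs = 5e-3      # first bulk layer -> sorption layer (1/s), assumed
k_bb = 2e-3      # bulk layer <-> neighbour exchange (1/s), assumed
k_rxn = 1e-4     # first-order chemical loss in the bulk (1/s), assumed

def derivs(t, y):
    """y = [gas, surface, bulk_1 ... bulk_N]; returns dy/dt."""
    gas, surf, bulk = y[0], y[1], y[2:]
    dydt = np.zeros_like(y)
    dydt[0] = -k_ads * gas + k_des * surf          # reversible adsorption
    dydt[1] = (k_ads * gas - k_des * surf
               - k_sb * surf + k_bs * bulk[0])     # surface <-> bulk exchange
    dydt[2] = k_sb * surf - k_bs * bulk[0]
    for i in range(N_BULK - 1):                    # layer-to-layer diffusion
        flux = k_bb * (bulk[i] - bulk[i + 1])
        dydt[2 + i] -= flux
        dydt[3 + i] += flux
    dydt[2:] -= k_rxn * bulk                       # chemical loss per layer
    return dydt

y0 = np.zeros(2 + N_BULK)
y0[0] = 1.0                                        # all mass starts in the gas
sol = solve_ivp(derivs, (0.0, 3600.0), y0, method="LSODA")
print("final gas-phase fraction:", sol.y[0, -1])
```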
Abstract:
This work presents the application of a multiobjective evolutionary algorithm (MOEA) to the optimal power flow (OPF) problem. The OPF is modeled as a constrained, non-convex, large-scale nonlinear optimization problem with continuous and discrete variables. Violated inequality constraints are treated as objective functions of the problem. This strategy allows the physical and operational restrictions to be satisfied without compromising the quality of the solutions found. The developed MOEA is based on Pareto theory and employs a diversity-preserving mechanism to overcome premature convergence of the algorithm to locally optimal solutions. Fuzzy set theory is employed to extract the best compromise solutions from the Pareto set. Results for the IEEE-30, RTS-96 and IEEE-354 test systems are presented to validate the efficiency of the proposed model and solution technique.
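A sketch of the fuzzy-set step the abstract describes is shown below: each Pareto solution receives a linear membership value per objective, and the solution with the highest normalized aggregate membership is taken as the best compromise. The front used here is illustrative, not a result from the paper.

```python
# Fuzzy best-compromise extraction from a Pareto set (sketch).
import numpy as np

def best_compromise(F):
    """F: (n_solutions, n_objectives) array, all objectives minimized.
    Returns the index of the best-compromise solution."""
    f_min, f_max = F.min(axis=0), F.max(axis=0)
    span = np.where(f_max > f_min, f_max - f_min, 1.0)  # avoid 0/0
    mu = (f_max - F) / span       # membership: 1 at best value, 0 at worst
    score = mu.sum(axis=1)        # aggregate membership per solution
    return int(np.argmax(score / score.sum()))

# hypothetical front: (generation cost in $/h, total constraint violation)
front = np.array([[802.0, 0.12],
                  [815.5, 0.03],
                  [840.2, 0.00]])
print("best compromise:", front[best_compromise(front)])
```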
Abstract:
This paper presents a new approach for solving constrained optimization problems (COPs) based on the philosophy of lexicographic goal programming. A two-phase methodology that solves COPs with a multi-objective strategy is used. In the first phase, the objective function is completely disregarded and the entire search effort is directed towards finding a single feasible solution; to this end, a methodology based on the progressive hardening of soft constraints is proposed. In the second phase, the problem is treated as a bi-objective optimization problem, the two objectives being the original objective function and the degree of constraint violation. The performance of the proposed methodology was tested on 11 well-known benchmark functions.
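The sketch below illustrates the bi-objective view used in the second phase: each candidate is judged on (objective value, constraint violation degree). The comparison shown is the common feasibility-first rule, a simplification consistent with, but not identical to, the paper's lexicographic goal-programming scheme.

```python
# Bi-objective comparison of candidates for a constrained problem (sketch).
def violation(x, constraints):
    """Violation degree: sum of positive parts of g_j(x) <= 0 constraints."""
    return sum(max(0.0, g(x)) for g in constraints)

def better(a, b, f, constraints):
    """True if candidate a should be preferred over candidate b."""
    va, vb = violation(a, constraints), violation(b, constraints)
    if va == 0.0 and vb == 0.0:
        return f(a) < f(b)        # both feasible: compare the objective
    if (va == 0.0) != (vb == 0.0):
        return va == 0.0          # a feasible point beats an infeasible one
    return va < vb                # both infeasible: less violation wins

# toy problem: minimize f(x) = x^2 subject to x >= 1 (i.e. 1 - x <= 0)
f = lambda x: x * x
cons = [lambda x: 1.0 - x]
print(better(1.2, 0.5, f, cons))  # True: 1.2 is feasible, 0.5 is not
```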
Abstract:
This paper tackles a Nurse Scheduling Problem which consists of generating work schedules for a set of nurses while considering their shift preferences and other requirements. The objective is to maximize the satisfaction of nurses' preferences and minimize the violation of soft constraints. This paper presents a new deterministic heuristic algorithm, called MAPA (multi-assignment problem-based algorithm), which is based on successive resolutions of the assignment problem. The algorithm has two phases: a constructive phase and an improvement phase. The constructive phase builds a full schedule by solving successive assignment problems, one for each day in the planning period. The improvement phase uses a couple of procedures that re-solve assignment problems to produce a better schedule. Given the deterministic nature of this algorithm, the same schedule is obtained each time the algorithm is applied to the same problem instance. The performance of MAPA is benchmarked against published results for almost 250,000 instances from the NSPLib dataset. In most cases, particularly on large instances of the problem, the results produced by MAPA are better than the best-known solutions from the literature. The experiments reported here also show that the MAPA algorithm finds more feasible solutions than other algorithms in the literature, which suggests that the proposed approach is effective and robust. © 2013 Springer Science+Business Media New York.
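A sketch of the constructive phase described above: the schedule is built day by day, each day by solving an assignment problem that matches nurses to shift slots at minimum total dissatisfaction. The instance dimensions and the random preference costs below are invented for the demo; the improvement phase, which re-solves assignment problems on the built schedule, is omitted.

```python
# One-assignment-problem-per-day schedule construction (sketch).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_nurses, n_days = 6, 7          # assumed instance size
n_slots = n_nurses               # one shift slot per nurse per day, assumed

schedule = np.empty((n_days, n_nurses), dtype=int)
for day in range(n_days):
    # cost[i, j]: dissatisfaction of nurse i covering slot j on this day
    cost = rng.integers(1, 5, size=(n_nurses, n_slots)).astype(float)
    rows, cols = linear_sum_assignment(cost)   # optimal one-day matching
    schedule[day, rows] = cols

print(schedule)   # schedule[d, i] = shift slot assigned to nurse i on day d
```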
Abstract:
Graduate Program in Biological Sciences (Zoology) - IBRC
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
Optical networks based on passive star couplers and employing wavelength-division multiplexing (WDM) have been proposed for deployment in local and metropolitan areas. Amplifiers are required in such networks to compensate for the power losses due to splitting and attenuation. However, an optical amplifier has constraints on the maximum gain and the maximum output power it can supply; thus optical amplifier placement becomes a challenging problem. The general problem of minimizing the total amplifier count, subject to the device constraints, is a mixed-integer nonlinear problem. Previous studies have attacked the amplifier placement problem by adding the "artificial" constraint that all wavelengths present at a particular point in a fiber be at the same power level. In this paper, we present a method to solve the minimum amplifier-placement problem while avoiding the equally-powered-wavelength constraint. We demonstrate that, by allowing signals to operate at different power levels, our method can reduce the number of amplifiers required in several small to medium-sized networks.
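A single-link illustration of the device limits the abstract cites is sketched below: walk along one fiber, attenuate the signal, and place an amplifier (gain capped at G_MAX, output power capped at P_MAX) whenever the power is about to fall below the receiver sensitivity. The paper solves the much harder network-wide, multi-wavelength problem; all numbers here are assumptions.

```python
# Greedy amplifier placement on one fiber under device limits (sketch).
P_SENS = -30.0   # receiver sensitivity (dBm), assumed
P_MAX = 0.0      # amplifier maximum output power (dBm), assumed
G_MAX = 20.0     # amplifier maximum gain (dB), assumed
ATTEN = 0.2      # fiber attenuation (dB/km), assumed

def place_amplifiers(length_km, p_in_dbm, step_km=1.0):
    """Return amplifier positions (km) along one fiber of the given length."""
    positions, power, x = [], p_in_dbm, 0.0
    while x < length_km:
        x += step_km
        power -= ATTEN * step_km                # attenuation over this step
        if power - ATTEN * step_km < P_SENS:    # next step would go too low
            gain = min(G_MAX, P_MAX - power)    # respect both device limits
            power += gain
            positions.append(x)
    return positions

print(place_amplifiers(400.0, 0.0))   # amplifiers near 150, 250, 350 km
```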
Abstract:
Impact cratering has been a fundamental geological process in Earth history with major ramifications for the biosphere. The complexity of shocked and melted rocks within impact structures presents difficulties for accurate and precise radiogenic isotope age determination, hampering the assessment of the effects of an individual event in the geological record. We demonstrate the utility of a multi-chronometer approach in our study of samples from the 40 km diameter Araguainha impact structure of central Brazil. Samples of uplifted basement granite display abundant evidence of shock deformation, but U/Pb ages of shocked zircons and the Ar-40/Ar-39 ages of feldspar from the granite largely preserve the igneous crystallization and cooling history. Mixed results are obtained from in situ Ar-40/Ar-39 spot analyses of shocked igneous biotites in the granite, with deformation along kink-bands resulting in highly localized, partial resetting in these grains. Likewise, spot analyses of perlitic glass from pseudotachylitic breccia samples reflect a combination of argon inheritance from wall rock material, the age of the glass itself, and post-impact devitrification. The timing of crater formation is better assessed using samples of impact-generated melt rock, where isotopic resetting is associated with textural evidence of melting and in situ crystallization. Granular aggregates of neocrystallized zircon form a cluster of ten U-Pb ages that yield a concordia age of 247.8 +/- 3.8 Ma. The possibility of Pb loss from this population suggests that this is a minimum age for the impact event. The best evidence for the age of the impact comes from U-Th-Pb dating of neocrystallized monazite and Ar-40/Ar-39 step heating of three separate populations of post-impact, inclusion-rich quartz grains derived from the infill of miarolitic cavities. The Pb-206/U-238 age of 254.5 +/- 3.2 Ma (2 sigma) and Pb-208/Th-232 age of 255.2 +/- 4.8 Ma (2 sigma) for monazite, together with the 18-point inverse isochron age of 254 +/- 10 Ma (MSWD = 0.52) for the inclusion-rich quartz grains, yield a weighted mean age of 254.7 +/- 2.5 Ma (0.99%, 2 sigma) for the impact event. The age of the Araguainha crater overlaps, within error, with the timing of the Permo-Triassic boundary, but the calculated energy released by the Araguainha impact is insufficient to be a direct cause of the global mass extinction. However, the regional effects of the Araguainha impact event in the Parana-Karoo Basin may have been substantial. (C) 2012 Elsevier Ltd. All rights reserved.
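As a consistency check, the quoted weighted mean follows from the standard inverse-variance combination of the three ages (a simple check, not the paper's full error treatment):

```latex
% t_1 = 254.5 +/- 3.2 Ma,  t_2 = 255.2 +/- 4.8 Ma,  t_3 = 254 +/- 10 Ma
\[
\bar{t} = \frac{\sum_i t_i/\sigma_i^2}{\sum_i 1/\sigma_i^2}
        = \frac{254.5/3.2^2 + 255.2/4.8^2 + 254/10^2}{1/3.2^2 + 1/4.8^2 + 1/10^2}
        \approx 254.7~\mathrm{Ma},
\qquad
\sigma_{\bar{t}} = \Bigl(\sum_i 1/\sigma_i^2\Bigr)^{-1/2} \approx 2.6~\mathrm{Ma},
\]
```

which reproduces the quoted 254.7 +/- 2.5 Ma to within rounding of the input values.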
Abstract:
Network reconfiguration for service restoration (SR) in distribution systems is a complex optimization problem. For large-scale distribution systems, it is computationally hard to find adequate SR plans in real time, since the problem is combinatorial and non-linear and involves several constraints and objectives. Two multi-objective evolutionary algorithms that use Node-Depth Encoding (NDE) have proved able to efficiently generate adequate SR plans for large distribution systems: (i) the hybridization of the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) with NDE, named NSGA-N; and (ii) a multi-objective evolutionary algorithm based on subpopulation tables that uses NDE, named MEAN. Further challenges remain, namely designing SR plans for larger systems that are as good as those for relatively smaller ones, and for multiple faults that are as good as those for a single fault. To tackle both challenges, this paper proposes a method that combines NSGA-N, MEAN and a new heuristic. The heuristic focuses the application of NDE operators on network zones in alarm, according to technical constraints. The method generates SR plans of similar quality in distribution systems of significantly different sizes (from 3860 to 30,880 buses). Moreover, the number of switching operations required to implement the SR plans generated by the proposed method increases only moderately with the number of faults.
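The sketch below illustrates the Node-Depth Encoding idea both algorithms share: each tree (feeder) is stored as its depth-first traversal, one (node, depth) pair per entry, so any subtree is a contiguous slice and can be moved between trees cheaply. This simplified transfer is only loosely modeled on NDE's operators and ignores the electrical feasibility checks a real SR plan generator must apply.

```python
# Node-depth array representation and a simplified subtree transfer (sketch).
def subtree_slice(tree, p):
    """Return indices [p, q) of the subtree rooted at position p."""
    q = p + 1
    while q < len(tree) and tree[q][1] > tree[p][1]:
        q += 1
    return p, q

def transfer_subtree(src, dst, p, adj_pos):
    """Move the subtree rooted at src[p] so it hangs under dst[adj_pos]."""
    p0, q = subtree_slice(src, p)
    sub = src[p0:q]
    delta = dst[adj_pos][1] + 1 - sub[0][1]          # depth shift to re-root
    moved = [(node, depth + delta) for node, depth in sub]
    new_src = src[:p0] + src[q:]
    new_dst = dst[:adj_pos + 1] + moved + dst[adj_pos + 1:]
    return new_src, new_dst

# two toy feeders as (node, depth) lists in depth-first order
f1 = [("a", 0), ("b", 1), ("c", 2), ("d", 1)]
f2 = [("e", 0), ("f", 1)]
print(transfer_subtree(f1, f2, 1, 1))   # moves subtree {b, c} under node "f"
```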
Abstract:
Water distribution network optimization is a challenging problem due to the dimension and complexity of these systems. Since the second half of the twentieth century this field has been investigated by many authors. Recently, to overcome the discrete nature of the variables and the non-linearity of the equations, research has focused on the development of heuristic algorithms. These algorithms do not require continuity and linearity of the problem functions because they are linked to an external hydraulic simulator that solves the mass continuity and energy conservation equations of the network. In this work, NSGA-II (Non-Dominated Sorting Genetic Algorithm II), a heuristic multi-objective genetic algorithm based on the analogy of evolution in nature, has been used. Starting from an initial random set of solutions, called a population, it evolves them towards a front of solutions that minimize all the objectives separately and simultaneously. This can be very useful in practical problems where multiple and conflicting goals are common. Usually, one of the main drawbacks of these algorithms is their computational cost: being a stochastic search, many solutions must be analyzed before good ones are found. The results of this thesis on the classical optimal design problem show that it is possible to improve results by modifying the mathematical definition of the objective functions and the survival criterion, inserting good solutions created by a cellular automaton, and using rules created by a classifier algorithm (C4.5). This part was tested using the version of NSGA-II supplied by the Centre for Water Systems (University of Exeter, UK) in the MATLAB® environment. Even if guiding the search in this way can constrain the algorithm, with the risk of not finding the optimal set of solutions, it can greatly improve the results. Subsequently, with the support of CINECA, a version of NSGA-II was implemented in C and parallelized: the results for global parallelization show the achievable speed-up, while the results for island parallelization show that communication among islands can improve the optimization. Finally, some tests on the optimization of pump scheduling were carried out. In this case, good results were found for a small network, while the solutions for a large problem were affected by the lack of constraints on the number of pump switches. Possible future research concerns the insertion of further constraints and the guidance of the evolution. In the end, the optimization of water distribution systems is still far from a definitive solution, but improvements in this field can be very useful in reducing the cost of solutions to practical problems, where the high number of variables makes their management very difficult from a human point of view.
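The sketch below shows the non-dominated sorting at the core of NSGA-II, the algorithm the thesis builds on: solutions are ranked into successive Pareto fronts. This is the ranking step only; crowding distance, selection, and the coupling to the hydraulic simulator are omitted, and the objective vectors (network cost, head deficit, both minimized) are illustrative.

```python
# Non-dominated sorting into Pareto fronts (sketch of the NSGA-II core).
def dominates(u, v):
    """u dominates v: no worse in every objective, better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def non_dominated_fronts(objs):
    """Partition solution indices into Pareto fronts, rank 0 first."""
    remaining, fronts = set(range(len(objs))), []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

population = [(1.2e6, 0.0), (0.9e6, 3.1), (1.5e6, 0.0), (0.8e6, 5.0)]
print(non_dominated_fronts(population))   # -> [[0, 1, 3], [2]]
```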
Abstract:
The evolution of embedded electronics applications forces electronic system designers to meet ever-increasing requirements. This evolution pushes the computational power of digital signal processing systems, as well as the energy required to accomplish the computations, due to the increasing mobility of such applications. Current approaches to meeting these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators, which are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility, which affects non-recurring engineering costs, time to market, and market volumes. The state of the art mainly proposes two solutions to overcome these issues with the ambition of delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both solutions benefit from post-fabrication programmability, which results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too wide for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging these two computational paradigms in order to address all the constraints introduced above. This thesis focuses on the exploration of the design and application spectrum of reconfigurable computing, exploited as application-specific accelerators for multi-processor systems-on-chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system exploiting three different flavours of reconfigurable and mask-programmable technologies as implementation platforms for application-specific accelerators. In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency and manufacturing costs.
Abstract:
An accurate and coherent chronological framework is essential for the interpretation of climatic and environmental records obtained from deep polar ice cores. Until now, one common ice core age scale had been developed based on an inverse dating method (Datice), combining glaciological modelling with absolute and stratigraphic markers between four ice cores covering the last 50 ka (thousands of years before present) (Lemieux-Dudon et al., 2010). In this paper, together with the companion paper of Veres et al. (2013), we present an extension of this work back to 800 ka for the NGRIP, TALDICE, EDML, Vostok and EDC ice cores using an improved version of the Datice tool. The AICC2012 (Antarctic Ice Core Chronology 2012) chronology includes numerous new gas and ice stratigraphic links as well as improved evaluation of background and associated variance scenarios. This paper concentrates on the long timescales between 120 and 800 ka. In this framework, new measurements of δ18Oatm over Marine Isotope Stage (MIS) 11-12 on EDC and a complete δ18Oatm record of the TALDICE ice core permit us to derive additional orbital gas age constraints. The coherency of the different orbitally deduced ages (from δ18Oatm, δO2/N2 and air content) has been verified before implementation in AICC2012. The new chronology is now independent of other archives and shows only small differences, most of the time within the original uncertainty range calculated by Datice, when compared with the previous ice core reference age scale EDC3, with the Dome F chronology, or with a comparison between speleothems and methane. For instance, the largest deviation between AICC2012 and EDC3 (5.4 ka) is obtained around MIS 12. Despite significant modifications of the chronological constraints around MIS 5, now independent of speleothem records in AICC2012, the date of Termination II is very close to the EDC3 one.
Abstract:
An integrated approach for the multi-spectral segmentation of MR images is presented. The method is based on fuzzy c-means (FCM) clustering and includes bias field correction and contextual constraints on the spatial intensity distribution, and accounts for the non-spherical shape of clusters in the feature space. The bias field is modeled as a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of intensity are added to the FCM cost functions. To reduce the computational complexity, the contextual regularizations are separated from the clustering iterations. Since the feature space is not isotropic, the distance measure adopted in the Gustafson-Kessel (G-K) algorithm is used instead of the Euclidean distance, to account for the non-spherical shape of the clusters. These algorithms are quantitatively evaluated on MR brain images using similarity measures.
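A minimal sketch of fuzzy c-means with the Gustafson-Kessel distance the abstract mentions is given below: each cluster induces its own norm from its fuzzy covariance matrix, so elongated (non-spherical) clusters are handled. The bias-field model and the contextual regularization are omitted, and the data are synthetic.

```python
# Fuzzy c-means with the Gustafson-Kessel cluster-adaptive norm (sketch).
import numpy as np

def gk_fcm(X, c=2, m=2.0, n_iter=30, seed=0):
    n, d = X.shape
    U = np.random.default_rng(seed).dirichlet(np.ones(c), size=n)  # memberships
    for _ in range(n_iter):
        W = U.T ** m                                      # (c, n) weights
        V = (W @ X) / W.sum(axis=1, keepdims=True)        # cluster centers
        D = np.empty((c, n))                              # squared distances
        for i in range(c):
            diff = X - V[i]
            F = (W[i, :, None] * diff).T @ diff / W[i].sum()      # fuzzy covariance
            A = np.linalg.det(F) ** (1.0 / d) * np.linalg.inv(F)  # G-K norm matrix
            D[i] = np.einsum("nd,de,ne->n", diff, A, diff)
        D = np.maximum(D, 1e-12)
        ratio = D.T[:, :, None] / D.T[:, None, :]         # (n, c, c)
        U = 1.0 / (ratio ** (1.0 / (m - 1))).sum(axis=2)  # membership update
    return U, V

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], [2.0, 0.3], (100, 2)),   # elongated in x
               rng.normal([5.0, 5.0], [0.3, 2.0], (100, 2))])  # elongated in y
U, V = gk_fcm(X)
print("cluster centers:\n", V)
```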