942 results for Multi-objective optimization techniques
Abstract:
This thesis contributes to the modeling, planning, and optimization of transport for supplying first-transformation industries with forest wood. In this sector, climatic hazards (wood brought to the ground by storms), sanitary hazards (bacterial and fungal attacks on the wood), and commercial hazards (variability and growing demands of markets) push the various actors (forest contractors and operators, hauliers) to rethink the organization of the supply logistics chain, in order to improve service quality (matching supply to demand) and reduce costs. The main objective of this thesis was to propose a management model that improves the performance of forest transport while respecting the constraints and practices of the sector. The results establish a hierarchical planning approach for transport activities with two decision levels, tactical and operational. At the tactical level, a multi-period optimization fulfils orders while minimizing overall transport activity, under an aggregate capacity constraint on the available transport resources. This level makes it possible to implement load-smoothing policies and to organize subcontracting or partnerships between transport actors. At the operational level, the tactical plans allocated to each haulier are disaggregated to allow fleet routing optimization, under the physical capacity constraints of those fleets. The optimization models at each level are formulated as mixed linear programs with binary variables. The applicability of the models was tested on an industrial data set from the Aquitaine region and showed significant improvements in the utilization of transport capacities compared with current practice. The decision models were designed to adapt to any organizational context, with or without partnerships: the tactical plan is produced in a generic way without presuming the organization, which is taken into account in a second step, at the operational optimization of each actor's transport plan.
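To make the tactical level concrete, here is a minimal sketch, with invented data and names, of a multi-period mixed-integer linear program in the spirit described above: flows from forest sites to mills satisfy the orders while total transport activity is minimized under an aggregate per-period capacity, with binary variables marking whether a site is worked in a period. It is written with the PuLP modeling library and is not the thesis's actual model.

```python
# Illustrative tactical-level MILP with hypothetical data (not the thesis's model).
import pulp

periods = [1, 2, 3]
sites = ["S1", "S2"]                     # forest supply points
mills = ["M1", "M2"]                     # first-transformation mills
supply = {"S1": 300, "S2": 200}          # available volume (m3)
demand = {("M1", 1): 80, ("M1", 2): 90, ("M1", 3): 70,
          ("M2", 1): 60, ("M2", 2): 50, ("M2", 3): 40}   # ordered volume per period (m3)
dist = {("S1", "M1"): 40, ("S1", "M2"): 70,
        ("S2", "M1"): 55, ("S2", "M2"): 30}              # km
cap = 9000                               # aggregate transport capacity per period (m3.km)

prob = pulp.LpProblem("tactical_transport_plan", pulp.LpMinimize)
x = pulp.LpVariable.dicts("flow", (sites, mills, periods), lowBound=0)
use = pulp.LpVariable.dicts("active", (sites, periods), cat="Binary")   # site worked in period?

# Objective: minimize global transport activity (volume x distance) plus a small
# fixed activity cost each time a site is worked in a period.
prob += (pulp.lpSum(dist[s, m] * x[s][m][t] for s in sites for m in mills for t in periods)
         + pulp.lpSum(100 * use[s][t] for s in sites for t in periods))

for m in mills:
    for t in periods:
        prob += pulp.lpSum(x[s][m][t] for s in sites) >= demand[m, t]            # meet orders
for s in sites:
    prob += pulp.lpSum(x[s][m][t] for m in mills for t in periods) <= supply[s]  # wood availability
    for t in periods:
        prob += pulp.lpSum(x[s][m][t] for m in mills) <= supply[s] * use[s][t]   # link flows to binary
for t in periods:
    prob += pulp.lpSum(dist[s, m] * x[s][m][t] for s in sites for m in mills) <= cap

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```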
Abstract:
In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly, and so on. There is a tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real-time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework, and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries, and also allows partial pre-computation of the aggregates to minimize the query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs could be specified using both graph structure as well as activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
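As a rough illustration of the read/write-frequency-driven replication idea, the sketch below tracks per-node access counts and picks eager, lazy, or no replication based on hypothetical ratio thresholds; the dissertation's actual policy and cost model are more sophisticated.

```python
# Illustrative replication decision driven by read/write frequencies.
# Thresholds and the decision rule are made up for this sketch.
from collections import defaultdict

class HybridReplicationMonitor:
    def __init__(self, eager_ratio=5.0, lazy_ratio=1.0):
        self.reads = defaultdict(int)
        self.writes = defaultdict(int)
        self.eager_ratio = eager_ratio   # replicate eagerly above this read/write ratio
        self.lazy_ratio = lazy_ratio     # replicate lazily above this ratio

    def record_read(self, node):
        self.reads[node] += 1

    def record_write(self, node):
        self.writes[node] += 1

    def decision(self, node):
        """Return 'eager', 'lazy', or 'none' for a node's remote replica."""
        ratio = self.reads[node] / max(1, self.writes[node])
        if ratio >= self.eager_ratio:
            return "eager"   # reads dominate: push updates to replicas immediately
        if ratio >= self.lazy_ratio:
            return "lazy"    # propagate updates on demand or in batches
        return "none"        # write-heavy node: fetch remotely, do not replicate

monitor = HybridReplicationMonitor()
for _ in range(30):
    monitor.record_read("user42")
monitor.record_write("user42")
print(monitor.decision("user42"))        # -> 'eager'
```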
Abstract:
Computer modeling has recently grown into an inseparable complement to experimental studies for the optimization of automotive engines and the development of future fuels. Traditionally, computer models rely on simplified global reaction steps to simulate combustion and pollutant formation inside the internal combustion engine. With the current interest in advanced combustion modes and injection strategies, this approach depends on arbitrary adjustment of model parameters, which can reduce the credibility of the predictions. The purpose of this study is to enhance the combustion model of KIVA, a computational fluid dynamics code, by coupling its fluid mechanics solution with detailed kinetic reactions solved by the chemistry solver CHEMKIN. To that end, an engine-friendly reaction mechanism for n-heptane was selected to simulate diesel oxidation. Each cell in the computational domain is treated as a perfectly-stirred reactor that undergoes adiabatic constant-volume combustion. The model was applied to ideally-prepared homogeneous-charge compression-ignition (HCCI) combustion and to direct-injection (DI) diesel combustion. Ignition and combustion results show that the code successfully simulates the premixed HCCI scenario when compared to traditional combustion models. Direct-injection cases, on the other hand, do not yield reliable predictions, mainly because of the lack of a turbulent-mixing model, inherent in the perfectly-stirred reactor formulation. In addition, the model is sensitive to intake conditions and experimental uncertainties, which calls for enhanced predictive tools. It is recommended that future improvements consider turbulent-mixing effects as well as optimization techniques to accurately simulate the actual in-cylinder process at reduced computational cost. Furthermore, the model requires extending existing fuel oxidation mechanisms to include pollutant formation kinetics for emission control studies.
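The per-cell idealization described above, a perfectly-stirred reactor burning at constant volume without heat loss, can be illustrated with a short standalone script. The sketch below uses Cantera in place of CHEMKIN and GRI-Mech 3.0 (a methane mechanism) as a stand-in for the n-heptane mechanism, with made-up initial conditions.

```python
# Constant-volume, adiabatic ignition of a homogeneous mixture, the same idealization
# applied to each CFD cell in the study. Cantera stands in for CHEMKIN here, and
# GRI-Mech 3.0 (methane) is only a placeholder for the n-heptane mechanism.
import cantera as ct

gas = ct.Solution("gri30.yaml")                       # placeholder mechanism
gas.TP = 1400.0, 20.0 * ct.one_atm                    # illustrative compression-end state
gas.set_equivalence_ratio(0.5, "CH4", "O2:1.0, N2:3.76")

reactor = ct.IdealGasReactor(gas)                     # constant volume, no heat loss
sim = ct.ReactorNet([reactor])

T0, t = reactor.T, 0.0
while t < 0.01:                                       # integrate up to 10 ms
    t = sim.step()
    if reactor.T > T0 + 400.0:                        # crude ignition criterion: 400 K rise
        print(f"ignition near t = {t * 1e3:.3f} ms, T = {reactor.T:.0f} K")
        break
```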
Abstract:
With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The growing number of software configurations, input parameters, usage scenarios, supported platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, software testing and verification have benefited from substantial automation. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively the sequence covers, of the failing test cases are extracted. Commonalities between these test case sequence covers are then extracted, processed, analyzed, and presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared by a number of test cases failing for the same reason resemble the faulty execution path, and hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back from the end of the sequence to the root cause. A debugging tool is created to enable developers to use the approach and to integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of a fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both running time and output subsequence length.
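To give a flavor of the common-subsequence idea, the toy sketch below folds a classic longest-common-subsequence computation over a few invented failing-test traces to narrow the suspect path; the dissertation's actual algorithms and optimizations (shorter covers, ranking, static/dynamic trace-back) go well beyond this.

```python
# Simplified illustration: narrow the suspect execution path to code locations shared
# by all failing traces, via a pairwise longest-common-subsequence reduction (a heuristic).
def lcs(a, b):
    """Classic dynamic-programming longest common subsequence of two traces."""
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    out, i, j = [], len(a), len(b)           # recover one LCS by walking the table back
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif table[i - 1][j] >= table[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

def common_suspect_path(failing_traces):
    suspect = failing_traces[0]
    for trace in failing_traces[1:]:
        suspect = lcs(suspect, trace)        # each fold can only shrink the candidate path
    return suspect

traces = [["init", "parse", "lookup", "format", "write"],
          ["init", "lookup", "format", "flush", "write"],
          ["parse", "init", "lookup", "format", "write"]]
print(common_suspect_path(traces))           # -> ['init', 'lookup', 'format', 'write']
```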
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Civil e Ambiental, 2015.
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, Programa de Pós-Graduação em Informática, 2016.
Abstract:
New-generation embedded systems demand high performance, efficiency, and flexibility. Reconfigurable hardware can provide all of these features. However, the costly reconfiguration process and the lack of management support have prevented a broader use of these resources. To address these issues we have developed a scheduler that deals with task-graphs at run-time, steering their execution on the reconfigurable resources while applying both prefetch and replacement techniques that cooperate to hide most of the reconfiguration delays. In our scheduling environment, task-graphs are analyzed at design-time to extract useful information. This information is used at run-time to obtain near-optimal schedules, escaping local-optimum decisions while carrying out only simple computations. Moreover, we have developed a hardware implementation of the scheduler that applies all the optimization techniques while introducing a delay of only a few clock cycles. In our experiments the scheduler clearly outperforms conventional run-time schedulers based on As-Soon-As-Possible techniques. In addition, our replacement policy, specially designed for reconfigurable systems, achieves almost optimal results regarding both reuse and performance.
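The cooperation between design-time knowledge and run-time replacement can be hinted at with a small sketch: since the task-graph's future configuration sequence is known, a Belady-style policy can evict the loaded configuration reused farthest in the future. This is only an illustration of the idea in plain Python, not the scheduler or hardware design developed in the thesis.

```python
# Schematic reuse-aware replacement using design-time knowledge of the future
# configuration sequence (a Belady-style choice). Purely illustrative.
def choose_victim(loaded, future_sequence):
    """Return the loaded configuration to evict."""
    def next_use(cfg):
        try:
            return future_sequence.index(cfg)     # sooner reuse -> keep
        except ValueError:
            return float("inf")                   # never reused -> best victim
    return max(loaded, key=next_use)

def run_schedule(sequence, num_slots):
    loaded, reconfigurations = [], 0
    for i, cfg in enumerate(sequence):
        if cfg not in loaded:                     # reconfiguration needed (otherwise reused)
            reconfigurations += 1
            if len(loaded) >= num_slots:
                loaded.remove(choose_victim(loaded, sequence[i + 1:]))
            loaded.append(cfg)
    return reconfigurations

tasks = ["A", "B", "C", "A", "B", "D", "A", "C"]
print(run_schedule(tasks, num_slots=2))           # number of reconfigurations with 2 slots
```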
Abstract:
Since policy-makers usually pursue several conflicting objectives, policy-making can be understood as a multicriteria decision problem. Following the methodological proposal of André and Cardenete (2005; Multicriteria Policy Making: Defining Efficient Policies in a General Equilibrium Model, Centro de Estudios Andaluces Working Paper No. E2005/04, Seville), multi-objective programming is used in connection with a computable general equilibrium model to represent optimal policy-making and to obtain so-called efficient policies in an application to a regional economy (Andalusia, Spain). This approach is applied to the design of subsidy policies under two different scenarios. In the first scenario, it is assumed that the government is concerned with just two objectives: ensuring the profitability of a key strategic sector and increasing overall output. Finally, the scope of the exercise is enlarged by solving a problem with seven policy objectives, including both general and sectorial objectives. It is concluded that the observed policy could have been Pareto-improved in several directions.
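As a purely numerical illustration of the multi-objective programming step (not of the computable general equilibrium model itself), the sketch below traces efficient combinations of two invented linear objectives, overall output and a strategic sector's profitability, over a subsidy budget by sweeping the weight in a weighted-sum program.

```python
# Toy illustration of the multi-objective programming step only: the numbers and the
# linear structure are invented; the paper couples the programming problem with a CGE model.
import numpy as np
from scipy.optimize import linprog

# Decision variables: subsidies to three sectors (budget <= 100, each capped at 60).
output_gain = np.array([1.0, 0.6, 0.8])      # effect on overall output
strategic_gain = np.array([0.2, 1.0, 0.1])   # effect on the strategic sector's profitability
A_ub, b_ub = [[1.0, 1.0, 1.0]], [100.0]
bounds = [(0, 60)] * 3

for w in (0.0, 0.25, 0.5, 0.75, 1.0):        # weight on the output objective
    c = -(w * output_gain + (1 - w) * strategic_gain)   # linprog minimizes
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(f"w={w:4.2f}  subsidies={np.round(res.x, 1)}  "
          f"output={output_gain @ res.x:6.1f}  strategic={strategic_gain @ res.x:6.1f}")
```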
Abstract:
The optimal capacities and locations of a sequence of landfills are studied, and the interactions between these characteristics are considered. Deciding the capacity of a landfill has some spatial implications since it affects the feasible region for the remaining landfills, and some temporal implications because the capacity determines the lifetime of the landfill and hence the moment of time when the next landfills should be constructed. Some general mathematical properties of the solution are provided and interpreted from an economic point of view. The resulting problem turns out to be non-convex and, therefore, it cannot be solved by conventional optimization techniques. Some global optimization methods are used to solve the problem in a particular case in order to illustrate how the solution depends on the parameter values.
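A stylized, one-dimensional version of the capacity-timing trade-off shows why global methods are needed: concave construction costs (economies of scale) combined with the discounted cost of building the next landfill sooner give a total cost that is neither convex nor concave. The functional forms and numbers below are invented; scipy's differential evolution stands in for the global optimization methods used in the paper.

```python
# Stylized non-convex capacity choice for the first landfill in a sequence.
# Larger capacity costs more now (concavely) but delays the next landfill's cost.
import numpy as np
from scipy.optimize import differential_evolution

waste_rate = 10.0                  # waste generated per year
discount = 0.05                    # continuous discount rate

def construction_cost(q):
    return 50.0 + 12.0 * np.sqrt(q)                 # concave: economies of scale

def total_cost(x):
    q1 = x[0]
    lifetime = q1 / waste_rate                      # capacity fixes the replacement date
    q2 = 400.0                                      # follow-up landfill capacity (fixed here)
    return construction_cost(q1) + np.exp(-discount * lifetime) * construction_cost(q2)

res = differential_evolution(total_cost, bounds=[(50.0, 600.0)], seed=0)
print(f"optimal first capacity ~ {res.x[0]:.1f}, discounted total cost ~ {res.fun:.1f}")
```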
Abstract:
Over 2 million Anterior Cruciate Ligament (ACL) injuries occur annually worldwide, resulting in considerable economic and health burdens (e.g., suffering, surgery, loss of function, risk of re-injury, and osteoarthritis). Current screening methods are effective, but they generally rely on expensive and time-consuming biomechanical movement analysis and are thus impractical solutions. In this dissertation, I report on a series of studies that begins to investigate one potentially efficient alternative to biomechanical screening, namely skilled observational risk assessment (e.g., having experts estimate risk based on observations of athletes' movements). Specifically, in Study 1 I discovered that ACL injury risk can be accurately and reliably estimated with nearly instantaneous visual inspection when observed by skilled and knowledgeable professionals. Modern psychometric optimization techniques were then used to develop a robust and efficient 5-item test of ACL injury risk prediction skill, the ACL Injury-Risk-Estimation Quiz or ACL-IQ. Study 2 cross-validated the results from Study 1 in a larger representative sample of both skilled (Exercise Science/Sports Medicine) and unskilled (General Population) groups. In accord with research on human expertise, quantitative structural and process modeling of risk estimation indicated that superior performance was largely mediated by specific strategies and skills (e.g., ignoring irrelevant information), independent of domain-general cognitive abilities (e.g., mental rotation, general decision skill). These cognitive models suggest that ACL-IQ is a trainable skill, providing a foundation for future research and applications in training, decision support, and ultimately clinical screening investigations. Overall, I present the first evidence that observational ACL injury risk prediction is possible, including a robust technology for fast, accurate, and reliable measurement, the ACL-IQ. The discussion focuses on applications and outreach, including a web platform that was developed to house the test, provide a repository for further data collection, and increase public and professional awareness and outreach (www.ACL-IQ.org). Future directions and general applications of the skilled movement analysis approach are also discussed.
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is used to accelerate the elements that incur performance overheads. The concepts presented in this study can be readily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, a central bus design and a co-processor design, are implemented for comparison in the proposed architecture. (3) System characteristics, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique. The system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
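The hotspot-identification step of the workflow can be mimicked with any profiler; the sketch below uses Python's cProfile on a dummy workload purely to show the mechanics of ranking functions by cumulative time and flagging acceleration candidates (the study itself profiles an H.264 CODEC with its own toolchain).

```python
# Stand-in for hotspot identification: profile a workload, rank functions by cumulative
# time, and flag the top entries as candidates for hardware acceleration.
import cProfile
import io
import pstats

def dct_like_kernel(n):
    # Deliberately heavy arithmetic loop standing in for a CODEC hotspot.
    total = 0.0
    for i in range(n):
        total += (i * i) % 97
    return total

def encode_frame():
    return sum(dct_like_kernel(20000) for _ in range(50))

profiler = cProfile.Profile()
profiler.enable()
encode_frame()
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())        # the top-ranked functions are the acceleration candidates
```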
Abstract:
The elemental analysis of soil is useful in forensic and environmental sciences. Methods were developed and optimized for two laser-based multi-element analysis techniques: laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and laser-induced breakdown spectroscopy (LIBS). This work represents the first use of a 266 nm laser for forensic soil analysis by LIBS. Sample preparation methods were developed and optimized for a variety of sample types, including pellets for large bulk soil specimens (470 mg) and sediment-laden filters (47 mg), and tape-mounting for small transfer evidence specimens (10 mg). Analytical performance for sediment filter pellets and tape-mounted soils was similar to that achieved with bulk pellets. An inter-laboratory comparison exercise was designed to evaluate the performance of the LA-ICP-MS and LIBS methods, as well as micro X-ray fluorescence (µXRF), across multiple laboratories. Limits of detection (LODs) were 0.01-23 ppm for LA-ICP-MS, 0.25-574 ppm for LIBS, and 16-4400 ppm for µXRF, all well below the levels normally seen in soils. Good intra-laboratory precision (≤ 6 % relative standard deviation (RSD) for LA-ICP-MS; ≤ 8 % for µXRF; ≤ 17 % for LIBS) and inter-laboratory precision (≤ 19 % for LA-ICP-MS; ≤ 25 % for µXRF) were achieved for most elements, which is encouraging for a first inter-laboratory exercise. While LIBS generally has higher LODs and RSDs than LA-ICP-MS, both were capable of generating good-quality multi-element data sufficient for discrimination purposes. Multivariate methods using principal components analysis (PCA) and linear discriminant analysis (LDA) were developed for discriminating soils from different sources. Specimens from different sites that were indistinguishable by color alone were discriminated by elemental analysis. Correct classification rates of 94.5 % or better were achieved in a simulated forensic discrimination of three similar sites for both LIBS and LA-ICP-MS. Results for tape-mounted specimens were nearly identical to those achieved with pellets. Methods were tested on soils from the USA, Canada, and Tanzania. Within-site heterogeneity was site-specific. Elemental differences were greatest for specimens separated by large distances, even within the same lithology. Elemental profiles can be used to discriminate soils from different locations and to narrow down locations even when mineralogy is similar.
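A compact sketch of the multivariate discrimination step, using synthetic multi-element profiles in place of the LIBS / LA-ICP-MS measurements: standardize, reduce with PCA, classify the soil source with LDA, and estimate the correct classification rate by cross-validation.

```python
# Sketch of the PCA + LDA discrimination step on synthetic element concentrations.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
site_means = rng.uniform(10, 1000, size=(3, 8))           # 3 sites x 8 elements (synthetic)
X = np.vstack([m + rng.normal(scale=0.05 * m, size=(20, 8)) for m in site_means])
y = np.repeat([0, 1, 2], 20)                              # site labels

model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated correct classification: {scores.mean():.1%}")
```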
Abstract:
Recent developments have led researchers to reconsider Lagrangian measurement techniques as an alternative to their Eulerian counterparts when investigating non-stationary flows. This thesis advances the state of the art of Lagrangian measurement techniques by pursuing three objectives: (i) developing new Lagrangian measurement techniques for difficult-to-measure, in situ flow environments; (ii) developing new post-processing strategies designed for unstructured Lagrangian data, as well as providing guidelines for their use; and (iii) presenting the advantages that the Lagrangian framework has over its Eulerian counterpart in various non-stationary flow problems. Towards the first objective, a large-scale particle tracking velocimetry apparatus is designed for atmospheric surface layer measurements. Towards the second objective, two techniques are developed, one for identifying Lagrangian Coherent Structures (LCS) and the other for characterizing entrainment directly from unstructured Lagrangian data. Finally, towards the third objective, the advantages of Lagrangian-based measurements are showcased in two unsteady flow problems: the atmospheric surface layer, and entrainment in a non-stationary turbulent flow. By developing new experimental and post-processing strategies for Lagrangian data, and by showcasing the advantages of Lagrangian data in various non-stationary flows, the thesis helps investigators adopt Lagrangian-based measurement techniques more easily.
Abstract:
In this paper, a solution to a highly constrained and non-convex economic dispatch (ED) problem with a meta-heuristic technique named Sensing Cloud Optimization (SCO) is presented. The proposed meta-heuristic is based on a cloud of particles whose central point represents the objective function value, while the remaining particles act as sensors that "fill" the search space and "guide" the central particle so that it moves in the best direction. To demonstrate its performance, a case study with multi-fuel units and valve-point effects is presented.
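A schematic reading of the cloud-of-particles idea, applied to an invented 3-unit dispatch with valve-point cost terms and a penalty for the power-balance constraint: sensor particles sample around a central incumbent and pull it toward the best-sensed direction. This is only a sketch of the concept, not the authors' SCO implementation or their case study.

```python
# Schematic cloud-of-sensors search on an invented 3-unit valve-point dispatch problem.
import numpy as np

rng = np.random.default_rng(1)
# Cost per unit: a + b*P + c*P^2 + |e * sin(f * (Pmin - P))| (illustrative coefficients).
a = np.array([500.0, 400.0, 200.0]); b = np.array([5.3, 5.5, 5.8]); c = np.array([0.004, 0.006, 0.009])
e = np.array([50.0, 40.0, 30.0]);    f = np.array([0.063, 0.098, 0.074])
pmin = np.array([100.0, 50.0, 50.0]); pmax = np.array([600.0, 400.0, 300.0]); demand = 850.0

def cost(p):
    fuel = np.sum(a + b * p + c * p**2 + np.abs(e * np.sin(f * (pmin - p))))
    return fuel + 1e4 * abs(np.sum(p) - demand)        # penalty for power-balance violation

center = rng.uniform(pmin, pmax)                       # central particle (incumbent solution)
radius = 0.25 * (pmax - pmin)
for _ in range(500):
    sensors = np.clip(center + rng.normal(scale=radius, size=(30, 3)), pmin, pmax)
    best = sensors[np.argmin([cost(s) for s in sensors])]
    candidate = 0.5 * (center + best)                  # step toward the best-sensed direction
    if cost(candidate) < cost(center):
        center = candidate
    else:
        radius *= 0.95                                 # shrink the cloud around the incumbent
print(np.round(center, 1), round(cost(center), 1))
```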
Abstract:
The random amplified polymorphic DNA (RAPD) technique is a simple and reliable method for detecting DNA polymorphism. Several factors can affect the amplification profiles, causing false bands and non-reproducibility of the assay. In this study, we analyzed the effect of changing the concentrations of primer, magnesium chloride, template DNA, and Taq DNA polymerase, with the objective of determining their optimum concentrations for the standardization of the RAPD technique for genetic studies of Cuban Triatominae. Reproducible amplification patterns were obtained using 5 pmol of primer, 2.5 mM of MgCl2, 25 ng of template DNA, and 2 U of Taq DNA polymerase in a 25 µL reaction. A panel of five random primers was used to evaluate the genetic variability of T. flavida. Three of these (OPA-1, OPA-2 and OPA-4) generated reproducible and distinguishable fingerprinting patterns of Triatominae. Numerical analysis of the 52 RAPD bands amplified by all five primers was carried out with the unweighted pair group method with arithmetic mean (UPGMA). Jaccard's similarity coefficient data were used to construct a dendrogram. Two groups could be distinguished by the RAPD data, and these groups coincided with geographic origin, i.e. the populations captured in areas to the east and west of Guanahacabibes, Pinar del Río. T. flavida presents low interpopulation variability, which could result in greater susceptibility to pesticides in control programs. The RAPD protocol and the selected primers are useful for the molecular characterization of Cuban Triatominae.
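The band-scoring analysis can be reproduced in outline: presence/absence of RAPD bands coded as a 0/1 matrix, pairwise Jaccard distances, and a UPGMA (average-linkage) dendrogram. The band matrix below is invented purely to show the computation, using scipy.

```python
# Sketch of the numerical analysis: Jaccard distances on a 0/1 band matrix and a UPGMA tree.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

specimens = ["east_1", "east_2", "east_3", "west_1", "west_2"]
bands = np.array([[1, 1, 0, 1, 0, 1, 1, 0],      # presence/absence of 8 RAPD bands (invented)
                  [1, 1, 0, 1, 0, 1, 0, 0],
                  [1, 1, 1, 1, 0, 1, 1, 0],
                  [0, 1, 1, 0, 1, 0, 1, 1],
                  [0, 1, 1, 0, 1, 1, 1, 1]], dtype=bool)

dist = pdist(bands, metric="jaccard")            # 1 - Jaccard similarity coefficient
tree = linkage(dist, method="average")           # average linkage = UPGMA
print(squareform(dist).round(2))
dn = dendrogram(tree, labels=specimens, no_plot=True)
print(dn["ivl"])                                 # leaf order groups east vs. west specimens
```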