928 results for Hybrid simulation-optimization


Relevance: 30.00%

Abstract:

With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The growing number of software configurations, input parameters, usage scenarios, supported platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseen software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software testing and verification have been largely automated, software debugging remains mostly manual. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively the sequence covers, of the failing test cases are extracted. Commonalities between these test case sequence covers are then extracted, processed, analyzed, and presented to developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared by a number of test cases failing for the same reason resemble the faulty execution path; hence, the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause from the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back from the end to the root cause. A debugging tool is created to let developers use the approach and integrate it with an existing Integrated Development Environment (IDE). The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines to find the root cause of a fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both running time and output subsequence length.
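
The core step described above, extracting what is common to the sequence covers of failing tests, can be illustrated with a minimal sketch. The pairwise LCS reduction below is an assumption for illustration, not the thesis's actual algorithm; the trace contents and function names are hypothetical.

```python
from functools import reduce

def lcs(a, b):
    """Longest common subsequence of two traces (textbook DP)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    out, i, j = [], m, n          # backtrack to recover one LCS
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

def suspicious_subsequence(failing_traces):
    """Fold LCS over all failing-test sequence covers: what survives is
    shared by every failing execution and narrows the search space."""
    return reduce(lcs, failing_traces)

# Hypothetical traces; each element is a covered statement id.
traces = [["a", "b", "x", "c", "d"],
          ["a", "x", "c", "e", "d"],
          ["b", "a", "x", "c", "d"]]
print(suspicious_subsequence(traces))  # -> ['a', 'x', 'c', 'd']
```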

Relevance: 30.00%

Abstract:

This paper is concerned with the hybridization of two graph coloring heuristics (Saturation Degree and Largest Degree) and their application within a hyper-heuristic for exam timetabling problems. Hyper-heuristics can be seen as algorithms that intelligently select appropriate algorithms or heuristics for solving a problem. We developed a Tabu Search based hyper-heuristic to search for lists of graph heuristics for solving problems, and investigated the heuristic lists found by employing knowledge discovery techniques. Two hybrid approaches (involving Saturation Degree and Largest Degree), including one that employs Case Based Reasoning, are presented and discussed. Both the Tabu Search based hyper-heuristic and the hybrid approaches are tested on random and real-world exam timetabling problems. Experimental results are comparable with the best state-of-the-art approaches (as measured against established benchmark problems). The results also demonstrate an increased level of generality in our approach.
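
As a rough illustration of how these two graph heuristics can hybridize, the sketch below orders exams by Saturation Degree and breaks ties by Largest Degree, then colors greedily; colors stand for timeslots. This is a generic DSatur-style sketch under my own assumptions, not the paper's hyper-heuristic, and the conflict data is made up.

```python
def saturation_degree_coloring(adj):
    """Greedy DSatur-style coloring: pick the uncolored exam with the
    highest saturation (distinct colors among colored neighbors),
    breaking ties by Largest Degree; colors stand for timeslots."""
    color = {}
    while len(color) < len(adj):
        v = max((u for u in adj if u not in color),
                key=lambda u: (len({color[w] for w in adj[u] if w in color}),
                               len(adj[u])))
        used = {color[w] for w in adj[v] if w in color}
        color[v] = next(c for c in range(len(adj)) if c not in used)
    return color

# Hypothetical conflict graph: exams sharing students cannot share a slot.
conflicts = {"math": {"cs", "phys"}, "cs": {"math"}, "phys": {"math"}}
print(saturation_degree_coloring(conflicts))  # e.g. {'math': 0, 'cs': 1, 'phys': 1}
```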

Relevance: 30.00%

Abstract:

Background: Among other causes, the long-term outcome of hip prostheses in dogs is determined by aseptic loosening. Prosthesis complications can be prevented by optimizing the tribological system, which ultimately improves implant longevity. In this context, a computerized model for the calculation of hip joint loadings during different motions would be of benefit. As a first step in the development of such an inverse dynamic multi-body simulation (MBS) model, we present the setup of a canine hind limb model applicable to the calculation of ground reaction forces. Methods: The anatomical geometries of the MBS model were established using computed tomography (CT) and magnetic resonance imaging (MRI) data. The CT data were collected from the pelvis, femora, tibiae, and pads of a mixed-breed adult dog. Geometric information about 22 muscles of the pelvic extremity of 4 mixed-breed adult dogs was determined using MRI. Kinematic and kinetic data obtained by motion analysis of a clinically healthy dog during a gait cycle (1 m/s) on an instrumented treadmill were used to drive the model in the multi-body simulation. Results and Discussion: The vertical ground reaction forces (z-direction) calculated by the MBS system show a maximum deviation from the treadmill measurements of 1.75% BW (body weight) for the left and 4.65% BW for the right hind limb. The calculated peak ground reaction forces in the z- and y-directions were comparable to the treadmill measurements, whereas the curve characteristics of the forces in the y-direction were not in complete alignment. Conclusion: It could be demonstrated that the developed MBS model is suitable for simulating ground reaction forces of dogs during walking. In forthcoming investigations the model will be developed further for the calculation of forces and moments acting on the hip joint during different movements, which can help with the in silico development and testing of hip prostheses.

Relevance: 30.00%

Abstract:

Over the last decade, the success of social networks has significantly reshaped how people consume information. Content recommendation based on user profiles is well received. However, as users become predominantly mobile, little has been done to consider the impacts of the wireless environment, especially capacity constraints and changing channel conditions. In this dissertation, we investigate a centralized wireless content delivery system, aiming to optimize overall user experience given the capacity constraints of the wireless networks by deciding what contents to deliver, when, and how. We propose a scheduling framework that incorporates content-based reward and deliverability. Our approach exploits the broadcast nature of wireless communication and the social nature of content through multicasting and precaching. Results indicate that this novel joint optimization approach outperforms existing layered systems that separate recommendation and delivery, especially when the wireless network is operating at maximum capacity. By utilizing a limited number of transmission modes, we significantly reduce the complexity of the optimization. We also introduce the design of a hybrid system that handles transmissions for both system-recommended contents ('push') and active user requests ('pull'). Further, we extend the joint optimization framework to a wireless infrastructure with multiple base stations. The problem becomes much harder because there are many more system configurations, including but not limited to power allocation and how resources are shared among the base stations ('out-of-band', in which base stations transmit with dedicated spectrum resources and thus without interference, and 'in-band', in which they share the spectrum and need to mitigate interference). We propose a scalable two-phase scheduling framework: 1) each base station obtains delivery decisions and resource allocation individually; 2) the system consolidates the decisions and allocations, reducing redundant transmissions. Additionally, if social network applications can predict how social contents disseminate, the wireless networks can schedule transmissions accordingly and significantly improve dissemination performance by reducing delivery delay. We propose a novel method utilizing: 1) hybrid systems to handle active dissemination requests; and 2) predictions of dissemination dynamics from the social network applications. This method can mitigate the performance degradation of content dissemination due to wireless delivery delay. Results indicate that our proposed system design is both efficient and easy to implement.
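
To make the idea of reward-and-deliverability scheduling concrete, here is a deliberately simplified greedy sketch: multicast amortizes airtime over all interested users, and contents are ranked by reward per unit of airtime under a capacity budget. The dissertation's actual framework is a joint optimization; the function and catalogue below are hypothetical.

```python
def schedule_multicast(contents, airtime_budget):
    """Greedy sketch: pick contents to multicast, ranked by total reward
    over all interested users per unit of airtime. Each content is
    (name, reward_per_user, n_interested_users, airtime_cost)."""
    ranked = sorted(contents, key=lambda c: c[1] * c[2] / c[3], reverse=True)
    plan, used = [], 0.0
    for name, reward, users, cost in ranked:
        if used + cost <= airtime_budget:   # wireless capacity constraint
            plan.append(name)
            used += cost
    return plan

# Hypothetical catalogue: multicast amortizes airtime over many users.
catalogue = [("viral_clip", 1.0, 50, 2.0),
             ("niche_doc", 3.0, 2, 1.5),
             ("news", 0.8, 30, 1.0)]
print(schedule_multicast(catalogue, airtime_budget=3.0))  # -> ['viral_clip', 'news']
```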

Relevance: 30.00%

Abstract:

In this research work, a new routing protocol for Opportunistic Networks is presented. The proposed protocol is called PSONET (PSO for Opportunistic Networks), since it uses a hybrid system built around a Particle Swarm Optimization (PSO) algorithm. The main motivation for using PSO is to take advantage of its population-based search and adaptive learning. PSONET uses the Particle Swarm Optimization technique to route network traffic through a good subset of message forwarders. PSONET analyzes network communication conditions, detecting whether each node has sparse or dense connections, and thus makes better decisions about routing messages. The PSONET protocol is compared with the Epidemic and PROPHET protocols in three different mobility scenarios: an activity-based mobility model, which simulates people's everyday life in their work, leisure, and rest activities; a community-based mobility model, which simulates groups of people in their communities who occasionally contact other people, who may or may not be part of their community, to exchange information; and a random mobility pattern, which simulates a scenario divided into communities where people choose a destination at random and, based on the restriction map, move to this destination using the shortest path. The simulation results, obtained with the ONE simulator, show that in the community-based and random mobility scenarios the PSONET protocol achieves a higher message delivery rate and lower message replication than the Epidemic and PROPHET protocols.
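
For readers unfamiliar with the underlying optimizer, a minimal PSO loop is sketched below. The toy fitness function standing in for a delivery-rate estimate is my assumption; PSONET's actual particle encoding and objective are not reproduced here.

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization (maximizing `fitness`)."""
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    gbest = max(pbest, key=fitness)[:]          # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# Hypothetical fitness: score a candidate forwarder weighting (higher is better).
score = lambda x: -sum(v * v for v in x)   # toy stand-in for a delivery-rate estimate
print(pso(score, dim=3))
```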

Relevance: 30.00%

Abstract:

This paper presents how new paradigms and methodologies for software development have changed rapidly over the last two years. The current scenario reflects a transition which, although slight, shows how quickly software production paradigms are reinvented as display devices and forms of end-user interaction change. Studies indicate that in 2013 internet access from mobile devices overtook access from traditional desktop devices; the split currently stands at around 60% mobile versus 40% desktop, and mobile's share is expected to keep growing in the coming years while desktop usage declines (comScore). In this context, the software industry has reinvented and updated itself with respect to the technologies behind software and mobile applications, building products capable of responding to the user market. Software products, such as applications, must be put into production for different user environments, such as Web, iOS, and Android, in a way that enhances efficiency, optimization, and productivity in the software development cycle (Langer, Arthur M.).

Relevance: 30.00%

Abstract:

The Pierre Auger Observatory is a detector for ultra-high energy cosmic rays. It consists of a surface array to measure secondary particles at ground level and a fluorescence detector to measure the development of air showers in the atmosphere above the array. The "hybrid" detection mode combines the information from the two subsystems. We describe the determination of the hybrid exposure for events observed by the fluorescence telescopes in coincidence with at least one water-Cherenkov detector of the surface array. A detailed knowledge of the time dependence of the detection operations is crucial for an accurate evaluation of the exposure. We discuss the relevance of monitoring data collected during operations, such as the status of the fluorescence detector, background light, and atmospheric conditions, that are used in both simulation and reconstruction.

Relevance: 30.00%

Abstract:

Electric vehicle (EV) batteries tend to degrade at an accelerated rate due to high peak power and harsh charging/discharging cycles during acceleration and deceleration, particularly in urban driving conditions. An oversized energy storage system (ESS) can meet the high power demands, but it suffers from increased size, volume, and cost. In order to reduce the overall ESS size and extend battery cycle life, a battery-ultracapacitor (UC) hybrid energy storage system (HESS) has been considered as an alternative solution. In this work, we investigate the optimized configuration, design, and energy management of a battery-UC HESS. One of the major challenges in a HESS is to design an energy management controller for real-time implementation that yields good power split performance. We present methodologies and solutions to this problem in a battery-UC HESS with a DC-DC converter interfacing the UC and the battery. In particular, a multi-objective optimization problem is formulated to optimize the power split in order to prolong battery lifetime and to reduce HESS power losses. This optimization problem is solved numerically for standard drive cycle datasets using Dynamic Programming (DP). A Neural Network (NN) trained on the DP optimal results then provides an effective real-time implementation of the optimal power split. This online energy management controller is applied to a midsize EV model with a 360V/34kWh battery pack and a 270V/203Wh UC pack. The proposed controller splits the load demand with high power efficiency and effectively reduces the battery peak current. More importantly, a HESS hardware prototype with a 38V/385Wh battery and a 16V/2.06Wh UC, together with a real-time experiment platform, has been developed. The real-time experiment results have validated the feasibility and effectiveness of the real-time controller design for the battery-UC HESS. A battery State-of-Health (SoH) estimation model is developed as a performance metric to evaluate the battery cycle life extension; it is estimated that the proposed online energy management controller can extend battery cycle life by over 60%.
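
A toy version of the DP stage of such a pipeline can be sketched as follows: the state is a discretized UC energy level, the battery covers whatever the UC does not, and the stage cost penalizes battery power quadratically as a stand-in for losses and stress. All numbers and the cost model are illustrative assumptions, not the thesis's formulation.

```python
def dp_power_split(demand, uc_levels, dt=1.0):
    """Backward dynamic-programming sketch for a battery/UC power split.
    State: discretized UC energy level; action: energy moved by the UC;
    stage cost: battery power squared (toy proxy for losses and stress)."""
    INF = float("inf")
    n = len(demand)
    cost = [[INF] * len(uc_levels) for _ in range(n + 1)]
    best = [[None] * len(uc_levels) for _ in range(n)]
    cost[n] = [0.0] * len(uc_levels)           # terminal cost-to-go
    for k in range(n - 1, -1, -1):
        for e, energy in enumerate(uc_levels):
            for e2, energy2 in enumerate(uc_levels):
                p_uc = (energy - energy2) / dt          # power the UC delivers
                p_batt = demand[k] - p_uc               # battery covers the rest
                c = p_batt ** 2 * dt + cost[k + 1][e2]  # quadratic battery penalty
                if c < cost[k][e]:
                    cost[k][e], best[k][e] = c, e2
    e, split = len(uc_levels) - 1, []          # roll the policy forward, UC full
    for k in range(n):
        e2 = best[k][e]
        split.append(demand[k] - (uc_levels[e] - uc_levels[e2]) / dt)
        e = e2
    return split                               # battery power at each stage

demand = [30.0, 80.0, -20.0, 50.0]             # kW, hypothetical drive-cycle slice
uc_levels = [0.0, 25.0, 50.0, 75.0, 100.0]     # kJ grid for the UC state
print(dp_power_split(demand, uc_levels))
```

In the full pipeline the abstract describes, a neural network would then be trained on many such DP solutions so the split can be computed in real time.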

Relevance: 30.00%

Abstract:

The constant need to improve helicopter performance requires the optimization of existing and future rotor designs. A crucial indicator of rotor capability is hover performance, which depends on the near-body flow as well as the structure and strength of the tip vortices formed at the trailing edges of the blades. Computational Fluid Dynamics (CFD) solvers must balance computational expense with preservation of the flow, and to limit computational expense the mesh is often coarsened in the outer regions of the computational domain. This can degrade the vortex structures which compose the rotor wake. The current work conducts three-dimensional simulations using OVERTURNS, a three-dimensional structured grid solver that models the flow field using the Reynolds-Averaged Navier-Stokes equations. The S-76 rotor in hover was chosen as the test case for evaluating the OVERTURNS solver, focusing on methods to better preserve the rotor wake. Using the hover condition, various computational domains, spatial schemes, and boundary conditions were tested. Furthermore, a mesh adaptation routine was implemented, allowing for increased refinement of the mesh in areas of turbulent flow without the need to add points to the mesh. The adapted mesh was employed to conduct a sweep of collective pitch angles, comparing the resolved wake and integrated forces to existing computational and experimental results. The integrated thrust values showed very close agreement across all tested pitch angles, while the power was slightly overpredicted, resulting in underprediction of the Figure of Merit. Meanwhile, the tip vortices were preserved for multiple blade passages, indicating an improvement in vortex preservation compared with previous work. Finally, further results from a single collective pitch case were presented to provide a more complete picture of the solver results.

Relevance: 30.00%

Abstract:

In Part 1 of this thesis, we propose that biochemical cooperativity is a fundamentally non-ideal process. We show quantal effects underlying biochemical cooperativity and highlight apparent ergodic breaking at small volumes. This apparent ergodic breaking manifests itself as a divergence between deterministic and stochastic models. We further predict that this divergence of deterministic and stochastic results is a failure of the deterministic methods rather than an issue with the stochastic simulations.
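
The divergence between deterministic and stochastic descriptions at small copy numbers can be reproduced with a textbook example. The sketch below runs a Gillespie simulation of a simple dimerization (2A -> A2) and compares the stochastic mean with the deterministic mass-action prediction; the reaction and parameters are illustrative, not the systems studied in the thesis.

```python
import random

def gillespie_dimerization(n0, k, t_end):
    """Exact SSA for 2A -> A2 with propensity k * n * (n - 1) / 2."""
    t, n = 0.0, n0
    while n > 1:
        a = k * n * (n - 1) / 2.0
        dt = random.expovariate(a)     # time to next reaction event
        if t + dt > t_end:
            break
        t, n = t + dt, n - 2           # one dimerization consumes two A
    return n

n0, k, t_end = 10, 0.1, 5.0
runs = [gillespie_dimerization(n0, k, t_end) for _ in range(2000)]
stochastic_mean = sum(runs) / len(runs)
deterministic = n0 / (1.0 + k * n0 * t_end)    # solves dn/dt = -k n^2
print(stochastic_mean, deterministic)          # the two disagree at small n0
```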

Ergodic breaking at small volumes may allow these molecular complexes to function as switches to a greater degree than has previously been shown. We propose that ergodic breaking is a phenomenon the synapse might exploit to differentiate Ca²⁺ signaling that would lead to either the strengthening or weakening of a synapse. Techniques such as lattice-based statistics and rule-based modeling are tools that allow us to confront this non-ideality directly. A natural next step to understanding the chemical physics that underlies these processes is to consider in silico, and specifically atomistic, simulation methods that might augment our modeling efforts.

In the second part of this thesis, we use evolutionary algorithms to optimize in silico methods that might be used to describe biochemical processes at the subcellular and molecular levels. While we have applied evolutionary algorithms to several methods, this thesis focuses on the optimization of charge equilibration methods. Accurate charges are essential to understanding the electrostatic interactions involved in ligand binding, as frequently discussed in the first part of this thesis.
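
A minimal evolutionary loop of the kind described, here fitting the two parameters of a toy electronegativity-equalization charge model to reference charges, might look like the sketch below. The charge model, reference data, and operator choices are all illustrative assumptions, not the thesis's charge equilibration scheme.

```python
import random

def evolve(fitness, dim, pop_size=30, gens=80, sigma=0.05):
    """(mu + lambda) evolutionary sketch: Gaussian mutation plus
    truncation selection; `fitness` is minimized."""
    pop = [[random.uniform(0.1, 2.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        children = [[g + random.gauss(0.0, sigma) for g in random.choice(pop)]
                    for _ in range(pop_size)]
        pop = sorted(pop + children, key=fitness)[:pop_size]
    return pop[0]

# Toy charge model: q_i = (mean_chi - s * chi_i) / eta; fit (s, eta) so
# predicted charges match hypothetical reference charges.
ref = [(1.2, -0.35), (0.8, 0.35)]               # (electronegativity, q_ref)
mean_chi = sum(chi for chi, _ in ref) / len(ref)

def charge_error(params):
    s, eta = params
    eta = max(eta, 1e-6)                        # keep hardness positive
    return sum(((mean_chi - s * chi) / eta - q) ** 2 for chi, q in ref)

print(evolve(charge_error, dim=2))              # converges near (1.0, 0.57)
```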

Relevance: 30.00%

Abstract:

A primary goal of this dissertation is to understand the links between mathematical models that describe crystal surfaces at three fundamental length scales: the scale of individual atoms, the scale of collections of atoms forming crystal defects, and the macroscopic scale. Characterizing connections between different classes of models is a critical task for gaining insight into the physics they describe, a long-standing objective in applied analysis, and also highly relevant in engineering applications. The key concept I use in each problem addressed in this thesis is coarse graining, a strategy for connecting fine representations or models with coarser representations. Often this idea is invoked to reduce a large discrete system to an appropriate continuum description, e.g. individual particles are represented by a continuous density. While there is no general theory of coarse graining, one closely related mathematical approach is asymptotic analysis, i.e. the description of limiting behavior as some parameter becomes very large or very small. In the case of crystalline solids, it is natural to consider cases where the number of particles is large or where the lattice spacing is small. Limits such as these often make explicit the nature of the links between models capturing different scales and, once established, provide a means of improving our understanding, or the models themselves. Finding appropriate variables whose limits illustrate the important connections between models is no easy task, however. This is one area where computer simulation is extremely helpful, as it allows us to see the results of complex dynamics and gather clues regarding the roles of different physical quantities. On the other hand, connections between models enable the development of novel multiscale computational schemes, so understanding can assist computation and vice versa. Some of these ideas are demonstrated in this thesis. The important outcomes of this thesis include: (1) a systematic derivation of the step-flow model of Burton, Cabrera, and Frank, with corrections, from atomistic solid-on-solid-type models in 1+1 dimensions; (2) the inclusion of an atomistically motivated transport mechanism in an island dynamics model, allowing for a more detailed account of mound evolution; and (3) the development of a hybrid discrete-continuum scheme for simulating the relaxation of a faceted crystal mound. Central to all of these modeling and simulation efforts is the presence of steps, composed of individual layers of atoms, on vicinal crystal surfaces. Consequently, a recurring theme in this research is the observation that mesoscale defects play a crucial role in crystal morphological evolution.
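
As a taste of the atomistic end of this modeling hierarchy, the sketch below runs a Metropolis simulation of a 1+1-dimensional solid-on-solid surface, the class of model the step-flow derivation starts from. The bond energy, move set, and parameters are illustrative assumptions, far simpler than the thesis's models.

```python
import math
import random

def sos_relax(heights, steps=20000, beta=2.0):
    """Metropolis sketch of a 1+1D solid-on-solid surface: move one atom
    to a neighboring column, accept with min(1, exp(-beta * dE)), where
    E = sum of |height differences| (a toy bond-counting energy)."""
    n = len(heights)
    energy = lambda h: sum(abs(h[i] - h[(i + 1) % n]) for i in range(n))
    h = heights[:]
    for _ in range(steps):
        i = random.randrange(n)
        j = (i + random.choice((-1, 1))) % n
        trial = h[:]
        trial[i] -= 1                      # atom hops from column i ...
        trial[j] += 1                      # ... to neighboring column j
        dE = energy(trial) - energy(h)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            h = trial
    return h

# A faceted mound relaxes toward a flatter profile, the kind of
# morphological evolution the continuum limits aim to capture.
print(sos_relax([0, 1, 2, 4, 7, 4, 2, 1, 0, 0]))
```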

Relevance: 30.00%

Abstract:

The overarching theme of this thesis is mesoscale optical and optoelectronic design of photovoltaic and photoelectrochemical devices. In a photovoltaic device, light absorption and charge carrier transport are coupled together on the mesoscale, and in a photoelectrochemical device, light absorption, charge carrier transport, catalysis, and solution species transport are all coupled together on the mesoscale. The work discussed herein demonstrates that simulation-based mesoscale optical and optoelectronic modeling can lead to detailed understanding of the operation and performance of these complex mesostructured devices, serve as a powerful tool for device optimization, and efficiently guide device design and experimental fabrication efforts. In-depth studies of two mesoscale wire-based device designs illustrate these principles: (i) an optoelectronic study of a tandem Si|WO3 microwire photoelectrochemical device, and (ii) an optical study of III-V nanowire arrays.

The study of the monolithic, tandem, Si|WO3 microwire photoelectrochemical device begins with development and validation of an optoelectronic model with experiment. This study capitalizes on synergy between experiment and simulation to demonstrate the model’s predictive power for extractable device voltage and light-limited current density. The developed model is then used to understand the limiting factors of the device and optimize its optoelectronic performance. The results of this work reveal that high fidelity modeling can facilitate unequivocal identification of limiting phenomena, such as parasitic absorption via excitation of a surface plasmon-polariton mode, and quick design optimization, achieving over a 300% enhancement in optoelectronic performance over a nominal design for this device architecture, which would be time-consuming and challenging to do via experiment.

The work on III-V nanowire arrays also starts as a collaboration of experiment and simulation aimed at gaining understanding of unprecedented, experimentally observed absorption enhancements in sparse arrays of vertically-oriented GaAs nanowires. To explain this resonant absorption in periodic arrays of high index semiconductor nanowires, a unified framework that combines a leaky waveguide theory perspective and that of photonic crystals supporting Bloch modes is developed in the context of silicon, using both analytic theory and electromagnetic simulations. This detailed theoretical understanding is then applied to a simulation-based optimization of light absorption in sparse arrays of GaAs nanowires. Near-unity absorption in sparse, 5% fill fraction arrays is demonstrated via tapering of nanowires and multiple wire radii in a single array. Finally, experimental efforts are presented towards fabrication of the optimized array geometries. A hybrid self-catalyzed and selective area MOCVD growth method is used to establish morphology control of GaP nanowire arrays. Similarly, morphology and pattern control of nanowires is demonstrated with ICP-RIE of InP. Optical characterization of the InP nanowire arrays gives proof of principle that tapering and multiple wire radii can lead to near-unity absorption in sparse arrays of InP nanowires.

Relevance: 30.00%

Abstract:

OBJECTIVES AND STUDY METHOD: There are two subjects in this thesis: "Lot production size for a parallel machine scheduling problem with auxiliary equipment" and "Bus holding for a simulated traffic network". Although these two themes seem unrelated, the common thread is the optimization of complex systems. The lot production size problem deals with a manufacturing setting where sets of pieces form finished products; the aim is to maximize the profit of the finished products. Each piece may be processed in more than one mold, and molds must be mounted on machines with their corresponding installation setup times. The key point of our methodology is to solve the single-period lot-sizing decisions for the finished products together with the piece-mold and mold-machine assignments, relaxing the constraint that a single mold may not be used on two machines at the same time. The bus holding problem addresses one of the most annoying problems in urban bus operations: bus bunching, which happens when two or more buses arrive at a stop nose to tail. Bus bunching reflects an unreliable service that affects transit operations by increasing passenger waiting times. This work proposes a linear mathematical programming model that establishes bus holding times at certain stops along a transit corridor to avoid bus bunching. Our approach needs real-time input, so we simulate a transit corridor and apply our mathematical model to the generated data. Thus, the inherent variability of a transit system is captured by the simulation, while the optimization model takes into account the key variables and constraints of the bus operation. CONTRIBUTIONS AND CONCLUSIONS: For the lot production size problem, the relaxation we propose is able to find solutions more efficiently; moreover, our experimental results show that most solutions verify that molds are non-overlapping even if they are installed on several machines. We propose an exact integer linear programming model, a Relax&Fix heuristic, and a multistart greedy algorithm to solve this problem. Experimental results on instances based on real-world data show the efficiency of our approaches. The mathematical model and the algorithm for the lot production size problem presented in this research can be used by production planners to help schedule manufacturing. For the bus holding problem, most of the literature considers quadratic models that minimize passenger waiting times, but they are harder to solve and therefore difficult to operate in real-time systems. Our methodology, in contrast, reduces passenger waiting times efficiently with a linear programming model, applying control interventions only every 5 minutes.
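
A stripped-down version of such a linear holding model is sketched below using scipy's LP solver: holding times at one control stop are chosen so successive departure headways stay near a target, with auxiliary variables bounding the absolute deviations. The formulation and numbers are my illustrative assumptions, not the thesis's model.

```python
import numpy as np
from scipy.optimize import linprog

def bus_holding(arrivals, target_headway, max_hold=120.0):
    """LP sketch: pick holding times h_i >= 0 at a control stop so that
    departure headways stay close to the target. Variables are
    [h_1..h_n, z_1..z_{n-1}], with z_i >= |headway_i - target|."""
    n = len(arrivals)
    n_z = n - 1
    c = np.concatenate([np.zeros(n), np.ones(n_z)])    # minimize sum of z_i
    A, b = [], []
    for i in range(n_z):
        gap = arrivals[i + 1] - arrivals[i]
        up = np.zeros(n + n_z)                         # headway - target <= z_i
        up[[i + 1, i, n + i]] = [1.0, -1.0, -1.0]
        A.append(up); b.append(target_headway - gap)
        lo = np.zeros(n + n_z)                         # target - headway <= z_i
        lo[[i + 1, i, n + i]] = [-1.0, 1.0, -1.0]
        A.append(lo); b.append(gap - target_headway)
    bounds = [(0.0, max_hold)] * n + [(0.0, None)] * n_z
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
    return res.x[:n]                                   # holding time per bus

# Hypothetical arrival times (seconds): buses 2 and 3 are bunching.
print(bus_holding([0.0, 300.0, 330.0, 900.0], target_headway=300.0))
```

Because the model is linear, it stays cheap enough to re-solve at each control interval with fresh simulated (or real-time) arrival predictions.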