918 results for Multi-phase experiments
Abstract:
The selection of a set of requirements from among all the requirements previously defined by customers is an important process, repeated at the beginning of each development step when an incremental or agile software development approach is adopted. The set of selected requirements will be developed during the current iteration. This selection problem can be reformulated as a search problem, allowing its treatment with metaheuristic optimization techniques. This paper studies how to apply Ant Colony Optimization algorithms to select requirements. First, we describe this problem formally, extending an earlier formulation, and introduce a method based on Ant Colony System to find a variety of efficient solutions. The performance achieved by the Ant Colony System is compared with that of Greedy Randomized Adaptive Search Procedure and Non-dominated Sorting Genetic Algorithm by means of computational experiments carried out on two instances of the problem constructed from data provided by experts.
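To illustrate the kind of search formulation this abstract describes, the following is a minimal sketch of an ant-colony heuristic that selects requirements under an effort budget. It is not the authors' Ant Colony System; the requirement values, costs, budget, and parameters are invented for illustration.

```python
import random

# Illustrative data (hypothetical): value and cost per requirement, plus an effort budget.
values = [10, 6, 8, 3, 9, 5]
costs  = [ 4, 2, 5, 1, 6, 3]
budget = 12

def ant_build(pheromone):
    """One ant stochastically builds a requirement subset that respects the budget."""
    chosen, remaining = [], budget
    candidates = [i for i in range(len(values)) if costs[i] <= remaining]
    while candidates:
        # Selection probability ~ pheromone * heuristic desirability (value per unit cost).
        weights = [pheromone[i] * (values[i] / costs[i]) for i in candidates]
        pick = random.choices(candidates, weights=weights)[0]
        chosen.append(pick)
        remaining -= costs[pick]
        candidates = [i for i in range(len(values))
                      if i not in chosen and costs[i] <= remaining]
    return chosen

def aco_select(n_ants=20, n_iters=50, rho=0.1):
    """Iterate ants, evaporate pheromone, and reinforce the best-so-far solution."""
    pheromone = [1.0] * len(values)
    best, best_value = [], 0
    for _ in range(n_iters):
        for _ in range(n_ants):
            sol = ant_build(pheromone)
            val = sum(values[i] for i in sol)
            if val > best_value:
                best, best_value = sol, val
        pheromone = [(1 - rho) * p for p in pheromone]      # evaporation
        for i in best:
            pheromone[i] += best_value / sum(values)        # reinforcement
    return sorted(best), best_value

print(aco_select())
```

A real Ant Colony System additionally uses local pheromone updates and a pseudo-random proportional rule; the sketch only shows the construct-evaluate-reinforce loop that such methods share.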
Abstract:
The study of quantum degenerate gases has many applications in topics such as condensed matter dynamics, precision measurements and quantum phase transitions. We built an apparatus to create 87Rb Bose-Einstein condensates (BECs) and generated, via optical and magnetic interactions, novel quantum systems in which we studied the contained phase transitions. For our first experiment we quenched multi-spin-component BECs from a miscible to a dynamically unstable immiscible state. The transition rapidly amplifies any spin fluctuations through a coherent growth process, driving the formation of numerous spin-polarized domains. At much longer times these domains coarsen as the system approaches equilibrium. For our second experiment we explored the magnetic phases present in a spin-1 spin-orbit coupled BEC and the contained quantum phase transitions. We observed ferromagnetic and unpolarized phases which are stabilized by the spin-orbit coupling’s explicit locking between spin and motion. These two phases are separated by a critical curve containing both first-order and second-order transitions joined at a critical point. The narrow first-order transition gives rise to long-lived metastable states. For our third experiment we prepared independent BECs in a double-well potential, with an artificial magnetic field between the BECs. We transitioned to a single BEC by lowering the barrier while expanding the region of artificial field to cover the resulting single BEC. We compared the vortex distributions nucleated via conventional dynamics to those produced by our procedure, showing that our dynamical process nucleates vortices much more rapidly and in larger numbers than conventional nucleation.
Abstract:
The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model that provides a mathematical representation of the rate of gaseous fuel production from condensed phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. This increase in the number of parameters required to accurately represent pyrolysis is coupled with the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. In this work, the methodology has been applied to four common composites that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to determine the heats of complete combustion of the volatiles produced in each reaction. Inverse analyses were conducted on sample temperature data collected in bench-scale tests to determine the thermal transport parameters of each component through degradation. Simulations of quasi-one-dimensional bench-scale gasification tests generated from the resultant models using the ThermaKin modeling environment were compared to experimental data to independently validate the models.
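The reaction kinetics extracted from thermogravimetric data are conventionally expressed with an Arrhenius rate law. The sketch below integrates the conversion of a single first-order decomposition reaction during a constant heating ramp; the pre-exponential factor, activation energy, and heating rate are illustrative assumptions, not parameters determined in this work.

```python
import math

# Hypothetical first-order Arrhenius parameters for one decomposition reaction.
A = 1.0e12        # pre-exponential factor, 1/s
E = 1.6e5         # activation energy, J/mol
R = 8.314         # gas constant, J/(mol K)
beta = 10 / 60.0  # heating rate, K/s (10 K/min)

def simulate_tga(T0=300.0, T_end=900.0, dt=0.1):
    """Integrate d(alpha)/dt = A * exp(-E / (R * T)) * (1 - alpha) along a linear heating ramp."""
    T, alpha, history = T0, 0.0, []
    while T < T_end and alpha < 0.999:
        rate = A * math.exp(-E / (R * T)) * (1.0 - alpha)
        alpha += rate * dt
        T += beta * dt
        history.append((T, alpha))
    return history

# For a fully volatile component, the remaining mass fraction is 1 - alpha.
print(simulate_tga()[-1])
```

Fitting A and E to measured mass-loss curves (and doing so for each reaction of each component) is the inverse problem that the multi-scale methodology addresses.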
Decoherence models for discrete-time quantum walks and their application to neutral atom experiments
Abstract:
We discuss decoherence in discrete-time quantum walks in terms of a phenomenological model that distinguishes spin and spatial decoherence. We identify the dominating mechanisms that affect quantum-walk experiments realized with neutral atoms walking in an optical lattice. From the measured spatial distributions, we determine with good precision the amount of decoherence per step, which provides a quantitative indication of the quality of our quantum walks. In particular, we find that spin decoherence is the main mechanism responsible for the loss of coherence in our experiment. We also find that the sole observation of ballistic (instead of diffusive) expansion in position space is not a good indicator of the range of coherent delocalization. We provide further physical insight by distinguishing the effects of short- and long-time spin dephasing mechanisms. We introduce the concept of coherence length in the discrete-time quantum walk, which quantifies the range of spatial coherences. Unexpectedly, we find that quasi-stationary dephasing does not modify the local properties of the quantum walk, but instead affects spatial coherences. For a visual representation of decoherence phenomena in phase space, we have developed a formalism based on a discrete analogue of the Wigner function. We show that the effects of spin and spatial decoherence differ dramatically in momentum space.
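As a numerical illustration of the kind of model discussed here, the sketch below simulates a one-dimensional discrete-time quantum walk in which a random phase flip of the spin (coin), applied with some probability per step, is the only decoherence channel. The parameters are illustrative, not the values measured in the experiment.

```python
import numpy as np

def walk_distribution(steps=30, p_dephase=0.05, n_traj=400, seed=0):
    """Average position distribution of a 1D discrete-time quantum walk in which,
    with probability p_dephase per step, a phase flip acts on the spin (coin)."""
    rng = np.random.default_rng(seed)
    size = 2 * steps + 1
    dist = np.zeros(size)
    coin = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard coin
    for _ in range(n_traj):
        psi = np.zeros((size, 2), dtype=complex)           # psi[site, spin]
        psi[steps, 0] = 1 / np.sqrt(2)                     # symmetric initial spin state
        psi[steps, 1] = 1j / np.sqrt(2)
        for _ in range(steps):
            psi = psi @ coin.T                             # coin operation on the spin
            if rng.random() < p_dephase:                   # spin-dephasing event
                psi[:, 1] *= -1
            shifted = np.zeros_like(psi)
            shifted[1:, 0] = psi[:-1, 0]                   # spin-up component moves right
            shifted[:-1, 1] = psi[1:, 1]                   # spin-down component moves left
            psi = shifted
        dist += (np.abs(psi) ** 2).sum(axis=1) / n_traj
    return dist

# With p_dephase = 0 the distribution keeps its ballistic double-peak shape;
# increasing p_dephase drives it toward a diffusive, Gaussian-like profile.
print(walk_distribution().sum().round(3))
```

This toy model only shows why position distributions alone can be ambiguous; the paper's point is that spatial coherences, quantified by the coherence length, carry the additional information.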
Abstract:
Network Virtualization is a key technology for the Future Internet, allowing the deployment of multiple independent virtual networks that use resources of the same underlying infrastructure. An important challenge in the dynamic provisioning of virtual networks resides in the optimal allocation of physical resources (nodes and links) to the requirements of virtual networks. This problem is known as Virtual Network Embedding (VNE). Previous research on this problem has focused on designing algorithms based on the optimization of a single objective. In contrast, in this work we present a multi-objective algorithm, called VNE-MO-ILP, for solving the dynamic VNE problem, which calculates an approximation of the Pareto front considering resource utilization and load balancing simultaneously. Experimental results show evidence that the proposed algorithm is better than, or at least comparable to, a state-of-the-art algorithm. Two performance metrics were evaluated simultaneously: (i) Virtual Network Request Acceptance Ratio and (ii) Revenue/Cost Ratio. The size of the test networks used in the experiments shows that the proposed algorithm scales well in execution time for networks of 84 nodes.
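To make the Pareto-front bookkeeping concrete, the following sketch filters non-dominated solutions for two minimization objectives, here labelled resource utilization cost and load imbalance. The candidate scores are invented; this is not the VNE-MO-ILP algorithm itself, only the dominance test any multi-objective embedding method relies on.

```python
from typing import List, Tuple

# Hypothetical candidate embeddings scored on two objectives to be minimized:
# (resource utilization cost, load imbalance).
candidates: List[Tuple[float, float]] = [
    (10.0, 0.30), (12.0, 0.18), (9.0, 0.45), (11.0, 0.25), (14.0, 0.10),
]

def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
    """a dominates b if it is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Keep only the non-dominated points, i.e. an approximation of the Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

print(sorted(pareto_front(candidates)))
```

The operator (or an automated policy) then picks one embedding from the front according to the current trade-off between utilization and balance.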
Abstract:
Males often use scent to communicate their dominance, and to mediate aggressive and breeding behaviors. In teleost fish, however, the chemical composition of male pheromones is poorly understood. Male Mozambique tilapia, Oreochromis mossambicus, use urine that signals social status and primes females to spawn. The urinary sex pheromone directed at females consists of 5β-pregnane-3α,17α,20β-triol 3-glucuronate and its 20α-epimer. The concentration of these is positively correlated with male social rank. This study tested whether dominant male urine reduces aggression in receiver males, and whether the pregnanetriol 3-glucuronates also reduce male-male aggression. Males were allowed to fight their mirror image when exposed to one of the following stimuli: i) a water control; ii) dominant male urine (DMU); iii) the C18-solid-phase (C18-SPE) DMU eluate; iv) the C18-SPE DMU eluate plus filtrate; v) the two pregnanetriol 3-glucuronates (P3Gs); or vi) P3Gs plus the DMU filtrate. Control males mounted an increasingly aggressive fight against their image over time. However, DMU significantly reduced this aggressive response. The two urinary P3Gs did not replicate the effect of whole DMU; neither did the C18-SPE DMU eluate (containing the P3Gs) alone, nor the C18-SPE DMU filtrate to which the two P3Gs were added. Only exposure to reconstituted DMU (C18-SPE eluate plus filtrate) restored the aggression-reducing effect of whole DMU. In electro-olfactogram studies, olfactory activity was present in both the eluate and the polar filtrate. We conclude that P3Gs alone have no reducing effect on aggression and that the urinary signal driving off male competition is likely to be a multi-component pheromone, with components present in both the polar and non-polar urine fractions.
Abstract:
Methanol is an important and versatile compound with various uses as a fuel and a feedstock chemical. Methanol is also a potential chemical energy carrier. Due to the fluctuating nature of renewable energy sources such as wind or solar, storage of energy is required to balance the varying supply and demand. Excess electrical energy generated at peak periods can be stored by using the energy in the production of chemical compounds. The conventional industrial production of methanol is based on the gas-phase synthesis from synthesis gas generated from fossil sources, primarily natural gas. Methanol can also be produced by hydrogenation of CO2. The production of methanol from CO2 captured from emission sources or even directly from the atmosphere would allow sustainable production based on a nearly limitless carbon source, while helping to reduce the increasing CO2 concentration in the atmosphere. Hydrogen for the synthesis can be produced by electrolysis of water utilizing renewable electricity. A new liquid-phase methanol synthesis process has been proposed. In this process, a conventional methanol synthesis catalyst is mixed in suspension with a liquid alcohol solvent. The alcohol acts as a catalytic solvent by enabling a new reaction route, potentially allowing the synthesis of methanol at lower temperatures and pressures compared to conventional processes. For this thesis, the alcohol-promoted liquid-phase methanol synthesis process was tested at laboratory scale. Batch and semibatch reaction experiments were performed in an autoclave reactor, using a conventional Cu/ZnO catalyst and ethanol and 2-butanol as the alcoholic solvents. Experiments were performed in the pressure range of 30-60 bar and at temperatures of 160-200 °C. The productivity of methanol was found to increase with increasing pressure and temperature. Under the studied process conditions a maximum volumetric productivity of 1.9 g of methanol per liter of solvent per hour was obtained, while the maximum catalyst-specific productivity was found to be 40.2 g of methanol per kg of catalyst per hour. The productivity values are low compared to both industrial synthesis and gas-phase synthesis from CO2. However, the reaction temperatures and pressures employed were lower compared to gas-phase processes. While the productivity is not high enough for large-scale industrial operation, the milder reaction conditions and simple operation could prove useful for small-scale operations. Finally, a preliminary design for an alcohol-promoted, liquid-phase methanol synthesis process was created using the data obtained from the experiments. The demonstration-scale process was scaled to an electrolyzer unit producing 1 Nm3 of hydrogen per hour. This Master’s thesis is closely connected to the LUT REFLEX platform.
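The two reported productivity figures are related through the solvent volume and the catalyst loading. The sketch below only shows that unit conversion; the solvent volume and catalyst mass are illustrative assumptions, not the actual experimental quantities.

```python
# Hypothetical batch parameters (illustrative, not the actual experimental values).
methanol_produced_g = 1.9     # grams of methanol formed during the run
solvent_volume_l = 1.0        # liters of alcohol solvent in the autoclave
catalyst_mass_kg = 0.047      # kilograms of Cu/ZnO catalyst in suspension
duration_h = 1.0              # hours of reaction time

volumetric_productivity = methanol_produced_g / (solvent_volume_l * duration_h)
catalyst_productivity = methanol_produced_g / (catalyst_mass_kg * duration_h)

print(f"{volumetric_productivity:.1f} g MeOH / (L solvent * h)")    # volumetric basis
print(f"{catalyst_productivity:.1f} g MeOH / (kg catalyst * h)")    # catalyst-specific basis
```

The same per-hour rates, multiplied up by the hydrogen feed available from the 1 Nm3/h electrolyzer, set the scale of the demonstration design mentioned at the end of the abstract.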
Abstract:
Intelligent agents offer a new and exciting way of understanding the world of work. Agent-Based Simulation (ABS), one way of using intelligent agents, carries great potential for progressing our understanding of management practices and how they link to retail performance. We have developed simulation models based on research by a multi-disciplinary team of economists, work psychologists and computer scientists. We will discuss our experiences of implementing these concepts working with a well-known retail department store. There is no doubt that management practices are linked to the performance of an organisation (Reynolds et al., 2005; Wall & Wood, 2005). Best practices have been developed, but when it comes down to the actual application of these guidelines considerable ambiguity remains regarding their effectiveness within particular contexts (Siebers et al., forthcoming a). Most Operational Research (OR) methods can only be used as analysis tools once management practices have been implemented. Often they are not very useful for giving answers to speculative ‘what-if’ questions, particularly when one is interested in the development of the system over time rather than just the state of the system at a certain point in time. Simulation can be used to analyse the operation of dynamic and stochastic systems. ABS is particularly useful when complex interactions between system entities exist, such as autonomous decision making or negotiation. In an ABS model the researcher explicitly describes the decision process of simulated actors at the micro level. Structures emerge at the macro level as a result of the actions of the agents and their interactions with other agents and the environment. We will show how ABS experiments can deal with testing and optimising management practices such as training, empowerment or teamwork. Hence, questions such as “will staff setting their own break times improve performance?” can be investigated.
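As a toy illustration of the micro-to-macro logic described above (not the department-store model itself), the sketch below simulates staff agents who either take a fixed break or choose their own break minutes, and compares the number of customers served. All parameters are invented for illustration.

```python
import random

def simulate_shift(self_set_breaks: bool, n_staff=10, minutes=480, seed=1) -> int:
    """Toy agent-based shift: each minute, staff who are not on break may serve one customer.
    Agents with self-set breaks rest only while the queue is short; otherwise breaks are fixed."""
    rng = random.Random(seed)
    served, queue = 0, 0
    fixed_break = range(240, 270)            # fixed 30-minute break mid-shift
    breaks_left = [30] * n_staff             # per-agent self-managed break budget, minutes
    for minute in range(minutes):
        queue += rng.randint(0, 3)           # customer arrivals this minute
        for agent in range(n_staff):
            if self_set_breaks:
                on_break = breaks_left[agent] > 0 and queue < n_staff
                if on_break:
                    breaks_left[agent] -= 1
            else:
                on_break = minute in fixed_break
            if not on_break and queue > 0:   # serve one waiting customer
                queue -= 1
                served += 1
    return served

print("fixed breaks:   ", simulate_shift(False))
print("self-set breaks:", simulate_shift(True))
```

The macro-level quantity (customers served) emerges from the micro-level break decisions of the agents, which is the pattern the chapter uses to test management practices such as empowerment.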
Abstract:
When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in the strategic and tactical decision making process. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run, and the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision making tool but a decision support tool, allowing better informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification. Only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system, or to make predictions about a target system’s performance. It can be viewed as an artificial white-room which allows one to gain insight but also to test new theories and practices without disrupting the daily routine of the focal organisation. What you can expect to gain from a simulation study is very well summarised by FIRMA (2000): if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, this allows you to answer some of the following questions:
· Which kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions?
· Which kind of behaviour will a given target system display in the future?
· Which state will the target system reach in the future?
The required accuracy of the simulation model very much depends on the type of question one is trying to answer. To respond to the first question the simulation model needs to be an explanatory model; this requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. These predictions involve showing trends, rather than giving precise and absolute predictions of the target system’s performance. The numerical results of a simulation experiment on their own are most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or to compile best practice guidelines. One needs a good working knowledge of the behaviour of the real system to be able to fully exploit the understanding gained from simulation experiments. The goal of this chapter is to introduce the newcomer to what we think is a valuable asset in the toolset of analysts and decision makers. We will give you a summary of information we have gathered from the literature and of the first-hand experience we have gained during the last five years, whilst obtaining a better understanding of this exciting technology. We hope that this will help you to avoid some pitfalls that we have unwittingly encountered.
Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements to prepare you for Section 4 where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system. Section 6 provides a collection of resources for further studies and finally in Section 7 we will conclude the chapter with a short summary.
Abstract:
Part 18: Optimization in Collaborative Networks
Abstract:
Reconfigurable hardware can be used to build a multitasking system where tasks are assigned to HW resources at run-time according to the requirements of the running applications. These tasks are frequently represented as directed acyclic graphs, and their execution is typically controlled by an embedded processor that schedules the graph execution. In order to improve the efficiency of the system, the scheduler can apply prefetch and reuse techniques that can greatly reduce the reconfiguration latencies. For an embedded processor, all these computations represent a heavy computational load that can significantly reduce the system performance. To overcome this problem we have implemented a HW scheduler using reconfigurable resources. In addition, we have implemented both prefetch and replacement techniques that obtain results as good as previous complex SW approaches, while demanding just a few clock cycles to carry out the computations. We consider that the HW cost of the system (in our experiments, 3% of a Virtex-II PRO xc2vp30 FPGA) is affordable taking into account the great efficiency of the techniques applied to hide the reconfiguration latency and the negligible run-time penalty introduced by the scheduler computations.
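To make the reuse-and-replacement idea concrete, here is a small software sketch (purely illustrative, not the HW scheduler described above) that walks a hypothetical task graph in topological order, reuses configurations already present on the fabric, and applies an LRU replacement policy when a slot must be reconfigured. A real prefetching scheduler would additionally overlap these loads with the execution of predecessor tasks.

```python
from collections import deque

# Hypothetical task graph (a DAG): task -> successors, with one configuration per task.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
indegree = {"A": 0, "B": 1, "C": 1, "D": 2}
SLOTS = 2                      # number of reconfigurable regions on the fabric
loaded = []                    # configurations currently loaded, in LRU order (oldest first)
reconfigurations = 0

def ensure_loaded(task):
    """Reuse the configuration if it is already on the fabric; otherwise reconfigure (evict LRU)."""
    global reconfigurations
    if task in loaded:
        loaded.remove(task)           # refresh LRU position, no reconfiguration needed
    else:
        reconfigurations += 1
        if len(loaded) >= SLOTS:
            loaded.pop(0)             # evict the least recently used configuration
    loaded.append(task)

ready = deque(t for t, d in indegree.items() if d == 0)
order = []
while ready:
    task = ready.popleft()
    ensure_loaded(task)               # a prefetching scheduler would issue this load earlier
    order.append(task)
    for succ in graph[task]:
        indegree[succ] -= 1
        if indegree[succ] == 0:
            ready.append(succ)

print("execution order:", order, "| reconfigurations:", reconfigurations)
```

The point of moving this bookkeeping into hardware is that each decision above then costs only a few clock cycles instead of loading an embedded CPU.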
Abstract:
In this work, we further extend the recently developed adaptive data analysis method, the Sparse Time-Frequency Representation (STFR) method. This method is based on the assumption that many physical signals inherently contain AM-FM representations. We propose a sparse optimization method to extract the AM-FM representations of such signals. We prove the convergence of the method for periodic signals under certain assumptions and provide practical algorithms specifically for the non-periodic STFR, which extends the method to tackle problems that former STFR methods could not handle, including stability to noise and non-periodic data analysis. This is a significant improvement since many adaptive and non-adaptive signal processing methods are not fully capable of handling non-periodic signals. Moreover, we propose a new STFR algorithm to study intrawave signals with strong frequency modulation and analyze the convergence of this new algorithm for periodic signals. Such signals have previously remained a bottleneck for all signal processing methods. Furthermore, we propose a modified version of STFR that facilitates the extraction of intrawaves that have overlapping frequency content. We show that the STFR methods can be applied to the realm of dynamical systems and cardiovascular signals. In particular, we present a simplified and modified version of the STFR algorithm that is potentially useful for the diagnosis of some cardiovascular diseases. We further explain some preliminary work on the nature of Intrinsic Mode Functions (IMFs) and how they can have different representations in different phase coordinates. This analysis shows that the uncertainty principle is fundamental to all oscillating signals.
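For readers unfamiliar with the terminology, an AM-FM representation writes a signal as a(t)cos(θ(t)) with a slowly varying envelope a(t) and instantaneous frequency θ'(t)/2π. The sketch below is not the STFR algorithm; it is a conventional Hilbert-transform demodulation of a synthetic chirp, shown only to make the representation concrete. The intrawave and non-periodic cases discussed in the abstract are precisely where such a baseline breaks down.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic AM-FM test signal: a(t) * cos(theta(t)) with slow amplitude and frequency variation.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
signal = (1.0 + 0.3 * np.cos(2 * np.pi * 0.5 * t)) * np.cos(2 * np.pi * (20 * t + 3 * t ** 2))

# Analytic-signal demodulation: instantaneous amplitude and frequency.
analytic = hilbert(signal)
amplitude = np.abs(analytic)                            # AM envelope a(t)
phase = np.unwrap(np.angle(analytic))                   # theta(t)
inst_freq = np.gradient(phase, t) / (2 * np.pi)         # theta'(t) / (2*pi), in Hz

# The chirp's true instantaneous frequency is 20 + 6*t Hz, so roughly 26 Hz at t = 1 s.
print(amplitude[:5].round(3), inst_freq[1000].round(1))
```

STFR instead poses the decomposition as a sparse optimization over adaptive basis functions, which is what gives it robustness to noise and applicability to non-periodic data.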
Abstract:
Personal electronic devices, such as cell phones and tablets, continue to decrease in size while the number of features and add-ons keeps increasing. One particular feature of great interest is an integrated projector system. Laser pico-projectors have been considered, but the technology has not been developed enough to warrant integration. With new advancements in diode technology and MEMS devices, laser-based projection is currently being advanced for pico-projectors. A primary problem encountered when using a pico-projector is coherent interference known as speckle. Laser speckle can lead to eye irritation and headaches after prolonged viewing. Diffractive optical elements known as diffusers have been examined as a means to lower speckle contrast. Diffusers are often rotated to achieve temporal averaging of the spatial phase pattern provided by the diffuser surface. While diffusers are unable to completely eliminate speckle, they can be utilized to decrease the resultant contrast to provide a more visually acceptable image. This dissertation measures the reduction in speckle contrast achievable through the use of diffractive diffusers. A theoretical Fourier optics model is used to provide the diffuser’s stationary and in-motion performance in terms of the resultant contrast level. Contrast measurements of two diffractive diffusers are calculated theoretically and compared with experimental results. In addition, a novel binary diffuser design based on Hadamard matrices is presented. Using two static in-line Hadamard diffusers eliminates the need for rotation or vibration of the diffuser for temporal averaging. Two Hadamard diffusers were fabricated and contrast values were subsequently measured, showing good agreement with theory and simulated values. Monochromatic speckle contrast values of 0.40 were achieved using the Hadamard diffusers. Finally, color laser projection devices require the use of red, green, and blue laser sources; therefore, using a monochromatic diffractive diffuser may not be optimal for color speckle contrast reduction. A simulation of the Hadamard diffusers is conducted to determine the optimum spacing between the two diffusers for polychromatic speckle reduction. Experimentally measured results are presented using the optimal spacing of Hadamard diffusers for RGB color speckle reduction, showing a 60% reduction in contrast.
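Speckle contrast is the standard ratio of the intensity standard deviation to the mean intensity over the image. The sketch below evaluates it on a synthetic fully developed speckle pattern (negative-exponential intensity statistics) rather than on measured data, and illustrates why temporal averaging of independent patterns reduces the contrast roughly as 1/sqrt(N).

```python
import numpy as np

def speckle_contrast(intensity: np.ndarray) -> float:
    """Speckle contrast C = sigma_I / <I> over the region of interest."""
    return float(intensity.std() / intensity.mean())

# Fully developed speckle has negative-exponential intensity statistics, so C is close to 1;
# averaging N independent patterns (e.g. via a moving or paired diffuser) gives C ~ 1/sqrt(N).
rng = np.random.default_rng(0)
single = rng.exponential(1.0, size=(512, 512))
averaged = np.mean(rng.exponential(1.0, size=(16, 512, 512)), axis=0)

print(f"single pattern:   C = {speckle_contrast(single):.2f}")   # close to 1.0
print(f"16-frame average: C = {speckle_contrast(averaged):.2f}") # close to 0.25
```

The static Hadamard diffuser pair described in the abstract achieves the averaging optically, within the eye's integration time, instead of by mechanically rotating a single diffuser.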
Abstract:
Multiferroic materials displaying coupled ferroelectric and ferromagnetic order parameters could provide a means for data storage whereby bits could be written electrically and read magnetically, or vice versa. Thin films of Aurivillius phase Bi6Ti2.8Fe1.52Mn0.68O18, previously prepared by a chemical solution deposition (CSD) technique, are multiferroics demonstrating magnetoelectric coupling at room temperature. Here, we demonstrate the growth of a similar composition, Bi6Ti2.99Fe1.46Mn0.55O18, via the liquid injection chemical vapor deposition technique. High-resolution magnetic measurements reveal a considerably higher in-plane ferromagnetic signature than CSD-grown films (MS = 24.25 emu/g (215 emu/cm3), MR = 9.916 emu/g (81.5 emu/cm3), HC = 170 Oe). A statistical analysis of the results from a thorough microstructural examination of the samples allows us to conclude that the ferromagnetic signature can be attributed to the Aurivillius phase, with a confidence level of 99.95%. In addition, we report the direct piezoresponse force microscopy visualization of ferroelectric switching while going through a full in-plane magnetic field cycle, where increased volumes (8.6 to 14% compared with 4 to 7% for the CSD-grown films) of the film engage in magnetoelectric coupling and demonstrate both irreversible and reversible magnetoelectric domain switching.
Abstract:
Incorporation of thymidine analogues in replicating DNA, coupled with antibody and fluorophore staining, allows analysis of cell proliferation, but is currently limited to monolayer cultures, fixed cells and end-point assays. We describe a simple microscopy imaging method for live real-time analysis of cell proliferation, S phase progression over several division cycles, effects of anti-proliferative drugs, and other applications. It is based on the prominent (~1.7-fold) quenching of the fluorescence lifetime of a common cell-permeable nuclear stain, Hoechst 33342, upon the incorporation of 5-bromo-2’-deoxyuridine (BrdU) in genomic DNA, detected by fluorescence lifetime imaging microscopy (FLIM). We show that this quantitative and accurate FLIM technique allows high-content, multi-parametric dynamic analyses, far superior to intensity-based imaging. We demonstrate its uses with monolayer cell cultures, complex 3D tissue models of tumor cell spheroids and intestinal organoids, and in a physiological study with metformin treatment.
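As a rough illustration of how the reported ~1.7-fold lifetime quenching could be used downstream, the sketch below separates BrdU-positive from BrdU-negative nuclei by their mean Hoechst 33342 lifetime. The lifetime values and the threshold are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical per-nucleus mean Hoechst 33342 lifetimes (ns) from a FLIM image.
# Unquenched nuclei are assumed near ~2.4 ns; a ~1.7-fold quench puts BrdU-labelled nuclei near ~1.4 ns.
lifetimes_ns = np.array([2.41, 2.35, 1.42, 2.38, 1.38, 1.45, 2.30, 1.40])

threshold_ns = 1.9                      # illustrative cut-off between the two populations
brdu_positive = lifetimes_ns < threshold_ns

print(f"BrdU-positive nuclei: {brdu_positive.sum()} / {lifetimes_ns.size} "
      f"({brdu_positive.mean():.0%} passed through S phase during the pulse)")
```

In practice the classification would be done per pixel or per segmented nucleus on the fitted lifetime map, which is what makes the readout quantitative rather than intensity-dependent.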