939 results for Simulation experiments
Abstract:
With the accumulation of anthropogenic carbon dioxide (CO2), an ongoing decline in seawater pH has been induced that is referred to as ocean acidification. The ocean's capacity for CO2 storage is strongly affected by biological processes whose feedback potential is difficult to evaluate. The main source of CO2 in the ocean is the decomposition and subsequent respiration of organic molecules by heterotrophic bacteria. However, very little is known about the potential effects of ocean acidification on bacterial degradation activity. This study reveals that the degradation of polysaccharides, a major component of marine organic matter, by bacterial extracellular enzymes was significantly accelerated during experimental simulation of ocean acidification. Results were obtained from pH perturbation experiments in which the activities of extracellular alpha- and beta-glucosidase were measured and the loss of neutral and acidic sugars from phytoplankton-derived polysaccharides was determined. Our study suggests that faster bacterial turnover of polysaccharides at lowered ocean pH has the potential to reduce carbon export and to enhance respiratory CO2 production in the future ocean.
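As a hedged illustration of how such rate measurements are typically reduced to a single degradation rate constant, the Python sketch below fits a first-order decay model to sugar concentration time series; all concentrations and times are invented, not data from the study.

```python
# Minimal sketch: estimating polysaccharide degradation rate constants from
# time series of sugar concentrations, assuming first-order decay
# C(t) = C0 * exp(-k t). All numbers below are illustrative placeholders.
import numpy as np

def fit_first_order_k(t_days, conc):
    """Fit ln C = ln C0 - k t by least squares; returns k (per day)."""
    slope, _ = np.polyfit(t_days, np.log(conc), 1)
    return -slope

t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])            # sampling days
c_ambient = np.array([10.0, 8.4, 7.1, 6.0, 5.1])   # µmol/L, ambient pH (made up)
c_low_ph = np.array([10.0, 7.6, 5.8, 4.4, 3.4])    # µmol/L, lowered pH (made up)

k_ambient = fit_first_order_k(t, c_ambient)
k_low_ph = fit_first_order_k(t, c_low_ph)
print(f"k(ambient pH) = {k_ambient:.3f}/d, k(low pH) = {k_low_ph:.3f}/d")
# A larger k at lowered pH would indicate faster enzymatic turnover.
```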
Abstract:
Hominid evolution in the late Miocene has long been hypothesized to be linked to the retreat of the tropical rainforest in Africa. Uplift of Africa is one frequently considered cause of this climatic and vegetation change, but uplift of the Himalaya and the Tibetan Plateau has also been suggested to have affected rainfall distribution over Africa. Recent proxy data suggest that in East Africa open grassland habitats were available to the common ancestors of hominins and apes long before their divergence, and provide no evidence for a closed rainforest in the late Miocene. We used the coupled global general circulation model CCSM3, including an interactively coupled dynamic vegetation module, to investigate the impact of topography on African hydro-climate and vegetation. We performed sensitivity experiments altering the elevations of the Himalaya and the Tibetan Plateau as well as of East and Southern Africa. The simulations confirm the dominant impact of African topography on the climate and vegetation development of the African tropics. Only a weak influence of prescribed Asian uplift on African climate could be detected. The model simulations show that rainforest coverage of Central Africa is strongly determined by the presence of elevated African topography. In East Africa, despite wetter conditions with lowered African topography, conditions were not favorable enough to maintain a closed rainforest. A discussion of the results with respect to other model studies indicates a minor importance of vegetation-atmosphere and ocean-atmosphere feedbacks and a large dependence of the simulated vegetation response on the land surface/vegetation model.
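The sketch below is a hedged illustration of how a lowered-topography boundary condition for such a sensitivity experiment might be prepared; the grid, the region box and the scaling factor are assumptions for illustration and do not reproduce the actual CCSM3 inputs.

```python
# Hedged sketch: scaling elevation inside a lat/lon box to build a
# lowered-topography boundary condition for a sensitivity run.
import numpy as np

def lower_topography(elev, lat, lon, lat_box, lon_box, factor):
    """Scale elevation inside a lat/lon box by `factor` (e.g. 0.5 = halved)."""
    lat2d, lon2d = np.meshgrid(lat, lon, indexing="ij")
    mask = ((lat2d >= lat_box[0]) & (lat2d <= lat_box[1]) &
            (lon2d >= lon_box[0]) & (lon2d <= lon_box[1]))
    out = elev.copy()
    out[mask] *= factor
    return out

# Toy global grid; East African highlands roughly 10S-10N, 25-45E (assumed box).
lat = np.linspace(-90, 90, 96)
lon = np.linspace(0, 357.5, 144)
elev = np.random.default_rng(0).uniform(0, 3000, (96, 144))  # placeholder field
elev_lowered = lower_topography(elev, lat, lon, (-10, 10), (25, 45), 0.5)
```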
Abstract:
During nanoindentation and ductile-regime machining of silicon, a phenomenon known as "self-healing" takes place, in which the microcracks, microfractures, and small spalls generated during machining are filled by the plastically flowing ductile phase of silicon. However, this phenomenon has not previously been observed in simulation studies. In this work, molecular dynamics simulation with a long-range potential function was used to provide an improved explanation of this mechanism. A unique phenomenon of brittle cracking, typically inclined at an angle of 45° to 55° to the cut surface, was discovered, leading to the formation of periodic arrays of nanogrooves that are filled by plastically flowing silicon during cutting. This observation is supported by direct imaging. Simulated X-ray diffraction analysis shows that, in contrast to experiments, Si-I to Si-II (beta-tin) transformation during ductile-regime cutting is highly unlikely, and that solid-state amorphisation of silicon caused solely by the machining stress, rather than the cutting temperature, is the key to the brittle-ductile transition observed during the MD simulations.
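A common way to diagnose solid-state amorphisation in MD output, the loss of long-range order in the radial distribution function g(r), is sketched below in Python; this is an illustrative stand-in, not the analysis code of the study.

```python
# Illustrative sketch: sharp second/third peaks in g(r) indicate crystalline
# Si-I; a smeared g(r) beyond the first shell suggests an amorphous phase.
import numpy as np

def radial_distribution(positions, box, r_max=8.0, n_bins=160):
    """Brute-force g(r) for atoms in a periodic cubic box of side `box`."""
    n = len(positions)
    dr = r_max / n_bins
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < r_max], bins=n_bins, range=(0.0, r_max))[0]
    r_mid = (np.arange(n_bins) + 0.5) * dr
    shell = 4.0 * np.pi * r_mid ** 2 * dr     # shell volume per bin
    rho = n / box ** 3                        # mean number density
    return r_mid, hist * 2.0 / (n * rho * shell)
```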
Abstract:
Different types of serious games have been used to elucidate computer science areas, such as computer games, mobile games, Lego-based games, virtual worlds and web-based games. Different evaluation techniques have been employed, such as questionnaires, interviews, discussions and tests. Simulation has been widely used in computer science as a motivational and interactive learning tool. This paper aims to evaluate the possibility of successfully implementing simulation in computer programming modules. A framework is proposed to measure the impact of serious games on enhancing students' understanding of key computer science concepts. Experiments will be conducted with students of the EEECS of Queen's University Belfast to test the framework and obtain results.
Abstract:
The spouted bed is widely used owing to its good particle mixing and effective phase transfer between gas and solid. In this paper, the transportation process of particles in a 3D spouted bed was studied using the Computational Particle Fluid Dynamics (CPFD) numerical method. Experiments were conducted to verify the validity of the simulation results. Distributions of pressure, velocity and particle concentration in the transportation devices were investigated, and the motion state and characteristics of the multiphase flows in the transportation device were demonstrated under various operating conditions. The results showed good consistency between the simulated and experimental results. The motion characteristics of the gas-solid two-phase flow in the device were effectively predicted, which can assist optimal operating condition estimation for the spouted transportation process.
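The kind of validation step described above can be illustrated by a small Python sketch comparing simulated and measured pressure profiles; the heights and pressures below are invented placeholders, not data from the study.

```python
# Minimal sketch: quantifying simulation-experiment agreement along the bed.
import numpy as np

def relative_error(sim, exp):
    """Pointwise relative error (%) between simulated and measured values."""
    return 100.0 * np.abs(sim - exp) / np.abs(exp)

height = np.array([0.05, 0.10, 0.15, 0.20, 0.25])           # m above the inlet
p_exp = np.array([1850.0, 1620.0, 1390.0, 1180.0, 990.0])   # Pa (made up)
p_sim = np.array([1900.0, 1580.0, 1420.0, 1150.0, 1010.0])  # Pa (made up)

err = relative_error(p_sim, p_exp)
print(f"mean relative error: {err.mean():.1f}%, max: {err.max():.1f}%")
```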
Abstract:
Steady-state computational fluid dynamics (CFD) simulations are an essential tool in the design process of centrifugal compressors. Whilst global parameters, such as pressure ratio and efficiency, can be predicted with reasonable accuracy, the accurate prediction of detailed compressor flow fields is a much more significant challenge. Much of the inaccuracy is associated with the incorrect selection of turbulence model. The need for a quick turnaround in simulations during the design optimisation process also demands that the turbulence model selected be robust and numerically stable, with short simulation times.
In order to assess the accuracy of a number of turbulence model predictions, the current study used an exemplar open CFD test case, the centrifugal compressor 'Radiver', to compare the results of three eddy viscosity models and two Reynolds stress type models. The turbulence models investigated were (i) the Spalart-Allmaras (SA) model, (ii) the Shear Stress Transport (SST) model, (iii) a modification to the SST model denoted SST-curvature correction (SST-CC), (iv) the Reynolds stress model of Speziale, Sarkar and Gatski (RSM-SSG), and (v) the turbulence frequency formulated Reynolds stress model (RSM-ω). Each was found to be in good agreement with the experiments (below 2% discrepancy) with respect to total-to-total parameters at three different operating conditions. However, for the off-design conditions, local flow field differences were observed between the models, with the SA model showing particularly poor prediction of local flow structures. The SST-CC model showed better prediction of curved rotating flows in the impeller, and the RSM-ω model was better for the wake and separated flow in the diffuser. The SST model showed reasonably stable, robust and time-efficient capability to predict global and local flow features.
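For reference, the total-to-total parameters the models were judged on follow standard definitions; the Python sketch below computes the total-to-total pressure ratio and isentropic efficiency for an illustrative operating point that is not 'Radiver' data.

```python
# Standard total-to-total performance definitions for a compressor stage with
# inlet total state (p01, T01) and outlet total state (p02, T02).
def total_to_total(p01, p02, T01, T02, gamma=1.4):
    """Return total-to-total pressure ratio and isentropic efficiency."""
    pr = p02 / p01
    eta = (pr ** ((gamma - 1.0) / gamma) - 1.0) / (T02 / T01 - 1.0)
    return pr, eta

# Illustrative operating point (made up, not measured data):
pr, eta = total_to_total(p01=101325.0, p02=4.0 * 101325.0, T01=293.0, T02=475.0)
print(f"pi_tt = {pr:.2f}, eta_tt = {eta:.3f}")
```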
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The performance of supersonic engine inlets and external aerodynamic surfaces can be critically affected by shock wave / boundary layer interactions (SBLIs), whose severe adverse pressure gradients can cause boundary layer separation. Currently such problems are avoided primarily through the use of boundary layer bleed/suction, which can be a source of significant performance degradation. This study investigates a novel type of flow control device called micro-vortex generators (µVGs), which may offer similar control benefits without the bleed penalties. µVGs have the ability to alter the near-wall structure of compressible turbulent boundary layers to provide increased mixing of high-speed fluid, which improves the boundary layer health when subjected to flow disturbance. Due to their small size, µVGs are embedded in the boundary layer, which provides reduced drag compared to traditional vortex generators, while they are cost-effective, physically robust and do not require a power source. To examine the potential of µVGs, a detailed experimental and computational study of micro-ramps in a supersonic boundary layer at Mach 3 subjected to an oblique shock was undertaken. The experiments employed a flat plate boundary layer with an impinging oblique shock and downstream total pressure measurements. The moderate Reynolds number of 3,800 based on displacement thickness allowed the computations to use Large Eddy Simulation without a subgrid stress model (LES-nSGS). The LES predictions indicated that the shock changes the structure of the turbulent eddies and the primary vortices generated from the micro-ramp. Furthermore, they generally reproduced the experimentally obtained mean velocity profiles, unlike similarly-resolved RANS computations. The experiments and the LES results indicate that the micro-ramps, whose height is h≈0.5δ, can significantly reduce boundary layer thickness and improve downstream boundary layer health as measured by the incompressible shape factor, H. Regions directly behind the ramp centerline tended to have increased boundary layer thickness, indicating the significant three-dimensionality of the flow field. Compared to baseline sizes, smaller micro-ramps yielded improved total pressure recovery. Moving the smaller ramps closer to the shock interaction also reduced the displacement thickness and the separated area. This effect is attributed to decreased wave drag and the closer proximity of the vortex pairs to the wall. In the second part of the study, various types of µVGs are investigated, including micro-ramps and micro-vanes. The results showed that vortices generated from µVGs can partially eliminate shock-induced flow separation and can continue to entrain high momentum flux for boundary layer recovery downstream. The micro-ramps resulted in thinner downstream displacement thickness in comparison to the micro-vanes. However, the strength of the streamwise vorticity for the micro-ramps decayed faster due to dissipation, especially after the shock interaction. In addition, the close spanwise distance between each vortex for the ramp geometry causes the vortex cores to move upwards from the wall due to induced upwash effects. Micro-vanes, on the other hand, yielded an increased spanwise spacing of the streamwise vortices at the point of formation. This resulted in streamwise vortices staying closer to the wall with less circulation decay, and the reduction in overall flow separation is attributed to these effects.
Two hybrid concepts, named "thick-vane" and "split-ramp", were also studied, where the former is a vane with side supports and the latter has a uniform spacing along the centerline of the baseline ramp. These geometries behaved similarly to the micro-vanes in terms of the streamwise vorticity and the ability to reduce flow separation, but are more physically robust than the thin vanes. Next, the Mach number effect on flow past the micro-ramps (h~0.5δ) is examined in a supersonic boundary layer at M=1.4, 2.2 and 3.0, with no shock waves present. The LES results indicate that micro-ramps have a greater impact at lower Mach number near the device, but their influence decays faster than in the higher Mach number cases. This may be due to the additional dissipation caused by the primary vortices with smaller effective diameter at the lower Mach number, such that their coherency is easily lost, causing the streamwise vorticity and the turbulent kinetic energy to decay quickly. The normal distance between the vortex core and the wall showed similar growth across cases, indicating weak correlation with the Mach number; however, the spanwise distance between the two counter-rotating cores increases further at lower Mach number. Finally, various µVGs including the micro-ramp, the split-ramp and a new hybrid concept, the "ramped-vane", are investigated under normal shock conditions at a Mach number of 1.3. In particular, the ramped-vane was studied extensively by varying its size, the interior spacing of the device and its streamwise position with respect to the shock. The ramped-vane provided increased vorticity compared to the micro-ramp and the split-ramp. This significantly reduced the separation length downstream of the device centerline, where a larger ramped-vane with increased trailing edge gap yielded a fully attached flow at the centerline of the separation region. The results from coarse-resolution LES studies show that the larger ramped-vane provided the greatest reductions in turbulent kinetic energy and pressure fluctuation compared to the other devices downstream of the shock. Additional benefits include negligible drag, while reductions in displacement thickness and shape factor were seen compared to other devices. Increased wall shear stress and pressure recovery were found with the larger ramped-vane in the baseline-resolution LES studies, which also gave decreased amplitudes of the pressure fluctuations downstream of the shock.
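The boundary layer health metrics used throughout this study follow standard integral definitions; the Python sketch below computes the displacement thickness, momentum thickness and incompressible shape factor H from a wall-normal velocity profile, using a 1/7th-power-law profile as a stand-in for LES data.

```python
# Integral boundary layer parameters: displacement thickness delta*,
# momentum thickness theta, and incompressible shape factor H = delta*/theta.
import numpy as np

def shape_factor(y, u, u_edge):
    """Compute delta*, theta and H from a velocity profile u(y)."""
    ratio = u / u_edge
    delta_star = np.trapz(1.0 - ratio, y)         # displacement thickness
    theta = np.trapz(ratio * (1.0 - ratio), y)    # momentum thickness
    return delta_star, theta, delta_star / theta

# 1/7th-power-law turbulent profile as a stand-in for simulation output:
y = np.linspace(1e-4, 1.0, 200)                   # y/delta
u = y ** (1.0 / 7.0)                              # u/u_edge
d_star, theta, H = shape_factor(y, u, 1.0)
print(f"delta* = {d_star:.3f}, theta = {theta:.3f}, H = {H:.2f}")
# H near 1.3 indicates a healthy turbulent profile; H rising toward ~2
# signals a boundary layer approaching separation.
```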
Abstract:
Thermal Diagnostics experiments to be carried out on board LISA Pathfinder (LPF) will yield a detailed characterisation of how temperature fluctuations affect the performance of the LTP (LISA Technology Package) instrument, crucial information for future space-based gravitational wave detectors such as the proposed eLISA. Among them, the study of temperature gradient fluctuations around the test masses of the inertial sensors will also provide information on the contribution of Brownian noise, which is expected to limit the LTP sensitivity at frequencies close to 1 mHz during some LTP experiments. In this paper we report on how this kind of Thermal Diagnostics experiment was simulated in the last LPF simulation campaign (November 2013), which involved the whole LPF Data Analysis team and used an end-to-end simulator of the complete spacecraft. The simulation campaign was conducted in the framework of the preparation for LPF operations.
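As a hedged illustration of the kind of analysis such diagnostics feed, the Python sketch below estimates the spectral density of a temperature fluctuation time series near 1 mHz with Welch's method; the signal, sampling rate and amplitudes are synthetic assumptions.

```python
# Synthetic stand-in for a temperature sensor record: slow drift plus noise.
import numpy as np
from scipy.signal import welch

fs = 1.0                                   # 1 Hz sampling (assumed)
t = np.arange(0, 200000) / fs              # ~2.3 days of data
rng = np.random.default_rng(1)
temp = 1e-4 * np.sin(2 * np.pi * 1e-3 * t) + 1e-5 * rng.standard_normal(t.size)

f, psd = welch(temp, fs=fs, nperseg=2 ** 16)
band = (f > 5e-4) & (f < 2e-3)             # band around 1 mHz
print(f"mean sqrt(PSD) near 1 mHz: {np.sqrt(psd[band].mean()):.2e} K/sqrt(Hz)")
```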
Abstract:
Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques, where models are used to replicate the behaviour of an actual system. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aims of these experiments are to:
• verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1);
• verify that, on sufficiently large circuits, substantial gains can be made from multicore parallelism (section 5.3.2);
• show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (section 5.3.3);
• show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (section 5.3.5).
To evaluate ZSIM, two types of test circuits were used: (1) circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators, and (2) circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than those for which it was possible to obtain open source files. The experimental results show that with SIMD acceleration and multicore parallelism, ZSIM achieved a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, this software architecture simulator running on a SIMD machine was shown to be much faster than, and to handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance of ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. ZSIM running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost, and the results show that the Xeon Phi is competitive with simulation on GPUs while allowing the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself, whereas targeting GPUs requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both the AMD and Xeon Phi machines, and the architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows. The primary achievement of this work was showing that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
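The sketch below is not the ZSIM architecture itself; it is a minimal Python illustration of the data-parallel idea underlying SIMD logic simulation, where one machine word evaluates a gate for 64 independent test vectors at once using bitwise operations.

```python
# Bit-parallel logic simulation: each signal is an array of 64-bit words;
# bit i of word w carries the signal's value for test vector 64*w + i.
import numpy as np

rng = np.random.default_rng(0)
N_WORDS = 4                       # 4 * 64 = 256 test vectors per signal

a = rng.integers(0, 2 ** 63, N_WORDS, dtype=np.uint64)
b = rng.integers(0, 2 ** 63, N_WORDS, dtype=np.uint64)
c = rng.integers(0, 2 ** 63, N_WORDS, dtype=np.uint64)

# Evaluate y = (a AND b) OR (NOT c) for all 256 vectors in 3 word-wide ops;
# a SIMD machine widens the same idea to whole vectors of words per gate.
y = (a & b) | ~c
print(f"evaluated {N_WORDS * 64} vectors with 3 word-wide operations")
```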
Abstract:
Intelligent agents offer a new and exciting way of understanding the world of work. Agent-Based Simulation (ABS), one way of using intelligent agents, carries great potential for advancing our understanding of management practices and how they link to retail performance. We have developed simulation models based on research by a multi-disciplinary team of economists, work psychologists and computer scientists, and we discuss our experiences of implementing these concepts in work with a well-known retail department store. There is no doubt that management practices are linked to the performance of an organisation (Reynolds et al., 2005; Wall & Wood, 2005). Best practices have been developed, but when it comes to the actual application of these guidelines, considerable ambiguity remains regarding their effectiveness within particular contexts (Siebers et al., forthcoming a). Most Operational Research (OR) methods can only be used as analysis tools once management practices have been implemented; they are often not very useful for answering speculative 'what-if' questions, particularly when one is interested in the development of the system over time rather than just the state of the system at a certain point in time. Simulation can be used to analyse the operation of dynamic and stochastic systems. ABS is particularly useful when complex interactions between system entities exist, such as autonomous decision making or negotiation. In an ABS model the researcher explicitly describes the decision process of simulated actors at the micro level. Structures emerge at the macro level as a result of the actions of the agents and their interactions with other agents and the environment. We show how ABS experiments can deal with testing and optimising management practices such as training, empowerment or teamwork. Hence, questions such as "will staff setting their own break times improve performance?" can be investigated.
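As a hedged toy illustration of this 'what-if' style of ABS experiment, the Python sketch below compares served-customer counts under fixed versus self-chosen break times; all behavioural numbers are invented and do not come from the retail study.

```python
# Toy agent-based 'what-if' experiment: staff agents serve customers over a
# shift; self-chosen breaks are crudely modelled as a smaller productivity
# penalty (assumed to fall in quieter periods more often).
import random

def simulate(own_breaks, n_staff=10, minutes=480, seed=42):
    rng = random.Random(seed)
    served = 0
    for _ in range(n_staff):
        penalty = rng.uniform(0.02, 0.06) if own_breaks else rng.uniform(0.05, 0.12)
        rate = 0.5 * (1.0 - penalty)          # customers served per minute
        served += sum(1 for _ in range(minutes) if rng.random() < rate)
    return served

print("fixed breaks:", simulate(False), "| own breaks:", simulate(True))
```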
Abstract:
Agent-based modelling and simulation offers a new and exciting way of understanding the world of work. In this paper we describe the development of an agent-based simulation model designed to help understand the relationship between human resource management practices and retail productivity. We report on the current development of our simulation model, which includes new features concerning the evolution of customers over time. To test some of these features we have conducted a series of experiments dealing with customer pool sizes, standard and noise-reduction modes, and the spread of word of mouth. Our multidisciplinary research team draws upon expertise from work psychologists and computer scientists. Despite the fact that we are working within a relatively novel and complex domain, it is clear that intelligent agents offer potential for fostering sustainable organisational capabilities in the future.
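A minimal Python sketch of a word-of-mouth experiment over a customer pool follows; the contact and adoption probabilities are invented for illustration and are not the model's calibrated values.

```python
# Toy word-of-mouth spread over a randomly mixing customer pool.
import random

def word_of_mouth(pool_size=1000, seeds=10, p_tell=0.1, p_adopt=0.3,
                  steps=50, seed=7):
    rng = random.Random(seed)
    informed = set(range(seeds))              # initially informed customers
    for _ in range(steps):
        newly = set()
        for _agent in informed:
            if rng.random() < p_tell:                 # agent talks this step
                contact = rng.randrange(pool_size)    # random mixing
                if contact not in informed and rng.random() < p_adopt:
                    newly.add(contact)
        informed |= newly
    return len(informed)

for pool in (500, 1000, 2000):                # vary the customer pool size
    print(pool, word_of_mouth(pool_size=pool))
```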
Abstract:
FEA simulation of thermal metal cutting is central to interactive design and manufacturing. It is therefore relevant to assess the applicability of open FEA software to simulate 2D heat transfer in metal sheet laser cuts. Application of open source code (e.g. FreeFem++, FEniCS, MOOSE) makes additional scenarios possible (e.g. parallel, CUDA, etc.) at lower cost. However, a precise assessment is required of the scenarios in which open software can be a sound alternative to a commercial one. This article contributes in this regard by presenting a comparison of the aforementioned free FEM software for the simulation of heat transfer in thin (i.e. 2D) sheets subject to a gliding laser point source, using the commercial ABAQUS software as the reference. A convective linear thin-sheet heat transfer model, with and without material removal, is used. This article does not attempt a full design of computer experiments. Our partial assessment shows that the thin-sheet approximation is adequate in terms of relative error for linear alumina sheets: for mesh resolutions finer than 10e−5 m, the temperatures predicted by the open and reference software differ by at most 1%. Ongoing work includes adaptive re-meshing, nonlinearities, sheet stress analysis and Mach (also called 'relativistic') effects.
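As a hedged sketch of the underlying physics, the Python finite-difference code below solves 2D transient heat conduction in a thin sheet with a gliding Gaussian laser source and a lumped convective loss term; it is not the ABAQUS or FreeFem++ setup of the article, and all material numbers are placeholders.

```python
# Explicit finite-difference solve of dT/dt = alpha*lap(T) + q(x,y,t) - h*T
# on a square thin sheet, with a Gaussian laser spot gliding along x.
import numpy as np

nx = ny = 101
L = 0.05                            # sheet side length, m
dx = L / (nx - 1)
alpha = 1e-5                        # thermal diffusivity, m^2/s (placeholder)
h_loss = 5.0                        # lumped convective loss coefficient, 1/s
dt = 0.2 * dx * dx / alpha          # stable explicit time step
T = np.zeros((ny, nx))              # temperature rise above ambient, K
y, x = np.meshgrid(np.linspace(0, L, ny), np.linspace(0, L, nx), indexing="ij")

v = 0.01                            # laser traverse speed, m/s (placeholder)
for step in range(800):
    cx = 0.005 + v * step * dt      # laser centre glides along x
    q = 5e3 * np.exp(-((x - cx) ** 2 + (y - L / 2) ** 2) / (2 * 0.002 ** 2))
    lap = np.zeros_like(T)
    lap[1:-1, 1:-1] = (T[1:-1, 2:] + T[1:-1, :-2] + T[2:, 1:-1] + T[:-2, 1:-1]
                       - 4.0 * T[1:-1, 1:-1]) / dx ** 2
    T += dt * (alpha * lap + q - h_loss * T)   # T = 0 held on the boundary
print(f"peak temperature rise: {T.max():.1f} K")
```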
Abstract:
Back-pressure on a diesel engine equipped with an aftertreatment system is a function of the pressure drop across the individual components of the aftertreatment system: typically a diesel oxidation catalyst (DOC), catalyzed particulate filter (CPF) and selective catalytic reduction (SCR) catalyst. Pressure drop across the CPF is a function of the mass flow rate and the temperature of the exhaust flowing through it, as well as of the mass of particulate matter (PM) retained in the substrate wall and in the cake layer that forms on the substrate wall. Therefore, in order to keep the back-pressure on the engine low and to minimize fuel consumption, it is important to control the PM mass retained in the CPF. Chemical reactions involving the oxidation of PM under passive oxidation and active regeneration conditions can be utilized, together with computer numerical models in the engine control unit (ECU), to control the pressure drop across the CPF. Hence, understanding and predicting the filtration and oxidation of PM in the CPF, and the effect of these processes on the pressure drop across the CPF, are necessary for developing control strategies for the aftertreatment system that reduce back-pressure on the engine and, in turn, fuel consumption, particularly from active regeneration. Numerical modeling of CPFs has been shown to reduce the development time and cost of aftertreatment systems used in production, as well as to facilitate understanding of the internal processes occurring during the different operating conditions that the particulate filter is subjected to. In this research work, a numerical model of the CPF was developed and calibrated to data from passive oxidation and active regeneration experiments in order to determine the kinetic parameters for the oxidation of PM and nitrogen oxides, along with the model filtration parameters. The research results include comparisons between the model and the experimental data for pressure drop, PM mass retained, filtration efficiencies, CPF outlet gas temperatures and species (NO2) concentrations out of the CPF. Comparisons of PM oxidation reaction rates obtained from the model calibration to the experimental data for ULSD, 10% and 20% biodiesel-blended fuels are presented.
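The standard wall-flow pressure drop balance used in CPF models of this kind, Darcy flow through the PM cake layer plus the loaded substrate wall, can be sketched as below; the property values are illustrative assumptions, not the calibrated results of this work.

```python
# Darcy pressure drop across cake + wall: dp = mu*u_wall*(w_cake/k_cake +
# w_wall/k_wall), with cake thickness inferred from the retained PM mass.
def cpf_wall_dp(mdot, T, A_filt, w_wall, k_wall, m_pm, rho_cake, k_cake):
    """Pressure drop (Pa) across cake + wall for exhaust mass flow mdot (kg/s)."""
    mu = 1.458e-6 * T ** 1.5 / (T + 110.4)    # Sutherland viscosity of air, Pa*s
    rho = 101325.0 / (287.0 * T)              # ideal gas density at ~1 atm
    u_wall = mdot / (rho * A_filt)            # superficial wall velocity, m/s
    w_cake = m_pm / (rho_cake * A_filt)       # cake thickness from PM mass, m
    return mu * u_wall * (w_cake / k_cake + w_wall / k_wall)

# Illustrative inputs (assumed, not calibrated values from this study):
dp = cpf_wall_dp(mdot=0.1, T=600.0, A_filt=2.5, w_wall=4e-4, k_wall=5e-13,
                 m_pm=0.02, rho_cake=100.0, k_cake=1e-14)
print(f"cake+wall pressure drop: {dp:.0f} Pa")
```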
Abstract:
Fluorescence spectroscopy and microscopy have been utilized as tools in membrane biophysics for decades. Because phospholipids are non-fluorescent, the use of extrinsic membrane probes in this context is commonplace. Among the latter, 1,6-diphenylhexatriene (DPH) and its trimethylammonium derivative (TMA-DPH) have been extensively used. It is widely believed that, owing to its additional charged group, TMA-DPH is anchored at the lipid/water interface and reports on a bilayer region distinct from that of the hydrophobic DPH. In this study, we employ atomistic MD simulations to characterize the behavior of DPH and TMA-DPH in 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and POPC/cholesterol (4:1) bilayers. We show that although the dynamics of TMA-DPH in these membranes is noticeably more hindered than that of DPH, the average location of the TMA-DPH fluorophore is only ~3–4 Å shallower than that of DPH. The hindrance observed in the translational and rotational motions of TMA-DPH compared to DPH is mainly due not to significant differences in depth, but to the favorable electrostatic interactions of the former with electronegative lipid atoms. By revealing detailed insights into the behavior of these two probes, our results are useful both in the interpretation of past work and in the planning of future experiments using them as membrane reporters.
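As a hedged illustration of how such probe insertion depths can be extracted from a trajectory, the Python sketch below uses MDAnalysis to measure the probe's distance below the phosphate plane; the file names and atom selections are assumptions to adapt to the actual topology.

```python
# Sketch: average probe depth below the phosphate plane over a trajectory.
import numpy as np
import MDAnalysis as mda

u = mda.Universe("popc_dph.gro", "traj.xtc")        # hypothetical input files
probe = u.select_atoms("resname DPH")               # fluorophore atoms (assumed)
phosphates = u.select_atoms("name P")               # POPC phosphorus plane
membrane = u.select_atoms("resname POPC")

depths = []
for ts in u.trajectory:
    z_mid = membrane.center_of_mass()[2]            # bilayer midplane (z)
    z_probe = abs(probe.center_of_mass()[2] - z_mid)
    z_phos = abs(phosphates.positions[:, 2].mean() - z_mid)
    depths.append(z_phos - z_probe)                 # depth below the P plane

print(f"mean probe depth below phosphates: {np.mean(depths):.1f} Å")
```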