969 results for Simulation experiments


Relevance:

30.00%

Publisher:

Abstract:

Typical Double Auction (DA) models assume that trading agents are one-way traders. With this limitation, they cannot directly reflect the fact that individual traders in financial markets (the most popular application of the double auction) choose their trading directions dynamically. To address this issue, we introduce the Bi-directional Double Auction (BDA) market, which is populated by two-way traders. Based on experiments under both static and dynamic settings, we find that the allocative efficiency of a static continuous BDA market comes from rational selection of trading directions and is negatively related to the intelligence of the trading strategies. Moreover, we introduce a Kernel trading strategy, designed based on probability density estimation, for the general DA market. Our experiments show that it outperforms some intelligent DA market trading strategies. Copyright © 2013, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
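
As a loose illustration of the idea behind a density-estimation-based quoting rule (the abstract does not spell out the Kernel strategy, so the function names, acceptance proxy and surplus criterion below are assumptions), a minimal Python sketch might look like this:

# Hedged sketch: a density-estimation-based quoting rule for a double auction.
# This is NOT the paper's actual Kernel strategy; names and thresholds are illustrative.
import numpy as np
from scipy.stats import gaussian_kde

def choose_quote(past_prices, private_value, grid=None):
    """Pick the quote that maximises expected surplus, where the likelihood of
    transacting at a price level is approximated by the estimated density of
    recently observed transaction prices."""
    kde = gaussian_kde(past_prices)            # kernel density over past trades
    if grid is None:
        grid = np.linspace(min(past_prices), max(past_prices), 200)
    density = kde(grid)
    prob = density / density.sum()             # crude acceptance likelihood per price level
    # A two-way trader compares the best expected surplus from buying vs. selling.
    buy_surplus = prob * np.clip(private_value - grid, 0.0, None)
    sell_surplus = prob * np.clip(grid - private_value, 0.0, None)
    if buy_surplus.max() >= sell_surplus.max():
        return "buy", grid[buy_surplus.argmax()]
    return "sell", grid[sell_surplus.argmax()]

# Example: a trader with private value 105 observing noisy past trades around 100.
rng = np.random.default_rng(0)
direction, price = choose_quote(rng.normal(100, 3, size=500), private_value=105)
print(direction, round(price, 2))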

Relevance:

30.00%

Publisher:

Abstract:

Shipboard power systems have different characteristics from utility power systems. In a shipboard power system it is crucial that the systems and equipment work at their peak performance levels. One of the most demanding aspects of simulating shipboard power systems is connecting the device under test to a real-time simulated dynamic equivalent in an environment with actual hardware in the loop (HIL). Real-time simulation can be achieved using a multi-distributed modeling concept, in which the global system model is distributed over several processors through a communication link. The advantage of this approach is that it permits a gradual change from pure simulation to actual application. In order to perform system studies in such an environment, physical phase-variable models of different components of the shipboard power system were developed using operational parameters obtained from finite element (FE) analysis. These models were developed for two types of studies: low- and high-frequency studies. Low-frequency studies are used to examine the shipboard power system's behavior under load switching and faults. High-frequency studies were used to predict abnormal conditions due to overvoltage and the harmonic behavior of components. Different experiments were conducted to validate the developed models. The simulation and experimental results show excellent agreement. The behavior of shipboard power system components under internal faults was investigated using FE analysis. The developed technique is crucial for shipboard power system fault detection because comprehensive fault test databases are lacking. A wavelet-based methodology for feature extraction from shipboard power system current signals was developed for harmonic and fault diagnosis studies. This modeling methodology can be utilized to evaluate and predict the future behavior of the NPS components in the design stage, which will reduce development cycles, cut overall cost, prevent failures, and allow each subsystem to be tested exhaustively before integrating it into the system.
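
For illustration only, a minimal Python sketch of wavelet-energy feature extraction from a current waveform is shown below; the specific wavelet ("db4"), decomposition level and energy features are assumptions, not the dissertation's actual choices.

# Hedged sketch: wavelet-energy features from a current signal, in the spirit of the
# wavelet-based feature extraction described above.
import numpy as np
import pywt  # PyWavelets

def wavelet_energy_features(current, wavelet="db4", level=4):
    """Decompose a sampled current waveform and return the relative energy in each
    band; shifts of energy toward high-frequency detail bands are a common
    signature of faults and harmonic distortion."""
    coeffs = pywt.wavedec(current, wavelet, level=level)   # [cA_n, cD_n, ..., cD_1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# Example: a 60 Hz current with a small high-frequency disturbance injected halfway.
t = np.linspace(0, 0.5, 5000)
i_healthy = np.sin(2 * np.pi * 60 * t)
i_faulty = i_healthy + 0.2 * np.sin(2 * np.pi * 900 * t) * (t > 0.25)
print(wavelet_energy_features(i_healthy))
print(wavelet_energy_features(i_faulty))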

Relevance:

30.00%

Publisher:

Abstract:

Developing analytical models that can accurately describe the behavior of Internet-scale networks is difficult. This is due, in part, to the heterogeneous structure, immense size and rapidly changing properties of today's networks. The lack of analytical models makes large-scale network simulation an indispensable tool for studying immense networks. However, large-scale network simulation has not been commonly used to study networks of Internet scale. This can be attributed to three factors: 1) current large-scale network simulators are geared towards simulation research and not network research, 2) the memory required to execute an Internet-scale model is exorbitant, and 3) large-scale network models are difficult to validate. This dissertation tackles each of these problems. First, this work presents a method for automatically enabling real-time interaction, monitoring, and control of large-scale network models. Network researchers need tools that allow them to focus on creating realistic models and conducting experiments. However, this should not increase the complexity of developing a large-scale network simulator. This work presents a systematic approach to separating the concerns of running large-scale network models on parallel computers from the user-facing concerns of configuring and interacting with large-scale network models. Second, this work deals with reducing the memory consumption of network models. As network models become larger, so does the amount of memory needed to simulate them. This work presents a comprehensive approach to exploiting structural duplications in network models to dramatically reduce the memory required to execute large-scale network experiments. Lastly, this work addresses the issue of validating large-scale simulations by integrating real protocols and applications into the simulation. With an emulation extension, a network simulator operating in real time can run together with real-world distributed applications and services. As such, real-time network simulation not only alleviates the burden of developing separate models for applications in simulation, but, as real systems are included in the network model, it also increases the confidence level of network simulation. This work presents a scalable and flexible framework to integrate real-world applications with real-time simulation.
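
The structural-duplication idea can be pictured with a small, hedged Python sketch (an interning pattern chosen here purely for illustration; it is not the dissertation's actual mechanism): identical sub-model configurations are stored once and shared by reference instead of being copied per instance.

# Hedged sketch of exploiting structural duplication to save memory.
class ModelInterner:
    """Return a single shared instance for structurally identical configurations."""
    def __init__(self):
        self._pool = {}

    def intern(self, config: tuple):
        return self._pool.setdefault(config, config)

interner = ModelInterner()

# Thousands of hosts in a network model may share one protocol/link configuration.
hosts = [
    {"id": n, "stack": interner.intern(("tcp-reno", "1Gbps", "0.1ms"))}
    for n in range(100_000)
]
# All hosts point at the same tuple object, so the configuration is stored only once.
assert hosts[0]["stack"] is hosts[-1]["stack"]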

Relevance:

30.00%

Publisher:

Abstract:

Proteins are specialized molecules that catalyze most of the reactions that sustain life, and they become functional by folding into a specific 3D structure. Despite their importance, the question "how do proteins fold?" - first pondered in the 1930s - is still listed as one of the top unanswered scientific questions as of 2005, according to the journal Science. Answering this question would provide a foundation for understanding protein function and would enable improved drug targeting, efficient biofuel production, and stronger biomaterials. Much of what we currently know about protein folding comes from studies on small, single-domain proteins, which may be quite different from the folding of the large, multidomain proteins that dominate the proteomes of all organisms.

In this thesis I will discuss my work to fill this gap in understanding by studying the unfolding and refolding of large, multidomain proteins using the powerful combination of single-molecule force spectroscopy experiments and molecular dynamics simulations.

The three model proteins studied - Luciferase, Protein S, and Streptavidin - lend insight into the inter-domain dependence of unfolding and the stabilization of subdomains by bound ligands, and ultimately provide new insight into the atomistic details of the intermediate states along the folding pathway.
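
For context, force-extension data from single-molecule force spectroscopy are commonly analysed with the Marko-Siggia worm-like-chain interpolation formula; whether this thesis used exactly this fit is an assumption, and the parameter values in the Python sketch below are illustrative only.

# Hedged sketch: worm-like-chain force-extension relation used to interpret
# unfolding events in force spectroscopy (illustrative parameters).
import numpy as np

def wlc_force(extension_nm, contour_nm, persistence_nm=0.4):
    """Force (pN) needed to stretch a worm-like chain to a given extension."""
    kbt_pn_nm = 4.11  # kB*T in pN*nm at room temperature
    x = np.asarray(extension_nm) / contour_nm
    return (kbt_pn_nm / persistence_nm) * (0.25 / (1 - x) ** 2 - 0.25 + x)

# Example: unfolding one domain adds contour length, shifting the force curve.
ext = np.linspace(5, 45, 5)
print(wlc_force(ext, contour_nm=50))   # folded construct
print(wlc_force(ext, contour_nm=80))   # after one domain unfolds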

Relevance:

30.00%

Publisher:

Abstract:

With the accumulation of anthropogenic carbon dioxide (CO2), a progressive decline in seawater pH has been induced that is referred to as ocean acidification. The ocean's capacity for CO2 storage is strongly affected by biological processes, whose feedback potential is difficult to evaluate. The main source of CO2 in the ocean is the decomposition and subsequent respiration of organic molecules by heterotrophic bacteria. However, very little is known about potential effects of ocean acidification on bacterial degradation activity. This study reveals that the degradation of polysaccharides, a major component of marine organic matter, by bacterial extracellular enzymes was significantly accelerated during experimental simulation of ocean acidification. Results were obtained from pH perturbation experiments, in which rates of extracellular alpha- and beta-glucosidase activity were measured and the loss of neutral and acidic sugars from phytoplankton-derived polysaccharides was determined. Our study suggests that a faster bacterial turnover of polysaccharides at lowered ocean pH has the potential to reduce carbon export and to enhance respiratory CO2 production in the future ocean.

Relevance:

30.00%

Publisher:

Abstract:

Hominid evolution in the late Miocene has long been hypothesized to be linked to the retreat of the tropical rainforest in Africa. Uplift of Africa has often been considered a cause of this climatic and vegetation change, but uplift of the Himalaya and the Tibetan Plateau has also been suggested to have affected rainfall distribution over Africa. Recent proxy data suggest that in East Africa open grassland habitats were available to the common ancestors of hominins and apes long before their divergence, and provide no evidence for a closed rainforest in the late Miocene. We used the coupled global general circulation model CCSM3, including an interactively coupled dynamic vegetation module, to investigate the impact of topography on African hydro-climate and vegetation. We performed sensitivity experiments altering the elevations of the Himalaya and the Tibetan Plateau as well as of East and Southern Africa. The simulations confirm the dominant impact of African topography on the climate and vegetation development of the African tropics. Only a weak influence of the prescribed Asian uplift on African climate could be detected. The model simulations show that the rainforest coverage of Central Africa is strongly determined by the presence of elevated African topography. In East Africa, despite wetter conditions with lowered African topography, conditions were not favorable enough to maintain a closed rainforest. A discussion of the results with respect to other model studies indicates a minor importance of vegetation-atmosphere or ocean-atmosphere feedbacks and a large dependence of the simulated vegetation response on the land surface/vegetation model.

Relevance:

30.00%

Publisher:

Abstract:

During nanoindentation and ductile-regime machining of silicon, a phenomenon known as “self-healing” takes place, in that the microcracks, microfractures, and small spallings generated during machining are filled by the plastically flowing ductile phase of silicon. However, this phenomenon has not previously been observed in simulation studies. In this work, using a long-range potential function, molecular dynamics simulation was used to provide an improved explanation of this mechanism. A unique phenomenon of brittle cracking was discovered, typically inclined at an angle of 45° to 55° to the cut surface, leading to the formation of periodic arrays of nanogrooves that are filled by plastically flowing silicon during cutting. This observation is supported by direct imaging. The simulated X-ray diffraction analysis proves that, in contrast to experiments, a Si-I to Si-II (beta-tin) transformation during ductile-regime cutting is highly unlikely, and that solid-state amorphisation of silicon caused solely by the machining stress, rather than the cutting temperature, is the key to the brittle-ductile transition observed during the MD simulations.
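
A simulated diffraction pattern can be computed directly from MD atomic coordinates; one common route is the Debye scattering equation, sketched below in Python. The atom positions, q-range and unit form factor here are illustrative assumptions, not the study's actual procedure.

# Hedged sketch: Debye scattering equation I(q) = sum_ij sin(q*r_ij)/(q*r_ij)
# for identical atoms, one way to produce a "simulated XRD" curve from coordinates.
import numpy as np

def debye_intensity(positions, q_values):
    """Scattering intensity I(q) for a cluster of identical atoms (form factor = 1)."""
    diffs = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diffs, axis=-1)                  # pairwise distances
    iu = np.triu_indices(len(positions), k=1)
    r_ij = r[iu]
    intensity = []
    for q in q_values:
        qr = q * r_ij
        intensity.append(len(positions) + 2.0 * np.sum(np.sinc(qr / np.pi)))
    return np.array(intensity)

# Example: atoms on a regular grid vs. a randomised (amorphous-like) cloud give
# visibly different I(q) curves.
rng = np.random.default_rng(1)
crystalline = np.array(np.meshgrid(*[np.arange(5)] * 3)).reshape(3, -1).T * 2.35
amorphous = crystalline + rng.normal(0, 0.5, crystalline.shape)
q = np.linspace(1.0, 6.0, 50)                           # in 1/angstrom
print(debye_intensity(crystalline, q)[:5])
print(debye_intensity(amorphous, q)[:5])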

Relevance:

30.00%

Publisher:

Abstract:

Different types of serious games, such as computer games, mobile games, Lego-based games, virtual worlds and web-based games, have been used to elucidate computer science topics. Different evaluation techniques have been applied, such as questionnaires, interviews, discussions and tests. Simulation has been widely used in computer science as a motivational and interactive learning tool. This paper aims to evaluate the possibility of successfully implementing simulation in computer programming modules. A framework is proposed to measure the impact of serious games on enhancing students' understanding of key computer science concepts. Experiments will be conducted with students of the EEECS at Queen's University Belfast to test the framework and obtain results.

Relevance:

30.00%

Publisher:

Abstract:

The spouted bed is widely used owing to its good particle mixing and effective phase transfer between the gas and solid phases. In this paper, the transportation process of particles in a 3D spouted bed was studied using the Computational Particle Fluid Dynamics (CPFD) numerical method. Experiments were conducted to verify the validity of the simulation results. Distributions of the pressure, velocities and particle concentration in the transportation device were investigated. The motion state and characteristics of the multiphase flows in the transportation device were demonstrated under various operating conditions. The results showed good consistency between the simulated and experimental results. The motion characteristics of the gas-solid two-phase flow in the device were effectively predicted, which can assist in estimating optimal operating conditions for the spouted transportation process.

Relevance:

30.00%

Publisher:

Abstract:

Steady-state computational fluid dynamics (CFD) simulations are an essential tool in the design process of centrifugal compressors. Whilst global parameters, such as pressure ratio and efficiency, can be predicted with reasonable accuracy, the accurate prediction of detailed compressor flow fields is a much more significant challenge. Much of the inaccuracy is associated with the incorrect selection of the turbulence model. The need for a quick turnaround in simulations during the design optimisation process also demands that the selected turbulence model be robust and numerically stable, with short simulation times.
In order to assess the accuracy of a number of turbulence model predictions, the current study used an exemplar open CFD test case, the centrifugal compressor ‘Radiver’, to compare the results of three eddy viscosity models and two Reynolds stress type models. The turbulence models investigated in this study were (i) the Spalart-Allmaras (SA) model, (ii) the Shear Stress Transport (SST) model, (iii) a modification to the SST model denoted the SST-curvature correction (SST-CC), (iv) the Reynolds stress model of Speziale, Sarkar and Gatski (RSM-SSG), and (v) the turbulence frequency formulated Reynolds stress model (RSM-ω). Each was found to be in good agreement with the experiments (below 2% discrepancy) with respect to total-to-total parameters at three different operating conditions. However, for the off-design conditions, local flow field differences were observed between the models, with the SA model showing particularly poor prediction of local flow structures. The SST-CC showed better prediction of curved rotating flows in the impeller. The RSM-ω was better for the wake and separated flow in the diffuser. The SST model showed reasonably stable, robust and time-efficient capability to predict global and local flow features.
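
For context (not stated in the abstract itself), the three eddy viscosity models listed above all rest on the Boussinesq approximation, which models the Reynolds stresses through a single scalar eddy viscosity, whereas the Reynolds stress models transport each stress component directly:

-\rho \overline{u_i' u_j'} = \mu_t \left( \frac{\partial U_i}{\partial x_j} + \frac{\partial U_j}{\partial x_i} \right) - \frac{2}{3}\,\rho k\,\delta_{ij}

where \mu_t is the eddy viscosity, k the turbulent kinetic energy and \delta_{ij} the Kronecker delta.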

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

30.00%

Publisher:

Abstract:

The performance of supersonic engine inlets and external aerodynamic surfaces can be critically affected by shock wave / boundary layer interactions (SBLIs), whose severe adverse pressure gradients can cause boundary layer separation. Currently such problems are avoided primarily through the use of boundary layer bleed/suction, which can be a source of significant performance degradation. This study investigates a novel type of flow control device called micro-vortex generators (µVGs), which may offer similar control benefits without the bleed penalties. µVGs have the ability to alter the near-wall structure of compressible turbulent boundary layers to provide increased mixing of high-speed fluid, which improves boundary layer health when it is subjected to a flow disturbance. Due to their small size, µVGs are embedded in the boundary layer, which reduces drag compared to traditional vortex generators, while they are cost-effective, physically robust and do not require a power source. To examine the potential of µVGs, a detailed experimental and computational study of micro-ramps in a supersonic boundary layer at Mach 3 subjected to an oblique shock was undertaken. The experiments employed a flat-plate boundary layer with an impinging oblique shock and downstream total pressure measurements. The moderate Reynolds number of 3,800 based on displacement thickness allowed the computations to use Large Eddy Simulation without a subgrid stress model (LES-nSGS). The LES predictions indicated that the shock changes the structure of the turbulent eddies and the primary vortices generated from the micro-ramp. Furthermore, they generally reproduced the experimentally obtained mean velocity profiles, unlike similarly resolved RANS computations. The experiments and the LES results indicate that the micro-ramps, whose height is h≈0.5δ, can significantly reduce boundary layer thickness and improve downstream boundary layer health as measured by the incompressible shape factor, H. Regions directly behind the ramp centerline tended to have increased boundary layer thickness, indicating the significant three-dimensionality of the flow field. Compared to baseline sizes, smaller micro-ramps yielded improved total pressure recovery. Moving the smaller ramps closer to the shock interaction also reduced the displacement thickness and the separated area. This effect is attributed to decreased wave drag and the closer proximity of the vortex pairs to the wall. In the second part of the study, various types of µVGs are investigated, including micro-ramps and micro-vanes. The results showed that vortices generated by µVGs can partially eliminate shock-induced flow separation and can continue to entrain high momentum flux for boundary layer recovery downstream. The micro-ramps resulted in a thinner downstream displacement thickness in comparison to the micro-vanes. However, the strength of the streamwise vorticity for the micro-ramps decayed faster due to dissipation, especially after the shock interaction. In addition, the close spanwise distance between the vortices for the ramp geometry causes the vortex cores to move upwards from the wall due to induced upwash effects. Micro-vanes, on the other hand, yielded an increased spanwise spacing of the streamwise vortices at the point of formation. This resulted in the streamwise vortices staying closer to the wall with less circulation decay, and the reduction in overall flow separation is attributed to these effects.
Two hybrid concepts, named “thick-vane” and “split-ramp”, were also studied, where the former is a vane with side supports and the latter has a uniform spacing along the centerline of the baseline ramp. These geometries behaved similarly to the micro-vanes in terms of the streamwise vorticity and the ability to reduce flow separation, but are more physically robust than the thin vanes. Next, the effect of Mach number on flow past the micro-ramps (h≈0.5δ) is examined in a supersonic boundary layer at M = 1.4, 2.2 and 3.0, but with no shock waves present. The LES results indicate that micro-ramps have a greater impact at the lower Mach number near the device, but their influence decays faster than in the higher Mach number cases. This may be due to the additional dissipation caused by the primary vortices, whose effective diameter is smaller at the lower Mach number, such that their coherency is easily lost, causing the streamwise vorticity and the turbulent kinetic energy to decay quickly. The normal distance between the vortex core and the wall showed similar growth across cases, indicating a weak correlation with Mach number; however, the spanwise distance between the two counter-rotating cores increases further with lower Mach number. Finally, various µVGs, including the micro-ramp, the split-ramp and a new hybrid concept, the “ramped-vane”, are investigated under normal shock conditions at a Mach number of 1.3. In particular, the ramped-vane was studied extensively by varying its size, the interior spacing of the device and its streamwise position with respect to the shock. The ramped-vane provided increased vorticity compared to the micro-ramp and the split-ramp. This significantly reduced the separation length downstream of the device centerline, where a larger ramped-vane with an increased trailing-edge gap yielded fully attached flow at the centerline of the separation region. The results from coarse-resolution LES studies show that the larger ramped-vane provided the greatest reductions in turbulent kinetic energy and pressure fluctuation downstream of the shock compared to the other devices. Additional benefits include negligible device drag, while reductions in displacement thickness and shape factor were seen compared to the other devices. Increased wall shear stress and pressure recovery were found with the larger ramped-vane in the baseline-resolution LES studies, which also showed decreased amplitudes of the pressure fluctuations downstream of the shock.
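
Since the incompressible shape factor H is used above as the measure of boundary-layer health, a minimal Python sketch of how H = δ*/θ is evaluated from a velocity profile may help; the power-law profiles used here are illustrative, not the study's data.

# Hedged sketch: displacement thickness, momentum thickness and shape factor H
# computed from a sampled boundary-layer velocity profile.
import numpy as np

def shape_factor(y, u, u_edge):
    """Return (delta*, theta, H) for a velocity profile u(y) with edge velocity u_edge."""
    u_norm = u / u_edge
    delta_star = np.trapz(1.0 - u_norm, y)           # displacement thickness
    theta = np.trapz(u_norm * (1.0 - u_norm), y)     # momentum thickness
    return delta_star, theta, delta_star / theta

# Example: a 1/7th-power-law (healthy, attached) profile vs. a more retarded one.
y = np.linspace(1e-4, 1.0, 400)                      # wall-normal coordinate / delta
healthy = y ** (1.0 / 7.0)
retarded = y ** (1.0 / 2.5)
print(shape_factor(y, healthy, 1.0)[2])   # about 1.29, typical attached turbulent value
print(shape_factor(y, retarded, 1.0)[2])  # larger H, closer to separation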

Relevance:

30.00%

Publisher:

Abstract:

Thermal Diagnostics experiments to be carried out on board LISA Pathfinder (LPF) will yield a detailed characterisation of how temperature fluctuations affect the performance of the LTP (LISA Technology Package) instrument, crucial information for future space-based gravitational wave detectors such as the proposed eLISA. Amongst them, the study of temperature gradient fluctuations around the test masses of the Inertial Sensors will also provide information regarding the contribution of Brownian noise, which is expected to limit the LTP sensitivity at frequencies close to 1 mHz during some LTP experiments. In this paper we report on how this kind of Thermal Diagnostics experiment was simulated in the last LPF Simulation Campaign (November 2013), which involved the entire LPF Data Analysis team and used an end-to-end simulator of the whole spacecraft. The simulation campaign was conducted within the framework of the preparation for LPF operations.

Relevance:

30.00%

Publisher:

Abstract:

Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques. Using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to:
• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains could be made from multicore parallelism (section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (section 5.3.5).
To evaluate ZSIM, two types of test circuits were used: 1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators. 2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits. The synthesizer allowed testing of a range of very large circuits, larger than the ones for which it was possible to obtain open source files. The experimental results show that with SIMD acceleration and multicore, ZSIM gained a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles and manages the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself. However, when targeting GPUs, explicit cache management in the program increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both the AMD and Xeon Phi machines. The same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows. The primary achievement of this work was proving that the ZSIM architecture was faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range that was previously publicly available, based on prior work that showed the synthesis technique is valid.
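
A hedged Python sketch of the general idea of a lock-free, SIMD-friendly gate-evaluation pass (structure-of-arrays storage plus gathered, bit-parallel evaluation) is shown below; it illustrates the approach only and is not ZSIM's actual data structure or code.

# Hedged sketch: one topological level of gates stored as flat arrays and evaluated
# with vectorised gathers over packed bit-parallel signal words (no per-gate locks).
import numpy as np

AND, OR, XOR, NOT = 0, 1, 2, 3
op   = np.array([AND, OR, XOR, NOT])      # gate types for one level of the circuit
in_a = np.array([0, 1, 2, 3])             # gather index of each gate's first input
in_b = np.array([1, 2, 3, 3])             # gather index of second input (unused for NOT)

def eval_level(signals, op, in_a, in_b):
    """Evaluate one level of gates; each 64-bit word carries many test vectors."""
    a = signals[in_a]                     # vector gather of input values
    b = signals[in_b]
    out = np.empty_like(a)
    out[op == AND] = (a & b)[op == AND]
    out[op == OR]  = (a | b)[op == OR]
    out[op == XOR] = (a ^ b)[op == XOR]
    out[op == NOT] = (~a)[op == NOT]
    return out

signals = np.array([0xF0F0, 0xFF00, 0x0F0F, 0xAAAA], dtype=np.uint64)
print([hex(v) for v in eval_level(signals, op, in_a, in_b)])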