993 results for Computational experiment
Abstract:
Shear layers shed by aircraft wings roll up into vortices. A similar, though far less common, phenomenon can occur in the wake of a turbomachine blade. This paper presents experimental data from a new single-stage turbine that has been commissioned at the Whittle Laboratory. Two low aspect ratio stators have been tested with the same rotor row. Surface flow visualisation illustrates the extremely strong secondary flows present in both NGV designs. These secondary flows lead to conventional passage vortices but also to an intense vortex sheet which is shed from the trailing edge of the blades. Pneumatic probe traverses show how this sheet rolls up into a concentrated vortex in the second stator design, but not in the first. A simple numerical experiment is used to model the shear layer instability, and the effects of trailing edge shape and exit yaw angle distribution are investigated. It is found that the latter has a strong influence on shear layer rollup: inhibiting the formation of a vortex downstream of NGV 1 but encouraging it behind NGV 2.
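The abstract does not specify the form of the "simple numerical experiment"; as a purely illustrative sketch (not the authors' method), a classical regularised point-vortex ("vortex blob") discretisation of the shed sheet reproduces the kind of rollup the paper describes. All parameter values below are assumptions.

```python
import numpy as np

# Regularised point-vortex ("vortex blob") sketch of shear-layer rollup.
# A vortex sheet is discretised into N point vortices; a small sinusoidal
# perturbation seeds the Kelvin-Helmholtz instability and the sheet rolls
# up into a concentrated vortex.

N = 200                 # vortices discretising the sheet (assumed)
gamma = 1.0 / N         # circulation per vortex: unit total sheet strength
delta = 0.05            # blob smoothing parameter regularising the sheet
dt, steps = 0.01, 500   # Euler time step and step count (assumed)

x0 = np.linspace(0.0, 1.0, N, endpoint=False)
z = x0 + 1j * 0.01 * np.sin(2.0 * np.pi * x0)   # perturbed sheet positions

for _ in range(steps):
    dz = z[:, None] - z[None, :]        # pairwise separations
    r2 = np.abs(dz) ** 2 + delta ** 2   # regularised squared distance
    np.fill_diagonal(r2, np.inf)        # no self-induction
    # Biot-Savart sum: u - iv induced at each vortex by all the others
    u_minus_iv = (-1j * gamma / (2.0 * np.pi)) * np.sum(np.conj(dz) / r2, axis=1)
    z = z + np.conj(u_minus_iv) * dt    # advect: dz/dt = u + iv

# Plotting z.real against z.imag now shows the sheet wound into a spiral.
```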
Abstract:
The DDES method is employed to investigate the complex physics involved in supersonic combustion, and in particular in the SCHOLAR scramjet test case. The influence on computational results of prescribing turbulent fluctuations at the entrance to the combustion chamber is investigated. The interaction of shock waves, vortices, turbulence and combustion is studied, and the existence of secondary vortices on the upper wall of the combustor is proposed. © 2012 by Peter Cocks.
Abstract:
Discrete element modeling is being used increasingly to simulate flow in fluidized beds. These models require complex measurement techniques to provide validation for the approximations inherent in the model. This paper introduces the idea of modeling the experiment to ensure that the validation is accurate. Specifically, a 3D, cylindrical gas-fluidized bed was simulated using a discrete element model (DEM) for particle motion coupled with computational fluid dynamics (CFD) to describe the flow of gas. The results for time-averaged axial velocity during bubbling fluidization were compared with those from magnetic resonance (MR) experiments made on the bed. The DEM-CFD data were postprocessed with various methods to produce time-averaged velocity maps for comparison with the MR results, including a method which closely matched the pulse sequence and data processing procedure used in the MR experiments. The DEM-CFD results processed with the MR-type time-averaging closely matched experimental MR results, validating the DEM-CFD model. Analysis of different averaging procedures confirmed that MR time-averages of dynamic systems correspond to particle-weighted averaging, rather than frame-weighted averaging, and also demonstrated that the use of Gaussian slices in MR imaging of dynamic systems is valid. © 2013 American Chemical Society.
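The distinction between particle-weighted and frame-weighted time averages is the crux of the validation argument. A minimal sketch of the two schemes for one spatial bin, with invented toy numbers (not the paper's data):

```python
import numpy as np

# Two ways to time-average the velocity in one spatial bin:
#   frame-weighted:    every frame's mean counts equally
#   particle-weighted: each frame is weighted by its particle count,
#                      mirroring MR signal, which scales with the number
#                      of nuclei present.

def frame_weighted(v_t, n_t):
    valid = n_t > 0                      # skip empty frames
    return v_t[valid].mean()

def particle_weighted(v_t, n_t):
    return np.sum(v_t * n_t) / np.sum(n_t)

# Toy example: frames crowded with slow particles pull the
# particle-weighted mean below the frame-weighted one.
v_t = np.array([0.10, 0.10, 0.50])   # mean axial velocity per frame (m/s)
n_t = np.array([90, 90, 10])         # particles in the bin per frame
print(frame_weighted(v_t, n_t))      # 0.233...
print(particle_weighted(v_t, n_t))   # 0.121...
```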
Abstract:
We consider the question "How should one act when the only goal is to learn as much as possible?" Building on the theoretical results of Fedorov [1972] and MacKay [1992], we apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. We demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. We conclude that, while not a panacea, OED-based query/action selection has much to offer, especially in domains where its high computational costs can be tolerated.
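As a hedged illustration of OED-style query selection (not the paper's implementation, which builds on MacKay's analytic variance estimates for neural networks), the sketch below uses a polynomial learner and ensemble disagreement as a stand-in for predictive variance; all names and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_member(x, y, deg=3, noise=0.05):
    # Each ensemble member fits jittered targets: a cheap stand-in for
    # the posterior uncertainty that MacKay-style OED computes analytically.
    return np.polyfit(x, y + rng.normal(0.0, noise, len(y)), deg)

def select_query(x_train, y_train, candidates, n_members=20):
    members = [fit_member(x_train, y_train) for _ in range(n_members)]
    preds = np.array([np.polyval(m, candidates) for m in members])
    # Query where the ensemble disagrees most: the most informative point.
    return candidates[np.argmax(preds.var(axis=0))]

f = lambda x: np.sin(3.0 * x)            # unknown system being probed
x_train = np.linspace(0.0, 2.0, 4)
y_train = f(x_train)
candidates = np.linspace(0.0, 2.0, 200)

for _ in range(10):
    x_new = select_query(x_train, y_train, candidates)
    x_train = np.append(x_train, x_new)  # "run" the chosen experiment
    y_train = np.append(y_train, f(x_new))
```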
Abstract:
Full-scale furnished cabin fires have been studied experimentally for the purpose of characterising the post-crash cabin fire environment by the US Federal Aviation Administration for many years. In this paper the Computational Fluid Dynamics fire field model SMARTFIRE is used to simulate one of these fires, conducted in the C-133 test facility, in order to provide further validation of the computational approach and the SMARTFIRE software. The experiment involves exposing the interior cabin materials to an external fuel fire, opening only one exit at the far end of the cabin (the same side as the rupture) for ventilation, and noting the subsequent spread of the external fire to the cabin interior and the onset of flashover at approximately 210 seconds. Through this analysis, the software is shown to be in good agreement with the experimental data, reproducing the fire dynamics prior to flashover reasonably well and producing a reasonable prediction of the flashover time, i.e. 225 seconds. The paper then proceeds to utilize the model to examine the impact on flashover time of the extent of cabin furnishings and the cabin ventilation provided by available exits.
Abstract:
Six challenges are discussed. These are the laser-driven helium atom; the laser-driven hydrogen molecule and hydrogen molecular ion; electron scattering (with ionization) from one-electron atoms; the vibrational and rotational structure of molecules such as H3+ and water at their dissociation limits; laser-heated clusters; and quantum degeneracy and Bose-Einstein condensation. The first four concern fundamental few-body systems where use of high-performance computing (HPC) is currently making possible accurate modelling from first principles. This leads to reliable predictions and support for laboratory experiment as well as true understanding of the dynamics. Important aspects of these challenges addressable only via a terascale facility are set out. Such a facility makes the last two challenges in the above list meaningfully accessible for the first time, and the scientific interest together with the prospective role for HPC in these is emphasized.
Abstract:
The design of medical devices could be very much improved if robust tools were available for computational simulation of tissue response to the presence of the implant. Such tools require algorithms to simulate the response of tissues to mechanical and chemical stimuli. Available methodologies include those based on the principle of mechanical homeostasis, those which use continuum models to simulate biological constituents, and the cell-centred approach, which models cells as autonomous agents. In the latter approach, cell behaviour is governed by rules based on the state of the local environment around the cell, informed by experiment. Tissue growth and differentiation require simulating many of these cells together. In this paper, the methodology and applications of cell-centred techniques, with particular application to mechanobiology, are reviewed, and a cell-centred model of tissue formation in the lumen of an artery in response to the deployment of a stent is presented. The method is capable of capturing some of the most important aspects of restenosis, including nonlinear lesion growth with time. The approach taken in this paper provides a framework for simulating restenosis; the next step will be to couple it with more patient-specific geometries and quantitative parameter data.
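A minimal sketch of the cell-centred idea described here: cells as lattice agents whose division rule reads a local stimulus field. The rule, parameters and field below are invented placeholders, not the paper's calibrated model.

```python
import numpy as np

# Cell-centred (agent-based) toy: each occupied lattice site is a cell that,
# once per step, may divide into a free neighbouring site with a probability
# set by a local stimulus field (e.g. injury near a stent strut).

rng = np.random.default_rng(1)
L = 50
occupied = np.zeros((L, L), dtype=bool)
occupied[L // 2, L // 2] = True                  # seed cell of the lesion
stimulus = np.full((L, L), 0.1)                  # assumed baseline stimulus
stimulus[L // 2 - 5:L // 2 + 5, :] = 0.5         # injured band: higher drive

NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

for step in range(100):
    for i, j in zip(*np.nonzero(occupied)):
        free = [(i + di, j + dj) for di, dj in NEIGHBOURS
                if 0 <= i + di < L and 0 <= j + dj < L
                and not occupied[i + di, j + dj]]
        # Rule: divide with probability given by the local stimulus, and
        # only if there is space (contact inhibition).
        if free and rng.random() < stimulus[i, j]:
            occupied[free[rng.integers(len(free))]] = True

# occupied.sum() recorded over steps traces a nonlinear lesion growth curve.
```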
Abstract:
Background: Large-scale biological jobs on high-performance computing systems require manual intervention if one or more computing cores on which they execute fail. This places not only a cost on the maintenance of the job, but also a cost on the time taken for reinstating the job, and a risk of losing data and execution accomplished by the job before it failed. Approaches which can proactively detect computing core failures and take action to relocate the computing core's job onto reliable cores can make a significant step towards automating fault tolerance. Method: This paper describes an experimental investigation into the use of multi-agent approaches for fault tolerance. Two approaches are studied, the first at the job level and the second at the core level. The approaches are investigated for single core failure scenarios that can occur in the execution of parallel reduction algorithms on computer clusters. A third approach is proposed that incorporates multi-agent technology at both the job and core level. Experiments are pursued in the context of genome searching, a popular computational biology application. Result: The key conclusion is that the approaches proposed are feasible for automating fault tolerance in high-performance computing systems with minimal human intervention. In a typical experiment in which fault tolerance is studied, centralised and decentralised checkpointing approaches on average add 90% to the actual time for executing the job. In the same experiment, the multi-agent approaches add only 10% to the overall execution time.
Abstract:
In the Biodiversity World (BDW) project we have created a flexible and extensible Web Services-based Grid environment for biodiversity researchers to solve problems in biodiversity and analyse biodiversity patterns. In this environment, heterogeneous and globally distributed biodiversity-related resources such as data sets and analytical tools are made available to be accessed and assembled by users into workflows to perform complex scientific experiments. One such experiment is bioclimatic modelling of the geographical distribution of individual species using climate variables in order to predict past and future climate-related changes in species distribution. Data sources and analytical tools required for such analysis of species distribution are widely dispersed, available on heterogeneous platforms, present data in different formats and lack interoperability. The BDW system brings all these disparate units together so that the user can combine tools with little thought as to their availability, data formats and interoperability. The current Web Services-based Grid environment enables execution of the BDW workflow tasks in remote nodes, but with a limited scope. The next step in the evolution of the BDW architecture is to enable workflow tasks to utilise computational resources available within and outside the BDW domain. We describe the present BDW architecture and its transition to a new framework which provides a distributed computational environment for mapping and executing workflows, in addition to bringing together heterogeneous resources and analytical tools.
Abstract:
Despite many decades investigating scalp-recordable 8–13-Hz (alpha) electroencephalographic activity, no consensus has yet emerged regarding its physiological origins or its functional role in cognition. Here we outline a detailed, physiologically meaningful theory for the genesis of this rhythm that may provide important clues to its functional role. In particular we find that electroencephalographically plausible model dynamics, obtained with physiologically admissible parameterisations, reveal a cortex perched on the brink of stability, which, when perturbed, gives rise to a range of unanticipated complex dynamics that include 40-Hz (gamma) activity. Preliminary experimental evidence, involving the detection of weak nonlinearity in resting EEG using an extension of the well-known surrogate data method, suggests that nonlinear (deterministic) dynamics are more likely to be associated with weakly damped alpha activity. Thus rather than the "alpha rhythm" being an idling rhythm it may be more profitable to conceive of it as a readiness rhythm.
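The standard phase-randomised surrogate data test referred to here can be sketched as follows; the authors' extension is not reproduced, and the statistic and data below are illustrative assumptions. Surrogates keep the signal's power spectrum but destroy nonlinear structure, so a statistic on the original lying outside the surrogate distribution rejects the linear null.

```python
import numpy as np

def phase_randomised_surrogate(x, rng):
    """Same power spectrum as x, random Fourier phases (linear structure kept)."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0        # keep the DC component real
    phases[-1] = 0.0       # and the Nyquist bin, for even-length signals
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def time_asymmetry(x):
    # A simple nonlinear statistic: zero in expectation for any linear
    # Gaussian process, so large values hint at nonlinear dynamics.
    d = np.diff(x)
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

rng = np.random.default_rng(42)
epoch = rng.standard_normal(1024)     # stand-in for one resting-EEG epoch

s0 = time_asymmetry(epoch)
surrogate_stats = [time_asymmetry(phase_randomised_surrogate(epoch, rng))
                   for _ in range(99)]
# Rank-order test: a small p rejects the linear stochastic null hypothesis.
p = (1 + sum(abs(s) >= abs(s0) for s in surrogate_stats)) / 100
```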
Abstract:
Several accounts put forth to explain the flash-lag effect (FLE) rely mainly on either spatial or temporal mechanisms. Here we investigated the relationship between these mechanisms by psychophysical and theoretical approaches. In a first experiment we assessed the magnitudes of the FLE and of temporal-order judgments performed under identical visual stimulation. The results were interpreted by means of simulations of an artificial neural network, which was also employed to make predictions concerning the FLE. The model predicted that a spatio-temporal mislocalisation would emerge from two moving stimuli, one continuous and one abrupt-onset. Additionally, a straightforward prediction of the model was that the magnitude of this mislocalisation should be task-dependent, increasing when the use of the abrupt-onset moving stimulus switches from a temporal marker only to both a temporal and a spatial marker. Our findings confirmed the model's predictions and point to an indissoluble interplay between spatial facilitation and processing delays in the FLE.
Abstract:
This thesis presents and uses the techniques of computational chemistry to explore two different processes induced in human skin by ultraviolet light. The first is the transformation of urocanic acid into an immunosuppressive agent, and the other is the enzymatic action of the 8-oxoguanine glycosylase enzyme. The photochemistry of urocanic acid is investigated by time-dependent density functional theory. Vertical absorption spectra of the molecule in different forms and environments are assigned, and candidate states for the photochemistry at different wavelengths are identified. Molecular dynamics simulations of urocanic acid in the gas phase and in aqueous solution reveal considerable flexibility under experimental conditions, particularly for the cis isomer, where competition between intra- and inter-molecular interactions increases flexibility. A model to explain the observed gas phase photochemistry of urocanic acid is developed, and it is shown that a reinterpretation in terms of a mixture of isomers significantly improves the agreement between theory and experiment and resolves several peculiarities in the spectrum. A model for the photochemistry of urocanic acid in the aqueous phase is then developed, in which two excited states govern the efficiency of photoisomerization. The point of entrance into a conical intersection seam is shown to explain the wavelength dependence of the photoisomerization quantum yield. Finally, some mechanistic aspects of the DNA repair enzyme 8-oxoguanine glycosylase are investigated with density functional theory. It is found that the critical amino acid of the active site can provide catalytic power in several different ways, and that a recent proposal involving an SN1 type of mechanism seems the most efficient.
Abstract:
In this thesis the evolution of methods for analysing techno-social systems will be reported, through an account of the various research experiences undertaken. The first case presented is a study based on data mining of Human Brain Cloud, a dataset of word associations: its validation will be addressed and, also through non-trivial modelling, a better understanding of language properties will be presented. Then a real complex-systems experiment will be introduced: the WideNoise experiment, in the context of the EveryAware European project. The project and the course of the experiment will be illustrated and the data analysis displayed. The Experimental Tribe platform for social computation will then be introduced. It has been conceived to help researchers implement web experiments, and it also aims to catalyse the cumulative growth of experimental methodologies and the standardisation of the tools cited above. In the last part, three further research experiences which already took place on the Experimental Tribe platform will be discussed in detail, from the design of the experiments to the analysis of the results and, eventually, to the modelling of the systems involved. The experiments are: CityRace, on the measurement of human traffic-facing strategies; laPENSOcosì, aiming to unveil the structure of political opinion; and AirProbe, implemented again within the EveryAware project framework, which consisted of monitoring the shift in opinion on air quality of a community informed about local air pollution. At the end, the evolution of the methods for investigating techno-social systems shall emerge, together with the opportunities and the threats offered by this new scientific path.
Abstract:
Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the world-wide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, has proven to be a game changer in the efficiency of data analyses during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable level of adoption by many scientific organizations and beyond. Cloud computing allows access to and use of large, externally owned computing resources shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds to see whether they can provide a complementary approach, or even a valid alternative, to the existing technological solutions based on the Grid. Within the LHC community, several experiments have been adopting Cloud approaches, and the experience of the CMS experiment in particular is of relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. However, other approaches to Cloud usage are being considered and are at the prototype level, such as the work done in this thesis. This effort is of paramount importance for equipping CMS with the capability to elastically and flexibly access and utilize the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend onto on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a use case suited to the CMS experiment's needs. Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapter 4 and Chapter 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing "as a Service". The impact of this work on benchmark CMS physics use cases is also demonstrated.
Abstract:
This tutorial gives a step-by-step explanation of how one uses experimental data to construct a biologically realistic multicompartmental model. Special emphasis is given to the many ways in which this process can be imprecise. The tutorial is intended both for experimentalists who want to get into computer modeling and for computer scientists who use abstract neural network models but are curious about biologically realistic modeling. The tutorial is not dependent on the use of a specific simulation engine, but rather covers the kind of data needed for constructing a model, how those data are used, and potential pitfalls in the process.
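As a flavour of the kind of model such a tutorial builds, here is a hedged two-compartment passive sketch; every parameter value is a placeholder, precisely the sort of number the tutorial says must come from experimental data.

```python
import numpy as np

# Two-compartment passive neuron: each compartment obeys
#   C dV/dt = -(V - E_L)/R_m + I_axial + I_inj,
# with the axial current coupling the neighbouring compartments.
# All parameter values below are illustrative placeholders.

C = 100e-12        # membrane capacitance per compartment (F)
R_m = 100e6        # membrane resistance (ohm)
R_a = 50e6         # axial resistance between compartments (ohm)
E_L = -65e-3       # leak reversal potential (V)
dt, T = 10e-6, 0.1 # forward-Euler step and total simulated time (s)

n_steps = int(T / dt)
V = np.full(2, E_L)              # compartment 0: soma, 1: dendrite
trace = np.empty((n_steps, 2))

for t in range(n_steps):
    # Current pulse injected into the soma between 20 and 60 ms.
    I_inj = np.array([100e-12 if 0.02 < t * dt < 0.06 else 0.0, 0.0])
    I_axial = (V[::-1] - V) / R_a   # current flowing in from the neighbour
    V = V + (-(V - E_L) / R_m + I_axial + I_inj) * dt / C
    trace[t] = V

# trace shows the soma depolarising during the pulse and the dendrite
# following with an attenuated, low-pass-filtered response, as expected
# for a passive cable.
```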