959 results for Fluid dynamics -- Computer simulation
Abstract:
The aim of this Master's thesis was to examine, by means of computational fluid dynamics, flow-related phenomena and gas dispersion. The thesis is divided into five parts: an introduction, theory, a review of studies on modelling flow in porous media, numerical modelling, and the presentation of results together with conclusions. The beginning of the thesis considers the different experimental, numerical and theoretical methods available for modelling flow in porous media. The literature part reviews previously published semi-empirical and empirical studies on the pressure drop caused by porous media. In the computational fluid dynamics part, numerical models describing the porous medium were built and presented using the commercial FLUENT software. At the end of the work, the results of the theory, the computational fluid dynamics and the experimental studies were evaluated. The results obtained from the three-dimensional numerical modelling of the porous medium appeared promising, and on their basis recommendations were made for future modelling of flow in porous media. Some of the results presented in this thesis will be presented at the 55th Canadian Chemical Engineering Conference in Toronto on 16-19 October 2005 and in an international ASME engineering publication; the work has been accepted for presentation in the computational fluid dynamics (CFD) topic area 'Fundamentals'. In addition, the detailed results of the work will also be submitted to the CES journal.
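The abstract above does not name the semi-empirical pressure-drop correlations reviewed in the thesis; purely as a hedged illustration of that class of models, the sketch below evaluates the Ergun equation for a packed bed. All parameter values are invented examples, not data from the thesis.

```python
# Minimal sketch: pressure drop per unit length of a packed bed via the
# Ergun equation (illustrative values only, not data from the thesis).

def ergun_pressure_gradient(u, d_p, eps, rho, mu):
    """Return dP/L [Pa/m] for superficial velocity u [m/s], particle
    diameter d_p [m], bed porosity eps [-], fluid density rho [kg/m^3]
    and dynamic viscosity mu [Pa s]."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * d_p)
    return viscous + inertial

if __name__ == "__main__":
    # Air at room temperature flowing through a bed of 2 mm spheres.
    dP_per_m = ergun_pressure_gradient(u=0.5, d_p=2e-3, eps=0.4,
                                       rho=1.2, mu=1.8e-5)
    print(f"Ergun pressure gradient: {dP_per_m:.1f} Pa/m")
```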
Abstract:
Reducing emissions has played an important role in the development of internal combustion engines in recent years. Many official bodies are setting new, stricter emission limits. The limits have typically been strictest for the small high-speed diesel engines produced by the automotive industry, but recently pressure has also been directed at larger medium-speed and low-speed diesel engines. The limits differ depending on the engine type, the fuel used and the location where the engine is operated, owing to differing local laws and regulations. In diesel engine emissions, the most attention must be paid to nitrogen oxides, smoke formation and particulates. Computational fluid dynamics (CFD) offers good possibilities for studying the in-cylinder phenomena of a diesel engine during combustion. CFD is a useful tool for evaluating engine performance and emission formation, and it makes it possible to test the effect of different parameters and geometries without expensive engine test runs. CFD can also be used for educational purposes to increase understanding of the combustion process. In the future, combustion simulations with CFD will undoubtedly be an important part of engine development. In this thesis, combustion simulations were carried out for two Wärtsilä medium-speed diesel engines equipped with different fuel injection systems. The injection system of the W46 engine is a conventional mechanically controlled pump injector, whereas the W46-CR engine has an electronically controlled common rail injection system. In addition to these engines and the injection profiles in current use, various new injection profiles were tested in the simulations to reveal the strengths and weaknesses of different profile types. At low load the focus is on soot formation, and at full load on NOx formation and fuel consumption. The simulation results showed that soot formation at low load can be clearly reduced with multi-stage injection, in which a single injection event is divided into two or more pulses; so-called post injection appears to be particularly effective at reducing soot. Low NOx emissions and good fuel consumption at full load can be achieved with a gradually increasing injection rate.
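The injection rate shapes discussed above (single, split with post injection, ramped) can be represented as simple time series; the hedged sketch below illustrates this for a split injection with a post pulse. Timings and rates are invented for illustration and are not Wärtsilä engine data.

```python
import numpy as np

# Hedged sketch: representing fuel injection rate shapes as piecewise-constant
# time series. All durations and rates are illustrative, not engine data.

def rate_profile(t, pulses):
    """pulses is a list of (start_s, end_s, rate_kg_per_s) tuples."""
    q = np.zeros_like(t)
    for start, end, rate in pulses:
        q[(t >= start) & (t < end)] = rate
    return q

t = np.linspace(0.0, 8e-3, 801)                      # 8 ms injection window
dt = t[1] - t[0]
single = rate_profile(t, [(0.5e-3, 4.5e-3, 0.05)])
split_with_post = rate_profile(t, [(0.5e-3, 2.5e-3, 0.05),   # first main pulse
                                   (3.0e-3, 4.5e-3, 0.05),   # second main pulse
                                   (5.5e-3, 6.0e-3, 0.03)])  # post injection
print("injected mass, single pulse:  %.4g kg" % (single.sum() * dt))
print("injected mass, split + post:  %.4g kg" % (split_with_post.sum() * dt))
```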
Abstract:
The flexibility of different regions of HIV-1 protease was examined by using a database consisting of 73 X-ray structures that differ in terms of sequence, ligands or both. The root-mean-square differences of the backbone for the set of structures were shown to have the same variation with residue number as those obtained from molecular dynamics simulations, normal mode analyses and X-ray B-factors. This supports the idea that observed structural changes provide a measure of the inherent flexibility of the protein, although specific interactions between the protease and the ligand play a secondary role. The results suggest that the potential energy surface of the HIV-1 protease is characterized by many local minima with small energetic differences, some of which are sampled by the different X-ray structures of the HIV-1 protease complexes. Interdomain correlated motions were calculated from the structural fluctuations and the results were also in agreement with molecular dynamics simulations and normal mode analyses. Implications of the results for the drug-resistance engendered by mutations are discussed briefly.
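A hedged sketch of the kind of per-residue fluctuation and correlated-motion analysis described above, computed across an ensemble of superimposed structures; random placeholder coordinates stand in for the 73 aligned HIV-1 protease X-ray structures.

```python
import numpy as np

# Hedged sketch: per-residue root-mean-square fluctuation (RMSF) of backbone
# atoms across an ensemble of already-superimposed structures, plus the
# normalized covariance used to read off correlated motions. Random placeholder
# coordinates stand in for the 73 aligned HIV-1 protease X-ray structures.

rng = np.random.default_rng(0)
n_structures, n_residues = 73, 99
# coords[s, j] = (x, y, z) of the C-alpha of residue j in structure s
coords = rng.normal(size=(n_structures, n_residues, 3))

mean_coords = coords.mean(axis=0)            # average structure
disp = coords - mean_coords                  # per-structure displacements
rmsf = np.sqrt((disp ** 2).sum(axis=2).mean(axis=0))

# Correlated motions: C[j, l] = <dr_j . dr_l> / sqrt(<dr_j^2> <dr_l^2>)
dots = np.einsum("sjd,sld->jl", disp, disp) / n_structures
corr = dots / np.sqrt(np.outer(np.diag(dots), np.diag(dots)))

print("most flexible residue (placeholder data):", int(rmsf.argmax()) + 1)
print("correlation matrix shape:", corr.shape)
```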
Abstract:
Self-categorization theory is a social psychology theory dealing with the relation between the individual and the group. It explains group behaviour through the conception of self and others as members of social categories, and through the attribution of the prototypical characteristics of these categories to individuals. Hence, it is a theory of the individual that intends to explain collective phenomena. Situations involving a large number of non-trivially interacting individuals typically generate complex collective behaviours, which are difficult to anticipate on the basis of individual behaviour. Computer simulation of such systems is a reliable way of systematically exploring the dynamics of the collective behaviour as a function of individual specifications.
In this thesis, we present a formal model of the part of self-categorization theory known as the metacontrast principle. Given the distribution of a set of individuals on one or several comparison dimensions, the model generates categories and their associated prototypes. We show that the model behaves coherently with respect to the theory and is able to replicate experimental data concerning various group phenomena, for example polarization. Moreover, it makes it possible to describe systematically the predictions of the theory from which it is derived, especially in novel situations. At the collective level, several dynamics can be observed, among them convergence towards consensus, towards fragmentation or towards the emergence of extreme attitudes. We also study the effect of the social network on the dynamics and show that, except for the convergence speed, which increases as the mean distances on the network decrease, the observed convergence types depend little on the chosen network. We further note that individuals located at the border of the groups (whether in the social network or spatially) have a decisive influence on the outcome of the dynamics. In addition, the model can be used as an automatic classification algorithm. It identifies prototypes around which groups are built. Prototypes are positioned so as to accentuate the groups' typical characteristics and are not necessarily central. Finally, if we consider the set of pixels of an image as individuals in a three-dimensional color space, the model provides a filter that can attenuate noise, help detect objects and simulate perception biases such as chromatic induction.
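The metacontrast principle mentioned above compares differences between categories with differences within a category; the minimal sketch below computes the metacontrast ratio of one candidate category on a single comparison dimension. The positions and grouping are invented, and the model in the thesis is considerably richer.

```python
import numpy as np

# Minimal sketch (not the thesis implementation): the metacontrast ratio of a
# candidate category on one comparison dimension is the mean absolute
# difference between members and non-members divided by the mean absolute
# difference among the members themselves.

def metacontrast_ratio(positions, members):
    """positions: 1-D array of individual positions; members: boolean mask.
    Assumes at least two members and one non-member."""
    inside = positions[members]
    outside = positions[~members]
    inter = np.abs(inside[:, None] - outside[None, :]).mean()
    n = len(inside)
    intra = np.abs(inside[:, None] - inside[None, :]).sum() / (n * (n - 1))
    return inter / intra

positions = np.array([0.10, 0.20, 0.25, 0.80, 0.85, 0.90])
members = np.array([True, True, True, False, False, False])
print("metacontrast ratio of the left cluster: %.2f"
      % metacontrast_ratio(positions, members))
```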
Abstract:
Process development will be largely driven by the main equipment suppliers. The reason for this development is their ambition to supply complete plants or process systems instead of single pieces of equipment. The pulp and paper companies' interest lies in product development, as their main goal is to create winning brands and effective brand management. Design engineering companies will find their niche in detail engineering based on approved process solutions. Their development work will focus on increasing the efficiency of engineering work. Process design is a content-producing profession, which requires certain special characteristics: creativity, carefulness, the ability to work as a member of a design team according to time schedules and fluency in oral as well as written presentation. In the future, process engineers will increasingly need knowledge of chemistry as well as information and automation technology. Process engineering tools are developing rapidly. At the moment, these tools are good enough for static sizing and balancing, but dynamic simulation tools are not yet good enough for the complicated chemical reactions of pulp and paper chemistry. Dynamic simulation and virtual mill models are used as tools for training the operators. Computational fluid dynamics will certainly gain ground in process design.
Abstract:
The purpose of this study was to investigate some important features of granular flows and suspension flows by computational simulation methods. Granular materials have been considered an independent state of matter because of their complex behaviors. They sometimes behave like a solid, sometimes like a fluid, and sometimes contain both phases in equilibrium. The computer simulation of dense shear granular flows of monodisperse, spherical particles shows that the collisional model of contacts yields the coexistence of solid and fluid phases, while the frictional model represents a uniform flow of the fluid phase. A comparison between the stress signals from the simulations and experiments revealed that the collisional model results in a better match with the experimental evidence. Although the effect of gravity is found to be important in the sedimentation of the solid part, the stick-slip behavior associated with the collisional model looks more similar to that of the experiments. The mathematical formulations based on the kinetic theory have been derived for moderate solid volume fractions under the assumption of a homogeneous flow. In order to perform simulations that provide such an ideal flow, unbounded granular shear flows were simulated, so that homogeneous flow properties could be achieved at moderate solid volume fractions. A new algorithm, the nonequilibrium approach, was introduced to show the features of self-diffusion in granular flows. Using this algorithm, a one-way flow can be extracted from the entire flow, which not only provides a straightforward calculation of the self-diffusion coefficient but can also qualitatively determine the deviation of self-diffusion from the linear law in some regions near the wall in bounded flows. The average lateral self-diffusion coefficient calculated by this method showed good agreement with the predictions of the kinetic theory formulation. In continuation of the computer simulation of shear granular flows, numerical and theoretical investigations were carried out on mass transfer and particle interactions in particulate flows. In this context, the boundary element method and its combination with the spectral method, using the special capabilities of wavelets, are introduced as efficient numerical methods for solving the governing equations of mass transfer in particulate flows. A theoretical formulation of fluid dispersivity in suspension flows revealed that the fluid dispersivity depends on the fluid properties and particle parameters as well as on the fluid-particle and particle-particle interactions.
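A hedged sketch of the standard extraction of a lateral self-diffusion coefficient from the mean-square displacement of particles, D = MSD/(2t) in one dimension; the synthetic random-walk trajectories are placeholders for simulation output, and the thesis's nonequilibrium approach differs in how the particle subset is selected.

```python
import numpy as np

# Hedged sketch: lateral self-diffusion coefficient from the slope of the
# ensemble-averaged mean-square displacement, MSD = 2*D*t in one dimension.
# The synthetic random-walk trajectories merely stand in for simulation output.

rng = np.random.default_rng(1)
n_particles, n_steps, dt, true_D = 500, 2000, 1e-4, 2.5e-3
steps = rng.normal(0.0, np.sqrt(2.0 * true_D * dt), size=(n_particles, n_steps))
y = np.cumsum(steps, axis=1)                 # lateral positions over time

t = dt * np.arange(1, n_steps + 1)
msd = (y ** 2).mean(axis=0)                  # ensemble-averaged MSD
slope = np.polyfit(t, msd, 1)[0]             # linear fit MSD ~ 2*D*t
print(f"estimated D = {slope / 2.0:.3e}, input D = {true_D:.3e}")
```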
Abstract:
Neuronal dynamics are fundamentally constrained by the underlying structural network architecture, yet much of the details of this synaptic connectivity are still unknown even in neuronal cultures in vitro. Here we extend a previous approach based on information theory, the Generalized Transfer Entropy, to the reconstruction of connectivity of simulated neuronal networks of both excitatory and inhibitory neurons. We show that, due to the model-free nature of the developed measure, both kinds of connections can be reliably inferred if the average firing rate between synchronous burst events exceeds a small minimum frequency. Furthermore, we suggest, based on systematic simulations, that even lower spontaneous inter-burst rates could be raised to meet the requirements of our reconstruction algorithm by applying a weak spatially homogeneous stimulation to the entire network. By combining multiple recordings of the same in silico network before and after pharmacologically blocking inhibitory synaptic transmission, we then show how it becomes possible to infer with high confidence the excitatory or inhibitory nature of each individual neuron.
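As a hedged illustration of the information-theoretic quantity underlying the approach, the sketch below estimates a plain transfer entropy between two binary time series from histogram probabilities; the Generalized Transfer Entropy of the paper adds burst conditioning and other refinements not shown here.

```python
import numpy as np
from collections import Counter

# Hedged sketch: histogram estimate of a plain transfer entropy TE(Y -> X) for
# binary time series with history length 1. The Generalized Transfer Entropy of
# the paper adds burst conditioning and longer histories; this shows only the
# underlying quantity.

def transfer_entropy(x, y):
    n = len(x) - 1
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_{t+1}, x_t, y_t)
    pairs = Counter(zip(x[1:], x[:-1]))             # (x_{t+1}, x_t)
    hist_x = Counter(x[:-1])                        # x_t
    hist_xy = Counter(zip(x[:-1], y[:-1]))          # (x_t, y_t)
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / hist_xy[(x0, y0)]
        p_cond_x = pairs[(x1, x0)] / hist_x[x0]
        te += p_joint * np.log2(p_cond_xy / p_cond_x)
    return te

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 10_000)
x = np.roll(y, 1)               # x reproduces y with a one-step delay
x[0] = 0
print("TE(Y -> X) ~ %.2f bits" % transfer_entropy(x, y))   # close to 1 bit
print("TE(X -> Y) ~ %.2f bits" % transfer_entropy(y, x))   # close to 0 bits
```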
Abstract:
It has been convincingly argued that computer simulation modeling differs from traditional science. If we understand simulation modeling as a new way of doing science, the manner in which scientists learn about the world through models must also be considered differently. This article examines how researchers learn about environmental processes through computer simulation modeling. Proposing a conceptual framework anchored in a performative philosophical approach, we examine two modeling projects undertaken by research teams in England, both aiming to inform flood risk management. One of the modeling teams operated in the research wing of a consultancy firm; the other consisted of university scientists taking part in an interdisciplinary project experimenting with public engagement. We found that in the first context the use of standardized software was critical to the process of improvisation; the obstacles that emerged concerned data and were resolved by exploiting affordances for generating, organizing, and combining scientific information in new ways. In the second context, an environmental competency group, the obstacles related to the computer program, and affordances emerged from combining experience-based knowledge with the scientists' skill, enabling a reconfiguration of the mathematical structure of the model and allowing the group to learn about local flooding.
Abstract:
In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions modelled in the context of the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating the degree of dissociation of the latex functional groups vs. pH curves at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to keep the electroneutrality of the system is required. Here, two approaches are used with the choice depending on the ion selected to maintain electroneutrality: counterion or coion procedures. We compare and discuss the difference between the procedures. The simulations also provided a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL was revealed by plotting the counterion density profiles around charged and neutral surface functional groups. © 2011 American Institute of Physics.
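A heavily simplified, hedged sketch of a single titration (protonation/deprotonation) Monte Carlo move of the kind the semi-grand canonical method relies on; the electrostatic energy change is a stub, and the counterion/coion insertion used in the paper to preserve electroneutrality is omitted.

```python
import numpy as np

# Hedged sketch of a single constant-pH titration Monte Carlo move for one
# ionizable surface group. The electrostatic energy change dU_elec_kT is a stub
# (it would come from the explicit-ion primitive model), and the counterion or
# coion move that restores electroneutrality in the paper is omitted.

LN10 = np.log(10.0)
rng = np.random.default_rng(3)

def titration_move(protonated, pH, pKa, dU_elec_kT):
    """Attempt to flip the protonation state; returns the new state."""
    if protonated:          # attempt deprotonation: COOH -> COO- + H+
        d_chem = -LN10 * (pH - pKa)
    else:                   # attempt protonation: COO- + H+ -> COOH
        d_chem = +LN10 * (pH - pKa)
    acc = min(1.0, np.exp(-(dU_elec_kT + d_chem)))
    return (not protonated) if rng.random() < acc else protonated

# With no electrostatic penalty, groups deprotonate readily when pH > pKa.
state, n_deprotonated, n_steps = True, 0, 5000
for _ in range(n_steps):
    state = titration_move(state, pH=7.0, pKa=4.8, dU_elec_kT=0.0)
    n_deprotonated += (not state)
print("fraction deprotonated: %.2f" % (n_deprotonated / n_steps))
```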
Abstract:
Fluid mixing in mechanically agitated tanks is one of the major unit operations in many industries, and bubbly flows have been of interest to researchers in physics, medicine, chemistry and technology for centuries. The aim of this thesis is to use advanced numerical methods to simulate microbubbles in an aerated mixing tank. The main components of the mixing tank are a cylindrical vessel, a rotating Rushton turbine and an air nozzle. The objective of Computational Fluid Dynamics (CFD) is to predict fluid flow, heat transfer, mass transfer and chemical reactions. The CFD simulations of a turbulent bubbly flow are carried out in a cylindrical mixing tank using large eddy simulation (LES) and the volume of fluid (VOF) method. The flow induced by the Rushton turbine is modeled using a sliding mesh method. The numerical results are used to describe the bubbly flow in a highly complex liquid flow. Some of the experimental work related to turbulent bubbly flow in a mixing tank is briefly reported. Numerical simulations are needed to complete and interpret the results of the experimental work, and the information they provide plays a major role in designing and scaling up mixing tanks. The results of this work have been reported in the following scientific articles: Honkanen M., Koohestany A., Hatunen T., Saarenrinne P., Zamankhan P., "Large eddy simulations and PIV experiments of a two-phase air-water mixer", Proceedings of the ASME Fluids Engineering Summer Conference (2005); and Honkanen M., Koohestany A., Hatunen T., Saarenrinne P., Zamankhan P., "Dynamical States of Bubbling in an Aerated Stirring Tank", submitted to J. Computational Physics.
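The abstract does not report operating conditions; purely as a hedged aside on the kind of quick checks made when setting up such a case, the sketch below computes the impeller Reynolds number and tip speed for an assumed Rushton turbine in water (all numbers invented).

```python
import math

# Hedged sketch: quick regime check for a stirred tank with a Rushton turbine.
# All values are invented for illustration, not the tank of the thesis.

rho, mu = 998.0, 1.0e-3        # water density [kg/m^3] and viscosity [Pa s]
D = 0.10                       # impeller diameter [m]
N = 5.0                        # rotational speed [rev/s]

Re = rho * N * D ** 2 / mu     # impeller Reynolds number
tip_speed = math.pi * N * D    # blade tip speed [m/s]

print(f"Re = {Re:.2e}  ->  {'turbulent' if Re > 1e4 else 'transitional/laminar'}")
print(f"tip speed = {tip_speed:.2f} m/s")
```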
Abstract:
Language extinction as a consequence of language shifts is a widespread social phenomenon that affects several million people all over the world today. An important task for social sciences research should therefore be to gain an understanding of language shifts, especially as a way of forecasting the extinction or survival of threatened languages, i.e., determining whether or not the subordinate language will survive in communities with a dominant and a subordinate language. In general, modeling is usually a very difficult task in the social sciences, particularly when it comes to forecasting the values of variables. However, the cellular automata theory can help us overcome this traditional difficulty. The purpose of this article is to investigate language shifts in the speech behavior of individuals using the methodology of the cellular automata theory. The findings on the dynamics of social impacts in the field of social psychology and the empirical data from language surveys on the use of Catalan in Valencia allowed us to define a cellular automaton and carry out a set of simulations using that automaton. The simulation results highlighted the key factors in the progression or reversal of a language shift and the use of these factors allowed us to forecast the future of a threatened language in a bilingual community.
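A hedged, minimal cellular-automaton sketch in the spirit of the approach described above: each cell speaks language A or B and may shift depending on the local share of speakers weighted by a relative language status. The update rule and parameters are invented for illustration and are not the automaton defined in the article.

```python
import numpy as np

# Hedged sketch of a language-shift cellular automaton: each cell speaks
# language A (1) or B (0); at every step a cell adopts A with a probability
# that grows with the local share of A speakers weighted by the relative
# status of A. Rule and parameters are illustrative, not those of the article.

rng = np.random.default_rng(4)
size, n_steps, status_A = 100, 200, 0.6        # status_A in (0, 1)
grid = (rng.random((size, size)) < 0.3).astype(int)   # 30 % initial A speakers

def local_share(g):
    """Share of A speakers among the 8 neighbours (periodic boundaries)."""
    total = np.zeros_like(g, dtype=float)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                total += np.roll(np.roll(g, dx, axis=0), dy, axis=1)
    return total / 8.0

for _ in range(n_steps):
    share_A = local_share(grid)
    p_to_A = status_A * share_A                  # pressure towards language A
    p_to_B = (1.0 - status_A) * (1.0 - share_A)  # pressure towards language B
    r = rng.random(grid.shape)
    grid = np.where(r < p_to_A, 1, np.where(r < p_to_A + p_to_B, 0, grid))

print("final share of A speakers: %.2f" % grid.mean())
```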
Abstract:
The aim of this thesis is to study the mixing of fuel, and to some extent also the mixing of air, in a circulating fluidized bed boiler. The literature survey part of the thesis reviews previous experimental studies related to fuel and air mixing in circulating fluidized beds. In the simulation part, the commercial computational fluid dynamics software FLUENT is used with the Eulerian multiphase model to study fuel mixing in two- and three-dimensional furnace geometries. The results of the three-dimensional simulations are promising, and suggestions are therefore made for future simulations. The two-dimensional studies give new information on the effects of the fluidization velocity, fuel particle size and fuel density on fuel mixing. However, the present results show that three-dimensional models produce a more realistic representation of circulating fluidized bed behavior.
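As a hedged aside on the operating quantities varied in the parameter study above, the sketch below estimates the minimum fluidization velocity of a fuel particle from the Wen and Yu correlation; particle and gas properties are invented examples, not values from the thesis.

```python
import math

# Hedged sketch: minimum fluidization velocity from the Wen & Yu correlation,
# Re_mf = sqrt(33.7**2 + 0.0408*Ar) - 33.7. Particle and gas properties below
# are illustrative, not values from the thesis.

d_p = 1.0e-3        # fuel particle diameter [m]
rho_p = 1200.0      # fuel particle density [kg/m^3]
rho_g = 0.45        # gas density at furnace temperature [kg/m^3]
mu_g = 4.0e-5       # gas viscosity [Pa s]
g = 9.81            # gravitational acceleration [m/s^2]

Ar = d_p ** 3 * rho_g * (rho_p - rho_g) * g / mu_g ** 2   # Archimedes number
Re_mf = math.sqrt(33.7 ** 2 + 0.0408 * Ar) - 33.7
u_mf = Re_mf * mu_g / (rho_g * d_p)
print(f"Ar = {Ar:.3g}, u_mf = {u_mf:.3f} m/s")
```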
Abstract:
Children who sustain a prenatal or perinatal brain injury in the form of a stroke develop remarkably normal cognitive functions in certain areas, with a particular strength in language skills. A dominant explanation for this is that brain regions from the contralesional hemisphere "take over" their functions, whereas the damaged areas and other ipsilesional regions play much less of a role. However, it is difficult to tease apart whether changes in neural activity after early brain injury are due to damage caused by the lesion or by processes related to postinjury reorganization. We sought to differentiate between these two causes by investigating the functional connectivity (FC) of brain areas during the resting state in human children with early brain injury using a computational model. We simulated a large-scale network consisting of realistic models of local brain areas coupled through anatomical connectivity information of healthy and injured participants. We then compared the resulting simulated FC values of healthy and injured participants with the empirical ones. We found that the empirical connectivity values, especially of the damaged areas, correlated better with simulated values of a healthy brain than those of an injured brain. This result indicates that the structural damage caused by an early brain injury is unlikely to have an adverse and sustained impact on the functional connections, albeit during the resting state, of damaged areas. Therefore, these areas could continue to play a role in the development of near-normal function in certain domains such as language in these children.
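A hedged sketch of the final comparison step described above: correlating the off-diagonal entries of simulated and empirical functional-connectivity matrices. Random symmetric matrices stand in for the real FC data.

```python
import numpy as np

# Hedged sketch: compare a simulated and an empirical functional-connectivity
# (FC) matrix by correlating their upper-triangular entries. Random symmetric
# matrices stand in for real FC data here.

def fc_similarity(fc_a, fc_b):
    iu = np.triu_indices_from(fc_a, k=1)       # off-diagonal upper triangle
    return np.corrcoef(fc_a[iu], fc_b[iu])[0, 1]

rng = np.random.default_rng(5)
n_regions = 66
base = rng.normal(size=(n_regions, n_regions))
fc_empirical = (base + base.T) / 2.0                         # placeholder FC
fc_simulated = fc_empirical + 0.5 * rng.normal(size=base.shape)
fc_simulated = (fc_simulated + fc_simulated.T) / 2.0

print("FC similarity (Pearson r): %.2f"
      % fc_similarity(fc_empirical, fc_simulated))
```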
Abstract:
We present computer simulations of a simple bead-spring model for polymer melts with intramolecular barriers. By systematically tuning the strength of the barriers, we investigate their role in the glass transition. Dynamic observables are analyzed within the framework of the mode coupling theory (MCT). Critical nonergodicity parameters, critical temperatures, and dynamic exponents are obtained from consistent fits of simulation data to MCT asymptotic laws. The so-obtained MCT λ-exponent increases from standard values for fully flexible chains to values close to the upper limit for stiff chains. In analogy with systems exhibiting higher-order MCT transitions, we suggest that the observed large λ-values arise from the interplay between two distinct mechanisms for dynamic arrest: general packing effects and polymer-specific intramolecular barriers. We compare simulation results with numerical solutions of the MCT equations for polymer systems, within the polymer reference interaction site model (PRISM) for static correlations. We verify that the approximations introduced by the PRISM are fulfilled by the simulations, with the same quality over the whole range of investigated barrier strengths. The numerical solutions reproduce the qualitative trends of the simulations for the dependence of the nonergodicity parameters and critical temperatures on the barrier strength. In particular, increasing the barrier strength at fixed density increases the localization length and the critical temperature. However, the qualitative agreement between theory and simulation breaks down in the limit of stiff chains. We discuss the possible origin of this feature.
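A hedged sketch of one of the fits mentioned above: extracting the MCT critical temperature Tc and exponent gamma from relaxation times via the asymptotic law tau ~ (T - Tc)^(-gamma). The data are synthetic, generated with known parameters, purely to illustrate the fitting step.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fitting the MCT asymptotic law tau = A*(T - Tc)**(-gamma) to
# relaxation times. The data are synthetic, generated with known parameters,
# purely to illustrate the fitting step mentioned in the abstract.

def mct_law(T, A, Tc, gamma):
    return A * (T - Tc) ** (-gamma)

true_A, true_Tc, true_gamma = 1.0, 0.40, 2.3
T = np.linspace(0.46, 0.70, 12)
rng = np.random.default_rng(6)
tau = mct_law(T, true_A, true_Tc, true_gamma) * rng.lognormal(0.0, 0.05, T.size)

# Bounds keep Tc below the lowest simulated temperature during the fit.
popt, _ = curve_fit(mct_law, T, tau, p0=(1.0, 0.35, 2.0),
                    bounds=([0.0, 0.30, 0.5], [10.0, 0.45, 5.0]))
print("fitted Tc = %.3f (true %.2f), gamma = %.2f (true %.2f)"
      % (popt[1], true_Tc, popt[2], true_gamma))
```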
Abstract:
Controlling the thermal conditions of rooms is an important part of building services design. Room thermal conditions are usually modelled with methods in which the thermal dynamics of the room air are calculated at a single node and those of the structures wall by wall, and usually only the room air temperature is examined. The aim of this thesis was to develop a simulation model for room thermal conditions in which the thermal dynamics of the structures are calculated transiently with an energy analysis program and the room air flow field is modelled at a selected instant as a steady state with computational fluid dynamics. This yields distributions of the quantities relevant to design, typically air temperature and velocity, over the flow field. The results of the simulation model were compared with measurements made in test rooms and proved accurate enough for building services design. Two rooms requiring more detailed modelling than usual were simulated with the model. Comparative calculations were made with different turbulence models, discretization accuracies and grid densities. To illustrate the simulation results, a customer report presenting the issues essential to design was devised. The simulation model provided additional information especially on temperature stratification, which has typically been estimated on the basis of experience. As background to the development of the simulation model, the indoor climate of buildings, thermal conditions, calculation methods and commercial programs suitable for the modelling are discussed. The simulation model provides more accurate and detailed information for the design of thermal condition control. Remaining problems in using the model are the long computation time of the flow simulation, turbulence modelling, accurate specification of the boundary conditions of supply air devices, and convergence of the calculation. The developed simulation model offers a good basis for developing and integrating flow simulation and energy analysis programs into a user-friendly building services design tool.
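A hedged, highly schematic sketch of the coupling described above: a transient energy-balance calculation for the structures supplies wall surface temperatures at a chosen instant, which are then handed to a single steady-state CFD solve of the room air. Both solver functions below are placeholders, not the interfaces of any real program.

```python
import math

# Hedged, schematic sketch of the coupling described in the thesis: a transient
# energy analysis of the structures supplies wall surface temperatures, and the
# values at a chosen instant become boundary conditions for one steady-state
# CFD solve of the room air. Both functions are placeholders (assumptions),
# not interfaces of any real energy-analysis or CFD program.

def transient_energy_analysis(hours, dt_h=0.5):
    """Placeholder: list of (time_h, {wall name: surface temperature degC})."""
    steps = int(hours / dt_h) + 1
    return [(i * dt_h,
             {f"wall_{k}": 21.0 + 2.0 * math.sin(2.0 * math.pi * (i * dt_h + 3 * k) / 24.0)
              for k in range(4)})
            for i in range(steps)]

def steady_state_cfd(wall_temperatures):
    """Placeholder: a real call would impose these as boundary conditions and
    return temperature and velocity fields; here we only echo a summary."""
    mean_wall = sum(wall_temperatures.values()) / len(wall_temperatures)
    return {"mean_wall_temp_degC": round(mean_wall, 2)}

history = transient_energy_analysis(hours=24.0)
t_design = 15.0                                    # instant of interest [h]
t_near, walls = min(history, key=lambda item: abs(item[0] - t_design))
print("boundary conditions taken at t = %.1f h" % t_near)
print("CFD summary:", steady_state_cfd(walls))
```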