970 results for turbulence modelling theory
Abstract:
In the present work, 52 compounds are described that were tested for COX/LOX inhibition combined with additional hydroxyl radical scavenging properties. It was possible to develop a new synthesis strategy for previously undescribed 4,5-diarylisoselenazoles and to shorten an existing synthesis of isothiazolium chlorides from two steps with moderate yields to a single step with high yield. Several COX inhibitors were identified: MSD4a, MSD4h, MSD5a and MSD5h act as COX-1, COX-2 and 5-LOX inhibitors. Particularly noteworthy is compound MSD5h, which, in addition to COX-1, COX-2 and 5-LOX inhibition, shows slight activity in the hydroxyl radical scavenger assay, has a calculated clog P value of 2.65 and exhibits hardly any toxic properties in the XTT cytotoxicity assay, even at a concentration of 100 µM. Furthermore, it was shown that carboxylic acids are good hydroxyl radical scavengers in our Fenton-reaction-based test system. The potency of the carboxylic acids MSD8b and MSD11j, compared with their inactive corresponding esters MSD8a and MSD11i, prompted investigations of further carboxylic acids and their esters. To probe the mechanism of action, the test system was modified so as to exclude complexation of the iron ions by the carboxylic acids. Using the compounds MSD8b and MSD11j it was demonstrated that they react with the hydroxyl radical without decarboxylating or undergoing other decomposition reactions. In addition to the enzyme-inhibition and hydroxyl-radical-scavenging studies, molecular modelling studies were performed.
The results of the docking studies in COX-1 (1eqg), COX-2 (1cx2) and COX-2 crystal structures mutated into COX-1 (1cx2) lead to a critical assessment of the following approach: it is not necessarily sensible to first design and model structures on the computer and only then synthesize them and test them in enzyme or cell assays. The reason lies in the difficulty of judging how close the chosen model is to reality. The docking studies revealed the very large influence that the co-crystallized ligand of the underlying crystal structure has on the docking results. Because of a too-small co-crystallized ligand in the COX-1 binding pocket, the docking study classified all compounds as not potent, even though some of them were active in the enzyme test system. This could be compensated for by the mutation experiments. The conclusion to draw from these results is that the strategy of choice is to synthesize structures, test them in vitro, and support the structure development with molecular modelling studies.
Abstract:
Basic concepts and definitions relative to Lagrangian Particle Dispersion Models (LPDMs) for the description of turbulent dispersion are introduced. The study focuses on LPDMs that use as input, for the large-scale motion, fields produced by Eulerian models, with the small-scale motions described by Lagrangian Stochastic Models (LSMs). The data of two different dynamical models have been used: a Large Eddy Simulation (LES) and a General Circulation Model (GCM). After reviewing the small-scale closure adopted by the Eulerian model, the development and implementation of appropriate LSMs is outlined. The basic requirement of every LPDM used in this work is its fulfilment of the Well Mixed Condition (WMC). For the dispersion description in the GCM domain, a stochastic model of Markov order 0, consistent with the eddy-viscosity closure of the dynamical model, is implemented. An LSM of Markov order 1, more suitable for shorter timescales, has been implemented for the description of the unresolved motion of the LES fields. Different assumptions on the small-scale correlation time are made. Tests of the LSM on GCM fields suggest that the use of an interpolation algorithm able to maintain analytical consistency between the diffusion coefficient and its derivative is mandatory if the model is to satisfy the WMC. A dynamical time-step selection scheme based on the shape of the diffusion coefficient is also introduced, and the criteria for the integration-step selection are discussed. Absolute and relative dispersion experiments are made with various unresolved-motion settings for the LSM on LES data, and the results are compared with laboratory data. The study shows that the unresolved-turbulence parameterization has a negligible influence on the absolute dispersion, while it affects the contribution of the relative dispersion and meandering to absolute dispersion, as well as the Lagrangian correlation.
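The Markov-order-0 (random displacement) model mentioned above can be sketched in a few lines: the drift correction dK/dz is precisely the term that enforces the Well Mixed Condition for an inhomogeneous diffusivity. The parabolic K(z) profile, domain height and particle count below are illustrative choices, not taken from the thesis.

```python
import numpy as np

def random_displacement_step(z, K, dKdz, dt, rng):
    """One Markov-order-0 (random displacement) step.

    The drift term dK/dz is what enforces the Well Mixed Condition
    for a height-dependent diffusivity K(z)."""
    dW = rng.normal(0.0, np.sqrt(dt), size=z.shape)
    return z + dKdz(z) * dt + np.sqrt(2.0 * K(z)) * dW

# Hypothetical parabolic diffusivity profile on a domain [0, h]
h = 1.0
K = lambda z: 0.1 * z * (h - z) + 1e-4
dKdz = lambda z: 0.1 * (h - 2.0 * z)

rng = np.random.default_rng(0)
z = rng.uniform(0.0, h, 20000)           # start well mixed
for _ in range(500):
    z = random_displacement_step(z, K, dKdz, 1e-3, rng)
    z = np.abs(z)                        # reflect at z = 0
    z = h - np.abs(h - z)                # reflect at z = h

# WMC check: an initially uniform distribution must stay uniform
hist, _ = np.histogram(z, bins=10, range=(0.0, h))
assert hist.max() / hist.min() < 1.2
```

Dropping the `dKdz` drift makes particles accumulate where K is small, which is exactly the WMC violation the interpolation-consistency requirement above guards against.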
Abstract:
I present a new experimental method called Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy (TIR-FCCS). It is a method that can probe hydrodynamic flows near solid surfaces, on length scales of tens of nanometres. Fluorescent tracers flowing with the liquid are excited by evanescent light, produced by epi-illumination through the periphery of a high-NA oil-immersion objective. Due to the fast decay of the evanescent wave, fluorescence only occurs for tracers within ~100 nm of the surface, resulting in very high normal resolution. The time-resolved fluorescence intensity signals from two laterally shifted (in flow direction) observation volumes, created by two confocal pinholes, are independently measured and recorded. The cross-correlation of these signals provides important information about the tracers' motion and thus their flow velocity. Due to the high sensitivity of the method, fluorescent species of different sizes, down to single dye molecules, can be used as tracers. The aim of my work was to build an experimental setup for TIR-FCCS and use it to measure the shear rate and slip length of water flowing on hydrophilic and hydrophobic surfaces. However, extracting these parameters from the measured correlation curves requires a quantitative data analysis. This is not a straightforward task: the complexity of the problem makes it impossible to derive the analytical expressions for the correlation functions that would be needed to fit the experimental data. Therefore, in order to process and interpret the experimental results, I also describe a new numerical method for analysing the acquired auto- and cross-correlation curves: Brownian Dynamics techniques are used to produce simulated auto- and cross-correlation functions and to fit the corresponding experimental data.
I show how to combine detailed and fairly realistic theoretical modelling of the phenomena with accurate measurements of the correlation functions, in order to establish a fully quantitative method to retrieve the flow properties from the experiments. An importance-sampling Monte Carlo procedure is employed to fit the experiments; it provides the optimum parameter values together with their statistical error bars. The approach is well suited for both modern desktop PCs and massively parallel computers; the latter allow the data analysis to be completed within short computing times. I applied this method to study the flow of an aqueous electrolyte solution near smooth hydrophilic and hydrophobic surfaces. Generally, no slip is expected on a hydrophilic surface, while on a hydrophobic surface some slippage may exist. Our results show that on both hydrophilic and moderately hydrophobic (contact angle ~85°) surfaces the slip length is ~10–15 nm or lower, and, within the limitations of the experiments and the model, indistinguishable from zero.
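The core idea of the two-focus cross-correlation, that the peak lag of the cross-correlation between the two shifted detection volumes gives the tracer transit time and hence a velocity, can be illustrated with a toy calculation. This is pure advection of a synthetic photon-count signal with illustrative numbers; the real analysis required the Brownian Dynamics fits described above.

```python
import numpy as np

# Toy two-focus cross-correlation: the signal seen at observation
# volume 1 reappears at volume 2 after the transit time tau = d / v.
rng = np.random.default_rng(1)
dt = 1e-5          # sampling interval, s (illustrative)
d = 1.0e-6         # lateral shift between observation volumes, m
v_true = 2.0e-3    # flow velocity, m/s
lag = int(round(d / (v_true * dt)))        # transit time in samples

s1 = rng.poisson(5.0, 4000).astype(float)  # intensity at volume 1
s2 = np.roll(s1, lag)                      # delayed copy at volume 2

n = s1.size
xcorr = np.correlate(s2 - s1.mean(), s1 - s1.mean(), mode="full")
best = int(np.argmax(xcorr)) - (n - 1)     # lag of the correlation peak
v_est = d / (best * dt)
assert abs(v_est - v_true) / v_true < 0.01
```

In the real experiment diffusion, the evanescent intensity profile and detection noise broaden and skew this peak, which is why simulated correlation functions are fitted instead of a bare peak search.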
Abstract:
Forest models are tools for explaining and predicting the dynamics of forest ecosystems. They simulate forest behaviour by integrating information on the underlying processes in trees, soil and atmosphere. Bayesian calibration is the application of probability theory to parameter estimation. It is a method, applicable to all models, that quantifies output uncertainty and identifies key parameters and variables. This study aims at testing the Bayesian calibration procedure on different types of forest models, to evaluate their performances and the uncertainties associated with them. In particular, we aimed at 1) applying a Bayesian framework to calibrate forest models and test their performances in different biomes and different environmental conditions, 2) identifying and solving structure-related issues in simple models, and 3) identifying the advantages of the additional information made available when calibrating forest models with a Bayesian approach. We applied the Bayesian framework to calibrate the Prelued model on eight Italian eddy-covariance sites in Chapter 2. The ability of Prelued to reproduce the estimated Gross Primary Productivity (GPP) was tested over contrasting natural vegetation types that represented a wide range of climatic and environmental conditions. The issues related to Prelued's multiplicative structure were the main topic of Chapter 3: several different MCMC-based procedures were applied within a Bayesian framework to calibrate the model, and their performances were compared. A more complex model was applied in Chapter 4, focusing on the application of the physiology-based model HYDRALL to the forest ecosystem of Lavarone (IT) to evaluate the importance of additional information in the calibration procedure and its impact on model performance, model uncertainties, and parameter estimation.
Overall, the Bayesian technique proved to be an excellent and versatile tool for calibrating forest models of different structure and complexity, on different kinds and numbers of variables and with different numbers of parameters involved.
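The essence of such a Bayesian calibration can be sketched with a random-walk Metropolis sampler applied to a one-parameter toy "GPP model". The model form, prior, noise level and data below are illustrative stand-ins, not Prelued itself.

```python
import numpy as np

# Toy Bayesian calibration: recover one parameter of gpp = p * light
# from noisy observations with random-walk Metropolis MCMC.
rng = np.random.default_rng(2)
light = np.linspace(0.0, 1.0, 50)
p_true, sigma = 3.0, 0.2
obs = p_true * light + rng.normal(0.0, sigma, light.size)

def log_posterior(p):
    if not 0.0 < p < 10.0:            # uniform prior on (0, 10)
        return -np.inf
    resid = obs - p * light
    return -0.5 * np.sum(resid**2) / sigma**2

p, chain = 1.0, []
lp = log_posterior(p)
for _ in range(5000):
    prop = p + rng.normal(0.0, 0.1)   # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        p, lp = prop, lp_prop
    chain.append(p)

posterior = np.array(chain[1000:])    # discard burn-in
assert abs(posterior.mean() - p_true) < 0.2
```

The posterior sample delivers exactly what the abstract emphasises: a calibrated parameter value together with its quantified uncertainty, rather than a single best fit.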
Abstract:
Numerical modelling was performed to study the dynamics of multilayer detachment folding and salt tectonics. In the case of multilayer detachment folding, analytically derived diagrams show several folding modes, half of which are applicable to crustal-scale folding. 3D numerical simulations are in agreement with 2D predictions, yet fold interactions result in complex fold patterns. Pre-existing salt diapirs change folding patterns as they localize the initial deformation. If diapir spacing is much smaller than the dominant folding wavelength, diapirs appear in fold synclines or limbs. Numerical models of 3D down-building diapirism show that the sedimentation rate controls whether diapirs will form and influences the overall patterns of diapirism. Numerical codes were used to retrodeform modelled salt diapirs. Reverse modelling can retrieve the initial geometries of a 2D Rayleigh-Taylor instability with non-linear rheologies. Although intermediate geometries of down-built diapirs are retrieved, forward and reverse modelling solutions deviate. Finally, the dynamics of fold-and-thrust belts formed over a tilted viscous detachment is studied, and it is demonstrated that mechanical stratigraphy has an impact on the deformation style, switching it from thrust- to folding-dominated. The basal angle of the detachment controls the deformation sequence of the fold-and-thrust belt, and the results are consistent with critical wedge theory.
Abstract:
We present studies of the spatial clustering of inertial particles embedded in turbulent flow. A major part of the thesis is experimental, involving the technique of Phase Doppler Interferometry (PDI). The thesis also includes a significant amount of simulation studies and some theoretical considerations. We describe the details of PDI and explain why it is suitable for the study of particle clustering in turbulent flow with a strong mean velocity. We introduce the radial distribution function (RDF) as our chosen way of quantifying inertial particle clustering and present some original work on foundational and practical considerations related to it. These include methods of treating finite sampling size, the interpretation of the magnitude of the RDF, and the possibility of isolating the RDF signature of inertial clustering from that of large-scale mixing. In the experimental work, we used PDI to observe clustering of water droplets in a turbulent wind tunnel. From that we present, in the form of a published paper, evidence of dynamical similarity (Stokes-number similarity) of inertial particle clustering, together with other results in qualitative agreement with available theoretical predictions and simulation results. We next show detailed quantitative comparisons of results from our experiments, direct numerical simulation (DNS) and theory. Very promising agreement was found for like-sized (mono-disperse) particles. The theory is found to be incorrect regarding clustering of different-sized particles, and we propose an empirical correction based on the DNS and experimental results. Besides this, we also discovered a few interesting characteristics of inertial clustering. Firstly, through observations, we found an intriguing possibility for modelling the RDF arising from inertial clustering with only one (sensitive) parameter. We also found that clustering becomes saturated at high Reynolds numbers.
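A minimal RDF estimator for particle positions in a periodic box illustrates the quantity being measured: the observed pair count in each separation shell, normalized by the count expected for uniformly random particles. This sketch omits the finite-sampling corrections developed in the thesis; the box, bin edges and particle count are illustrative.

```python
import numpy as np

def rdf(pos, box, edges):
    """Radial distribution function g(r) for 2D positions in a
    periodic box: pair counts per shell over the uniform expectation."""
    n = len(pos)
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)               # minimum-image convention
    r = np.sqrt((d**2).sum(-1))[np.triu_indices(n, k=1)]
    counts, _ = np.histogram(r, bins=edges)
    shell = np.pi * (edges[1:]**2 - edges[:-1]**2)   # shell areas
    pair_density = n * (n - 1) / 2.0 / box**2        # pairs per unit area
    return counts / (shell * pair_density)

rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 1.0, (400, 2))          # uniform: no clustering
edges = np.linspace(0.05, 0.45, 9)
g = rdf(pos, 1.0, edges)
# For uniformly random particles the RDF should be ~1 at all separations
assert np.all(np.abs(g - 1.0) < 0.2)
```

Inertially clustered particles would instead give g(r) &gt; 1 at small r, which is the signature the thesis separates from large-scale mixing.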
Abstract:
Stochastic models for three-dimensional particles have many applications in applied sciences. Lévy-based particle models are a flexible approach to particle modelling. The structure of the random particles is given by a kernel smoothing of a Lévy basis. The models are easy to simulate, but statistical inference procedures have not yet received much attention in the literature. The kernel is not always identifiable, and we suggest one approach to remedy this problem. We propose a method to draw inference about the kernel from data often used in local stereology and study the performance of our approach in a simulation study.
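The construction, a kernel smoothing of a Lévy basis, can be sketched in one dimension: the field is a weighted sum of kernel translates at the jump locations of the basis. The compound-Poisson jump law, Gaussian kernel and bandwidth below are illustrative choices (with the Poisson number of jumps idealized as fixed), not the particle models of the paper.

```python
import numpy as np

# 1D sketch of a Levy-based random field: a compound-Poisson Levy
# basis smoothed with a Gaussian kernel, Z(x) = sum_j w_j k(x - x_j).
rng = np.random.default_rng(4)
kernel = lambda x, h=0.1: np.exp(-0.5 * (x / h) ** 2)   # Gaussian kernel

pts = rng.uniform(-1.0, 2.0, 200)      # jump locations of the basis
w = rng.exponential(1.0, pts.size)     # positive jump sizes

x = np.linspace(0.0, 1.0, 101)
# Kernel smoothing: every jump contributes a weighted kernel translate
Z = (w[:, None] * kernel(x[None, :] - pts[:, None])).sum(axis=0)
assert Z.shape == (101,) and np.all(Z > 0.0)
```

The identifiability issue mentioned above is visible here: rescaling the kernel and the jump law in compensating ways can leave the distribution of Z essentially unchanged.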
Abstract:
BACKGROUND Compliance with surgical checklist use remains an obstacle in the context of checklist implementation programs. The theory of planned behaviour was applied to analyse attitudes, perceived behavioural control, and norms as psychological antecedents of individuals' intentions to use the checklist. METHODS A cross-sectional survey study with staff (N = 866) of 10 Swiss hospitals was conducted in German and French. Group mean differences between individuals with and without managerial function were computed. Structural equation modelling and confirmatory factor analysis were applied to investigate the structural relations between attitudes, perceived behavioural control, norms, and intentions. RESULTS Significant mean differences in favour of individuals with managerial function emerged for norms, perceived behavioural control, and intentions, but not for attitudes. Attitudes and perceived behavioural control had a significant direct effect on intentions, whereas norms did not. CONCLUSIONS Individuals with managerial function exhibit stronger perceived behavioural control, stronger norms, and stronger intentions. This could be exploited to facilitate checklist implementation. The structural model of the theory of planned behaviour remains stable across groups, indicating a valid model for describing antecedents of intentions in the context of surgical checklist implementation.
Abstract:
The evolution of porosity due to dissolution/precipitation of minerals and the associated change of transport parameters are of major interest for natural geological environments and engineered underground structures. We designed a reproducible and quick-to-conduct 2D experiment, flexible enough to investigate several process couplings implemented in the numerical code OpenGeoSys-GEM (OGS-GEM). We investigated advective-diffusive transport of solutes, the effect of liquid-phase density on advective transport, and kinetically controlled dissolution/precipitation reactions causing porosity changes. In addition, the system allowed us to investigate the influence of microscopic (pore-scale) processes on macroscopic (continuum-scale) transport. A Plexiglas tank of dimensions 10 × 10 cm was filled with a 1 cm thick reactive layer consisting of a bimodal grain-size distribution of celestite (SrSO4) crystals, sandwiched between two layers of sand. A barium chloride solution was injected into the tank, causing an asymmetric flow field to develop. As the barium chloride reached the celestite region, dissolution of celestite was initiated and barite precipitated. Due to the higher molar volume of barite, its precipitation caused a porosity decrease and thus also a decrease in the permeability of the porous medium. The change of flow in space and time was observed via the injection of conservative tracers and the analysis of effluents. In addition, an extensive post-mortem analysis of the reacted medium was conducted. We could successfully model the flow (with and without fluid density effects) and the transport of conservative tracers with a (continuum-scale) reactive transport model. The prediction of the reactive experiments initially failed; only the inclusion of information from the post-mortem analysis gave a satisfactory match for the case where the flow field changed due to dissolution/precipitation reactions.
We then concentrated on the refinement of the post-mortem analysis and the investigation of the dissolution/precipitation mechanisms at the pore scale. Our analytical techniques combined scanning electron microscopy (SEM) with synchrotron X-ray micro-diffraction/micro-fluorescence performed at the XAS beamline (Swiss Light Source). The newly formed phases include epitaxial growth of barite micro-crystals on large celestite crystals and a nano-crystalline barite phase (resulting from the dissolution of small celestite crystals) with residues of celestite crystals in the pore interstices. Classical nucleation theory, using well-established and estimated parameters describing barite precipitation, was applied to explain the mineralogical changes occurring in our system. Our pore-scale investigation showed the limits of the continuum-scale reactive transport model. Although kinetic effects were implemented by fixing two distinct rates for the dissolution of large and small celestite crystals, instantaneous precipitation of barite was assumed as soon as oversaturation occurred. Precipitation kinetics, passivation of large celestite crystals and metastability of supersaturated solutions, i.e. the conditions under which nucleation cannot occur despite high supersaturation, were neglected. These results will be used to develop a pore-scale model that describes precipitation and dissolution of crystals for various transport and chemical conditions. Pore-scale modelling can be used to parameterize constitutive equations that introduce pore-scale corrections into macroscopic (continuum) reactive transport models. A microscopic understanding of the system is fundamental for modelling from the pore to the continuum scale.
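The porosity-permeability feedback driving the observed flow change can be sketched with the molar volumes of the two minerals and a Kozeny-Carman closure. The closure and the numbers for porosity, permeability and moles replaced are illustrative; the actual relation used in OGS-GEM may differ.

```python
# Celestite -> barite replacement: each mole of celestite dissolved is
# replaced by one mole of barite, whose larger molar volume reduces
# porosity; permeability then drops via a Kozeny-Carman relation.
V_CELESTITE = 46.25e-6   # molar volume of SrSO4, m^3/mol
V_BARITE = 52.10e-6      # molar volume of BaSO4, m^3/mol

def porosity_after_replacement(phi0, mol_replaced_per_m3):
    """Porosity after replacing the given moles of celestite per m^3."""
    return phi0 - mol_replaced_per_m3 * (V_BARITE - V_CELESTITE)

def kozeny_carman(k0, phi0, phi):
    """k/k0 = (phi/phi0)^3 * ((1 - phi0)/(1 - phi))^2."""
    return k0 * (phi / phi0) ** 3 * ((1 - phi0) / (1 - phi)) ** 2

phi0, k0 = 0.35, 1e-12                            # illustrative values
phi = porosity_after_replacement(phi0, 10000.0)   # 10^4 mol/m^3 replaced
k = kozeny_carman(k0, phi0, phi)
assert phi < phi0 and k < k0    # replacement clogs the pore space
```

Because the molar-volume difference is fixed by mineralogy, the porosity loss per mole replaced is a hard constraint; what the pore-scale study adds is when and where that replacement actually occurs.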
Abstract:
The slot antenna arrays are well-known systems dating from the 1940s, originally intended for radar systems of large warships and terrestrial stations where size and weight were not highly restrictive.
Over the years, mainly due to significant advances in materials and manufacturing methods, the range of applications of this type of radiating system has grown significantly: new biomedical technologies, collision-avoidance systems in cars, aircraft navigation, short communication links with high bit rates, and even systems embedded in satellites for television broadcast. Within this family of antennas, two groups stand out as the most frequent in the literature: parallel-plate antennas with slots placed in a circular or spiral distribution, and clusters of waveguide linear arrays. Continuing the extensive research carried out during the last decades at the Tokyo Institute of Technology and in the Radiation Group at the Universidad Politécnica de Madrid, this thesis focuses on the latter group, although it departs substantially from traditional design methodologies. Arrays of slots straight and parallel to the axis of the feeding rectangular waveguide are without doubt the most used models because of their reliability at high frequencies, their ability to handle large amounts of power, and their simplicity of design and manufacturing. However, they also have disadvantages, such as a narrow return-loss bandwidth and rapid degradation of the radiation pattern with frequency. These are due to the resonant nature of the radiating elements: away from resonance, the overall system performance and radiation pattern deteriorate. In two-dimensional arrays of straight slots, the electric field is polarized transverse to the radiators, which corresponds to the plane of high side-lobe level. This thesis aims to develop a systematic method for designing arrays of angled and displaced slots (hereinafter "compound slots"), identified in 1971 as one of the challenges to overcome in the world of antenna design. The technique used is based on the Method of Moments, Circuit Theory and the Theory of Scattering Matrix Connection.
Being a circuit-based method, the first part of this dissertation studies the applicability of the basic equivalent networks, their ability to recreate the physical phenomena of the slot, and the limitations and advantages they present for characterizing the different compound-slot configurations. It delves into the differences between the T and Π networks and determines the selection of the most suitable one depending on the type of radiating element. Once the type of network to be used in the system design is selected, a progressive cascading algorithm called the Forward Matching Procedure has been developed to connect the equivalent networks from the feeding port to the short-circuit termination of the model. This algorithm is independent of the number of elements, the central operating frequency, the angle of inclination of the slots and the selected equivalent network (T or Π). It is based on defining the array design as a Constraint Satisfaction Problem, solved by means of a backtracking algorithm. As a result, the method returns an equivalent circuit of the whole array that is matched at its input port and whose elements consume power according to a given amplitude distribution for the array. In any antenna array, the mutual coupling between elements through the radiated field represents one of the biggest problems the engineer faces, and its effects are detrimental to the overall performance of the system, both in return loss and in radiation capabilities. The use of an equivalent circuit for array design was discarded by some authors because of the difficulty involved in characterizing the coupling effects and including them in the design stage. In this thesis the coupling has also been modelled as an equivalent network, whose elements are ideal transformers and admittances, connected to the set of equivalent networks that represent the antennas of the array.
By comparing the estimated results in terms of return loss and radiation with those obtained from popular commercial software such as CST Microwave Studio, the validity of the proposed method is fully confirmed; it represents the first systematic design method for compound-slot arrays fed by rectangular waveguide. Since these slots do not operate at resonance, the return-loss bandwidth is much wider than that of longitudinal-slot arrays. For two-dimensional arrays, the angle of inclination can be adjusted so that the field is polarized in the plane of low side-lobe level. Besides the full-wave simulations, two prototypes of six and ten elements for the X-band (centred at 12 GHz) have been designed, built and measured, revealing excellent results and agreement with the expected performance. These facts certify that the genuine Method of Moments – Forward Matching Procedure technique developed throughout this thesis is valid and trustworthy.
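The Constraint-Satisfaction-plus-Backtracking skeleton underlying the Forward Matching Procedure can be sketched abstractly: assign each element a value from a finite candidate set, prune partial assignments that can no longer satisfy the constraints, and backtrack from dead ends. The toy constraint below (normalized element powers summing to 1) merely stands in for the real matching conditions of the array.

```python
# Generic backtracking search for a Constraint Satisfaction Problem.
def backtrack(assigned, candidates, consistent, n):
    if len(assigned) == n:
        return assigned                  # complete, consistent assignment
    for value in candidates:
        trial = assigned + [value]
        if consistent(trial, n):         # prune infeasible branches early
            result = backtrack(trial, candidates, consistent, n)
            if result is not None:
                return result
    return None                          # dead end: backtrack one level up

def consistent(trial, n):
    total = sum(trial)
    if len(trial) < n:                   # partial: must still be feasible
        return total < 1.0 + 1e-9
    return abs(total - 1.0) < 1e-9       # complete: powers must sum to 1

candidates = [0.1, 0.2, 0.3, 0.4]        # illustrative per-element powers
solution = backtrack([], candidates, consistent, 4)
assert solution is not None and abs(sum(solution) - 1.0) < 1e-9
```

In the thesis the "values" are slot geometries with their equivalent networks, and the consistency test involves input matching and the prescribed amplitude distribution, but the control flow is this same search.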
Abstract:
Article on railway communications. Abstract: Along with the increase in operating frequencies in advanced radio communication systems used inside tunnels, the location of the break point moves further and further away from the transmitter. This means that the near region lengthens considerably and may even occupy the whole propagation cell, or the entire length of some short tunnels. To begin with, this study analyses the propagation loss resulting from the free-space mechanism and from the multi-mode waveguide mechanism in the near region of circular tunnels. Then, by jointly employing propagation theory and three-dimensional solid geometry, a general analytical model of the dividing point between the two propagation mechanisms is presented for the first time. Moreover, the model is validated by a wide range of measurement campaigns in different tunnels at different frequencies. Finally, simplified formulae for the dividing point in several application situations are discussed. The results of this study can help to grasp the essence of the propagation mechanism inside tunnels.
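As a rough illustration of why the near region lengthens with frequency, the break-point distance in tunnels is often approximated by a Rayleigh-distance-type expression, max(w², h²)/λ for a w × h cross-section. This rule of thumb from the tunnel-propagation literature is used here only as an assumption for illustration; it is not the analytical dividing-point model of this paper.

```python
# Rule-of-thumb break-point distance for a rectangular tunnel cross
# section: higher frequency (smaller wavelength) -> longer near region.
def break_point(width_m, height_m, freq_hz, c=3e8):
    lam = c / freq_hz                    # wavelength, m
    return max(width_m**2, height_m**2) / lam

d_900 = break_point(6.0, 5.0, 900e6)     # 900 MHz: lambda = 1/3 m
d_2400 = break_point(6.0, 5.0, 2.4e9)    # 2.4 GHz: lambda = 0.125 m
assert d_2400 > d_900                    # near region grows with frequency
```

With these illustrative 6 × 5 m dimensions the estimate already exceeds 100 m at 900 MHz, which is why short tunnels can lie entirely inside the near region.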
Abstract:
Modelling of entire wind farms in flat and complex terrain using a full 3D Navier–Stokes solver for incompressible flow is presented in this paper. Numerical integration of the governing equations is performed using an implicit pressure-correction scheme, where the wind turbines (W/Ts) are modelled as momentum absorbers through their thrust coefficient. The k–ω turbulence model, suitably modified for atmospheric flows, is employed for closure. A correction is introduced to account for the underestimation of the near-wake deficit, in which the turbulence time scale is bounded using a general "realizability" constraint for the fluctuating velocities. The second modelling issue discussed in this paper is the determination of the reference wind speed for the thrust calculation of the machines. For large wind farms and wind farms in complex terrain, the reference wind speed is not obvious when a W/T operates in the wake of another W/T and/or in complex terrain. Two alternatives are compared: using the wind speed value at hub height one diameter upstream of the W/T, and adopting an induction-factor-based concept that avoids using a wind speed at a certain distance upwind of the rotor. Application is made to two wind farms: a five-machine one located in flat terrain and a 43-machine one located in complex terrain.
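The induction-factor concept can be sketched with 1-D momentum theory, where C_T = 4a(1 − a): from the thrust coefficient one recovers the axial induction a and converts the local disc velocity u_d = U(1 − a) back to a free-stream reference U, avoiding the need to sample the flow one diameter upstream. This is a sketch of the textbook concept only; the paper's implementation may differ in detail.

```python
import math

def axial_induction(ct):
    """Axial induction factor a from Ct = 4a(1 - a), momentum theory."""
    assert 0.0 <= ct < 1.0
    return 0.5 * (1.0 - math.sqrt(1.0 - ct))

def reference_speed(u_disc, ct):
    """Free-stream reference U from the disc velocity u_d = U(1 - a)."""
    return u_disc / (1.0 - axial_induction(ct))

ct = 0.75                    # illustrative thrust coefficient
a = axial_induction(ct)
assert abs(4.0 * a * (1.0 - a) - ct) < 1e-12   # consistent with Ct = 4a(1-a)
u_ref = reference_speed(6.0, ct)               # disc velocity 6 m/s -> U = 8 m/s
assert u_ref > 6.0
```

The attraction in complex terrain is that both inputs (disc velocity and Ct) are local to the rotor, so the estimate is unaffected by upstream wakes or terrain-induced speed-ups at the sampling point.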
Abstract:
Computational fluid dynamics (CFD) methods are used in this paper to predict the power production of entire wind farms in complex terrain and to shed some light on the wake flow patterns. Two full three-dimensional Navier–Stokes solvers for incompressible fluid flow, employing k–ε and k–ω turbulence closures, are used. The wind turbines are modelled as momentum absorbers by means of their thrust coefficient through the actuator-disk approach. Alternative methods for estimating the reference wind speed in the calculation of the thrust are tested. The work presented in this paper is part of the work being undertaken within the UpWind Integrated Project, which aims to develop the design tools for the next generation of large wind turbines. In this part of UpWind, the performance of wind-farm and wake models is examined in a complex terrain environment where there are few pre-existing relevant measurements. The focus of the work is to evaluate the performance of CFD models in large wind-farm applications in complex terrain and to examine the development of the wakes in such an environment.
Abstract:
Innovation studies have been of interest not only to scholars from various fields such as economics, management and sociology, but also to industrial practitioners and policy makers. In this vast and fruitful field, the theory of diffusion of innovations, which has been driven by a sociological approach, has played a vital role in our understanding of the mechanisms behind industrial change. In this paper, our aim is to give a state-of-the-art review of diffusion-of-innovation models in a structural and conceptual way, with special reference to photovoltaics. We first discuss, as underlying background, how diffusion-of-innovations theory differs from other innovation studies. Secondly, we give a brief taxonomical review of modelling methodologies together with comparative discussions. Finally, we place this wealth of modelling in the context of photovoltaic diffusion and suggest some future directions.
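The canonical quantitative model in this literature is the Bass model, in which the adoption hazard is f(t)/[1 − F(t)] = p + qF(t), with p the innovation and q the imitation coefficient; its cumulative adoption curve has a simple closed form. The p and q values below are illustrative of typical published estimates, not fitted to photovoltaic data.

```python
import numpy as np

def bass_cumulative(t, p, q):
    """Cumulative adoption fraction F(t) of the Bass (1969) model:
    F(t) = (1 - exp(-(p+q)t)) / (1 + (q/p) exp(-(p+q)t))."""
    e = np.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

t = np.linspace(0.0, 30.0, 301)          # 30 periods, e.g. years
F = bass_cumulative(t, p=0.03, q=0.38)   # illustrative coefficients
assert F[0] == 0.0 and F[-1] > 0.99      # starts at zero, saturates
assert np.all(np.diff(F) > 0.0)          # monotone S-shaped adoption
```

The imitation term qF(t) is what produces the characteristic S-curve: adoption accelerates as the installed base grows, then saturates as the remaining pool of non-adopters shrinks.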
Abstract:
Momentum, mass and energy balance laws provide the tools for studying the evolution of an icefield covering a subglacial lake. The ice is described as a non-Newtonian fluid with a power-law constitutive relationship with temperature- and stress-dependent viscosity (Glen's law) [1]. The phase-transition mechanisms at the air/ice and ice/water interfaces yield moving-boundary formulations, and the lake hydrodynamics requires equation reduction for treating the turbulence.
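Glen's law makes the ice shear-thinning: with strain rate proportional to A(T)τⁿ and n ≈ 3, the effective viscosity μ = 1/(2A τ^(n−1)) falls with stress and rises as the ice cools. A minimal sketch follows, using commonly quoted values for the rate factor at −10 °C and the cold-ice activation energy as illustrative assumptions, not parameters taken from the paper.

```python
import math

R = 8.314                 # gas constant, J/(mol K)

def rate_factor(T, A0=3.5e-25, Q=6.0e4):
    """Arrhenius rate factor A(T) in Pa^-3 s^-1, normalized so that
    A(263.15 K) = A0 (a commonly quoted value at -10 C)."""
    return A0 * math.exp(-Q / (R * T)) / math.exp(-Q / (R * 263.15))

def effective_viscosity(tau, T, n=3):
    """Effective viscosity mu = 1 / (2 A(T) tau^(n-1)), Pa s."""
    return 1.0 / (2.0 * rate_factor(T) * tau ** (n - 1))

# Shear thinning: viscosity drops as stress grows; cold ice is stiffer
assert effective_viscosity(1e5, 263.15) > effective_viscosity(2e5, 263.15)
assert effective_viscosity(1e5, 253.15) > effective_viscosity(1e5, 263.15)
```

This stress dependence is why the momentum balance for the icefield is genuinely non-Newtonian: the viscosity field must be solved together with the flow rather than prescribed.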