949 results for Subgrid Scale Model
Abstract:
Dissertation submitted for the degree of Master in Civil Engineering, specialization in Hydraulics.
Abstract:
Scientific dissertation prepared at the Laboratório Nacional de Engenharia Civil (LNEC) for the degree of Master in Civil Engineering, specialization in Hydraulics, under the cooperation protocol between ISEL and LNEC.
Abstract:
To investigate the thermal effects of latent heat in hydrothermal settings, an extension was made to the existing finite-element numerical modelling software, Aquarius. The latent heat algorithm was validated using a series of column models, which analysed the effects of permeability (flow rate), thermal gradient, and position along the two-phase curve (pressure). Increasing the flow rate and pressure increases the displacement of the liquid-steam boundary from an initial position determined without accounting for latent heat, while increasing the thermal gradient decreases that displacement. Application to a regional-scale model of a caldera-hosted hydrothermal system based on a representative suite of calderas (e.g., Yellowstone, Creede, Valles Grande) led to oscillations in the model solution. The oscillations can be reduced or eliminated by mesh refinement, at the cost of greater computational effort. Results indicate that latent heat should be accounted for to accurately model phase-change conditions in hydrothermal settings.
Abstract:
In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points needed to perform the tests were derived for the location-scale model; hence reliance on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of standard linear regressions, based on the technique of Monte Carlo tests.
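The Monte Carlo test idea can be sketched as follows: under the null of normal disturbances, the distribution of a statistic computed from OLS residuals depends on the regressor matrix but not on the unknown coefficients or error variance, so an exact p-value can be obtained by simulation. This is a minimal illustration, not the authors' implementation; the statistic below is a Jarque-Bera-type skewness/kurtosis statistic used as a stand-in, and all names are hypothetical.

```python
import numpy as np

def mc_normality_pvalue(y, X, n_rep=99, seed=0):
    """Monte Carlo p-value for a residual-based normality test.

    Under H0 (normal disturbances), the statistic's distribution
    depends only on X, so it can be simulated exactly.
    """
    rng = np.random.default_rng(seed)
    n = len(y)

    def stat(resid):
        # Jarque-Bera-type skewness/kurtosis statistic (illustrative).
        e = resid - resid.mean()
        s = e.std()
        skew = np.mean(e**3) / s**3
        kurt = np.mean(e**4) / s**4
        return n * (skew**2 / 6 + (kurt - 3)**2 / 24)

    def ols_resid(yy):
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        return yy - X @ beta

    s0 = stat(ols_resid(y))
    # Simulate the statistic under H0: draw N(0,1) disturbances,
    # regress them on the same X, recompute the statistic.
    sims = np.array([stat(ols_resid(rng.standard_normal(n)))
                     for _ in range(n_rep)])
    # Monte Carlo p-value; with n_rep = 99 the test has exactly
    # the nominal size at conventional levels such as 5%.
    return (1 + np.sum(sims >= s0)) / (n_rep + 1)
```

Because the simulated statistics are drawn from the exact null distribution given X, the size of the test is controlled regardless of the regressors, which is the point the abstract makes against relying on location-scale critical tables.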
Abstract:
It is generally accepted that vision plays a predominant role in the formation of spatial representations. What happens, then, when an individual is blind? In the first part of this thesis, the spatial abilities of blind people were examined using different tasks and compared with those of sighted people performing the same tasks blindfolded. In a first study, mental rotation abilities were assessed using a tactile topographic orientation test. The results show that blind people generally manage to develop mental rotation abilities similar to those of sighted people, despite the absence of visual information. In a second study, we used different spatial tasks requiring locomotion. The results show that blind people display abilities superior to those of sighted people when learning new routes in a maze. They are also better at recognizing a scale model representing a previously explored environment. Thus, the absence of vision does not appear to significantly hinder the formation of spatial concepts. The second part of this thesis follows the line of studies on brain plasticity in blind people. Here, we focused on the hippocampus, a deep structure of the temporal lobe whose spatial role has been established by numerous animal studies as well as by clinical studies in humans, including brain imaging. The hippocampus plays a particularly important role in spatial navigation. Moreover, structural changes in the hippocampus have been documented in relation to individuals' experience. For example, the study by Maguire et al.
(2000) revealed such structural modifications of the hippocampus in taxi drivers. Like taxi drivers, blind people must store a great deal of information about their environment, since they cannot rely on vision to update information about it, about their position in space, or about the position of objects beyond their reach. We showed, for the first time, an increase in hippocampal volume in blind people compared with sighted people. Moreover, this volume increase was positively correlated with performance on a route-learning task. The results presented in this thesis support earlier studies showing that blind people manage to compensate for their deficit and to develop spatial abilities comparable, or even superior, to those of sighted people. They also shed new light on the brain plasticity found in this population by showing, for the first time, a link between hippocampal volume and spatial abilities in blind people.
Abstract:
Gabion-faced retaining walls are essentially semi-rigid structures that can generally accommodate large lateral and vertical movements without excessive structural distress. Because of this inherent feature, they offer technical and economic advantages over conventional concrete gravity retaining walls. Although they can be constructed either as gravity type or reinforced soil type, this work mainly deals with gabion-faced reinforced earth walls, as they are more suitable for greater heights. The main focus of the present investigation was the development of a viable plane-strain, two-dimensional, non-linear finite element analysis code which can predict the stress-strain behaviour of gabion-faced retaining walls, both gravity type and reinforced soil type. The gabion facing, backfill soil, in-situ soil, and foundation soil were modelled using 2D four-noded isoparametric quadrilateral elements. The confinement provided by the gabion boxes was converted into an induced apparent cohesion as per the membrane correction theory proposed by Henkel and Gilbert (1952). The mesh reinforcement was modelled using 2D two-noded linear truss elements. The interactions between the soil and the mesh reinforcement, as well as between the facing and the backfill, were modelled using 2D four-noded zero-thickness line interface elements (Desai et al., 1974), incorporating the non-linear hyperbolic formulation for the tangential shear stiffness. The well-known hyperbolic formulation of Duncan and Chang (1970) was used for modelling the non-linearity of the soil matrix. The failure of the soil matrix, the gabion facing, and the interfaces was modelled using the Mohr-Coulomb failure criterion. The construction stages were also modelled. Experimental investigations were conducted on small-scale model walls (both in the field and in the laboratory) to suggest an alternative fill material for gabion-faced retaining walls.
The same tests were also used to validate the finite element programme developed as part of the study. The studies were conducted using different types of gabion fill materials. The variation was achieved by placing coarse aggregate and quarry dust in different proportions, either as layers one above the other or mixed together in the required proportions. The deformation of the wall face was measured, and the behaviour of the walls was analysed as the fill material was varied. It was seen that 25% of the fill material in the gabions can be replaced by a soft material (any locally available material) without greatly affecting the deformation behaviour. In circumstances where some deformation can be tolerated, even up to 50% replacement with soft material is possible. The developed finite element code was validated using the experimental test results and other published results. Encouraged by the close agreement between theory and experiment, an extensive and systematic parametric study was conducted in order to gain a closer understanding of the behaviour of the system. Geometric as well as material parameters were varied to understand their effect on the behaviour of the walls. The final phase of the study consisted of developing a simplified method for the design of gabion-faced retaining walls. The design was based on the limit state method, considering both stability and deformation criteria. The design parameters were selected for the system and converted to dimensionless parameters; the procedure for fixing the dimensions of the wall was thus simplified by eliminating the conventional trial-and-error procedure. Handy design charts were developed which should serve as a hands-on tool for design engineers on site. Economic studies were also conducted to demonstrate the cost-effectiveness of these structures relative to conventional RCC gravity walls, and cost prediction models and cost breakdown ratios were proposed.
The studies as a whole are expected to contribute substantially to understanding the actual behaviour of gabion-faced retaining wall systems, with particular reference to lateral deformations.
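The Duncan and Chang (1970) hyperbolic formulation cited in the abstract gives the soil's tangent modulus as a function of the current stress state, degrading stiffness as the stress approaches Mohr-Coulomb failure. A minimal sketch follows; the parameter values in the test are illustrative placeholders, not values from the study.

```python
import math

def duncan_chang_tangent_modulus(sigma1, sigma3, c, phi_deg, K, n, Rf,
                                 pa=101.325):
    """Tangent modulus E_t of the Duncan-Chang (1970) hyperbolic model.

    sigma1, sigma3 : major/minor principal stresses (kPa)
    c, phi_deg     : Mohr-Coulomb cohesion (kPa), friction angle (deg)
    K, n           : modulus number and exponent
    Rf             : failure ratio (typically 0.7-0.95)
    pa             : atmospheric pressure (kPa), for normalisation
    """
    phi = math.radians(phi_deg)
    # Deviator stress at failure from the Mohr-Coulomb criterion
    q_f = (2 * c * math.cos(phi) + 2 * sigma3 * math.sin(phi)) \
          / (1 - math.sin(phi))
    # Mobilised fraction of the failure deviator stress
    sl = Rf * (sigma1 - sigma3) / q_f
    # Initial modulus grows with confinement as a power law
    E_i = K * pa * (sigma3 / pa) ** n
    return (1 - sl) ** 2 * E_i
```

At zero deviator stress E_t equals the initial modulus E_i, and it tends toward zero as failure is approached; applying this modulus incrementally, stage by stage, is how such codes capture the non-linearity of the soil matrix.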
Abstract:
This thesis presents modelling methods for the real-time simulation of the major pollutant components in the exhaust stream of combustion engines. A complete development workflow is described, and its individual steps, from the design of experiments through the construction of a suitable model structure to model validation, are covered in detail. These methods are applied to reproduce the dynamic emission profiles of the relevant pollutants of a gasoline engine. Together with a full engine simulation, the derived emission models serve to optimise operating strategies in hybrid vehicles. The first part of the thesis presents a systematic procedure for planning and building complex, dynamic, real-time-capable model structures. It begins with a physically motivated structuring that divides a process model into manageable individual elements. These sub-models are then extended step by step, each starting from the simplest possible nominal model core, and ultimately allow a robust reproduction of even complex dynamic behaviour with sufficient accuracy. Since some sub-models are realised as neural networks, a dedicated procedure for so-called discrete evident interpolation (DEI) was developed, which is applied during training and can guarantee plausible, i.e. evident, behaviour of experimental models with a minimal number of measurements. To calibrate the individual sub-models, statistical experiment designs were created, generated both with classical DoE methods and with an iterative design of experiments (iDoE).
In the second part of the thesis, after identification of the most important influencing parameters, the model structures for reproducing the dynamic emission profiles of selected exhaust components are presented: unburned hydrocarbons (HC), nitrogen monoxide (NO), and carbon monoxide (CO). The simulation models reproduce the pollutant concentrations of a combustion engine in real time during a cold start and the subsequent warm-up phase. Beyond the customary reproduction of stationary behaviour, the dynamic behaviour of the engine in transient operating phases is also represented with sufficient accuracy. Consistent application of the methodology presented in the first part of the thesis yields high simulation quality and robustness here as well, despite the large number of process variables. The pollutant emission models, embedded in the dynamic overall model of a combustion engine, are used to derive an optimal operating strategy for a hybrid vehicle. Model-based methods lend themselves particularly well to such optimisation tasks; in particular, the use of dynamic and cold-start-capable models, and the realism this brings, allows a high output quality to be achieved.
Abstract:
Abstract based on that of the publication.
Abstract:
Desertification is a soil degradation problem of great importance in arid, semi-arid, and dry sub-humid regions, with serious environmental, social, and economic consequences resulting from the impact of human activities combined with unfavourable physical and environmental conditions (UNEP, 1994). The main objective of this thesis was to develop a simple methodology for accurately assessing the state and evolution of desertification at the local scale, through the creation of a model called the desertification indicator system (DIS). In this context, one of the two specific objectives of this research focused on studying the most important soil degradation factors at the plot scale, involving extensive fieldwork, laboratory analysis, and the corresponding interpretation and discussion of the results obtained. The second specific objective was the development and application of the DIS. The selected study area was the Serra de Rodes catchment, a typical Mediterranean environment within the Cap de Creus Natural Park, NE Spain, which was progressively abandoned by farmers over the last century. At present, forest fires, land-use change, and especially land abandonment are considered the most important environmental problems in the study area (Dunjó et al., 2003). First, the processes and causes of soil degradation in the area of interest were studied. Based on this knowledge, the most relevant desertification indicators were identified and selected. Finally, the desertification indicators selected at the catchment scale, including soil erosion and surface runoff, were integrated into a spatial process model.
Since soil is considered the main indicator of erosion processes, according to FAO/UNEP/UNESCO (1979), both the original landscape and the two land-use scenarios developed, one centred on the hypothetical case of a forest fire and the other a completely cultivated landscape, can be classified as environments under low or moderate degradation. Compared with the original scenario, the two created scenarios, and in particular the cultivated one, showed higher erosion and surface runoff values. Therefore, these two hypothetical scenarios do not appear to be a valid sustainable alternative to the degradation processes occurring in the study area. Nevertheless, a wide range of alternative scenarios can be developed with the DIS, taking into account the policies of special interest for the region, so as to help determine the potential desertification consequences of applying those policies in such a spatially complex setting. In conclusion, the developed model appears to be a fairly accurate system for identifying present and future risks, as well as for effectively planning measures to combat desertification at the catchment scale. However, this first version of the model has several limitations, and further research is needed to develop a future, improved version of the DIS.
Abstract:
Mixture model techniques are applied to a daily index of monsoon convection from ERA‐40 reanalysis to show regime behavior. The result is the existence of two significant regimes showing preferred locations of convection within the Asia/Western‐North Pacific domain, with some resemblance to active‐break events over India. Simple trend analysis over 1958–2001 shows that the first regime has become less frequent while the second becomes much more dominant. Both undergo a change in structure contributing to the total OLR trend over the ERA‐40 period. Stratifying the data according to a large‐scale dynamical index of monsoon interannual variability, we show the regime occurrence to be strongly perturbed by the seasonal condition, in agreement with conceptual ideas. This technique could be used to further examine predictability issues relating the seasonal mean and intraseasonal monsoon variability or to explore changes in monsoon behavior in centennial‐scale model integrations.
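The mixture-model regime decomposition described above amounts to fitting a small number of Gaussian components to the daily index by expectation-maximisation and reading each component as a regime. A minimal 1-D, two-regime sketch follows, on synthetic data; this is an illustration of the general technique, not the ERA-40 analysis, and all names are hypothetical.

```python
import numpy as np

def fit_two_regime_gmm(x, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture.

    Returns (weights, means, stds, responsibilities); each
    component is interpreted as one regime of the index x.
    """
    # Deterministic initialisation at the extremes of the data
    mu = np.array([x.min(), x.max()], dtype=float)
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability of each regime for each day
        pdf = (w / (sd * np.sqrt(2 * np.pi))
               * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means, stds from responsibilities
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd, r
```

A trend in regime occurrence, of the kind reported for 1958-2001, would then be estimated by counting, per season or year, the fraction of days whose responsibility assigns them to each component.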
Abstract:
A series of scale model measurements of transverse electromagnetic mode tapered slot antennas are presented. They show that the beam launched by this type of antenna is astigmatic. It is shown how an off-axis spherical mirror can be used to correct this astigmatism to allow efficient coupling to quasi-optical systems. A millimetre wave antenna and mirror combination is described and, with the aid of solid state noise diodes, the coupling of the launched beam to a quasi-optical spectrometer is shown to be in good agreement with that predicted by the scale model measurements.
A wind-tunnel study of flow distortion at a meteorological sensor on top of the BT Tower, London, UK
Abstract:
High quality wind measurements in cities are needed for numerous applications including wind engineering. Such data-sets are rare and measurement platforms may not be optimal for meteorological observations. Two years' wind data were collected on the BT Tower, London, UK, showing an upward deflection on average for all wind directions. Wind tunnel simulations were performed to investigate flow distortion around two scale models of the Tower. Using a 1:160 scale model it was shown that the Tower causes a small deflection (ca. 0.5°) compared to the lattice on top on which the instruments were placed (ca. 0–4°). These deflections may have been underestimated due to wind tunnel blockage. Using a 1:40 model, the observed flow pattern was consistent with streamwise vortex pairs shed from the upstream lattice edge. Correction factors were derived for different wind directions and reduced deflection in the full-scale data-set by <3°. Instrumental tilt caused a sinusoidal variation in deflection of ca. 2°. The residual deflection (ca. 3°) was attributed to the Tower itself. Correction of the wind-speeds was small (average 1%) therefore it was deduced that flow distortion does not significantly affect the measured wind-speeds and the wind climate statistics are reliable.
Landscape, regional and global estimates of nitrogen flux from land to sea: errors and uncertainties
Abstract:
Regional- to global-scale modelling of N flux from land to ocean has progressed to date through the development of simple empirical models representing bulk N flux rates from large watersheds, regions, or continents on the basis of a limited selection of model parameters. Watershed-scale N flux modelling has developed a range of physically based approaches, from models in which N flux rates are predicted through a physical representation of the processes involved, through to catchment-scale models which provide a simplified representation of true system behaviour. Generally, these watershed-scale models describe within their structure the dominant process controls on N flux at the catchment or watershed scale, and take into account variations in the extent to which these processes control N flux rates as a function of landscape sensitivity to N cycling and export. This paper addresses the nature of the errors and uncertainties inherent in existing regional- to global-scale models, and the nature of the error propagation associated with upscaling from the small-catchment to the regional scale, through a suite of spatial aggregation and conceptual lumping experiments conducted on a validated watershed-scale model, the export coefficient model. Results from the analysis support the findings of other researchers developing macroscale models in allied research fields. Conclusions from the study confirm that reliable and accurate regional-scale N flux modelling needs to take account of the heterogeneity of landscapes and the impact that this has on N cycling processes within homogeneous landscape units.
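At its core, the export coefficient model used in the study is a lumped sum over land-use classes, with each class exporting N in proportion to its area; the aggregation and lumping experiments in the paper probe what happens to this sum when classes are merged. A minimal sketch follows; the coefficients and areas are illustrative placeholders, not values from the paper.

```python
def export_coefficient_n_flux(land_use, point_sources=0.0):
    """Catchment N flux (kg/yr) as sum of area x export coefficient.

    land_use: mapping of class name -> (area_ha, coeff_kg_per_ha_yr)
    point_sources: direct inputs (kg/yr), e.g. sewage effluent
    """
    return point_sources + sum(a * e for a, e in land_use.values())

# Hypothetical catchment (illustrative numbers only)
catchment = {
    "arable":   (1200.0, 25.0),   # ha, kg N/ha/yr
    "pasture":  (800.0,  10.0),
    "woodland": (500.0,   2.0),
}
flux = export_coefficient_n_flux(catchment, point_sources=5000.0)
# Conceptual lumping: replacing several classes by one class with an
# average coefficient preserves the total only if the landscape is
# homogeneous, which is why heterogeneity matters when upscaling.
```

Because the model is linear in area, spatial aggregation error enters through the choice and weighting of the coefficients, not through the summation itself.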
Abstract:
Systematic climate shifts have been linked to multidecadal variability in observed sea surface temperatures in the North Atlantic Ocean [1]. These links are extensive, influencing a range of climate processes such as hurricane activity [2] and African Sahel [3-5] and Amazonian [5] droughts. The variability is distinct from historical global-mean temperature changes and is commonly attributed to natural ocean oscillations [6-10]. A number of studies have provided evidence that aerosols can influence long-term changes in sea surface temperatures [11, 12], but climate models have so far failed to reproduce these interactions [6, 9] and the role of aerosols in decadal variability remains unclear. Here we use a state-of-the-art Earth system climate model to show that aerosol emissions and periods of volcanic activity explain 76 per cent of the simulated multidecadal variance in detrended 1860-2005 North Atlantic sea surface temperatures. After 1950, simulated variability is within observational estimates; our estimates for 1910-1940 capture twice the warming of previous-generation models but do not explain the entire observed trend. Other processes, such as ocean circulation, may also have contributed to variability in the early twentieth century. Mechanistically, we find that inclusion of aerosol-cloud microphysical effects, which were included in few previous multimodel ensembles, dominates the magnitude (80 per cent) and the spatial pattern of the total surface aerosol forcing in the North Atlantic. Our findings suggest that anthropogenic aerosol emissions influenced a range of societally important historical climate events such as peaks in hurricane activity and Sahel drought. Decadal-scale model predictions of regional Atlantic climate will probably be improved by incorporating aerosol-cloud microphysical interactions and estimates of future concentrations of aerosols, emissions of which are directly addressable by policy actions.
Abstract:
We use new neutron scattering instrumentation to follow, in a single quantitative time-resolving experiment, the three key scales of structural development which accompany the crystallisation of synthetic polymers. These length scales span three orders of magnitude of the scattering vector. The study of polymer crystallisation dates back to the pioneering experiments of Keller and others, who discovered the chain-folded nature of the thin lamellar crystals normally found in synthetic polymers. The inherent connectivity of polymers makes their crystallisation a multiscale transformation. Much understanding has developed over the intervening fifty years, but the process has remained something of a mystery. There are three key length scales: the chain-folded lamellar thickness is ~10 nm, the crystal unit cell is ~1 nm, and the detail of the chain conformation is ~0.1 nm. In previous work these length scales have been addressed using different instrumentation, or were coupled using compromised geometries. More recently, researchers have attempted to exploit coupled time-resolved small-angle and wide-angle X-ray experiments. These turned out to be challenging experiments, largely because of the difficulty of placing the scattering intensity on an absolute scale. However, they did raise the possibility of new phenomena in the very early stages of crystallisation. Although there is now considerable doubt about such experiments, they drew attention to the basic question of how crystallisation proceeds in long-chain molecules. We have used NIMROD on the second target station at ISIS to follow all three length scales in a time-resolving manner for poly(ε-caprolactone). The technique can provide a single set of data from 0.01 to 100 Å⁻¹ on the same vertical scale. We present the results using a multiple-scale model of the crystallisation process in polymers to analyse them.