993 results for Interface Modelling
Abstract:
Adhesively-bonded joints are extensively used in several fields of engineering. Cohesive Zone Models (CZM) have been used for the strength prediction of adhesive joints, as an add-on to Finite Element (FE) analyses that allows the simulation of damage growth based on energetic principles. A useful feature of CZM is that different shapes can be developed for the cohesive laws, depending on the nature of the material or interface to be simulated, allowing an accurate strength prediction. This work studies the influence of the CZM shape (triangular, exponential or trapezoidal) used to model a thin adhesive layer in single-lap adhesive joints, to estimate its influence on the strength prediction under different material conditions. By performing this study, guidelines are provided on the possibility of using a CZM shape that may not be the best suited for a particular adhesive, but that is more straightforward to use/implement and has fewer convergence problems (e.g. the triangular CZM), thus attaining the solution faster. The overall results showed that joints bonded with ductile adhesives are highly influenced by the CZM shape, and that the trapezoidal shape best fits the experimental data. Moreover, the smaller the overlap length (LO), the greater the influence of the CZM shape. On the other hand, the influence of the CZM shape can be neglected when using brittle adhesives, without significantly compromising the accuracy of the strength predictions.
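As an illustration of the triangular cohesive-law shape discussed above (a minimal sketch with hypothetical parameter names and values, not the formulation used in the study), the traction-separation response can be coded as:

```python
# Illustrative bilinear (triangular) cohesive law: traction vs. separation.
# Parameter names and values are hypothetical, not taken from the paper.

def triangular_traction(delta, t_max, delta_0, delta_f):
    """Traction for a triangular CZM.

    delta   : current separation
    t_max   : peak cohesive strength
    delta_0 : separation at damage onset (end of linear-elastic branch)
    delta_f : separation at complete failure (traction drops to zero)
    """
    if delta <= 0.0:
        return 0.0
    if delta <= delta_0:                      # linear-elastic loading
        return t_max * delta / delta_0
    if delta < delta_f:                       # linear softening (damage growth)
        return t_max * (delta_f - delta) / (delta_f - delta_0)
    return 0.0                                # fully failed

# The fracture toughness is the area under the curve: G_c = 0.5 * t_max * delta_f.
t_max, delta_0, delta_f = 20.0, 0.01, 0.1     # e.g. MPa, mm, mm (illustrative)
G_c = 0.5 * t_max * delta_f
print(G_c, triangular_traction(0.05, t_max, delta_0, delta_f))
```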
Abstract:
Doctoral thesis in Educational Sciences, in the area of Curriculum Theory and Science Teaching
Abstract:
Dissertation to obtain the degree of Master in Chemical and Biochemical Engineering
Abstract:
Conservation biology is commonly associated with the protection of small and endangered populations. Nevertheless, large or potentially large populations may also need human management to prevent the negative effects of overpopulation. As there are both qualitative and quantitative differences between protecting small populations and controlling large ones, distinct methods and models are needed. The aim of this work was to develop theoretical models to predict large population dynamics, as well as computer tools to assess the parameters of these models and to test management scenarios. The Alpine ibex (Capra ibex ibex), which has experienced a spectacular increase since its reintroduction in Switzerland at the beginning of the 20th century, was used as the paradigm species. This task was achieved in three steps. First, a local population dynamics model was developed specifically for the ibex: the underlying age- and sex-structured model is based on a Leslie matrix approach with the addition of density-dependence, environmental stochasticity and culling. This model was implemented in a management-support software package, named SIM-Ibex, allowing census data maintenance, automated parameter estimation, and the tuning and simulation of culling strategies. However, population dynamics are driven not only by demographic factors but also by dispersal and the colonisation of new areas. Modelling of habitat suitability and dispersal obstacles therefore also had to be addressed. Thus, a software package named Biomapper was developed. Its central module is based on the Ecological Niche Factor Analysis (ENFA), whose principle is to compute niche marginality and specialisation factors from a set of environmental predictors and species presence data. All Biomapper modules are linked to Geographic Information Systems (GIS); they cover all operations of data importation, predictor preparation, ENFA and habitat suitability map computation, results validation and further processing; a module also allows mapping of dispersal barriers and corridors. The ENFA application domain was then explored by means of a simulated species distribution. It was compared with a commonly used habitat suitability method, the Generalised Linear Model (GLM), and was shown to be better suited for spreading or cryptic species. Demography and landscape information were finally merged into a global model. To cope with landscape realism and the technical constraints of large population modelling, a cellular automaton approach was chosen: the study area is modelled by a lattice of hexagonal cells, each one characterised by a few fixed properties (a carrying capacity and six impermeability rates quantifying exchanges between adjacent cells) and one variable, the population density. The latter varies according to local reproduction/survival and dispersal dynamics, modified by density-dependence and stochasticity. A software tool named HexaSpace was developed to fulfil two functions: 1° calibrating the automaton on the basis of local population dynamics models (e.g. computed by SIM-Ibex) and a habitat suitability map (e.g. computed by Biomapper); 2° running simulations. It allows studying the spread of an invading species across a complex landscape made of variously suitable areas and dispersal barriers. This model was applied to the history of ibex reintroduction in the Bernese Alps (Switzerland). SIM-Ibex is now used by governmental wildlife managers to prepare and verify culling plans. Biomapper has been applied to several species (both plants and animals) around the world. In the same way, although HexaSpace was originally designed for terrestrial animal species, it could easily be extended to model plant propagation or the dispersal of flying animals. As these software tools were designed to proceed from low-level data to a complex realistic model, and as they benefit from an intuitive user interface, they lend themselves to many applications in conservation biology. Moreover, theoretical questions in the fields of population and landscape ecology might also be addressed by these approaches.
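As a minimal illustration of the kind of age-structured projection that underlies models such as the one described (a sketch under assumed values, not the actual SIM-Ibex implementation), the following combines a Leslie matrix with crude density dependence and environmental stochasticity:

```python
import numpy as np

# Illustrative age-structured projection with density dependence and
# environmental stochasticity; values and structure are hypothetical,
# not those used in SIM-Ibex.
rng = np.random.default_rng(0)

fertility = np.array([0.0, 0.4, 0.8, 0.8])      # offspring per female per age class
survival  = np.array([0.6, 0.8, 0.9])           # survival to the next age class
K = 500.0                                        # carrying capacity (assumed)

L = np.zeros((4, 4))
L[0, :] = fertility                              # top row: reproduction
L[1:, :-1] = np.diag(survival)                   # sub-diagonal: survival

n = np.array([100.0, 50.0, 30.0, 20.0])          # initial abundances per age class
for year in range(25):
    density_factor = max(0.0, 1.0 - n.sum() / K)     # crude density dependence
    noise = rng.normal(1.0, 0.1)                     # environmental stochasticity
    n = (L @ n) * (0.5 + 0.5 * density_factor) * noise
print(n.round(1), n.sum().round(1))
```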
Abstract:
Diagrams and tools help to support task modelling in engineering and process management. Unfortunately, they are ill-suited to helping in a business context at a strategic level, because of the flexibility needed for creative thinking and user-friendly interaction. We propose a tool that bridges the gap between freedom of action, which encourages creativity, and constraints, which allow validation and advanced features.
Abstract:
Many three-dimensional (3-D) structures in rock, which formed during the deformation of the Earth's crust and lithosphere, are controlled by a difference in mechanical strength between rock units and are often the result of a geometrical instability. Such structures are, for example, folds, pinch-and-swell structures (due to necking) or cuspate-lobate structures (mullions). These structures occur from the centimeter to the kilometer scale, and the related deformation processes control the formation of, for example, fold-and-thrust belts and extensional sedimentary basins, or the deformation of the basement-cover interface. The 2-D deformation processes causing these structures are relatively well studied; however, several processes during large-strain 3-D deformation are still incompletely understood. One of these 3-D processes is the lateral propagation of such structures, for instance fold and cusp propagation in a direction orthogonal to the shortening direction, or neck propagation in a direction orthogonal to the extension direction. We are especially interested in fold nappes, which are recumbent folds with amplitudes usually exceeding 10 km that have presumably been formed by ductile shearing. They often exhibit a constant sense of shearing and a non-linear increase of shear strain towards their overturned limb. The fold axes of the Morcles fold nappe in western Switzerland plunge to the ENE, whereas the fold axes in the more eastern Doldenhorn nappe plunge to the WSW. These opposite plunge directions characterize the Rawil depression (Wildstrubel depression). The Morcles nappe is mainly the result of layer-parallel contraction and shearing. During the compression the massive limestones were more competent than the surrounding marls and shales, which led to the buckling characteristics of the Morcles nappe, especially in the north-dipping normal limb. The Doldenhorn nappe exhibits only a minor overturned fold limb. There are still no 3-D numerical studies that investigate the fundamental dynamics of the formation of the large-scale 3-D structure including the Morcles and Doldenhorn nappes and the related Rawil depression. We study the 3-D evolution of geometrical instabilities and fold nappe formation with numerical simulations based on the finite element method (FEM). Simulating geometrical instabilities caused by sharp variations of mechanical strength between rock units requires a numerical algorithm that can accurately resolve material interfaces for large differences in material properties (e.g. between limestone and shale) and for large deformations. Therefore, our FE algorithm combines a numerical contour-line technique and a deformable Lagrangian mesh with re-meshing. With this combined method it is possible to accurately follow the initial material contours with the FE mesh and to accurately resolve the geometrical instabilities. The algorithm can simulate 3-D deformation for a visco-elastic rheology, with the viscous rheology described by a power-law flow law. The code is used to study 3-D fold nappe formation, the lateral propagation of folding, and also the lateral propagation of cusps due to an initial half-graben geometry. Thereby, the small initial geometrical perturbations for folding and necking are exactly followed by the FE mesh, whereas the initial large perturbation describing a half graben is defined by a contour line intersecting the finite elements. Further, the 3-D algorithm is applied to 3-D viscous necking during slab detachment. The results from various simulations are compared with 2-D results and a 1-D analytical solution.
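For reference, a power-law viscous flow law of the kind mentioned above is commonly written as (one standard form; the exact formulation used in the algorithm may differ):

$$\dot{\varepsilon}_{II} = A\,\tau_{II}^{\,n}
\qquad\Longrightarrow\qquad
\eta_{\mathrm{eff}} = \tfrac{1}{2}\,A^{-1/n}\,\dot{\varepsilon}_{II}^{\,(1-n)/n},$$

where $\dot{\varepsilon}_{II}$ and $\tau_{II}$ are the second invariants of the strain-rate and deviatoric stress tensors, $A$ is a material prefactor, $n > 1$ is the power-law exponent, and $\eta_{\mathrm{eff}}$ is the resulting strain-rate-dependent effective viscosity used in the viscous part of the visco-elastic rheology.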
Abstract:
The condensation rate has to be high in the safety pressure suppression pool systems of Boiling Water Reactors (BWR) in order for them to fulfill their safety function. The phenomena due to such a high direct contact condensation (DCC) rate turn out to be very challenging to analyse, either with experiments or with numerical simulations. In this thesis, the suppression pool experiments carried out in the POOLEX facility of Lappeenranta University of Technology were simulated. Two different condensation modes were modelled using the 2-phase CFD codes NEPTUNE CFD and TransAT. The DCC models applied were those typically used for separated flows in channels, and their applicability to the rapidly condensing flow in the condensation pool context had not been tested earlier. A low Reynolds number case was the first to be simulated. The POOLEX experiment STB-31 was operated near the conditions between the 'quasi-steady oscillatory interface condensation' mode and the 'condensation within the blowdown pipe' mode. The condensation models of Lakehal et al. and Coste & Laviéville predicted the condensation rate quite accurately, while the other tested models overestimated it. It was possible to get the direct phase change solution to settle near the measured values, but a very fine calculation grid was needed. Secondly, a high Reynolds number case corresponding to the 'chugging' mode was simulated. The POOLEX experiment STB-28 was chosen because various standard and high-speed video samples of bubbles were recorded during it. In order to extract numerical information from the video material, a pattern recognition procedure was programmed. The bubble size distributions and the frequencies of chugging were calculated with this procedure. With the statistical data of the bubble sizes and the temporal data of the bubble/jet appearance, it was possible to compare the condensation rates between the experiment and the CFD simulations. In the chugging simulations, a spherically curvilinear calculation grid at the blowdown pipe exit improved the convergence and decreased the required cell count. The compressible flow solver with complete steam tables was beneficial for the numerical success of the simulations. The Hughes-Duffey model and, to some extent, the Coste & Laviéville model produced realistic chugging behavior. The initial level of the steam/water interface was an important factor in determining the initiation of chugging. If the interface was initialized with a water level high enough inside the blowdown pipe, the vigorous penetration of a water plug into the pool created a turbulent wake which triggered self-sustaining chugging. A 3D simulation with a suitable DCC model produced qualitatively very realistic shapes of the chugging bubbles and jets. The comparative FFT analysis of the bubble size data and the pool bottom pressure data gave useful information for distinguishing the eigenmodes of chugging, bubbling, and pool structure oscillations.
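As a sketch of the kind of comparative FFT analysis mentioned at the end (the signal here is synthetic and the sampling rate is assumed, not data from the POOLEX experiments):

```python
import numpy as np

# Illustrative spectral analysis of a pressure (or bubble-size) time series
# to pick out a dominant chugging frequency; the signal below is synthetic.
fs = 100.0                                   # sampling rate [Hz] (assumed)
t = np.arange(0.0, 30.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
dominant = freqs[np.argmax(spectrum)]
print(f"dominant frequency = {dominant:.2f} Hz")   # ~1.5 Hz for this synthetic signal
```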
Abstract:
Aquatic sediments often remove hydrophobic contaminants from fresh waters. The subsequent distribution and concentration of contaminants in bed sediments determine their effect on benthic organisms and the risk of re-entry into the water and/or leaching to groundwater. This study examines the transport of simazine and lindane in aquatic bed sediments with the aim of understanding the processes that determine their depth distribution. Experiments in flume channels (water flow of 10 cm s-1) determined the persistence of the compounds in the absence of sediment with (a) de-ionised water and (b) a solution that had been in contact with river sediment. In further experiments with river bed sediments under light and dark conditions, measurements were made of the concentration of the compounds in the overlying water and of the development of bacterial/algal biofilms and bioturbation activity. At the end of the experiments, concentrations in sediments and associated pore waters were determined in sections of the sediment at 1 mm resolution down to 5 mm and then at 10 mm resolution to 50 mm depth, and these distributions were analysed using a sorption-diffusion-degradation model. The fine resolution of the depth profile permitted the detection of a maximum in the concentration of the compounds in the pore water near the surface, whereas concentrations in the sediment increased to a maximum at the surface itself. Experimental distribution coefficients determined from the pore water and sediment concentrations indicated a gradient with depth that was partly explained by an increase in organic matter content and specific surface area of the solids near the interface. The modelling showed that degradation of lindane within the sediment was necessary to explain the concentration profiles, with the optimum agreement between the measured and theoretical profiles obtained with differential degradation in the oxic and anoxic zones. The compounds penetrated to a depth of 40-50 mm over a period of 42 days. (C) 2004 Society of Chemical Industry.
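A minimal sketch of a one-dimensional sorption-diffusion-degradation calculation of the kind used to analyse such depth profiles (an assumed explicit finite-difference form with illustrative parameter values, not the authors' model):

```python
import numpy as np

# Illustrative 1-D pore-water diffusion with linear sorption (retardation)
# and first-order degradation, solved by explicit finite differences.
# All parameter values are assumed for illustration only.
nz, dz = 50, 1e-3            # 50 mm profile in 1 mm cells [m]
dt = 60.0                    # time step [s]
D = 5e-10                    # effective diffusion coefficient [m2/s]
R = 5.0                      # retardation factor from sorption [-]
k = 1e-7                     # first-order degradation rate [1/s]
C = np.zeros(nz)             # pore-water concentration profile
C_top = 1.0                  # fixed concentration in the overlying water

for _ in range(int(42 * 24 * 3600 / dt)):                # 42 days
    lap = np.zeros(nz)
    lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dz**2
    lap[0] = (C[1] - 2 * C[0] + C_top) / dz**2           # overlying-water boundary
    lap[-1] = (C[-2] - C[-1]) / dz**2                     # no-flux bottom boundary
    C += dt * (D * lap / R - k * C)

print(C[:5].round(4))        # pore-water concentrations in the top 5 mm
```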
Abstract:
Europe's widely distributed climate modelling expertise, now organized in the European Network for Earth System Modelling (ENES), is both a strength and a challenge. Recognizing this, the European Union's Program for Integrated Earth System Modelling (PRISM) infrastructure project aims at designing a flexible and user-friendly environment to assemble, run and post-process Earth System models. PRISM was started in December 2001 with a duration of three years. This paper presents the major stages of PRISM, including: (1) the definition and promotion of scientific and technical standards to increase component modularity; (2) the development of an end-to-end software environment (graphical user interface, coupling and I/O system, diagnostics, visualization) to launch, monitor and analyse complex Earth system models built around state-of-the-art community component models (atmosphere, ocean, atmospheric chemistry, ocean bio-chemistry, sea-ice, land-surface); and (3) testing and quality standards to ensure high performance on a variety of computing platforms. PRISM is emerging as a core strategic software infrastructure for building the European research area in Earth system sciences. Copyright (c) 2005 John Wiley & Sons, Ltd.
Abstract:
Lava domes comprise core, carapace, and clastic talus components. They can grow endogenously by inflation of a core and/or exogenously with the extrusion of shear-bounded lobes and whaleback lobes at the surface. Internal structure is paramount in determining the extent to which lava dome growth evolves stably or, conversely, the propensity for collapse. The more core lava that exists within a dome, in both relative and absolute terms, the more explosive energy is available, both for large pyroclastic flows following collapse and in particular for lateral blast events following very rapid removal of lateral support to the dome. Knowledge of the location of the core lava within the dome is also relevant for hazard assessment purposes. A spreading toe, or lobe of core lava, over a talus substrate may be both relatively unstable and likely to accelerate to more violent activity during the early phases of a retrogressive collapse. Soufrière Hills Volcano, Montserrat, has been erupting since 1995 and has produced numerous lava domes that have undergone repeated collapse events. We consider one continuous dome growth period, from August 2005 to May 2006, that resulted in a dome collapse event on 20 May 2006. The collapse event lasted 3 h, removing the whole dome plus dome remnants from a previous growth period in an unusually violent and rapid collapse event. We use an axisymmetrical computational Finite Element Method model for the growth and evolution of a lava dome. Our model comprises evolving core, carapace and talus components based on axisymmetrical endogenous dome growth, which permits us to model the interface between talus and core. Despite explicitly modelling only axisymmetrical endogenous dome growth, our core–talus model simulates many of the observed growth characteristics of the 2005–2006 SHV lava dome well. Further, it is possible for our simulations to replicate large-scale exogenous characteristics when a considerable volume of talus has accumulated around the lower flanks of the dome. Model results suggest that dome core can override talus within a growing dome, potentially generating a region of significant weakness and a potential locus for collapse initiation.
Abstract:
Several studies have highlighted the importance of the cooling period for oil absorption in deep-fat fried products. Specifically, it has been established that the largest proportion of the oil that ends up in the food is sucked into the porous crust region after the fried product is removed from the oil bath, stressing the importance of this time interval. The main objective of this paper was to develop a predictive mechanistic model that can be used to understand the principles behind post-frying cooling oil absorption kinetics, and which can also help identify the key parameters that affect the final oil intake of the fried product. The model was developed for two different geometries, an infinite slab and an infinite cylinder, and was divided into two main sub-models, one describing the immersion frying period itself and the other describing the post-frying cooling period. The immersion frying period was described by a transient moving-front model that considered the movement of the crust/core interface, whereas post-frying cooling oil absorption was considered to be a pressure-driven flow mediated by capillary forces. A key element in the model was the hypothesis that oil suction would only begin once a positive pressure driving force had developed. The mechanistic model was based on measurable physical and thermal properties and process parameters, with no need for empirical data fitting, and can be used to study oil absorption in any deep-fat fried product that satisfies the assumptions made.
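For intuition about pressure-driven capillary uptake of this kind, a generic Washburn-type estimate is sketched below (illustrative values only; this is not the paper's mechanistic model, which couples the frying and cooling sub-models):

```python
import math

# Generic Washburn-type estimate of capillary penetration of oil into a porous
# crust during cooling; illustrative only, not the paper's model.
gamma = 0.03               # oil surface tension [N/m] (assumed)
theta = math.radians(30)   # contact angle (assumed)
r_pore = 5e-6              # effective pore radius [m] (assumed)
mu = 0.05                  # oil viscosity at cooling temperature [Pa.s] (assumed)

p_cap = 2 * gamma * math.cos(theta) / r_pore          # capillary driving pressure
for t in (1, 10, 60):                                  # seconds after removal
    depth = math.sqrt(gamma * r_pore * math.cos(theta) * t / (2 * mu))   # Washburn
    print(f"t = {t:3d} s: p_cap = {p_cap/1e3:.1f} kPa, depth = {depth*1e3:.2f} mm")
```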
Abstract:
This paper outlines some rehabilitation applications of manipulators and identifies that new approaches demand that the robot make intimate contact with the user. New generations of manipulators with programmable compliance, along with higher-level controllers that can set the compliance appropriately for the task, are both feasible propositions. We must therefore gain greater insight into the way in which a person interacts with a machine, particularly given that the interaction may be non-passive. We are primarily interested in the change in wrist and arm dynamics as the person co-contracts his/her muscles. It is observed that this leads to a change in stiffness that can push an actuated interface into a limit cycle. We use both experimental results gathered from a PHANToM haptic interface and a mathematical model to observe this effect. The results are relevant to the fields of rehabilitation and therapy robots, haptic interfaces, and telerobotics.
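The following sketch illustrates, in a generic sampled-data setting, how raising the interaction stiffness with little physical damping can drive a unilateral contact into sustained oscillation (a limit cycle); the model and all parameter values are assumed for illustration and are not the paper's PHANToM experiment or mathematical model:

```python
# Sketch of a sampled (sample-and-hold) stiff interaction sustaining a contact
# limit cycle. The rendered stiffness K stands in for the combined stiffness of
# the user's co-contracted arm and the controller; this is an illustrative
# simplification, not the paper's model.

def peak_to_peak(K, m=0.1, b=0.5, F_push=2.0, T_servo=1e-3, dt=1e-5, t_end=3.0):
    x, v, f_held = 0.0, 0.0, 0.0
    steps_per_sample = int(T_servo / dt)
    tail = []                                    # record the last 0.5 s
    for i in range(int(t_end / dt)):
        if i % steps_per_sample == 0:            # controller update: sample & hold
            f_held = -K * x if x > 0.0 else 0.0  # unilateral virtual wall at x = 0
        a = (f_held + F_push - b * v) / m        # constant push keeps contact engaged
        v += a * dt                              # semi-implicit Euler integration
        x += v * dt
        if i * dt > t_end - 0.5:
            tail.append(x)
    return max(tail) - min(tail)

for K in (200.0, 5000.0):                        # N/m
    print(f"K = {K:6.0f} N/m -> steady oscillation amplitude "
          f"{peak_to_peak(K) * 1000:.2f} mm peak-to-peak")
```

With the low stiffness the contact settles; with the high stiffness the held (delayed) force injects energy faster than the physical damping removes it, and the contact oscillates persistently.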
Abstract:
Much consideration is rightly given to the design of metadata models to describe data. At the other end of the data-delivery spectrum, much thought has also been given to the design of geospatial delivery interfaces, such as the Open Geospatial Consortium standards Web Coverage Service (WCS), Web Map Server and Web Feature Service (WFS). Our recent experience with the Climate Science Modelling Language shows that an implementation gap exists where many challenges remain unsolved. Bridging this gap requires transposing information and data from one world view of geospatial climate data to another. Some of the issues include: the loss of information in mapping to a common information model, the need to create ‘views’ onto file-based storage, and the need to map onto an appropriate delivery interface (as with the choice between WFS and WCS for feature types with coverage-valued properties). Here we summarise the approaches we have taken in facing up to these problems.
Abstract:
DNA G-quadruplexes are one of the targets being actively explored for anti-cancer therapy by inhibiting them with small molecules. This computational study was conducted to predict the binding strengths and orientations of a set of novel dimethyl-amino-ethyl-acridine (DACA) analogues that were designed and synthesized in our laboratory but did not diffract in synchrotron light. The crystal structure of the DNA G-quadruplex (TGGGGT)4 (PDB: 1O0K) was used as the target for their binding studies. We used both force field (FF) and QM/MM derived atomic charge schemes simultaneously for comparing the predicted drug binding modes and their energetics. This study evaluates the comparative performance of fixed point charge based Glide XP docking and the quantum-polarized ligand docking scheme. These results will provide insights into the effects of including or ignoring drug-receptor interfacial polarization events in molecular docking simulations, which, in turn, will aid the rational selection of computational methods at different levels of theory in future drug design programs. Plenty of molecular modelling tools and methods currently exist for modelling drug-receptor, protein-protein, or DNA-protein interactions at different levels of complexity. Yet the capacity of such tools to describe various physico-chemical properties more accurately is the next step in current research. In particular, the use of the most accurate quantum mechanics (QM) methods is severely restricted by their tedious nature. Although massively parallel supercomputing environments have brought a tremendous improvement to molecular mechanics (MM) calculations such as molecular dynamics, QM methods are still capable of dealing with only tens to hundreds of atoms. One efficient strategy that utilizes the strengths of both MM and QM is the family of QM/MM hybrid methods. Lately, attempts have been directed towards deploying several different QM methods to improve force-field-based simulations, within practical restrictions. One such method includes the charge polarization events at the drug-receptor interface, which are not explicitly present in the MM FF.
Abstract:
Understanding how species and ecosystems respond to climate change has become a major focus of ecology and conservation biology. Modelling approaches provide important tools for making future projections, but current models of the climate-biosphere interface remain overly simplistic, undermining the credibility of projections. We identify five ways in which substantial advances could be made in the next few years: (i) improving the accessibility and efficiency of biodiversity monitoring data, (ii) quantifying the main determinants of the sensitivity of species to climate change, (iii) incorporating community dynamics into projections of biodiversity responses, (iv) accounting for the influence of evolutionary processes on the response of species to climate change, and (v) improving the biophysical rule sets that define functional groupings of species in global models.