931 results for Finite Elements, Masonry, Reinforced Masonry, Constitutive Modelling
Abstract:
We are interested in the development, implementation and testing of an orthotropic model for cardiac contraction based on an active strain decomposition. Our model addresses the coupling of a transversely isotropic mechanical description at the cell level with an orthotropic constitutive law for incompressible tissue at the macroscopic level. The main differences from the active stress model are addressed in detail, and a finite element discretization using Taylor-Hood and MINI elements is proposed and illustrated with numerical examples.
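The active strain approach multiplicatively splits the deformation gradient into an elastic and an activation factor, F = F_e F_a. A minimal numerical sketch of this decomposition (assuming a transversely isotropic, volume-preserving activation along a fibre direction f0; all names and values are illustrative, not taken from the paper):

```python
import numpy as np

def active_deformation(f0, gamma):
    # Active factor F_a: shorten the fibre direction f0 by (1 - gamma) and
    # rescale the cross-fibre plane so that det(F_a) = 1 (volume-preserving
    # activation). This is the transversely isotropic cell-level description.
    f0 = f0 / np.linalg.norm(f0)
    ff = np.outer(f0, f0)
    return (1.0 - gamma) * ff + (1.0 - gamma) ** (-0.5) * (np.eye(3) - ff)

def elastic_part(F, f0, gamma):
    # Elastic factor F_e = F F_a^-1, which carries the passive stress response.
    return F @ np.linalg.inv(active_deformation(f0, gamma))

F = np.eye(3)                       # total deformation gradient (toy value)
f0 = np.array([1.0, 0.0, 0.0])      # fibre direction
Fa = active_deformation(f0, 0.1)    # 10% active fibre shortening
Fe = elastic_part(F, f0, 0.1)
```

Because det(F_a) = 1 by construction, the activation itself is incompressible and the elastic factor F_e carries the entire stress response.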
Abstract:
Background: Complex wounds pose a major challenge in reconstructive and trauma surgery. Several approaches to accelerate the healing process have been proposed in recent decades. In this study we investigate the mechanism of action of the Vacuum Assisted Closure device in diabetic wounds. Methods: Full-thickness wounds were excised in diabetic mice and treated with the VAC device or its isolated components: an occlusive dressing (OD) alone, subatmospheric pressure at 125 mm Hg (Suction), and a polyurethane foam without (Foam) and with (Foamc) downward compression of approximately 125 mm Hg. The last groups were treated with either the complete VAC device (VAC) or with a silicone interface that allows fluid removal (Mepithel-VAC). The effects of the treatment modes on the wound surface were quantified by a two-dimensional immunohistochemical staging system based on vasculature, as defined by blood vessel density (CD31), and cell proliferation (defined by Ki67 positivity), 7 days post wounding. Finite element modelling was used to predict wound surface deformation under the dressing modes, and cross sections of in situ fixed tissues were used to measure actual microstrain. Results: The foam-wound interface of the Vacuum Assisted Closure device causes significant wound strains (60%), producing deformation at the single-cell level that leads to a profound upregulation of cell proliferation (4-fold) and angiogenesis (2.2-fold) compared to OD-treated wounds. Polyurethane foam exposure itself causes a rather unspecific angiogenic response (Foamc, 2-fold; Foam, 2.2-fold) without changing the cell proliferation rate of the wound bed. Suction alone, without a specific interface, has no effect on the measured parameters, showing results similar to untreated wounds. A perforated silicone interface caused significantly lower microdeformation of the wound bed, correlating with the changes in the wound tissue.
Conclusion: The Vacuum Assisted Closure device induces significant tissue growth in diabetic wounds. The wound-foam interface under suction causes profound microdeformation that stimulates tissue growth through angiogenesis and cell proliferation. It needs to be taken into consideration that, in the clinical setting, different wound types may profit from different elements of this suction device.
Abstract:
Electrical Impedance Tomography (EIT) is an imaging method which enables a volume conductivity map of a subject to be produced from multiple impedance measurements. It has the potential to become a portable non-invasive imaging technique of particular use in imaging brain function. Accurate numerical forward models may be used to improve image reconstruction but, until now, have employed an assumption of isotropic tissue conductivity. This may be expected to introduce inaccuracy, as body tissues, especially those such as white matter and the skull in head imaging, are highly anisotropic. The purpose of this study was, for the first time, to develop a method for incorporating anisotropy into a numerical forward model for EIT of the head and to assess the resulting improvement in image quality for linear reconstruction on one example of the human head. A realistic Finite Element Model (FEM) of an adult human head, with segments for the scalp, skull, CSF and brain, was produced from a structural MRI. Anisotropy of the brain was estimated from a diffusion tensor MRI of the same subject, and anisotropy of the skull was approximated from the structural information. A method for incorporating anisotropy in the forward model and using it in image reconstruction was produced. The improvement in reconstructed image quality was assessed in computer simulation by producing forward data and then performing linear reconstruction using a sensitivity matrix approach. The mean boundary data difference between anisotropic and isotropic forward models for a reference conductivity was 50%. Use of the correct anisotropic FEM in image reconstruction, as opposed to an isotropic one, corrected an error of 24 mm in imaging a 10% conductivity decrease located in the hippocampus, improved localisation by 4-17 mm for conductivity changes deep in the brain and due to epilepsy, and, overall, led to a substantial improvement in image quality.
This suggests that incorporation of anisotropy in numerical models used for image reconstruction is likely to improve EIT image quality.
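The sensitivity matrix approach amounts to a linearised inverse problem: boundary voltage changes satisfy dv ≈ S dσ. A hedged sketch using Tikhonov-regularised least squares (the matrix S here is random stand-in data, not an actual head model):

```python
import numpy as np

# Linearised EIT reconstruction: dv = S @ dsigma, solved by Tikhonov
# regularisation  dsigma = (S^T S + alpha I)^-1 S^T dv.
rng = np.random.default_rng(0)
S = rng.standard_normal((64, 200))   # 64 boundary measurements, 200 elements
true_dsigma = np.zeros(200)
true_dsigma[50] = -0.1               # a 10% conductivity decrease in one element
dv = S @ true_dsigma                 # simulated forward (boundary) data

alpha = 1e-3                         # regularisation parameter
dsigma = np.linalg.solve(S.T @ S + alpha * np.eye(200), S.T @ dv)
# the reconstructed conductivity change is largest (most negative) at the
# perturbed element, though smoothed by the regularisation
```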
Abstract:
By the work of H. Cartan, it is well known that one can find elements of arbitrarily high torsion in the integral (co)homology groups of an Eilenberg-MacLane space K(G,n), where G is a non-trivial abelian group and n>1.
The main goal of this work is to extend this result to H-spaces having more than one non-trivial homotopy group. In order to have an accurate hold on H. Cartan's result, we start by studying the duality between homology and cohomology of 2-local Eilenberg-MacLane spaces of finite type. This leads us to some improvements of H. Cartan's methods in this particular case. Our main result can be stated as follows. Let X be an H-space with two non-vanishing finite 2-torsion homotopy groups. Then X does not admit any exponent for its reduced integral graded (co)homology group. We construct a wide class of examples for which this result is a simple consequence of a topological feature, namely the existence of a weak retract X → K(G,n) for some abelian group G and n>1. We also generalize our main result to more complicated stable two-stage Postnikov systems, using the Eilenberg-Moore spectral sequence and analytic methods involving Betti numbers and their asymptotic behaviour. Finally, we investigate some guesses on the non-existence of homology exponents for finite Postnikov towers. We conjecture that Postnikov pieces do not admit any (co)homology exponent. This work also includes the presentation of the "Eilenberg-MacLane machine", a C++ program designed to compute explicitly all integral homology groups of Eilenberg-MacLane spaces. <br/><br/>It is always difficult for a mathematician to talk about his work. The difficulty lies in the fact that the objects he studies are abstract. One rarely runs into a vector space, an abelian category or a Laplace transform on a street corner! However, even if mathematical objects are hard for a non-mathematician to grasp, the methods used to study them are essentially the same as those used in the other scientific disciplines. Complex objects are dissected into simpler components that are easier to study.
One draws up a list of the properties of mathematical objects, then classifies them into families of objects sharing a common feature. One looks for different, but equivalent, ways of formulating a problem. And so on. My work belongs to the mathematical field of algebraic topology. The ultimate goal of this discipline is to classify all topological spaces by making use of algebra. This activity is comparable to that of an ornithologist (the topologist) who would study birds (the topological spaces) with, say, a pair of binoculars (algebra). If he sees a small, tree-dwelling, singing, nest-building bird, with four-toed feet, three toes pointing forward and one, bearing a strong claw, pointing backward, then he will conclude with certainty that it is a passerine. It will still remain to determine whether it is a sparrow, a blackbird or a nightingale. Consider a few examples of topological spaces: a) a hollow cube, b) a sphere and c) a hollow torus (i.e. an inner tube). While any ordinary person sees three different figures here, the topologist sees only two! From his point of view, the cube and the sphere are not different, since they are homeomorphic: one can be transformed into the other continuously (it would suffice to inflate the cube to obtain the sphere). By contrast, the sphere and the torus are not homeomorphic: knead the sphere in every possible way (without tearing it) and you will never obtain the torus. There are infinitely many topological spaces and, contrary to what one might naively be tempted to believe, determining whether two of them are homeomorphic is in general very difficult. To try to solve this problem, topologists had the idea of bringing algebra into their reasoning. This was the birth of homotopy theory.
Following a very specific recipe, one associates to every topological space an infinite collection of what algebraists call groups. The groups thus obtained are called the homotopy groups of the topological space. Mathematicians began by showing that two topological spaces that are homeomorphic (for example the cube and the sphere) have the same homotopy groups. One then speaks of invariants (homotopy groups are indeed invariant with respect to homeomorphic topological spaces). Consequently, two topological spaces that do not have the same homotopy groups cannot possibly be homeomorphic. This is an excellent way of classifying topological spaces (think of the ornithologist observing the birds' feet to determine whether or not he is dealing with a passerine). My work concerns topological spaces that have only a finite number of non-trivial homotopy groups. Such spaces are called finite Postnikov towers. We study their integral cohomology groups, another family of invariants, like the homotopy groups. The size of a cohomology group is, in a certain sense, measured by the notion of exponent; thus a cohomology group admitting an exponent is relatively small. One of the main results of this work concerns the size of the cohomology groups of finite Postnikov towers. It is the following theorem: a 1-connected, 2-local H-space of finite type that has only one or two non-trivial homotopy groups admits no exponent for its reduced integral graded cohomology group. If one had to interpret this result qualitatively, one could say that the smaller a space is from the point of view of cohomology (i.e. if it admits a cohomological exponent), the more interesting it is from the point of view of homotopy (i.e. it will have more than two non-trivial homotopy groups).
It emerges from my work that such spaces are very interesting in the sense that they can have infinitely many non-trivial homotopy groups. Jean-Pierre Serre, Fields medallist in 1954, showed that all spheres of dimension >1 have infinitely many non-trivial homotopy groups. From spaces with a cohomological exponent to spheres, there is but one step to take...
Abstract:
Palinspastic reconstructions offer an ideal framework for geological, geographical, oceanographic and climate studies. As historians of the Earth, "reconstructers" try to decipher its past. Since they have known that continents move, geologists have been trying to retrieve their distribution through the ages.
If Wegener's view of continent motions was revolutionary at the beginning of the 20th century, we have known since the early 1960s that continents do not drift aimlessly in the oceanic realm but are included in a larger set comprising, all at once, the oceanic and the continental crust: the tectonic plates. Unfortunately, mainly due to technical and historical issues, this idea still does not receive a sufficient echo among the reconstruction community. However, we are intimately convinced that, by applying specific methods and principles, we can escape the traditional "Wegenerian" point of view to, at last, reach real plate tectonics. The main aim of this study is to defend this point of view by exposing, with all necessary details, our methods and tools. Starting with the paleomagnetic and paleogeographic data classically used in reconstruction studies, we developed a modern methodology placing the plates and their kinematics at the centre of the issue. Using assemblies of continents (referred to as "key assemblies") as anchors distributed along the whole scope of our study (ranging from Eocene to Cambrian time), we develop geodynamic scenarios leading from one to the next, from the past to the present. In between, lithospheric plates are progressively reconstructed by adding/removing oceanic material (symbolized by synthetic isochrons) to major continents. Except during collisions, plates are moved as single rigid entities. The only evolving elements are the plate boundaries, which are preserved, follow a consistent geodynamic evolution through time, and form an interconnected network through space. This "dynamic plate boundaries" approach integrates plate buoyancy factors, ocean spreading rates, subsidence patterns, stratigraphic and paleobiogeographic data, as well as major tectonic and magmatic events. It offers good control on plate kinematics and provides severe constraints for the model.
This multi-source approach requires efficient data management. Prior to this study, the critical mass of necessary data had become a scarcely surmountable obstacle. GIS (Geographic Information Systems) and geodatabases are modern informatics tools specifically devoted to storing, analyzing and managing spatially referenced data and their attributes. By developing the PaleoDyn database in ArcGIS we converted the mass of scattered data offered by the geological record into valuable geodynamic information easily accessible for the creation of reconstructions. At the same time, by programming specific tools, we both facilitated the reconstruction work (task automation) and enhanced the model (by greatly increasing the kinematic control of plate motions thanks to plate velocity models). Based on the 340 newly defined terranes, we developed a revised set of 35 reconstructions, each associated with its own velocity model. Using this unique dataset we are now able to tackle major issues of modern geology, such as global sea-level variations and climate changes. We started by studying one of the major unsolved issues of modern plate tectonics: the driving mechanism of plate motions. We observed that, all along the Earth's history, plate rotation poles (describing plate motions across the Earth's surface) tend to follow a linear distribution along a band going from the Northern Pacific through Northern South America, the Central Atlantic, Northern Africa and Central Asia up to Japan. Basically, this signifies that plates tend to escape this median plane. In the absence of an unidentified methodological bias, we interpreted this as the potential secular influence of the Moon on plate motions. The oceanic realms are the cornerstone of our model, and we attached particular interest to reconstructing them in great detail. In this model, the oceanic crust is preserved from one reconstruction to the next.
The crustal material is symbolised by synthetic isochrons whose ages we know. We also reconstruct the margins (active or passive), mid-ocean ridges and intra-oceanic subductions. Using this detailed oceanic dataset, we developed unique 3-D bathymetric models offering better precision than all previously existing ones.
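Rigid plate motions on a sphere are rotations about Euler poles, which is presumably how reconstructions of this kind move plates between stages. A small sketch (illustrative numbers, not taken from the reconstruction set) using Rodrigues' rotation formula:

```python
import numpy as np

def rotate_about_pole(point, pole, angle_deg):
    # Rotate a point on the unit sphere about an Euler pole (both given as
    # 3-D Cartesian vectors) by angle_deg, using Rodrigues' formula:
    #   v' = v cos a + (k x v) sin a + k (k . v)(1 - cos a)
    p = np.asarray(point, float)
    k = np.asarray(pole, float)
    k = k / np.linalg.norm(k)
    a = np.radians(angle_deg)
    return (p * np.cos(a)
            + np.cross(k, p) * np.sin(a)
            + k * np.dot(k, p) * (1 - np.cos(a)))

# A point on the equator rotated 90 degrees about the north pole moves a
# quarter turn along the equator; the rotation preserves its radius.
moved = rotate_about_pole([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], 90.0)
```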
Abstract:
Over 70% of the total costs of an end product are consequences of decisions made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and if the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems. An appropriate algorithm, therefore, must be selected individually for each optimisation situation. Modelling is the most time-consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and computer program which reduce the modelling and optimisation time for structural design. The program needed an optimisation algorithm that is suitable for various engineering design problems. Because finite element modelling is commonly used in the design of steel and metal structures, the interaction between a finite element tool and the optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed. The programs combine an optimisation problem modelling tool and an FE-modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and of the steel structures of flat and ridge roofs. This thesis demonstrates that the time-consuming modelling phase is significantly reduced, modelling errors are reduced, and the results are more reliable.
A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, is tested with optimisation cases of steel structures that include hundreds of constraints. The tested algorithm can be used nearly as a black box, without parameter settings or penalty factors for the constraints.
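A penalty-free constraint-handling rule of the kind described could resemble the feasibility rules used in constrained evolutionary optimisation (an illustrative analogue, not necessarily the thesis' actual rule): feasible candidates beat infeasible ones, feasible candidates compare by objective value, and infeasible ones compare by total constraint violation, with no weight or penalty factors anywhere:

```python
def violation(constraints):
    # Total constraint violation; a constraint g_i(x) <= 0 is feasible.
    return sum(max(0.0, g) for g in constraints)

def better(a, b):
    """Tournament selection without penalty factors.
    Each candidate is (objective_value, [constraint_values]),
    minimisation assumed. Returns True if a wins against b."""
    fa, ga = a[0], violation(a[1])
    fb, gb = b[0], violation(b[1])
    if ga == 0.0 and gb == 0.0:
        return fa < fb          # both feasible: smaller objective wins
    if ga == 0.0 or gb == 0.0:
        return ga == 0.0        # feasible always beats infeasible
    return ga < gb              # both infeasible: smaller violation wins

# e.g. a heavier but feasible structure beats a lighter infeasible one
winner = better((120.0, [-0.1, -0.2]), (100.0, [0.3, -0.2]))
```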
Abstract:
In this study, a model for the unsteady dynamic behaviour of a once-through counter-flow boiler that uses an organic working fluid is presented. The boiler is a compact waste-heat boiler without a furnace, and it has a preheater, a vaporiser and a superheater. The relative lengths of the boiler parts vary with the operating conditions, since they are all parts of a single tube. The present research is part of a study on the unsteady dynamics of an organic Rankine cycle power plant, and it will become part of a dynamic process model. The boiler model is presented using a selected example case that uses toluene as the process fluid and flue gas from natural gas combustion as the heat source. The dynamic behaviour of the boiler means transition from the steady initial state towards another steady state that corresponds to the changed process conditions. The chosen solution method was to find, using the finite difference method, the pressure of the process fluid at which the mass of the process fluid in the boiler equals the mass calculated from the mass flows into and out of the boiler during a time step. A special method for fast calculation of the thermal properties was used, because most of the calculation time is spent in calculating the fluid properties. The boiler was divided into elements, and the values of the thermodynamic properties and mass flows were calculated at the nodes that connect the elements. Dynamic behaviour was limited to the process fluid and the tube wall; the heat source was regarded as steady. The elements that connect the preheater to the vaporiser and the vaporiser to the superheater were treated in a special way that allows a flexible change from one part to the other. The model consists of the calculation of the steady-state initial distribution of the variables at the nodes, and of the calculation of these nodal values in a dynamic state.
The initial state of the boiler was obtained from a steady process model that is not part of the boiler model. The known boundary values that may vary during the dynamic calculation were the inlet temperatures and mass flow rates of both the heat source and the process fluid. A brief examination of the oscillation around a steady state, the so-called Ledinegg instability, was carried out. This examination showed that the pressure drop in the boiler is a third-degree polynomial of the mass flow rate, and that the stability criterion is a second-degree polynomial of the enthalpy change in the preheater. The numerical examination showed that oscillations did not occur in the example case. The dynamic boiler model was analysed for linear and step changes of the entering fluid temperatures and flow rates. The problem in verifying the correctness of the achieved results was that there was no possibility to compare them with measurements. Hence the only way was to determine whether the obtained results were intuitively reasonable and whether they changed logically when the boundary conditions were changed. The numerical stability was checked in a test run with no change in the input values; the differences compared with the initial values were so small that the effects of numerical oscillations were negligible. The heat-source-side tests showed that the model gives results that are logical in the directions of the changes, and the order of magnitude of the timescale of the changes is also as expected. The tests on the process fluid side showed that the model gives reasonable results both for temperature changes that cause small alterations in the process state and for mass flow rate changes causing very great alterations. The test runs showed that the dynamic model has no problems in calculating cases in which the temperature of the entering heat source suddenly drops below that of the tube wall or the process fluid.
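The core solution step, finding the pressure at which the fluid mass held in the boiler matches the mass balance over a time step, can be sketched as a one-dimensional root search. The density model below is a made-up monotone stand-in for the fast property routines mentioned in the abstract, not the thesis code:

```python
def mass_in_boiler(p, volume=0.5):
    # Toy density model rho(p) = 2 + 0.01 p, monotonically increasing in p,
    # standing in for the real fluid property calculation.
    density = 2.0 + 0.01 * p
    return density * volume

def solve_pressure(target_mass, p_lo=0.0, p_hi=1000.0, tol=1e-10):
    # Bisection works because mass_in_boiler is increasing in p.
    while p_hi - p_lo > tol:
        p_mid = 0.5 * (p_lo + p_hi)
        if mass_in_boiler(p_mid) < target_mass:
            p_lo = p_mid
        else:
            p_hi = p_mid
    return 0.5 * (p_lo + p_hi)

# one time step: held mass must equal previous mass plus net inflow
m_prev, m_dot_in, m_dot_out, dt = 2.0, 0.8, 0.5, 1.0
target = m_prev + (m_dot_in - m_dot_out) * dt   # 2.3 (mass units)
p = solve_pressure(target)
```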
Abstract:
Finite Element Analysis makes it possible to evaluate a ceramic design as a function of its typology and of the mechanical properties of the material. Its application allows the technological factors that may have conditioned the change of a ceramic typology to be taken into account. The analysis is illustrated with the first Roman amphora types produced in present-day Catalonia (Dressel 1, Tarraconense 1 and Pascual 1).
Abstract:
Partial-thickness tears of the supraspinatus tendon frequently occur at its insertion on the greater tubercle of the humerus, causing pain and reduced strength and range of motion. The goal of this work was to quantify the loss of loading capacity due to tendon tears at the insertion area. A finite element model of the supraspinatus tendon was developed using in vivo magnetic resonance imaging data. The tendon was represented by an anisotropic hyperelastic constitutive law identified from experimental measurements. A failure criterion was proposed and calibrated with experimental data. A partial-thickness tear was gradually increased, starting from the deep articular-sided fibres. For different values of tendon tear thickness, the tendon was mechanically loaded up to failure. The numerical model predicted a loss in loading capacity of the tendon as the tear thickness progressed. Tendon failure was more likely when the tear exceeded 20% of the thickness. The predictions of the model were consistent with experimental studies. Partial-thickness tears below 40% are sufficiently stable to withstand physiotherapeutic exercises; above a 60% tear, surgery should be considered to restore shoulder strength.
Abstract:
A continuum damage model for the prediction of damage onset and structural collapse of structures manufactured from fiber-reinforced plastic laminates is proposed. The principal damage mechanisms occurring in the longitudinal and transverse directions of a ply are represented by a damage tensor that is fixed in space. The effects of crack closure under load reversal are taken into account using damage variables established as a function of the sign of the components of the stress tensor. Damage activation functions based on the LaRC04 failure criteria are used to predict the different damage mechanisms occurring at the ply level. The constitutive damage model is implemented in a finite element code. The objectivity of the numerical model is assured by regularizing the dissipated energy at a material point using Bazant's Crack Band Model. To verify the accuracy of the approach, analyses of coupon specimens were performed, and the numerical predictions were compared with experimental data.
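Crack band regularisation keeps the dissipated energy mesh-objective by tying the softening law to the element size. A minimal sketch with linear softening (illustrative numbers; the actual model uses the LaRC04-based activation functions rather than this simple law):

```python
def failure_strain(strength, G_c, h):
    # Linear softening: the energy dissipated per unit crack area is
    # G_c = h * (1/2) * strength * eps_f, so the strain at full damage
    # eps_f is scaled with the element (crack band) size h.
    return 2.0 * G_c / (strength * h)

def dissipated_energy_per_area(strength, eps_f, h):
    # Area under the linear softening curve, times the band width h.
    return 0.5 * strength * eps_f * h

# two meshes with different element sizes (units illustrative:
# strength in MPa, G_c in N/mm, h in mm)
eps_f_small = failure_strain(60.0, 0.2, 1.0)
eps_f_large = failure_strain(60.0, 0.2, 4.0)
# the coarser mesh gets a steeper softening law, but both dissipate the
# same fracture energy per unit crack area
```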
Abstract:
Currently, the standards that deal with the determination of the stiffness and strength properties of structural round timber elements do not take into consideration, in their calculations and mathematical models, the influence of the irregularities existing in the geometry of these elements. The objective of this study is to determine the effective value of the modulus of longitudinal elasticity for structural round timber pieces of Eucalyptus citriodora by an optimization technique allied to the Inverse Analysis Method, the Finite Element Method and the Least Squares Method.
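The combination of inverse analysis and least squares can be illustrated on a deliberately simplified linear model, where each measured deflection is proportional to 1/E, so the least-squares optimum has a closed form. All coefficients and measurements below are invented; the thesis identifies E against a finite element model instead:

```python
def fit_modulus(coeffs, measured):
    # For a linear-elastic model the predicted deflection at load case i is
    # w_i = c_i / E. With x = 1/E, minimising sum_i (c_i x - w_i)^2 gives
    # x = sum(c_i w_i) / sum(c_i^2), hence E = sum(c_i^2) / sum(c_i w_i).
    num = sum(c * w for c, w in zip(coeffs, measured))
    den = sum(c * c for c in coeffs)
    return den / num

E_true = 12000.0                          # hypothetical modulus, MPa
coeffs = [1.2e6, 2.4e6, 3.6e6]            # c_i = load/geometry factors
measured = [c / E_true for c in coeffs]   # noise-free synthetic deflections
E_est = fit_modulus(coeffs, measured)     # recovers E_true on clean data
```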
Resumo:
The bioavailability of metals and their potential for environmental pollution depends not simply on total concentrations, but is to a great extent determined by their chemical form. Consequently, knowledge of aqueous metal species is essential in investigating potential metal toxicity and mobility. The overall aim of this thesis is, thus, to determine the species of major and trace elements and the size distribution among the different forms (e.g. ions, molecules and mineral particles) in selected metal-enriched Boreal river and estuarine systems by utilising filtration techniques and geochemical modelling. On the basis of the spatial physicochemical patterns found, the fractionation and complexation processes of elements (mainly related to input of humic matter and pH-change) were examined. Dissolved (<1 kDa), colloidal (1 kDa-0.45 μm) and particulate (>0.45 μm) size fractions of sulfate, organic carbon (OC) and 44 metals/metalloids were investigated in the extremely acidic Vörå River system and its estuary in W Finland, and in four river systems in SW Finland (Sirppujoki, Laajoki, Mynäjoki and Paimionjoki), largely affected by soil erosion and acid sulfate (AS) soils. In addition, geochemical modelling was used to predict the formation of free ions and complexes in these investigated waters. One of the most important findings of this study is that the very large amounts of metals known to be released from AS soils (including Al, Ca, Cd, Co, Cu, Mg, Mn, Na, Ni, Si, U and the lanthanoids) occur and can prevail mainly in toxic forms throughout acidic river systems; as free ions and/or sulfate-complexes. This has serious effects on the biota and especially dissolved Al is expected to have acute effects on fish and other organisms, but also other potentially toxic dissolved elements (e.g. Cd, Cu, Mn and Ni) can have fatal effects on the biota in these environments. 
In upstream areas that are generally relatively forested (higher pH and contents of OC) fewer bioavailable elements (including Al, Cu, Ni and U) may be found due to complexation with the more abundantly occurring colloidal OC. In the rivers in SW Finland total metal concentrations were relatively high, but most of the elements occurred largely in a colloidal or particulate form and even elements expected to be very soluble (Ca, K, Mg, Na and Sr) occurred to a large extent in colloidal form. According to geochemical modelling, these patterns may only to a limited extent be explained by in-stream metal complexation/adsorption. Instead there were strong indications that the high metal concentrations and dominant solid fractions were largely caused by erosion of metal bearing phyllosilicates. A strong influence of AS soils, known to exist in the catchment, could be clearly distinguished in the Sirppujoki River as it had very high concentrations of a metal sequence typical of AS soils in a dissolved form (Ba, Br, Ca, Cd, Co, K, Mg, Mn, Na, Ni, Rb and Sr). In the Paimionjoki River, metal concentrations (including Ba, Cs, Fe, Hf, Pb, Rb, Si, Th, Ti, Tl and V; not typical of AS soils in the area) were high, but it was found that the main cause of this was erosion of metal bearing phyllosilicates and thus these metals occurred dominantly in less toxic colloidal and particulate fractions. In the two nearby rivers (Laajoki and Mynäjoki) there was influence of AS soils, but it was largely masked by eroded phyllosilicates. Consequently, rivers draining clay plains sensitive to erosion, like those in SW Finland, have generally high background metal concentrations due to erosion. Thus, relying on only semi-dissolved (<0.45 μm) concentrations obtained in routine monitoring, or geochemical modelling based on such data, can lead to a great overestimation of the water toxicity in this environment. 
The potentially toxic elements that are of concern in AS soil areas will ultimately be precipitated in the recipient estuary or sea, where the acidic metal-rich river water will gradually be diluted/neutralised with brackish seawater. Along such a rising pH gradient, Al, Cu and U will precipitate first, together with organic matter, closest to the river mouth. Manganese is relatively persistent in solution and thus precipitates further down the estuary as Mn oxides, together with elements such as Ba, Cd, Co, Cu and Ni. Iron oxides, on the contrary, are not important scavengers of metals in the estuary; they are predicted to be associated only with As and PO4.
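The operational size fractionation defined in the thesis (dissolved < 1 kDa, colloidal 1 kDa to 0.45 um, particulate > 0.45 um) is a simple mass balance over the filtrate concentrations. A minimal sketch, with illustrative function and argument names:

```python
def size_fractions(total, below_045um, below_1kDa):
    """Split a total element concentration into the operational size
    fractions used in the thesis (all inputs in the same units):

    dissolved   = <1 kDa filtrate
    colloidal   = (<0.45 um filtrate) - (<1 kDa filtrate)
    particulate = total - (<0.45 um filtrate)
    """
    return {
        "dissolved": below_1kDa,
        "colloidal": below_045um - below_1kDa,
        "particulate": total - below_045um,
    }
```

This also illustrates the caveat in the text: routine monitoring of the semi-dissolved (<0.45 um) fraction lumps the dissolved and colloidal pools together, which can greatly overestimate toxicity where colloids dominate.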
Resumo:
Carbon Fibre Reinforced Carbon (CFRC) composites are finding increasing application owing to their high strength and Young's modulus at high temperatures in inert atmospheres. Although much work has been done on processing and on structure-property relationships, few studies have addressed the modelling of mechanical properties. This work is divided into two parts. In the first part, the mechanical properties of two bi-directional composites were modelled using a model based on Bernoulli-Euler theory for symmetric laminated beams. In the second part, acoustic emission (AE) was used as an auxiliary technique for monitoring the failure process of the composites. Differences in fracture behaviour are reflected in the AE patterns.
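The Bernoulli-Euler treatment of a symmetric laminated beam can be sketched via the bending stiffness D11 summed over plies. This is a generic classical-lamination sketch, not the paper's specific model; Poisson coupling is neglected (beam idealization), so the ply modulus E_k stands in for the reduced stiffness Q11:

```python
def flexural_modulus(moduli, thicknesses):
    """Effective flexural modulus of a symmetric laminated beam:
    E_f = 12 * D11 / h^3, with
    D11 = (1/3) * sum_k E_k * (z_k^3 - z_{k-1}^3),
    where z_k are ply interface coordinates measured from the midplane.
    `moduli` and `thicknesses` list the plies bottom-to-top.
    """
    h = sum(thicknesses)
    z = -h / 2.0
    D11 = 0.0
    for E_k, t_k in zip(moduli, thicknesses):
        z_top = z + t_k
        D11 += E_k * (z_top**3 - z**3) / 3.0
        z = z_top
    return 12.0 * D11 / h**3
```

Because the cubic weighting favours the outer plies, a laminate with stiff faces bends more stiffly than its thickness-weighted average modulus would suggest.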
Resumo:
A three-dimensional nonlinear viscoelastic constitutive model for solid propellant is developed. In earlier work, the authors developed an isotropic constitutive model and verified it for the one-dimensional case. In the present work, the validity of the model is extended to three-dimensional cases. Large deformation, dewetting and cyclic loading effects are treated as the main sources of nonlinear behavior of the solid propellant. A viscoelastic dewetting criterion is used, and the softening of the solid propellant due to dewetting is treated by a decrease in modulus. The nonlinearities during cyclic loading are accounted for by functions of the octahedral shear strain measure. The constitutive equation is implemented in a finite element code for the analysis of propellant grains. The commercial finite element package ABAQUS is used for the analysis, and the model is introduced into the code through a user subroutine. The model is evaluated under different loading conditions and the predicted values are in good agreement with the measured ones. The resulting model is applied to analyze a solid propellant grain under a thermal cycling load.
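The modulus-decrease treatment of dewetting can be illustrated with a one-dimensional Prony-series relaxation modulus. This is a schematic stand-in under stated assumptions (scalar damage factor, 1-D form), not the paper's three-dimensional formulation:

```python
import math


def relaxation_modulus(t, E_inf, prony, damage=0.0):
    """One-dimensional Prony-series relaxation modulus

        E(t) = E_inf + sum_i E_i * exp(-t / tau_i),

    softened by a scalar dewetting factor (1 - damage) as a schematic
    illustration of treating dewetting via a modulus decrease.
    `prony` is a list of (E_i, tau_i) pairs.
    """
    E = E_inf + sum(E_i * math.exp(-t / tau_i) for E_i, tau_i in prony)
    return (1.0 - damage) * E
```

In the actual model the softening is driven by a viscoelastic dewetting criterion rather than prescribed directly, and cyclic nonlinearity enters through functions of the octahedral shear strain.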
Resumo:
The main objective of this work is to analyze the importance of the gas-solid interface transfer of the kinetic energy of turbulent motion for the accuracy of prediction of the fluid dynamics of Circulating Fluidized Bed (CFB) reactors. CFB reactors are used in a variety of industrial applications related to combustion, incineration and catalytic cracking. In this work a two-dimensional fluid dynamic model for gas-particle flow has been used to compute the porosity, pressure and velocity fields of both phases in 2-D axisymmetric cylindrical coordinates. The fluid dynamic model is based on the two-fluid model approach, in which both phases are considered to be continuous and fully interpenetrating. CFB processes are essentially turbulent. The effective stress on each phase is modelled as that of a Newtonian fluid, where the effective gas viscosity is calculated from the standard k-epsilon turbulence model and the transport coefficients of the particulate phase are calculated from the kinetic theory of granular flow (KTGF). This work shows that the turbulence transfer between the phases is very important for a better representation of the fluid dynamics of CFB reactors, especially for systems with internal recirculation and high gradients of particle concentration. Two systems with different characteristics were analyzed, and the results were compared with experimental data available in the literature. The results were obtained with a computer code developed by the authors. The finite volume method with a collocated grid, the hybrid interpolation scheme, the false time step strategy and the SIMPLEC (Semi-Implicit Method for Pressure-Linked Equations - Consistent) algorithm were used to obtain the numerical solution.
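Two-fluid CFB models of this kind require a closure for the gas-solid momentum exchange; the abstract does not name the one used, so the widely used Gidaspow switch between the Ergun and Wen-Yu correlations is sketched here purely as an example of such a closure (the paper's contribution, the interphase *turbulence* kinetic-energy transfer, comes on top of terms like this):

```python
def gidaspow_beta(eps_g, rho_g, mu_g, d_p, u_rel):
    """Gas-solid momentum exchange coefficient (Gidaspow closure).

    eps_g : gas volume fraction       rho_g : gas density
    mu_g  : gas dynamic viscosity     d_p   : particle diameter
    u_rel : gas-particle slip velocity magnitude
    """
    eps_s = 1.0 - eps_g
    if eps_g <= 0.8:
        # dense regime: Ergun equation
        return (150.0 * eps_s**2 * mu_g / (eps_g * d_p**2)
                + 1.75 * eps_s * rho_g * abs(u_rel) / d_p)
    # dilute regime: Wen & Yu correlation with Schiller-Naumann drag
    Re = eps_g * rho_g * abs(u_rel) * d_p / mu_g
    Cd = 24.0 / Re * (1.0 + 0.15 * Re**0.687) if Re < 1000.0 else 0.44
    return 0.75 * Cd * eps_s * eps_g * rho_g * abs(u_rel) / d_p * eps_g**-2.65
```

The strong increase of the exchange coefficient in dense regions is one reason high gradients of particle concentration make the interphase coupling terms so influential.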