977 results for Mindlin Pseudospectral Plate Element, Chebyshev Polynomial, Integration Scheme


Relevance:

30.00%

Publisher:

Abstract:

[EN] We present a new strategy for constructing spline spaces over hierarchical T-meshes with quad- and octree subdivision schemes. The proposed technique includes some simple rules for inferring local knot vectors that define C²-continuous cubic tensor-product spline blending functions. Our conjecture is that these rules yield, for a given T-mesh, a set of linearly independent spline functions with the property that the spaces spanned by nested T-meshes are also nested, so that the functions can reproduce cubic polynomials. In order for the proposed rules to span spaces with these properties, the T-mesh need only satisfy the requirement of being a 0-balanced mesh...
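The local-knot-vector construction described above can be sketched with the classical Cox-de Boor recursion: a single cubic blending function is determined entirely by its five local knots. This is a minimal generic illustration, not the authors' hierarchical T-mesh implementation.

```python
def bspline_basis(knots, degree, t):
    """Evaluate the single B-spline of the given degree defined by the local
    knot vector `knots` (len(knots) == degree + 2) via Cox-de Boor recursion."""
    if degree == 0:
        return 1.0 if knots[0] <= t < knots[1] else 0.0
    left = right = 0.0
    if knots[degree] > knots[0]:
        left = (t - knots[0]) / (knots[degree] - knots[0]) \
               * bspline_basis(knots[:-1], degree - 1, t)
    if knots[degree + 1] > knots[1]:
        right = (knots[degree + 1] - t) / (knots[degree + 1] - knots[1]) \
                * bspline_basis(knots[1:], degree - 1, t)
    return left + right

# A uniform cubic B-spline on local knots [0, 1, 2, 3, 4] peaks at t = 2
# with value 2/3 and takes value 1/6 at t = 1 and t = 3.
```

Inferring one such local knot vector per blending function is exactly what makes the hierarchical construction local: each function is evaluated independently of the global mesh.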

Relevance:

30.00%

Publisher:

Abstract:

[EN] This paper presents a finite element method for pollutant transport with several pollutant sources. An Eulerian convection–diffusion–reaction model is used to simulate the pollutant dispersion. The discretization of the different sources allows the emissions to be imposed as boundary conditions, and the Eulerian description can deal with the coupling of several plumes. An adaptive stabilized finite element formulation, specifically Least-Squares, with Crank-Nicolson temporal integration is proposed to solve the problem. A splitting scheme is used to treat the transport and the reaction separately, and a mass-consistent model is used to compute the wind field of the problem…
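The splitting idea described above can be sketched on a 1D analogue: advance diffusion with Crank-Nicolson, then integrate a linear reaction term exactly. This is an illustrative toy, assuming homogeneous Dirichlet boundaries and a linear decay reaction, not the paper's stabilized 2D formulation.

```python
import numpy as np

def split_step(u, dt, dx, D, k):
    """One splitting step on a 1D grid with homogeneous Dirichlet boundaries:
    Crank-Nicolson for diffusion, then exact integration of du/dt = -k*u."""
    n = len(u)
    r = D * dt / (2.0 * dx ** 2)
    A = (1 + 2 * r) * np.eye(n)   # implicit half of Crank-Nicolson
    B = (1 - 2 * r) * np.eye(n)   # explicit half
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r
        B[i, i + 1] = B[i + 1, i] = r
    u = np.linalg.solve(A, B @ u)  # diffusion sub-step
    return u * np.exp(-k * dt)     # reaction sub-step (exact)
```

Each physical process is handled by the scheme best suited to it, which is the main appeal of splitting in the convection-diffusion-reaction setting.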

Relevance:

30.00%

Publisher:

Abstract:

Stress recovery techniques have been an active research topic since 1987, when Zienkiewicz and Zhu proposed a procedure called Superconvergent Patch Recovery (SPR). This procedure is a least-squares fit of stresses at superconvergent points over patches of elements, and it leads to enhanced stress fields that can be used for evaluating finite element discretization errors. In subsequent years, numerous improved forms of this procedure were proposed in attempts to add equilibrium constraints and improve its performance. Later, another superconvergent technique, called Recovery by Equilibrium in Patches (REP), was proposed; here the idea is to impose equilibrium in a weak form over patches and to solve the resulting equations by a least-squares scheme. More recently, another procedure, based on the minimization of complementary energy and called Recovery by Compatibility in Patches (RCP), has been proposed. This procedure can in many ways be seen as the dual form of REP, as it essentially imposes compatibility in a weak form among a set of self-equilibrated stress fields. In this thesis a new insight into RCP is presented, and the procedure is improved with the aim of obtaining convergent second-order derivatives of the stress resultants. To achieve this result, two different strategies and their combination have been tested: the first is to consider larger patches, in the spirit of what is proposed in [4]; the second is to perform a second recovery on the recovered stresses. Numerical tests in plane stress conditions are presented, showing the effectiveness of these procedures. Afterwards, a new recovery technique called Least-Squares Displacements (LSD) is introduced. This new procedure is based on a least-squares interpolation of the nodal displacements resulting from the finite element solution.
In fact, it has been observed that most of the error affecting the stress resultants is introduced when the shape functions are differentiated to obtain strain components from displacements. The procedure proves to be ultraconvergent and is extremely cost-effective, since it requires as input only the nodal displacements coming directly from the finite element solution, avoiding any other post-processing otherwise needed to obtain stress resultants by the traditional method. Numerical tests in plane stress conditions are then presented, showing that the procedure is ultraconvergent and leads to convergent first- and second-order derivatives of the stress resultants. Finally, the reconstruction of transverse stress profiles for laminated plates, using First-order Shear Deformation Theory and the three-dimensional equilibrium equations, is presented. The accuracy of this reconstruction depends on the accuracy of the first and second derivatives of the stress resultants, which is not guaranteed by most available low-order plate finite elements. The RCP and LSD procedures are then used to compute convergent first- and second-order derivatives of the stress resultants, ensuring convergence of the reconstructed transverse shear and normal stress profiles, respectively. Numerical tests are presented and discussed, showing the effectiveness of both procedures.
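The least-squares patch fit at the heart of SPR-type recovery can be sketched as follows: fit a low-order polynomial to stress samples over a patch and re-evaluate it anywhere. This is a deliberately minimal 2D illustration with a linear basis; the actual procedures use superconvergent sampling points and richer polynomial bases.

```python
import numpy as np

def recover_at(points, stresses, x, y):
    """Least-squares fit of sigma ~ a0 + a1*x + a2*y over a patch of
    sampling points, then evaluation of the fit at (x, y)."""
    P = np.column_stack([np.ones(len(points)), points])   # [1, x, y] basis
    a, *_ = np.linalg.lstsq(P, np.asarray(stresses, float), rcond=None)
    return a[0] + a[1] * x + a[2] * y

# A linear stress field is reproduced exactly by the patch recovery
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sig = 1.0 + 2.0 * pts[:, 0] + 3.0 * pts[:, 1]
```

The recovered field is smoother than the raw element-wise stresses, which is what makes it usable as a reference solution in error estimation.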

Relevance:

30.00%

Publisher:

Abstract:

Resonance ionization mass spectrometry (RIMS) can be used both for spectroscopic studies of rare isotopes and for the ultra-trace detection of long-lived radioactive elements. The multi-step resonant excitation of atomic energy levels, followed by ionization with laser light, provides very high element selectivity and ionization efficiency. The subsequent mass-selective ion detection yields good isotope selectivity together with effective background suppression. An important component of the RIMS apparatus is a reliable, high-power laser system for the resonance ionization. Within this work, a system of three titanium-sapphire lasers pumped by a high-repetition-rate Nd:YAG laser was completed and brought into routine operation. The titanium-sapphire lasers deliver an average power of up to 3 W per laser over the tuning range of 730 - 880 nm with a linewidth of 2 - 3 GHz, and their wavelengths can be tuned under computer control. The ions produced by resonance ionization are then separated according to their mass in a time-of-flight mass spectrometer and detected with a channel plate detector. As a prerequisite for the isotope-selective ultra-trace analysis of plutonium, the isotope shifts of an efficient three-step excitation scheme for plutonium were determined with this laser system. The laser powers are sufficient for many-fold saturation of the first two excitation steps and for two-fold saturation of the third step. In addition, the ionization energies of Pu-239 and Pu-244 were determined in order to investigate their isotope dependence; within the achieved accuracy, the two ionization energies are equal, with a measured value of IP239-IP244 = 0.24(82) cm^-1. In efficiency measurements, the detection efficiency of the RIMS apparatus for plutonium was determined to be 10^-5.
Thanks to the good background suppression, this corresponds to a detection limit of 10^6 atoms for the measurement of a single plutonium isotope. Determining the isotope ratios of samples with a certified isotopic composition yielded good agreement between the measured values and the stated compositions. The RIMS apparatus was used to determine the content and isotopic composition of plutonium in seawater and dust samples. Based on the isotopic composition, it could be shown that in most samples the plutonium originated from the fallout of atmospheric nuclear weapons tests. Furthermore, plutonium was determined in urine samples. For these environmental samples the detection limits were 10^6 to 10^7 atoms of plutonium, two orders of magnitude lower than the detection limit for Pu-239 in alpha spectroscopy, the standard method for plutonium detection.

Relevance:

30.00%

Publisher:

Abstract:

An insertion of 1975 bp was localized in the tcdA gene of Clostridium difficile strain C34. The insertion, designated CdISt1, shows characteristic features of both group I introns and insertion elements. The intron-specific properties reside in the 5' part of the element, while two open reading frames with high homology to transposases of the IS605 family were found in the 3' part. Functional analyses demonstrated the splicing activity of the chimeric ribozyme. CdISt1 was detected in several copies in all C. difficile strains examined, but has not so far been found in other clostridial species.
In all cases examined, the integration site in C. difficile was an open reading frame. Until now, group I introns had never been described in bacterial open reading frames. It can be assumed that the chimeric architecture of the ribozyme permits integration into bacterial open reading frames: the group I intron part would be responsible for the splicing activity, while the IS element part would mediate the mobility of the genetic element. Within this dissertation, first experimental evidence was obtained that the chimeric ribozyme may participate in the evolution of clostridial proteins, from which its host C. difficile would benefit accordingly.

Relevance:

30.00%

Publisher:

Abstract:

Different tools have been used to set up and adapt the model for the fulfillment of the objective of this research.
1. The model. The base model is the Analytic Hierarchy Process (AHP), adapted with the aim of performing a benefit-cost analysis. The AHP, developed by Thomas Saaty, is a multicriteria decision-making technique which decomposes a complex problem into a hierarchy. It is used to derive ratio scales from both discrete and continuous paired comparisons in multilevel hierarchic structures. These comparisons may be taken from actual measurements or from a fundamental scale that reflects the relative strength of preferences and feelings.
2. Tools and methods.
2.1. The Expert Choice software. The software Expert Choice is a tool that allows each operator to easily implement the AHP model at every stage of the problem.
2.2. Personal interviews at the farms. For this research, the EMAS-certified farms of the Emilia Romagna region were identified; the information was provided by the EMAS centre in Wien. Personal interviews were carried out at each farm in order to obtain a complete and realistic judgment for each criterion of the hierarchy.
2.3. Questionnaire. A supporting questionnaire was also delivered and used for the interviews.
3. Elaboration of the data. After data collection, the data were elaborated with the support of the Expert Choice software.
4. Results of the analysis. The analysis (see other document for the figures) yields a series of numbers, each a fraction of unity, to be interpreted as the relative contribution of each element to the fulfillment of the corresponding objective.
Calculating the benefit/cost ratio for each alternative, the following is obtained. Alternative one, implement EMAS: benefits ratio 0.877, costs ratio 0.815, benefit/cost ratio 0.877/0.815 = 1.08. Alternative two, do not implement EMAS: benefits ratio 0.123, costs ratio 0.185, benefit/cost ratio 0.123/0.185 = 0.66. As stated above, the alternative with the highest ratio is the best solution for the organization; the research carried out and the model implemented therefore suggest that EMAS adoption is the best alternative for the agricultural sector. It has to be noted, however, that the ratio of 1.08 is a relatively low positive value. This shows the fragility of this conclusion and suggests a careful examination of the benefits and costs for each farm before adopting the scheme. On the other hand, the result should be taken into consideration by policy makers in order to strengthen their interventions regarding adoption of the scheme in the agricultural sector. According to the AHP elaboration of the judgments, the main considerations on benefits are the following: legal compliance appears to be the most important benefit for the agricultural sector, with a rank of 0.471; the next two most important benefits are improved internal organization (0.230) followed by competitive advantage (0.221), the latter mostly due to the sub-element improved image (0.743). Finally, even though incentives are not ranked among the most important elements, the financial ones seem to have been decisive in the decision-making process. On the costs side, external costs appear to be far more important than internal ones (0.857 versus 0.143), suggesting that EMAS costs for consultancy and verification remain the biggest obstacle, and the implementation of the EMS is the most challenging element among the internal costs (0.750).
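The arithmetic of the benefit/cost comparison above, together with Saaty's eigenvector prioritization, can be sketched as follows. The priority numbers are those reported in the abstract; the 2x2 pairwise comparison matrix is an invented illustration of how such priorities are derived.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Saaty's AHP priority vector: the principal eigenvector of a
    pairwise comparison matrix, normalized to sum to one."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, float))
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()   # normalization also fixes the eigenvector's sign

# Benefit/cost ratio of each alternative from the reported priorities
benefits = {"implement EMAS": 0.877, "do not implement": 0.123}
costs = {"implement EMAS": 0.815, "do not implement": 0.185}
bc = {k: benefits[k] / costs[k] for k in benefits}
```

For a consistent matrix such as `[[1, 3], [1/3, 1]]` ("three times preferred"), the priority vector is (0.75, 0.25), which is the kind of ranking the abstract reports for the individual criteria.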

Relevance:

30.00%

Publisher:

Abstract:

The margins of the Labrador Sea were intensively intruded by carbonate-rich silicate melts during the late Neoproterozoic. These melts formed at pressures of about 4-6 GPa (roughly 120-180 km depth) at the base of the continental mantle lithosphere. This magma generation is linked in time and space to continental extension processes that occurred on both sides of the opening Iapetus Ocean.

Relevance:

30.00%

Publisher:

Abstract:

In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the Energy-Transport models for semiconductors that are later simulated in 2D. In this class of models, the flow of charged particles, namely negatively charged electrons and so-called holes (quasi-particles of positive charge), as well as their energy distributions, is described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. The continuous discretization of the normal fluxes is the most important property of this discretization from the user's perspective. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators, and a comparison of different estimators is performed at that stage. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements.
For a model problem we show how this method can deliver promising results even when standard error estimators fail completely to reduce the error in an iterative mesh refinement process.
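The adaptive loop driven by a posteriori indicators can be sketched in 1D with fixed-fraction marking and bisection. This illustrates only the marking-and-refinement strategy under an assumed indicator vector, not the thesis's mixed-finite-element estimators.

```python
def refine_1d(nodes, indicators, fraction=0.3):
    """Bisect the elements carrying the largest a posteriori error
    indicators (fixed-fraction marking). `nodes` is a sorted list of
    mesh points; `indicators` has one entry per element."""
    n_mark = max(1, int(fraction * len(indicators)))
    marked = set(sorted(range(len(indicators)),
                        key=lambda i: -indicators[i])[:n_mark])
    out = []
    for i in range(len(nodes) - 1):
        out.append(nodes[i])
        if i in marked:                      # insert the element midpoint
            out.append(0.5 * (nodes[i] + nodes[i + 1]))
    out.append(nodes[-1])
    return out
```

In a full adaptive cycle this step alternates with solving and estimating until the (possibly goal-oriented) error measure falls below a tolerance.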

Relevance:

30.00%

Publisher:

Abstract:

Over the years the Differential Quadrature (DQ) method has distinguished itself by its high accuracy, straightforward implementation and general applicability to a variety of problems, and interest in the topic has grown considerably among researchers in recent years. DQ is essentially a generalization of the popular Gaussian Quadrature (GQ) used for the numerical integration of functions. GQ approximates a finite integral as a weighted sum of integrand values at selected points in a problem domain, whereas DQ approximates the derivatives of a smooth function at a point as a weighted sum of function values at selected nodes. A direct application of this elegant methodology is the solution of ordinary and partial differential equations. Furthermore, in recent years the DQ formulation has been generalized in the computation of the weighting coefficients to make the approach more flexible and accurate; as a result, it has been indicated as the Generalized Differential Quadrature (GDQ) method. However, the applicability of GDQ in its original form is still limited: it has been proven to fail for problems with strong material discontinuities, as well as for problems involving singularities and irregularities. On the other hand, the well-known Finite Element (FE) method overcomes these issues because it subdivides the computational domain into a certain number of elements in which the solution is calculated. Recently, some researchers have been studying a numerical technique that combines the advantages of the GDQ method with those of the FE method. This methodology goes by different names among research groups; it will be indicated here as the Generalized Differential Quadrature Finite Element Method (GDQFEM).
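The DQ idea of differentiating through weighted sums of nodal values can be sketched with Shu's explicit formula for the first-derivative weighting coefficients, based on Lagrange interpolation. This is a generic illustration on an arbitrary grid of distinct nodes, not the GDQFEM formulation itself.

```python
import numpy as np

def dq_weights(x):
    """First-derivative DQ weighting matrix A on distinct nodes x, via
    Shu's explicit formula; f'(x_i) ~ sum_j A[i, j] * f(x_j)."""
    x = np.asarray(x, float)
    n = len(x)
    # M'(x_i) = prod_{k != i} (x_i - x_k)
    Mp = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i])
                   for i in range(n)])
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = Mp[i] / ((x[i] - x[j]) * Mp[j])
        A[i, i] = -A[i].sum()   # row sums vanish (derivative of a constant)
    return A

# The rule is exact for polynomials of degree < n, e.g. d/dx (x^2) = 2x.
```

The same construction extends to higher derivatives by recurrence, which is the basis of the GDQ generalization mentioned above.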

Relevance:

30.00%

Publisher:

Abstract:

Molecules are the smallest possible elements for electronic devices, with active elements for such devices typically a few angstroms in footprint area. Owing to the possibility of producing ultrahigh-density devices, tremendous effort has been invested in producing electronic junctions using various types of molecules. The major issues for molecular electronics include (1) developing an effective scheme to connect molecules with present micro- and nano-technology, (2) increasing the lifetime and stability of the devices, and (3) increasing their performance in comparison to state-of-the-art devices. In this work, we attempt to use carbon nanotubes (CNTs) as the interconnecting nanoelectrodes between molecules and microelectrodes. The ultimate goal is to use two individual CNTs to sandwich molecules in a cross-bar configuration while having these CNTs connected with microelectrodes, such that the junction displays the electronic character of the chosen molecule. We have successfully developed an effective scheme to connect molecules with CNTs which is scalable to arrays of molecular electronic devices. To realize this far-reaching goal, the following technical topics have been investigated. 1. Synthesis of multi-walled carbon nanotubes (MWCNTs) by thermal chemical vapor deposition (T-CVD) and plasma-enhanced chemical vapor deposition (PE-CVD) techniques (Chapter 3). We have evaluated the potential use of tubular and bamboo-like MWCNTs grown by T-CVD and PE-CVD in terms of their structural properties. 2. Horizontal dispersion of MWCNTs with and without surfactants, and the integration of MWCNTs with microelectrodes by dielectrophoretic (DEP) deposition (Chapter 4). We have systematically studied the use of surfactant molecules to disperse and horizontally align MWCNTs on substrates. In addition, DEP is shown to produce impurity-free placement of MWCNTs, forming connections between microelectrodes.
We demonstrate that the deposition density is tunable by both AC field strength and AC field frequency. 3. Etching of MWCNTs for impurity-free nanoelectrodes (Chapter 5). We show that the residual Ni catalyst on MWCNTs can be removed by acid etching; the tip removal and the collapsing of tubes into pyramids enhance the stability of field emission from the tube arrays. The acid-etching process can also be used to functionalize the MWCNTs, which we exploited to make our initial CNT-nanoelectrode glucose sensors. Finally, the lessons learned while attempting spectroscopic analysis of the functionalized MWCNTs were vital for designing our final devices. 4. Molecular junction design and electrochemical synthesis of biphenyl molecules on carbon microelectrodes for all-carbon molecular devices (Chapter 6). Utilizing the experience gained from the work done so far, our final device design is described. We demonstrate the capability of preparing patterned glassy carbon films to serve as the bottom electrode in the new geometry. However, the molecular switching behavior of biphenyl was not observed by scanning tunneling microscopy (STM), mercury drop, or fabricated glassy carbon/biphenyl/MWCNT junctions. Either the density of these molecules is not optimal for effective integration of devices using MWCNTs as the nanoelectrodes, or an electroactive contaminant was reduced instead of the ionic biphenyl species. 5. Self-assembly of octadecanethiol (ODT) molecules on gold microelectrodes for functional molecular devices (Chapter 7). We have realized an effective scheme to produce Au/ODT/MWCNT junctions by spanning MWCNTs across ODT-functionalized microelectrodes. A fraction of the resulting junctions retain the expected character of an ODT monolayer. While the process is not yet optimized, our successful junctions show that molecular electronic devices can be fabricated using simple processes such as photolithography, self-assembled monolayers, and dielectrophoresis.

Relevance:

30.00%

Publisher:

Abstract:

Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous effort has been devoted to the development of accurate models over the last fifty years, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from the various models often show a large error in comparison to experimental data. Thus, even nowadays, the process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude these polymers by trial and error is costly and time-consuming. In order to reduce the time and experimental work required for optimizing the process parameters and the geometry of the extruder channel for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago; however, that work had only limited success because of the computing power and mathematical algorithms available at the time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC.
In order to verify the numerical predictions from the full 3-D simulations of two-phase flow in single-screw extruders, a detailed experimental study was performed, including Maddock screw-freezing experiments, Screw Simulator experiments, and material characterization experiments. The Maddock screw-freezing experiments were performed in order to visualize the melting profile along the single-screw extruder channel for different screw geometry configurations; these melting profiles were compared with the simulation results. The Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the depth in the metering section of a single-screw extruder, such that the output rate of the extruder is maximized without exceeding the maximum temperature specified at the exit of the extruder. This optimization code uses a mesh partitioning technique to obtain the flow domain, and the simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
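Viscometer data of the kind mentioned above are typically reduced to a constitutive model before entering a flow simulation. A minimal sketch, assuming a power-law fluid eta = K * gamma_dot**(n-1) fitted in log-log space; the model choice and the synthetic numbers are illustrative, not taken from this study.

```python
import numpy as np

def fit_power_law(shear_rate, viscosity):
    """Fit eta = K * gamma**(n-1) by linear least squares on log-log data;
    returns the consistency K and the power-law index n."""
    slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
    return np.exp(intercept), slope + 1.0

# Synthetic shear-thinning data with K = 2000 Pa*s^n and n = 0.35
gamma = np.array([1.0, 10.0, 100.0, 1000.0])
eta = 2000.0 * gamma ** (0.35 - 1.0)
K, n = fit_power_law(gamma, eta)
```

A two-parameter fit like this is often the bridge between cone-and-plate measurements and the viscosity field used inside a finite element melting simulation.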

Relevance:

30.00%

Publisher:

Abstract:

The objective of this doctoral research is to investigate internal frost damage due to crystallization pore pressure in porous cement-based materials by developing computational and experimental characterization tools. As an essential component of the U.S. infrastructure system, the durability of concrete has a significant impact on maintenance costs. In cold climates, freeze-thaw damage is a major issue affecting the durability of concrete. The deleterious effects of the freeze-thaw cycle depend on the microscale characteristics of concrete, such as the pore sizes and the pore distribution, as well as on the environmental conditions. Recent theories attribute the internal frost damage of concrete to crystallization pore pressure in cold environments. The pore structure has a significant impact on the freeze-thaw durability of cement/concrete samples. Scanning electron microscopy (SEM) and transmission X-ray microscopy (TXM) techniques were applied to characterize freeze-thaw damage within the pore structure. In the microscale pore system, the crystallization pressures at sub-cooling temperatures were calculated using an interface energy balance with thermodynamic analysis. Multi-phase Extended Finite Element Modeling (XFEM) and bilinear Cohesive Zone Modeling (CZM) were developed to simulate the internal frost damage of heterogeneous cement-based material samples. The fracture simulation with these two techniques was validated by comparing the predicted fracture behavior with the damage captured in compact tension (CT) and single-edge notched beam (SEB) bending tests. The study applied the developed computational tools to simulate the internal frost damage caused by ice crystallization in two-dimensional (2-D) SEM and three-dimensional (3-D) reconstructed SEM and TXM digital samples. The pore pressure calculated from the thermodynamic analysis was used as input for the model simulation.
The 2-D and 3-D bilinear CZM predicted crack initiation and propagation within the cement paste microstructure. The favorably predicted crack paths in concrete/cement samples indicate that the developed bilinear CZM techniques are able to capture crack nucleation and propagation in multiphase cement-based material samples with their associated interfaces. The comparison of the computational predictions with the actual damaged samples also indicates that ice crystallization pressure is the main mechanism of internal frost damage in cementitious materials.
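The thermodynamic estimate of crystallization pressure used as model input can be sketched with the commonly used linear relation between pressure and undercooling. The entropy-of-fusion value below is an assumed textbook figure for ice, not a number from this study, and the relation is an upper-bound approximation.

```python
def crystallization_pressure(delta_T, S_fv=1.2e6):
    """Upper-bound crystallization pressure (Pa) of ice at an undercooling
    delta_T (K): p ~ S_fv * delta_T, where S_fv is the entropy of fusion
    per unit volume of ice (~1.2 MPa/K; an assumed literature value)."""
    return S_fv * delta_T
```

At 10 K of undercooling this gives on the order of 12 MPa, which is comparable to the tensile strength of cement paste and motivates the fracture simulations described above.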

Relevance:

30.00%

Publisher:

Abstract:

The numerical solution of the incompressible Navier-Stokes equations offers an effective alternative to the experimental analysis of fluid-structure interaction (FSI), i.e. the dynamical coupling between a fluid and a solid, which is otherwise very complex, time-consuming and expensive. A method that can accurately model these types of mechanical systems through numerical solution becomes a great option, and its advantages are even more obvious when considering huge structures like bridges, high-rise buildings, or wind turbine blades with diameters as large as 200 meters. The modelling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation, called the KLE, to solve the incompressible Navier-Stokes equations for such FSI problems. This scheme allows the implementation of robust adaptive ODE time-integration schemes and thus allows the various multiphysics problems to be tackled as separate modules. The current algorithm for the KLE employs a structured or unstructured mesh for the spatial discretization, and it allows the use of a self-adaptive or fixed-time-step ODE solver when dealing with unsteady problems. This research deals with the analysis of the effects of the Courant-Friedrichs-Lewy (CFL) condition for the KLE when applied to the unsteady Stokes problem. The objective is to conduct a numerical analysis for stability and, hence, for convergence. Our results confirm that the time step ∆t is constrained by the CFL-like condition ∆t ≤ const · h^α, where h denotes the spatial discretization parameter.
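The CFL-like constraint reported above can be sketched directly. The constant and the exponent are scheme-dependent; the values used here (alpha = 2, typical of diffusion-dominated problems, and C = 0.25) are illustrative assumptions, not results of the thesis.

```python
def cfl_time_step(h, alpha=2.0, C=0.25):
    """Largest admissible time step under dt <= C * h**alpha.
    alpha and C are scheme-dependent; the defaults are illustrative."""
    return C * h ** alpha

# With alpha = 2, halving the mesh size h cuts the admissible step by 4.
```

A self-adaptive ODE solver of the kind the KLE uses would keep its internal step below this bound whenever the mesh is refined.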

Relevance:

30.00%

Publisher:

Abstract:

The Pacaya volcanic complex is part of the Central American volcanic arc, which is associated with the subduction of the Cocos tectonic plate under the Caribbean plate. Located 30 km south of Guatemala City, Pacaya sits on the southern rim of the Amatitlan Caldera. It is the largest post-caldera volcano and has been one of Central America's most active volcanoes over the last 500 years. Between 400 and 2000 years B.P., the Pacaya volcano experienced a huge collapse, which resulted in the formation of a horseshoe-shaped scarp that is still visible. In recent years, several smaller collapses have been associated with the activity of the volcano (in 1961 and 2010), affecting its northwestern flank; these are likely to have been induced by local and regional stress changes. The similar orientation of dry and volcanic fissures and the distribution of new vents would likely be explained by the reactivation of the pre-existing stress configuration responsible for the old collapse. This paper presents the first stability analysis of the Pacaya volcanic flank. The inputs for the geological and geotechnical models were defined from the stratigraphic, lithological and structural data and the material properties obtained from field surveys and laboratory tests. According to their mechanical characteristics, three lithotechnical units were defined: Lava, Lava-Breccia, and Breccia-Lava. The Hoek and Brown failure criterion was applied to each lithotechnical unit, and the rock mass friction angle, apparent cohesion, and strength and deformation characteristics were computed over a specified stress range. The stability of the volcano was then evaluated by two-dimensional analyses performed with the Limit Equilibrium Method (LEM, ROCSCIENCE) and the Finite Element Method (FEM, PHASE 2 7.0). The stability analysis mainly focused on the modern Pacaya volcano built inside the collapse amphitheatre of "Old Pacaya".
The volcanic instability was assessed based on the variability of the safety factor using deterministic, sensitivity, and probabilistic analyses, considering gravitational instability and the effects of external forces, such as magma pressure and seismicity, as potential triggering mechanisms of lateral collapse. The preliminary results of the analysis provide two insights: first, the least stable sector is on the south-western flank of the volcano; second, the lowest safety factor value suggests that the edifice is stable under gravity alone, and that an external triggering mechanism could represent a likely destabilizing factor.
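The limit-equilibrium notion of a safety factor can be sketched with the classical infinite-slope formula for a dry slope. This is a textbook simplification, far simpler than the 2D LEM/FEM models used in the study; the input values in the example are arbitrary.

```python
import math

def factor_of_safety(c, phi_deg, gamma, h, beta_deg):
    """Infinite-slope factor of safety for a dry slope: Mohr-Coulomb
    resisting shear strength over driving shear stress on a failure
    plane at depth h (m) dipping at beta (deg); c in Pa, unit weight
    gamma in N/m^3, friction angle phi in degrees."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + gamma * h * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * h * math.sin(beta) * math.cos(beta)
    return resisting / driving

# With zero cohesion the formula reduces to FS = tan(phi) / tan(beta),
# so a cohesionless slope at beta = phi is exactly at limit equilibrium.
```

Deterministic, sensitivity, and probabilistic analyses of the kind described above amount to recomputing such a safety factor while varying the strength parameters and adding external driving forces.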

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Eosinophil differentiation, activation, and survival are largely regulated by IL-5. IL-5-mediated transmembrane signal transduction involves both Lyn-mitogen-activated protein kinases and Janus kinase 2-signal transducer and activator of transcription pathways. OBJECTIVE: We sought to determine whether additional signaling molecules/pathways are critically involved in IL-5-mediated eosinophil survival. METHODS: Eosinophil survival and apoptosis were measured in the presence and absence of IL-5 and defined pharmacologic inhibitors in vitro. The specific role of the serine/threonine kinase proviral integration site for Moloney murine leukemia virus (Pim) 1 was tested by using HIV-transactivator of transcription fusion proteins containing wild-type Pim-1 or a dominant-negative form of Pim-1. The expression of Pim-1 in eosinophils was analyzed by means of immunoblotting and immunofluorescence. RESULTS: Although pharmacologic inhibition of phosphatidylinositol-3 kinase (PI3K) by LY294002, wortmannin, or the selective PI3K p110delta isoform inhibitor IC87114 was successful in each case, only LY294002 blocked increased IL-5-mediated eosinophil survival. This suggested that LY294002 inhibited another kinase that is critically involved in this process in addition to PI3K. Indeed, Pim-1 was rapidly and strongly expressed in eosinophils after IL-5 stimulation in vitro and readily detected in eosinophils under inflammatory conditions in vivo. Moreover, by using specific protein transfer, we identified Pim-1 as a critical element in IL-5-mediated antiapoptotic signaling in eosinophils. CONCLUSIONS: Pim-1, but not PI3K, plays a major role in IL-5-mediated antiapoptotic signaling in eosinophils.