941 results for pacs: simulation techniques


Relevance: 80.00%

Abstract:

In this thesis, various water models are studied in so-called multiscale computer simulations at two levels of resolution: an atomistic resolution and a coarsened resolution referred to as "coarse-grained". In the atomistic resolution, a water molecule is described, in accordance with its chemical structure, by three atoms; in the coarse-grained resolution, by contrast, a molecule is represented by a single bead.

The coarse-grained models presented in this work are developed with different coarse-graining methods, chiefly the "iterative Boltzmann inversion" and the "iterative Monte Carlo inversion". Both are structure-based approaches that aim to reproduce certain structural properties of the underlying atomistic system, such as the pair distribution functions. The software package "Versatile Object-oriented Toolkit for Coarse-Graining Applications" (VOTCA) was developed to automate the application of these methods.

It is investigated to what extent coarse-grained models can simultaneously reproduce several properties of the underlying atomistic model, e.g. thermodynamic properties such as pressure and compressibility, or structural properties that were not used in building the model, e.g. the tetrahedral packing behavior that is responsible for many of water's special properties.

Using the "Adaptive Resolution Scheme", both resolutions are combined in one simulation. This exploits the advantages of both models: the detailed representation of a spatially small region at atomistic resolution, and the computational efficiency of the coarse-grained model, which enlarges the accessible time and length scales.

In these simulations, the influence of the hydrogen-bond network on the hydration of fullerenes can be studied. It turns out that the structure of the water molecules at the surface is dominated mainly by the type of interaction between the fullerene and water, and less by the hydrogen-bond network.
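
The iterative Boltzmann inversion named above refines a tabulated coarse-grained pair potential until the simulated radial distribution function matches the atomistic target, via V_{i+1}(r) = V_i(r) + kT ln(g_i(r)/g_target(r)). A minimal sketch of the update step, assuming both distribution functions are tabulated on a common grid (the damping factor and function names are illustrative, not VOTCA's API):

```python
import numpy as np

def ibi_update(V, g_current, g_target, kT, alpha=0.2):
    """One iterative Boltzmann inversion step:
    V_{i+1}(r) = V_i(r) + alpha * kT * ln(g_i(r) / g_target(r)).
    alpha < 1 damps the update for numerical stability."""
    eps = 1e-12  # avoid log(0) where the RDF vanishes
    correction = kT * np.log((g_current + eps) / (g_target + eps))
    return V + alpha * correction

def initial_potential(g_target, kT):
    """Common initial guess: the potential of mean force of the target."""
    eps = 1e-12
    return -kT * np.log(g_target + eps)
```

In practice, each update is followed by a full coarse-grained simulation to re-measure g_current before the next iteration.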

Relevance: 80.00%

Abstract:

In this thesis we focus on optimization and simulation techniques applied to solve strategic, tactical and operational problems arising in the healthcare sector. First, we present three applications for the Emilia-Romagna Public Health System (SSR) developed in collaboration with the Agenzia Sanitaria e Sociale dell'Emilia-Romagna (ASSR), a regional center for innovation and improvement in health. The Agenzia launched a strategic campaign aimed at introducing Operations Research techniques as decision-making tools to support technological and organizational innovation. The three applications concern the forecasting and fund allocation of medical specialty positions, the extension of the breast screening program, and operating theater planning. The case studies exploit the potential of combinatorial optimization, discrete event simulation and system dynamics techniques to solve resource-constrained problems arising within the Emilia-Romagna territory. We then present an application, in collaboration with the Dipartimento di Epidemiologia del Lazio, that focuses on allocating population demand for services to regional emergency departments. Finally, a simulation-optimization approach, developed in collaboration with the INESC TECH center of Porto, to evaluate matching policies for the kidney exchange problem is discussed.
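
As a flavor of the discrete event simulation technique named above, the sketch below models patients queueing for a fixed number of operating rooms with the SimPy library; all rates, capacities and names are hypothetical, not data from the case studies:

```python
import random
import simpy

def patient(env, theatre, wait_times):
    arrival = env.now
    with theatre.request() as req:          # queue for an operating room
        yield req
        wait_times.append(env.now - arrival)
        yield env.timeout(random.expovariate(1 / 2.0))  # ~2 h surgery

def arrivals(env, theatre, wait_times):
    while True:
        yield env.timeout(random.expovariate(1 / 1.5))  # ~1 patient / 1.5 h
        env.process(patient(env, theatre, wait_times))

env = simpy.Environment()
theatre = simpy.Resource(env, capacity=2)   # two operating rooms
wait_times = []
env.process(arrivals(env, theatre, wait_times))
env.run(until=8 * 250)                      # one year of 8-hour days
print(f"mean wait: {sum(wait_times) / len(wait_times):.2f} h")
```

Varying the capacity in such a model is the typical way to test planning scenarios before committing resources.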

Relevance: 80.00%

Abstract:

In the past few decades, integrated circuits have become a major part of everyday life. Every circuit that is created needs to be tested for faults so that faulty circuits are not sent to end users. The creation of these tests is time-consuming, costly and difficult to perform on larger circuits. This research presents a novel method for fault detection and test pattern reduction in integrated circuitry under test. By leveraging the FPGA's reconfigurability and parallel processing capabilities, a speed-up in fault detection can be achieved over previous computer simulation techniques. This work presents the following contributions to the field of stuck-at-fault detection. First, a new method for inserting faults into a circuit netlist: given any circuit netlist, our tool can insert multiplexers at the correct internal nodes to aid fault emulation on reconfigurable hardware. Second, a parallel method of fault emulation: the benefit of the FPGA is not only its ability to implement any circuit, but also its ability to process data in parallel, and this research exploits that to create a more efficient emulation method implementing numerous copies of the same circuit in the FPGA. Third, a new method for selecting the most efficient test patterns: most methods for determining the minimum number of inputs that cover the most faults require sophisticated software programs that use heuristics, whereas by utilizing hardware this research is able to process data faster and use a simpler method for minimizing inputs efficiently.
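
The mux-insertion idea can be illustrated in software: each internal net is routed through a 2-to-1 multiplexer whose select line forces a stuck-at value, and a test pattern detects the fault if the forced circuit's output differs from the fault-free one. A minimal gate-level sketch in Python (the netlist representation is hypothetical, not the tool described above):

```python
# Simple gate-level netlist: each net is driven by (gate, input_nets)
GATES = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b,
         "NOT": lambda a: 1 - a}

netlist = {  # hypothetical circuit: y = NOT(a AND b)
    "n1": ("AND", ["a", "b"]),
    "y":  ("NOT", ["n1"]),
}

def evaluate(netlist, inputs, fault=None):
    """Evaluate all nets; fault = (net, value) emulates the inserted
    multiplexer forcing that net to a stuck-at value."""
    values = dict(inputs)
    def net(n):
        if fault and n == fault[0]:
            return fault[1]              # mux selects the fault value
        if n not in values:
            gate, ins = netlist[n]
            values[n] = GATES[gate](*(net(i) for i in ins))
        return values[n]
    return net("y")

# A test pattern detects a fault if the faulty output differs
for pattern in [{"a": 1, "b": 1}, {"a": 0, "b": 1}]:
    good = evaluate(netlist, pattern)
    bad = evaluate(netlist, pattern, fault=("n1", 0))  # n1 stuck-at-0
    print(pattern, "detects n1/SA0:", good != bad)
```

On the FPGA, many such faulty copies run side by side, which is where the parallel speed-up comes from.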

Relevance: 80.00%

Abstract:

Missing basic knowledge of mathematics is among the biggest obstacles to a successful start at university. Students beginning a STEM degree now arrive with markedly different backgrounds: "math anxiety" is considered a typical phenomenon, and the transition to self-directed learning poses a major challenge. This case study describes how a math app can support active learning and train self-directed learning right at the start of a degree course. The new app-supported course concept has met with broad acceptance at Hochschule Offenburg. The mobile BYOD approach enables learning scenarios that cannot be realized with PC- or laptop-bound eLearning solutions. The content of the MassMatics preparatory course follows the catalogue of minimum requirements of the cosh working group for the school-to-university transition. In the meantime, the app-supported course, with its more than 500 exercises, has been taken by more than 1000 students.

Relevance: 80.00%

Abstract:

We present a conceptual prototype model of a focal plane array unit for the STEAMR instrument, highlighting the challenges posed by the instrument's required high relative beam proximity, and focus on how edge-diffraction effects contribute to the array's performance. The analysis was carried out as a comparative process using both PO & PTD and MoM techniques. We first highlight general differences between these computational techniques, with the discussion focusing on diffractive edge effects for near-field imaging reflectors with high truncation. We then present the results of in-depth modeling analyses of the STEAMR focal plane array, followed by near-field antenna measurements of a breadboard model of the array. The results of these near-field measurements agree well with both simulation techniques, although MoM shows slightly higher complex beam coupling to the measurements than PO & PTD.
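
The complex beam coupling used as a figure of merit here can be expressed as a normalized overlap integral between two sampled complex field distributions. A minimal sketch, assuming both beams are sampled on the same planar grid (the function is ours, not taken from the measurement software):

```python
import numpy as np

def beam_coupling(E_sim, E_meas, dA=1.0):
    """Normalized complex overlap of two sampled field distributions:
    c = <E_sim, E_meas> / (||E_sim|| * ||E_meas||); |c|^2 is the
    fraction of power coupled between the two beams."""
    overlap = np.sum(E_sim * np.conj(E_meas)) * dA
    norm = np.sqrt(np.sum(np.abs(E_sim) ** 2)
                   * np.sum(np.abs(E_meas) ** 2)) * dA
    return overlap / norm
```

Comparing |c|^2 between simulated and measured beams is one way to quantify the agreement reported above.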

Relevance: 80.00%

Abstract:

A broadband primary standard for thermal noise measurements is presented, and its thermal and electromagnetic behavior is analyzed by means of analytical and numerical simulation techniques. It consists of a broadband termination connected to a 3.5 mm coaxial airline partially immersed in liquid nitrogen. The main innovative part of the device is the thermal bead between the inner and outer conductors, designed to obtain proper thermal contact while keeping both its contribution to the total thermal noise and its reflectivity low. A sensitivity analysis is carried out in order to set the manufacturing tolerances required for proper performance in the range 10 MHz to 26.5 GHz.
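
The noise temperature delivered by such a standard can be estimated by cascading short line segments, each at its local physical temperature: every lossy segment attenuates the incoming noise and adds its own thermal contribution. A minimal sketch under that discretization (all temperatures and losses below are hypothetical, not the device's actual profile):

```python
def output_noise_temperature(T_load, seg_T, seg_loss_db):
    """Available noise temperature at the output of a lossy line made
    of segments at different physical temperatures (load end first).
    Each segment transforms the noise as T -> T/L + T_phys*(1 - 1/L),
    where L >= 1 is its power loss ratio."""
    T = T_load
    for T_phys, loss_db in zip(seg_T, seg_loss_db):
        L = 10 ** (loss_db / 10)
        T = T / L + T_phys * (1 - 1 / L)
    return T

# Hypothetical profile: cold termination at 77 K, airline warming to 296 K
print(output_noise_temperature(77.0,
                               seg_T=[77, 150, 250, 296],
                               seg_loss_db=[0.02, 0.05, 0.05, 0.03]))
```

The sensitivity analysis mentioned above amounts to perturbing these temperatures and losses and observing the change in the output noise temperature.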

Relevance: 80.00%

Abstract:

In multi-attribute utility theory, it is often not easy to elicit precise values for the scaling weights representing the relative importance of criteria. A very widespread approach is to gather incomplete information. A recent approach for dealing with such situations is to use information about each alternative's intensity of dominance, known as dominance measuring methods. Different dominance measuring methods have been proposed, and simulation studies have been carried out to compare these methods with each other and with other approaches, but only when ordinal information about weights is available. In this paper, we use Monte Carlo simulation techniques to analyse the performance of such methods and to adapt them to deal with weight intervals, weights fitting independent normal probability distributions, or weights represented by fuzzy numbers. Moreover, the performance of dominance measuring methods is also compared with a widely used methodology for dealing with incomplete information on weights, stochastic multicriteria acceptability analysis (SMAA). SMAA is based on exploring the weight space to describe the evaluations that would make each alternative the preferred one.
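
The SMAA idea mentioned at the end can be sketched directly: sample admissible weight vectors, rate the alternatives with the additive utility model, and record how often each alternative ranks first (its rank-one acceptability index). A minimal sketch, assuming uniform sampling within weight intervals followed by renormalization (all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_acceptability(utilities, w_low, w_high, n_samples=10_000):
    """Fraction of sampled admissible weight vectors for which each
    alternative is the preferred one. utilities has shape
    (alternatives x criteria)."""
    counts = np.zeros(utilities.shape[0], dtype=int)
    for _ in range(n_samples):
        w = rng.uniform(w_low, w_high)
        w /= w.sum()                     # renormalize to a weight vector
        counts[np.argmax(utilities @ w)] += 1   # additive utility model
    return counts / n_samples

# Hypothetical: 3 alternatives, 2 criteria, imprecise weights in [0.3, 0.7]
u = np.array([[0.9, 0.2], [0.5, 0.6], [0.1, 0.95]])
print(rank_acceptability(u, w_low=[0.3, 0.3], w_high=[0.7, 0.7]))
```

Renormalizing uniform samples is only an approximation to uniform sampling on the constrained simplex, which is acceptable for a sketch but worth refining in a real study.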

Relevance: 80.00%

Abstract:

Concrete is nowadays one of the most widely used building materials because of its good mechanical properties, mouldability and low production cost, among other advantages. As is well known, it has a high compressive strength and a low tensile strength, and for this reason it is reinforced with steel bars to form reinforced concrete, a material that has become the most important constructive solution of our time. Despite being such a widely used material, some aspects of concrete behaviour are not yet fully understood, as is the case of its response to the effects of an explosion. This is a topic of particular relevance because events, both intentional and accidental, in which a structure is subjected to an explosion are, unfortunately, relatively common. The loading of a structure by an explosion is produced by the impact of the pressure wave generated in the detonation. The application of this load on the structure is very fast and of very short duration. Such actions are called impulsive loads, and they can be up to four orders of magnitude faster than the dynamic loads imposed by an earthquake. Consequently, it is not surprising that their effects on structures and materials are very different from those caused by the loads usually considered in engineering.

This thesis broadens the knowledge of the material behaviour of concrete subjected to explosions. To that end, it is crucial to have experimental results of concrete structures subjected to explosions. Such results are difficult to find in the scientific literature, as these tests have traditionally been carried out in the military domain and the results obtained are not publicly available. Moreover, in experimental campaigns with explosives conducted by civil institutions, the high cost of access to explosives and to suitable test fields does not allow a large number of samples to be tested, so the experimental scatter is usually not controlled. However, in reinforced concrete elements subjected to explosions the experimental scatter is very pronounced: first, because of the heterogeneity of concrete itself, and second, because of the difficulty inherent in testing with explosions, for reasons such as difficulties with the boundary conditions, variability of the explosive, or even changes in atmospheric conditions. To overcome these drawbacks, in this thesis we have designed a novel device that allows up to four concrete slabs to be tested under the same detonation, which, apart from providing a statistically representative number of samples, represents a significant cost saving. With this device, 28 slabs were tested, both reinforced and plain concrete, made from two different mixes.

Besides experimental data, it is also important to have computational tools for the analysis and design of structures subjected to explosions. Although several analytical methods exist, numerical simulation techniques nowadays represent the most advanced and versatile alternative for the assessment of structural elements subjected to impulsive loading. However, to obtain reliable results it is crucial to have material constitutive models that account for the parameters governing the behaviour for the load case under study. In this regard, it is noteworthy that most constitutive models developed for concrete at high strain rates come from the ballistic field, which is dominated by large compressive stresses in the local neighbourhood of the area affected by the impact. In concrete elements subjected to explosions, the compressive stresses are much more moderate, and tensile stresses are generally the cause of material failure. This thesis analyses the validity of some of the available models, confirming that the parameters governing the failure of reinforced concrete slabs under blast are the tensile strength and the post-failure softening. Based on these results, we have developed a constitutive model for concrete at high strain rates that only takes tensile failure into account. The model builds on the embedded cohesive crack model with strong discontinuity developed by Planas and Sancho, which has proved its ability to predict the tensile fracture of plain concrete elements. The model has been modified for implementation in the commercial explicit-integration program LS-DYNA, using hexahedral finite elements and incorporating strain-rate dependence to allow its use in the dynamic regime. The model is strictly local and requires neither remeshing nor prior knowledge of the crack path. This constitutive model has been used to simulate two experimental campaigns, confirming the hypothesis that the failure of concrete elements subjected to explosions is governed by their tensile behaviour, with the softening of concrete being of particular relevance.
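
The cohesive crack idea at the core of the model relates the tensile stress transmitted across a crack to the crack opening through a softening curve. A minimal sketch of a linear softening law, with a separate power-law dynamic increase factor for the strength; the functional forms and constants are illustrative assumptions, not the thesis's calibration:

```python
import numpy as np

def cohesive_stress(w, f_t=3.5e6, G_F=100.0):
    """Linear cohesive softening: stress transmitted across a crack of
    opening w [m]. f_t: tensile strength [Pa]; G_F: fracture energy
    [J/m^2]. Stress drops to zero at the critical opening w_c, chosen
    so that the area under the curve equals G_F."""
    w_c = 2.0 * G_F / f_t
    return np.maximum(f_t * (1.0 - w / w_c), 0.0)

def rate_enhanced_strength(f_t_static, strain_rate, ref_rate=1e-6, n=0.018):
    """Simple power-law dynamic increase factor for tensile strength,
    in the spirit of CEB-type formulas (constants are illustrative)."""
    return f_t_static * (strain_rate / ref_rate) ** n
```

In an explicit code such as LS-DYNA, a law of this kind is evaluated element by element at each time step, which is why a strictly local formulation without remeshing is attractive.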

Relevance: 80.00%

Abstract:

A broadband primary standard for thermal noise measurements is presented, and its thermal and electromagnetic behaviour is analysed by means of a novel hybrid analytical-numerical simulation methodology. The standard consists of a broadband termination connected to a 3.5 mm coaxial airline partially immersed in liquid nitrogen, and it is designed to obtain low reflectivity and low uncertainty in the noise temperature. A detailed sensitivity analysis is performed to highlight the critical characteristics that most affect the uncertainty in the noise temperature, and also to determine the manufacturing and operation tolerances required for proper performance in the range 10 MHz to 26.5 GHz. Aspects such as the thermal bead design, the level of liquid nitrogen, and the uncertainties associated with the temperatures, the physical properties of the materials in the standard and the simulation techniques are discussed.

Relevance: 80.00%

Abstract:

Over a decade ago, nanotechnologists began research on applications of nanomaterials for medicine. This research has revealed a wide range of different challenges, as well as many opportunities. Some of these challenges are strongly related to informatics issues, dealing, for instance, with the management and integration of heterogeneous information, defining nomenclatures, taxonomies and classifications for various types of nanomaterials, and research on new modeling and simulation techniques for nanoparticles. Nanoinformatics has recently emerged in the USA and Europe to address these issues. In this paper, we present a review of nanoinformatics, describing its origins, the problems it addresses, areas of interest, and examples of current research initiatives and informatics resources. We suggest that nanoinformatics could accelerate research and development in nanomedicine, as has occurred in the past in other fields. For instance, biomedical informatics served as a fundamental catalyst for the Human Genome Project, and other genomic and 'omics projects, as well as the translational efforts that link resulting molecular-level research to clinical problems and findings.

Relevance: 80.00%

Abstract:

We introduce a dominance intensity measuring method to derive a ranking of alternatives in multi-criteria decision-making problems with incomplete information, on the basis of multi-attribute utility theory (MAUT) and fuzzy set theory. We consider the situation where there is imprecision concerning decision-makers' preferences, and imprecise weights are represented by trapezoidal fuzzy numbers. The proposed method is based on the dominance values between pairs of alternatives. These values can be computed by linear programming, as an additive multi-attribute utility model is used to rate the alternatives. Dominance values are then transformed into dominance intensity measures, which are used to rank the alternatives under consideration. Distances between fuzzy numbers based on the generalization of the left and right fuzzy numbers are used to account for fuzzy weights. An example concerning the selection of intervention strategies to restore an aquatic ecosystem contaminated by radionuclides illustrates the approach. Monte Carlo simulation techniques have been used to show that the proposed method performs well for different imprecision levels in terms of a hit ratio and a rank-order correlation measure.
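
Distances between fuzzy numbers can be defined in several ways; the sketch below uses a common alpha-cut-based distance between trapezoidal fuzzy numbers, which is in the spirit of, but not necessarily identical to, the generalized left/right fuzzy number distance used in the paper:

```python
import numpy as np

def alpha_cut(tfn, alpha):
    """Alpha-cut [L(alpha), R(alpha)] of a trapezoidal fuzzy number
    tfn = (a, b, c, d) with support [a, d] and core [b, c]."""
    a, b, c, d = tfn
    return a + alpha * (b - a), d - alpha * (d - c)

def fuzzy_distance(t1, t2, n_cuts=100):
    """One common distance between fuzzy numbers: average the squared
    differences of the left and right alpha-cut endpoints."""
    acc = 0.0
    for alpha in np.linspace(0.0, 1.0, n_cuts):
        l1, r1 = alpha_cut(t1, alpha)
        l2, r2 = alpha_cut(t2, alpha)
        acc += (l1 - l2) ** 2 + (r1 - r2) ** 2
    return np.sqrt(acc / (2 * n_cuts))

# e.g. distance from a fuzzy dominance value to the crisp number 0
print(fuzzy_distance((-0.2, 0.0, 0.1, 0.3), (0.0, 0.0, 0.0, 0.0)))
```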

Relevance: 80.00%

Abstract:

Dominance measuring methods are a recent approach for dealing with complex decision-making problems with imprecise information. These methods are based on the computation of pairwise dominance values and exploit the information in the dominance matrix in different ways to derive measures of dominance intensity and rank the alternatives under consideration. In this paper we propose a new dominance measuring method to deal with ordinal information about decision-maker preferences in both weights and component utilities. It takes advantage of the centroid of the polytope delimited by the ordinal information and builds triangular fuzzy numbers whose distances to the crisp value 0 constitute the basis for the definition of a dominance intensity measure. Monte Carlo simulation techniques have been used to compare the performance of this method with other existing approaches.
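
When the ordinal information on weights is the ordering w1 >= w2 >= ... >= wn with the weights summing to one, the centroid of the resulting polytope has a closed form, the rank-order centroid, w_i = (1/n) * sum_{j=i..n} 1/j. A minimal sketch (the function name is ours):

```python
def rank_order_centroid(n):
    """Centroid of the weight simplex restricted by the ordinal
    constraint w1 >= w2 >= ... >= wn, sum(w) = 1:
    w_i = (1/n) * sum_{j=i}^{n} 1/j."""
    return [sum(1.0 / j for j in range(i, n + 1)) / n
            for i in range(1, n + 1)]

print(rank_order_centroid(4))  # [0.5208, 0.2708, 0.1458, 0.0625]
```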

Relevance: 80.00%

Abstract:

The study of magnesium and its alloys is a hot research topic in structural materials. In particular, special attention is being paid to understanding the relationship between microstructure and mechanical behaviour, in order to optimize current magnesium alloys on the basis of their microstructure and to guide the design of new alloys. However, the particular effect of several microstructural factors (precipitate shape, size and orientation, grain morphology distribution, etc.) on the mechanical performance of a Mg alloy is still under study. A combination of advanced experimental characterization techniques and numerical simulation at several length scales is necessary to improve the understanding of the relation between microstructure and mechanical behaviour. Regarding simulation techniques, polycrystalline homogenization is a very useful tool to predict the macroscopic response from the polycrystalline microstructure (grain size, shape and orientation distributions) and the single-crystal behaviour. The microstructure description is fully covered by modern characterization techniques (X-ray diffraction, EBSD, optical and electron microscopy). However, the mechanical behaviour of single crystals is not well known, especially in Mg alloys, where the correct parameterization of the mechanical behaviour of the different slip and twinning modes is a very difficult task.

A computational homogenization framework for predicting the behaviour of magnesium alloys has been developed in this thesis. The polycrystalline behaviour was obtained by finite element simulation of a representative volume element (RVE) of the microstructure, including the actual grain shape and orientation distributions. The crystal behaviour of the grains was described by a crystal plasticity model that takes into account the physical deformation mechanisms, i.e. slip and twinning. Finally, the parameterization of the crystal behaviour (the critical resolved shear stresses (CRSS) and strain hardening rates of all the slip and twinning modes) was obtained through the development of an inverse optimization methodology, one of the main original contributions of this thesis. The inverse methodology aims at finding, by means of the Levenberg-Marquardt optimization algorithm, the set of parameters defining the crystal behaviour that best fits a set of independent macroscopic tests. The objectivity of the method and the uniqueness of the solution as a function of the input information have been studied numerically.

The inverse optimization strategy was first used to obtain the crystal behaviour of a rolled polycrystalline AZ31 Mg alloy that showed a marked basal texture and a strong plastic anisotropy. Four different deformation mechanisms were included to characterize the single-crystal behaviour: basal, prismatic and pyramidal <c+a> slip, together with tensile twinning. The validity of the resulting parameters was proved by the ability of the polycrystalline model to predict independent macroscopic tests in different directions. Secondly, the influence of neodymium (Nd) content on an extruded polycrystalline Mg-Mn-Nd alloy was studied using the same homogenization and optimization framework. The effect of Nd addition was a progressive isotropization of the macroscopic behaviour. The model showed that this increase in macroscopic isotropy was due both to a randomization of the initial texture and to an increase in the isotropy of the crystal behaviour (similar values of the CRSSs of the different modes). Finally, the model was used to analyse the effect of temperature on the crystal behaviour of the Mg-Mn-Nd alloy. Introducing non-Schmid effects on the pyramidal <c+a> slip mode into the model allowed capturing the inverse strength differential that appeared between tension and compression above 150 °C. This is the first time, to the author's knowledge, that non-Schmid effects have been reported for Mg alloys.
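
The inverse optimization loop can be sketched with SciPy's Levenberg-Marquardt driver: a parameter vector (CRSSs, hardening rates) is mapped through the polycrystal model to macroscopic stress-strain curves and fitted to the measurements. Below, a toy surrogate stands in for the finite element RVE simulation; all names and numbers are illustrative, not the thesis's model:

```python
import numpy as np
from scipy.optimize import least_squares

def macroscopic_stress(params, strains):
    """Placeholder for the polycrystal RVE simulation: maps crystal
    parameters (e.g. a CRSS and a hardening rate) to the predicted
    macroscopic stress at the measured strain points. In the real
    workflow this call would run the finite element RVE model."""
    crss, hardening = params
    return crss * (1.0 + hardening * strains)   # toy surrogate model

def residuals(params, strains, measured_stress):
    return macroscopic_stress(params, strains) - measured_stress

strains = np.linspace(0.0, 0.08, 20)
measured = 48.0 * (1.0 + 6.0 * strains) + np.random.normal(0, 0.5, 20)

fit = least_squares(residuals, x0=[30.0, 1.0],
                    args=(strains, measured), method="lm")
print("identified parameters:", fit.x)
```

Fitting several independent macroscopic tests simultaneously, as the thesis does, is what constrains the many crystal parameters and lets the uniqueness of the solution be assessed.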

Relevance: 80.00%

Abstract:

The paper reports on a collaborative effort between the Swiss Federal Nuclear Safety Inspectorate (ENSI) and their consultants Principia and Stangenberg. As part of the IMPACT III project, reduced-scale impact tests of reinforced concrete structures were carried out. The simulation of test X3 is presented here, and the numerical results are compared with those obtained in the test, carried out in August 2013. The general objective is to improve the safety of nuclear facilities and, more specifically, to demonstrate the capability of current simulation techniques to reproduce the behaviour of a reinforced concrete structure impacted by a soft missile. The missile is a steel tube with a mass of 50 kg travelling at 140 m/s. The target is a 250 mm thick, 2.1 m by 2.1 m reinforced concrete wall held in a stiff supporting frame. The reinforcement includes both longitudinal and transverse rebars. Calculations were carried out before and after the test with Abaqus (Principia) and SOFiSTiK (Stangenberg). In the Abaqus simulation the concrete is modelled using solid elements and a damaged plasticity formulation, the rebars with embedded beam elements, and the missile with shell elements. In SOFiSTiK the target is modelled with non-linear, layered shell elements with the reinforcement on both sides; non-linear shear deformations of the shell/plate elements are approximately included. The results generally indicate good agreement between calculations and measurements.
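
A classical way to characterize such a soft-missile load, independent of the full finite element models compared in the paper, is the Riera approach: at each instant the interface force is the crushing force of the buckled section plus the momentum flux of the still-rigid remainder, F(t) = P_c + mu * v(t)^2. A minimal sketch assuming a uniform crush force and mass per unit length (the crush force and missile length are illustrative assumptions, not IMPACT III data; only the 50 kg mass and 140 m/s speed come from the abstract):

```python
# Riera-type interface force for a soft missile on a rigid target:
# F(t) = P_c + mu * v(t)^2, with the rigid remainder decelerated by P_c.
m_total, length = 50.0, 2.0      # kg; missile length [m] assumed
mu = m_total / length            # mass per unit length, assumed uniform
P_c = 2.0e5                      # assumed constant crushing force [N]
v, x, dt = 140.0, 0.0, 1e-5      # impact speed [m/s], crushed length, step

t, F_peak = 0.0, 0.0
while x < length and v > 0.0:
    F = P_c + mu * v**2                  # force transmitted to the target
    F_peak = max(F_peak, F)
    m_rigid = m_total - mu * x           # uncrushed part of the missile
    v -= (P_c / m_rigid) * dt            # rigid part decelerates
    x += v * dt                          # crush front advances
    t += dt
print(f"peak force {F_peak/1e6:.2f} MN, crush ends at t = {t*1e3:.2f} ms")
```

Such a hand calculation is often used as a sanity check on the loading predicted by the detailed simulations.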

Relevance: 80.00%

Abstract:

The overall purpose of this work is the analysis of design and optimization techniques for geodetic networks observed with conventional (non-satellite) survey methods, together with the development and implementation of a software system capable of helping to define the most reliable and precise geometry, depending on the orography of the terrain where the network has to be located. First, the least squares adjustment methodology and the propagation of variances are studied, and their dependence on the geometry adopted by the network is then analysed. It is essential to establish the independence of the redundancy matrix (R) from the observations and its complete dependence on the geometry, as well as the influence of its main diagonal (rii), the redundancy numbers, in order to guarantee the maximum internal reliability of the network. The behaviour of the redundancy numbers (rii) in the design of a survey network is also analysed: the variation of these values as a function of the geometry, their independence from the observations, and the different design levels depending on the parameters and known data.

It should be noted that network optimization, according to the criteria set out, is subject to the constraints imposed by the need for the network points to be accessible and, in addition, mutually visible wherever they are connected by observations; these conditions depend essentially on the terrain relief and on the natural or artificial obstacles that may exist. This implies the need to include in the analysis and design at least a digital terrain model (DTM), although the most useful option would be to include a digital surface model (DSM), which is not always possible. Although the design treatment is based on a two-dimensional system, the possibility of incorporating a digital surface model (DSM) is studied; when choosing the locations of the network points, this allows the feasibility of the observations to be evaluated as a function of the orography and of the elements, both natural and artificial, located on it. Such a system would provide, in principle, an optimal design of a constrained network, considering both the internal reliability and the final accuracy of its points while taking the relief into account, which amounts to solving a "two-and-a-half-dimensional" design problem, provided that a digital surface or terrain model is available. Since freely obtaining the DSM of the areas of interest of the project is nowadays expensive, the possibility of combining the design study with a digital terrain model is considered.

The activities developed in this thesis are described in this document and are part of a research effort with the following overall objectives: 1. To establish a mathematical model of the observation process of a survey network, considering all the factors involved and their influence on the estimates of the unknowns obtained as a result of the adjustment of the observations. 2. To develop a system to optimize the results of a survey network, applying design and simulation techniques to the previous model. 3. To present an explicit and rigorous formulation of the parameters that assess the reliability of a survey network and of their relation to its design; the achievement of this objective is based, besides the search and review of sources, on an intense work of unifying notation and constructing intermediate steps in the mathematical developments. 4. To develop an overall view of the influence of network design on six major factors (a posteriori precision, reliability of the observations, nature and feasibility of the observations, instrumentation, and station methodology) as optimization criteria, in order to frame the specific subject addressed here. 5. To elaborate and program the algorithms needed to develop an application capable of handling the variables proposed above in the design and simulation of survey networks, taking the digital surface model into account. The following may be considered secondary objectives: to develop the algorithms needed to interrelate the digital terrain model with the design algorithms, and to implement in the application the possibility for the user to vary the coverage criteria of the parameters (normal or Student's t distribution) and their degree of reliability.
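
The redundancy matrix discussed above follows directly from the design matrix A and the weight matrix P of the observations, R = I - A (A^T P A)^-1 A^T P, and its diagonal entries r_ii are the redundancy numbers used as internal reliability criteria; note that the observed values themselves never enter the formula. A minimal numpy sketch (the network below is hypothetical):

```python
import numpy as np

def redundancy_numbers(A, P):
    """Redundancy matrix R = I - A (A^T P A)^{-1} A^T P of a least
    squares adjustment; its diagonal r_ii depends only on the network
    geometry (A) and the weights (P), not on the observed values."""
    N = A.T @ P @ A                                  # normal matrix
    R = np.eye(A.shape[0]) - A @ np.linalg.inv(N) @ A.T @ P
    return np.diag(R)

# Hypothetical design matrix (4 observations, 2 unknowns), equal weights
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
P = np.eye(4)
r = redundancy_numbers(A, P)
print(r, "sum =", r.sum())  # the r_ii sum to the network's total redundancy
```

Maximizing the smallest r_ii (so that no observation is uncontrolled) is a typical internal reliability criterion in the design stage described above.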