17 results for post-Newtonian approximation to general relativity
at Universidad Politécnica de Madrid
Abstract:
For centuries, earth has been used as a construction material. Nevertheless, the standards on this matter are very scattered, and even in the most developed countries, carrying out a construction with this material entails a variety of technical and legal problems. This paper reviews, at an international level, the normative panorama of earth construction. It analyzes ninety-one standards and regulations from countries across the five continents; these standards represent the state of the art in the standardization of earth as a construction material. In this research we analyze the international standards for earth construction, focusing on durability tests (spray and drip erosion tests), and we examine the differences between the test methods. We also present the results of these tests on two types of compressed earth blocks.
Abstract:
A quasi-cylindrical approximation is used to analyse the axisymmetric swirling flow of a liquid with a hollow air core in the chamber of a pressure swirl atomizer. The liquid is injected into the chamber with an azimuthal velocity component through a number of slots at the periphery of one end of the chamber, and flows out as an annular sheet through a central orifice at the other end, following a conical convergence of the chamber wall. An effective inlet condition is used to model the effects of the slots and the boundary layer that develops at the nearby endwall of the chamber. An analysis is presented of the structure of the liquid sheet at the end of the exit orifice, where the flow becomes critical in the sense that upstream propagation of long-wave perturbations ceases to be possible. This analysis leads to a boundary condition at the end of the orifice that is an extension of the condition of maximum flux used with irrotational models of the flow. As is well known, the radial pressure gradient induced by the swirling flow in the bulk of the chamber causes the overpressure that drives the liquid towards the exit orifice, and also leads to Ekman pumping in the boundary layers of reduced azimuthal velocity at the convergent wall of the chamber and at the wall opposite to the exit orifice. The numerical results confirm the important role played by the boundary layers. They make the thickness of the liquid sheet at the end of the orifice larger than predicted by irrotational models, and at the same time tend to decrease the overpressure required to pass a given flow rate through the chamber, because the large axial velocity in the boundary layers takes care of part of the flow rate. The thickness of the boundary layers increases when the atomizer constant (the inverse of a swirl number, proportional to the flow rate scaled with the radius of the exit orifice and the circulation around the air core) decreases. A minimum value of this parameter is found below which the layer of reduced azimuthal velocity around the air core prevents the pressure from increasing and steadily driving the flow through the exit orifice. The effects of other parameters not accounted for by irrotational models are also analysed in terms of their influence on the boundary layers.
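The atomizer constant defined above lends itself to a compact statement. A minimal sketch, assuming the parameter is formed exactly as the abstract describes, with Q the volumetric flow rate, R_o the exit-orifice radius and Gamma the circulation around the air core; the 2*pi normalization is our assumption for illustration, not taken from the paper:

    import math

    def atomizer_constant(Q, R_o, Gamma):
        """Inverse swirl number: flow rate scaled with the orifice radius and
        the air-core circulation, per the abstract's verbal definition.
        The 2*pi factor is a hypothetical normalization chosen only to make
        the example concrete."""
        return Q / (2.0 * math.pi * R_o * Gamma)

    # Illustrative (made-up) values: 1 L/s through a 2 mm orifice.
    print(atomizer_constant(Q=1e-3, R_o=2e-3, Gamma=0.05))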
Abstract:
The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent and to limit the consequences of any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, that are considered plausible have been taken into account, and that the monitoring systems and engineered safety and safeguard systems will be capable of ensuring the safety goals. On the other hand, probabilistic safety analysis tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective in the comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) demanded a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. Here is where the theory of stimulated dynamics (TSD) intervenes, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the CSN (Consejo de Seguridad Nuclear) branch of Modelling and Simulation (MOSI). This methodology attempts to extend classical PSA by including accident dynamic analysis, an assessment of the damage associated with the transients and a computation of the damage frequency. The application of the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS provides accident dynamic analysis support through simulation of nuclear accident sequences and operating procedures; furthermore, it includes probabilistic quantification of fault trees and sequences, and integration and statistical treatment of risk metrics. SCAIS makes intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident and probability calculation codes. The integration of accident simulation in the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, yet at the cost of an enormous increase in complexity. As the complexity of the process is primarily concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations, which is the focus of the present work. This document presents the work done on the investigation of more efficient techniques applied to the risk assessment process within the ISA methodology. Such techniques have the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task; because of time limitations, the scope of the work had to be reduced.
Therefore, some assumptions were made in order to work in simplified scenarios best suited for an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section then introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, giving full detail of their mathematical background and procedures. Later, the test case used is described and the results from the application of the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
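To make the simulation-count problem concrete: if each simulated accident sequence is treated as a Bernoulli trial for damage, a plain Monte Carlo estimate of the damage probability needs a number of runs inversely proportional to that probability before rare damage events are even observed, which is what motivates techniques that need fewer simulations. A minimal sketch under that assumption, where run_sequence is a hypothetical stand-in for a call to a dynamic simulation code of the kind coupled in SCAIS:

    import random

    def run_sequence(params):
        # Hypothetical stand-in for a full thermal-hydraulic simulation:
        # returns True when the sampled sequence exceeds the damage limit.
        return random.random() < 0.01  # assumed 1% true damage probability

    def damage_probability(n_sims):
        hits = sum(run_sequence(None) for _ in range(n_sims))
        p = hits / n_sims
        # Standard error ~ sqrt(p*(1-p)/n): halving the error quadruples n,
        # hence the interest in techniques that reduce the simulation count.
        return p

    print(damage_probability(10_000))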
Abstract:
This thesis contributes to the analysis and design of printed reflectarray antennas. The main part of the work is focused on the analysis of dual offset antennas comprising two reflectarray surfaces, one acting as sub-reflector and the other as main reflector. These configurations introduce additional complexity in several aspects with respect to conventional dual offset reflectors; however, they offer many degrees of freedom that can be used to improve the electrical performance of the antenna. The thesis is organized in four parts: the development of an analysis technique for dual-reflectarray antennas; a preliminary validation of the methodology using equivalent reflector systems as reference antennas; a more rigorous validation of the software tool by manufacturing and testing a dual-reflectarray antenna demonstrator; and the practical design of dual-reflectarray systems for some applications that show the potential of this kind of configuration to scan the beam and to generate contoured beams. In the first part, a general tool has been implemented to analyze high-gain antennas that are constructed of two flat reflectarray structures. The classic reflectarray analysis based on MoM under the local periodicity assumption is used for both sub- and main reflectarrays, taking into account the incident angle on each reflectarray element. The incident field on the main reflectarray is computed taking into account the field radiated by all the elements on the sub-reflectarray. Two approaches have been developed: one employs a simple approximation to reduce the computer run time, and the other does not, but offers, in many cases, improved accuracy. The approximation is based on computing the reflected field on each element of the main reflectarray only once for all the fields radiated by the sub-reflectarray elements, assuming that the response will be the same because the only difference is a small variation in the angle of incidence. This approximation is very accurate when the reflectarray elements on the main reflectarray show a relatively small sensitivity to the angle of incidence. An extension of the analysis technique has been implemented to study dual-reflectarray antennas comprising a main reflectarray printed on a parabolic, or in general curved, surface. In many applications of dual-reflectarray configurations, the reflectarray elements are in the near field of the feed horn. To consider the near field radiated by the horn, the incident field on each reflectarray element is computed using a spherical mode expansion. In this region, the angles of incidence are moderately wide, and they are considered in the analysis of the reflectarray to better calculate the actual incident field on the sub-reflectarray elements. This technique increases the accuracy of the prediction of co- and cross-polar patterns and antenna gain with respect to the case of using ideal feed models. In the second part, as a preliminary validation, the proposed analysis method has been used to design a dual-reflectarray antenna that emulates previous dual-reflector antennas in Ku- and W-bands including a reflectarray as sub-reflector. The results for the dual-reflectarray antenna compare very well with those of the parabolic reflector and reflectarray sub-reflector; radiation patterns, antenna gain and efficiency are practically the same when the main parabolic reflector is substituted by a flat reflectarray.
The results show that the gain is only reduced by a few tenths of a dB as a result of the ohmic losses in the reflectarray. The phase adjustment on two surfaces provided by the dual-reflectarray configuration can be used to improve the antenna performance in applications requiring multiple beams, beam scanning or shaped beams. In the third part, a very challenging dual-reflectarray antenna demonstrator has been designed, manufactured and tested for a more rigorous validation of the analysis technique presented. The proposed antenna configuration has the feed, the sub-reflectarray and the main reflectarray in the near field of one another, so that the conventional far-field approximations are not suitable for the analysis of such an antenna. This geometry is used to benchmark the proposed analysis tool under very stringent conditions. Some aspects of the proposed analysis technique that improve the accuracy of the analysis are also discussed. These improvements include a novel method to reduce the inherent cross-polarization, which is introduced mainly by grounded patch arrays. It has been checked that cross-polarization in offset reflectarrays can be significantly reduced by properly adjusting the patch dimensions in the reflectarray in order to produce an overall cancellation of the cross-polarization. The dimensions of the patches are adjusted not only to provide the required phase distribution to shape the beam, but also to exploit the zero crossings of the cross-polarization components. The last part of the thesis deals with direct applications of the technique described. The technique presented is directly applicable to the design of contoured-beam antennas for DBS applications, where the cross-polarization requirements are very stringent. The beam shaping is achieved by synthesizing the phase distribution on the main reflectarray while the sub-reflectarray emulates an equivalent hyperbolic sub-reflector. Dual-reflectarray antennas also offer the ability to scan the beam over small angles about boresight. Two possible architectures for a Ku-band antenna are also described, based on a dual planar reflectarray configuration that provides electronic beam scanning in a limited angular range. In the first architecture, the beam scanning is achieved by introducing a phase control in the elements of the sub-reflectarray while the main reflectarray is passive. A second alternative is also studied, in which the beam scanning is produced using 1-bit control on the main reflectarray, while a passive sub-reflectarray is designed to provide a large focal distance within a compact configuration. The system aims to provide a solution for bi-directional satellite links for emergency communications. In both proposed architectures, the objective is to provide compact optics that are simple to fold and deploy.
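The run-time-saving approximation described in the first part can be illustrated with a toy sketch (not the thesis's actual code). Here gamma is a hypothetical reflection coefficient of a main-reflectarray element as a function of incidence angle; the exact route evaluates it once per sub-reflectarray contribution, while the approximation evaluates it once at a nominal angle and reuses it, which is accurate when the angular spread is small:

    import numpy as np

    def gamma(theta):
        # Hypothetical, weakly angle-dependent element response.
        return np.exp(1j * 0.1 * np.cos(theta))

    def exact_field(E_inc, thetas):
        # Sum each sub-reflectarray contribution with its own coefficient.
        return sum(g * e for g, e in zip(gamma(thetas), E_inc))

    def approx_field(E_inc, thetas):
        # Evaluate the element response once, at the mean incidence angle.
        return gamma(thetas.mean()) * E_inc.sum()

    rng = np.random.default_rng(0)
    thetas = np.deg2rad(20 + rng.uniform(-2, 2, 100))   # small angular spread
    E_inc = rng.standard_normal(100) + 1j * rng.standard_normal(100)
    print(abs(exact_field(E_inc, thetas) - approx_field(E_inc, thetas)))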
Abstract:
The objective of this paper is to evaluate the behaviour of a controller designed using a parametric eigenstructure assignment method and to evaluate its suitability for use in flexible spacecraft. The challenge of this objective lies in obtaining a suitable controller specifically designed to alleviate the deflections and vibrations suffered by external appendages of flexible spacecraft while performing attitude manoeuvres. One of the main problems in these vehicles is the mechanical cross-coupling that exists between the rigid and flexible parts of the spacecraft. Spacecraft with fine attitude-pointing requirements need precise control of the mechanical coupling to avoid undesired attitude misalignment. In designing an attitude controller, it is necessary to consider the possible vibration of the solar panels and how it may influence the performance of the rest of the vehicle. The nonlinear mathematical model of a flexible spacecraft is considered a close approximation to the real system. During the process of controller evaluation, the design process has also been taken into account as a factor in assessing the robustness of the system.
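As a hedged illustration of the kind of design step involved (SciPy's standard pole-placement routine, not the paper's parametric eigenstructure assignment method), the sketch below assigns closed-loop eigenvalues to a toy rigid-flexible model; all matrices and numerical values are invented for the example:

    import numpy as np
    from scipy.signal import place_poles

    # Toy model: rigid attitude (theta, theta_dot) coupled to one flexible
    # mode (eta, eta_dot); coupling terms and gains are illustrative only.
    omega, zeta = 3.0, 0.02          # assumed flexible-mode frequency/damping
    A = np.array([[0, 1, 0,         0],
                  [0, 0, 0.5,       0.1],
                  [0, 0, 0,         1],
                  [0, 0, -omega**2, -2 * zeta * omega]])
    B = np.array([[0.0], [1.0], [0.0], [0.3]])

    # Place closed-loop poles to damp both the rigid and the flexible dynamics.
    poles = [-1.0, -1.5, -2.0 + 3.0j, -2.0 - 3.0j]
    K = place_poles(A, B, poles).gain_matrix
    print(np.sort_complex(np.linalg.eigvals(A - B @ K)))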
Abstract:
The main objective of this course, conducted by Jóvenes Nucleares (Spanish Young Generation in Nuclear, JJNN), a non-profit organization that depends on the Spanish Nuclear Society (SNE), is to pass on basic knowledge about nuclear science and technology to the general public, mostly students, and to introduce them to its most relevant points. The purposes of this course are to provide general information, to answer the most common questions about nuclear energy and to motivate young students to start a career in the nuclear field. It is therefore directed mainly at high school and university students, but also at anyone who wants to learn about the key issues of such an important matter in our society. Anybody can attend the course, as no specific scientific education is required. The course is given at least once a year, during the Annual Meeting of the Spanish Nuclear Society, which takes place in a different Spanish city each time. The course is also given at whichever university or institution requests it from JJNN, the only limit being the presenters' availability. The course is divided into the following chapters: physical nuclear and radiation principles, nuclear power plants, nuclear safety, nuclear fuel, radioactive waste, decommissioning of nuclear facilities, future nuclear power plants, other uses of nuclear technology, and nuclear energy, climate change and sustainable development. The course consists of 15-minute lessons on the above topics, given by young professionals, experts in the field, who belong either to the Spanish Young Generation in Nuclear or to companies and institutions related to nuclear energy. At the end of the course, a 200-page book with the contents of the course is handed to every member of the audience. This book is also distributed in other editions of the course at high schools and universities in order to promote the scientific dissemination of nuclear technology. As an extra motivation, JJNN awards a course certificate to the attendees. At the end of the last edition of the course, in Santiago de Compostela, the attendees were asked to provide feedback about it. Some really interesting lessons were learned that will be very useful for improving the next editions of the course. As a general conclusion, it can be said that many of the students who have attended the course have increased their motivation in the nuclear field, and hopefully this will help young talents to choose the nuclear field to develop their careers.
Abstract:
Manufacturing of high-performance polymer-matrix composites is normally carried out in an autoclave, using prepreg tapes stacked and consolidated under the simultaneous application of pressure and temperature. High autoclave pressures reduce the porosity in the laminate and ensure excellent mechanical properties. However, this manufacturing route is expensive in terms of capital investment and processing time, hindering its application in many industrial sectors. This fact has driven the demand for alternative out-of-autoclave processing routes. These techniques claim to produce composite parts faster and at lower cost, but the mechanical performance is also reduced due to the lower fiber content and the higher porosity. Current numerical models are able to simulate the mechanisms of void growth in polymer-matrix composites processed in autoclave. However, these models are restricted to small spherical voids surrounded by a viscous resin. Their validity is not proved for long cylindrical voids in a viscous matrix surrounded by aligned fibers, the standard morphology observed in out-of-autoclave composites. In addition, there is clear experimental evidence of the detrimental effect of voids on the mechanical performance of composites, but no detailed information regarding the influence of curing conditions on the actual volume fraction, shape and spatial distribution of voids within the laminate.
The standard techniques of microstructural characterization of composites (optical or electron microscopy, X-ray radiography, ultrasonics) provide information in two dimensions and are not always suitable to determine the porosity or void population. Moreover, they cannot provide 3D information. The effect of the curing cycle on the development of voids during consolidation of AS4/8552 prepregs at low pressure by compression molding was studied in unidirectional and multiaxial panels. They were manufactured using three different curing cycles carefully designed following the rheological and thermal analysis of the raw prepregs. The void volume fraction, shape and spatial distribution were analyzed in detail by means of X-ray computed microtomography, which has demonstrated its potential for analyzing the microstructural features of composites. It was demonstrated that the final void volume fraction depended on the evolution of the dynamic viscosity throughout the cycle. Most of the initial voids were the result of air entrapment and wrinkles created during lay-up. Differences in the final void volume fraction depended on the processing conditions for unidirectional and multiaxial panels. Voids were rod-shaped, oriented parallel to the fibers and concentrated in channels along the fiber orientation. X-ray computed tomography analysis of voids along the fiber direction showed a cellular structure with an approximate cell diameter of 1 mm. The cell walls were fiber-rich regions, and porosity was localized at the center of the cells. This porosity distribution within the laminate was the result of inhomogeneous consolidation. This information is critical to optimize processing parameters and to provide inputs for virtual testing and virtual processing tools. In addition, the matrix-controlled mechanical properties of the panels were measured in order to establish the relationship between processing conditions and mechanical performance. The interlaminar shear strength (ILSS) and the interlaminar toughness (GIc and GIIc) were selected to evaluate the effect of porosity on the mechanical performance of unidirectional panels. The ILSS was strongly affected by the porosity when the void content was higher than 1%. The same trends were observed in the case of GIIc, while GIc was insensitive to the void volume fraction. Additionally, the mechanical performance of multiaxial panels in compression, low-velocity impact and compression after impact (CAI) was measured to address the effect of processing conditions. The compressive strength decreased with porosity and ply clustering. However, the porosity did not influence the impact resistance or the compression-after-impact strength, because the effect of porosity was masked by other factors such as the damage due to impact or the laminate lay-up. Finally, the effect of the processing conditions on the compaction behavior of unidirectional AS4/8552 panels manufactured by compression moulding was simulated using the finite element method, as a first approximation to more complex and accurate models for out-of-autoclave curing and consolidation of composite laminates. The model parameters were obtained from rheological and thermo-mechanical experiments carried out on raw prepreg samples. The predictions of the thickness change during consolidation were in reasonable agreement with the experimental results.
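As a minimal sketch of the kind of post-processing behind the reported void statistics (assuming a reconstructed grayscale tomogram in which voids are darker than the composite; the threshold value and the synthetic volume are invented for the example):

    import numpy as np

    def void_volume_fraction(tomogram, threshold=0.2):
        """Binarize a 3-D X-ray tomogram and return the void volume fraction.
        `threshold` is a hypothetical gray-level cut separating voids from
        material; in practice it would be calibrated on the scan histogram."""
        voids = tomogram < threshold
        return voids.mean()

    # Synthetic 3-D volume standing in for a reconstructed scan.
    rng = np.random.default_rng(1)
    volume = rng.random((64, 64, 64))
    print(f"void volume fraction: {void_volume_fraction(volume):.3f}")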
Abstract:
Corrosion of reinforcing steel in concrete due to chloride ingress is one of the main causes of the deterioration of reinforced concrete structures. The structures most affected by such corrosion are buildings in marine zones and structures exposed to de-icing salts, such as highways and bridges. The process is accompanied by an increase in volume of the corrosion products at the rebar-concrete interface. Depending on the level of oxidation, iron can expand to as much as six times its original volume. This increase in volume exerts tensile stresses in the surrounding concrete, which result in cracking and spalling of the concrete cover if the concrete tensile strength is exceeded. The mechanism by which steel embedded in concrete corrodes in the presence of chloride is the local breakdown of the passive layer formed in the highly alkaline conditions of the concrete. It is assumed that corrosion initiates when a critical chloride content reaches the rebar surface. The mathematical formulation idealizes the corrosion sequence as a two-stage process: an initiation stage, during which chloride ions penetrate to the reinforcing steel surface and depassivate it, and a propagation stage, in which active corrosion takes place until cracking of the concrete cover has occurred. The aim of this research is to develop computer tools to evaluate the duration of the service life of reinforced concrete structures, considering both the initiation and propagation periods. Such tools must offer a friendly interface to facilitate their use by researchers whose background is not in numerical simulation. For the evaluation of the initiation period, different tools have been developed:
• Program TavProbabilidade: provides the means to carry out a probability analysis of a chloride ingress model. Such a tool is necessary due to the lack of data and the general uncertainties associated with the phenomenon of chloride diffusion. It differs from the deterministic approach because it computes not just a chloride profile at a certain age, but a range of chloride profiles for each probability of occurrence.
• Program TavProbabilidade_Fiabilidade: carries out reliability analyses of the initiation period. It takes into account the critical value of the chloride concentration at the steel that causes breakdown of the passive layer and the beginning of the propagation stage. It differs from the deterministic analysis in that it does not predict whether corrosion is going to begin or not, but quantifies the probability of corrosion initiation.
• Program TavDif_1D: was created to carry out a one-dimensional deterministic analysis of the chloride diffusion process by the finite element method (FEM), which numerically solves Fick's second law. Despite the different FEM solvers already developed in one dimension, the decision to create a new code (TavDif_1D) was taken because of the need for a solver with a friendly interface for pre- and post-processing, according to the needs of the IETCC.
An innovative tool was also developed, with a systematic method devised to compare the ability of the different 1D models to predict the actual evolution of chloride ingress based on experimental measurements, and also to quantify the degree of agreement of the models with each other. For the evaluation of the entire service life of the structure, a computer program has been developed using the finite element method to couple both service-life periods: initiation and propagation.
The program for 2D (TavDif_2D) allows the complementary use of two external programs in a unique friendly interface:
• GMSH – a finite element mesh generator and post-processing viewer
• OOFEM – a finite element solver
This program (TavDif_2D) is responsible for deciding, at each time step, when and where to start applying the boundary conditions of the fracture mechanics module, as a function of the chloride concentration and the corrosion parameters (Icorr, etc.). The program is also responsible for verifying the presence and the degree of fracture in each element, in order to pass on the information on the variation of the diffusion coefficient with the crack width. The advantages of the FEM with the interface provided by the tool are:
• the flexibility to input data such as material properties and boundary conditions as time-dependent functions;
• the flexibility to predict the chloride concentration profile for different geometries;
• the possibility to couple chloride diffusion (initiation stage) with chemical and mechanical behavior (propagation stage).
The OOFEM code had to be modified to accept temperature, humidity and time-dependent values for the material properties, which is necessary to adequately describe the environmental variations. A 3D simulation has been performed to simulate the behavior of a beam under both the action of the external load and the internal load caused by the corrosion products, using embedded fracture elements, in order to plot the curve of the deflection of the central region of the beam versus the external load and compare it with the experimental data.
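The initiation-period analysis rests on Fick's second law, whose one-dimensional solution for a semi-infinite medium with a constant surface concentration is C(x,t) = Cs * erfc(x / (2*sqrt(D*t))). A minimal sketch of the corresponding initiation-time estimate follows; the material values are illustrative and are not those used by the programs above:

    from scipy.special import erfc, erfcinv

    def chloride_profile(x, t, Cs, D):
        """Fick's second law, semi-infinite medium, constant surface concentration."""
        return Cs * erfc(x / (2.0 * (D * t) ** 0.5))

    def initiation_time(cover, Cs, Ccrit, D):
        """Time for the chloride content at the rebar depth to reach Ccrit."""
        return (cover / (2.0 * erfcinv(Ccrit / Cs))) ** 2 / D

    # Illustrative values: 40 mm cover, D = 1e-12 m^2/s, Ccrit/Cs = 0.4.
    t = initiation_time(cover=0.04, Cs=1.0, Ccrit=0.4, D=1e-12)
    print(f"initiation time: {t / 3.15e7:.1f} years")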
Abstract:
We introduce a diffusion-based algorithm in which multiple agents cooperate to predict a common and global state-value function by sharing local estimates and local gradient information among neighbors. Our algorithm is a fully distributed implementation of gradient temporal-difference learning with linear function approximation, making it applicable to multiagent settings. Simulations illustrate the benefit of cooperation in learning, as made possible by the proposed algorithm.
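A minimal sketch of the idea under stated assumptions: a synthetic chain MDP, a fixed doubly stochastic combination matrix, and GTD2-style local updates followed by a diffusion (combine) step. None of these details are claimed to match the paper's exact algorithm:

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_feat, n_agents, gamma = 10, 4, 3, 0.9
    Phi = rng.standard_normal((n_states, n_feat))    # fixed linear features
    C = np.full((n_agents, n_agents), 1 / n_agents)  # doubly stochastic weights

    theta = np.zeros((n_agents, n_feat))             # value-function weights
    w = np.zeros((n_agents, n_feat))                 # auxiliary GTD weights
    alpha, beta = 0.01, 0.05

    for _ in range(5000):
        for k in range(n_agents):                    # adapt: local GTD-style step
            s = rng.integers(n_states)
            s_next = (s + 1) % n_states              # toy chain dynamics
            r = 1.0 if s_next == 0 else 0.0
            phi, phi_next = Phi[s], Phi[s_next]
            delta = r + gamma * theta[k] @ phi_next - theta[k] @ phi
            w[k] += beta * (delta - phi @ w[k]) * phi
            theta[k] += alpha * (phi - gamma * phi_next) * (phi @ w[k])
        theta = C @ theta                            # combine with neighbors
        w = C @ w

    print(theta[0])                                  # shared value estimate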
Abstract:
A HOUSE FOR EINSTEIN: KONRAD WACHSMANN AND THE EVOLUTION OF A PREFABRICATED WOODEN HOUSING MODEL FROM "CHRISTOPH & UNMACK A.G." TO "GENERAL PANEL SYSTEM". This article studies the evolution of a prefabricated wooden housing model, exemplified in the summer house built by Konrad Wachsmann for Albert Einstein in 1929 in Caputh, near Potsdam. The physicist wanted to build a "resting place", choosing wood construction because of its easy and fast assembly, adaptability and warmth, and so that it would harmonize better with the environment into which it would be inserted. Konrad Wachsmann, who worked for the prefabricated wooden housing firm "Christoph & Unmack A.G.", proposed to Einstein a modern prefabricated model. This typology, which had evolved from the initial "Nordic Scandinavian" and "Jugendstil" designs to a new modern language initiated by Poelzig (with clean lines, a flat roof and large windows), was slightly modified by Einstein, who finally awarded the commission. Aided by Einstein in moving to the USA, Konrad Wachsmann continued his research work on prefabricated housing there together with Walter Gropius, which resulted in the "General Panel System" and the well-known "Packaged Houses".
Abstract:
Swarm robotics is a field of multi-robotics in which large numbers of robots are coordinated in a distributed and decentralised way. It is based on the use of local rules and of robots that are simple relative to the complexity of the task to be achieved, and it is inspired by social insects. Large numbers of simple robots can perform complex tasks in a more efficient way than a single robot, giving robustness and flexibility to the group. In this article, an overview of swarm robotics is given, describing its main properties and characteristics and comparing it to general multi-robotic systems. A review of different research works and experimental results, together with a discussion of the future of swarm robotics in real-world applications, completes this work.
Abstract:
The relationship between medicine and engineering is becoming closer than ever, giving birth to a recently emerged field: bioengineering. This project is focused on this subject. This recent field is becoming more and more important due to the fast development of new technologies that provide tools to improve disease diagnosis with respect to traditional procedures.
In bioengineering the fastest-growing field is medical imaging, in which we can obtain images of the inside of the human body without the need for surgery. Nowadays, by means of medical modalities such as magnetic resonance, X-ray, nuclear medicine or ultrasound, we can obtain images to make a more accurate diagnosis. For those images to be useful within the medical field, they must be processed properly with digital image processing techniques. It is in this field of digital medical image processing that this project is developed. Thanks to the development of digital image processing, providing methods for data extraction, improved visualization or highlighting of features of interest, diagnosis can be eased and facilitated. In an age where automation of processes is much sought after, automating image processing to extract information more easily is extremely useful. One of the most powerful image processing tools is Matlab, together with its image processing toolbox; that is the reason why this software was chosen to develop the practical algorithms in this project, as its power and versatility simplify the implementation of algorithms. This final project is divided into two main parts. Firstly, the different modalities for obtaining medical images are described, together with the different uses of each method according to the application, followed by a brief description of the most important image processing techniques that have been used in the project. Secondly, four algorithms are implemented in Matlab to provide practical examples of medical image processing, demonstrating the usefulness of the concepts previously explained in the first part, such as segmentation and spatial filtering, as well as other application-specific concepts. The example applications developed are: calculation of the metastasis percentage of a tissue, diagnosis of spinal deformity, approximation of the MTF of a gamma camera, and measurement of the area of a fibroadenoma in a breast ultrasound image. Finally, for each of the applications developed, its usefulness within the medical field, the results obtained, and its implementation in a graphical user interface to ensure ease of use are detailed.
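The project implements its examples in Matlab; as a language-neutral illustration of the first application (percentage of metastatic tissue via threshold segmentation), here is a minimal NumPy sketch under the assumption that metastatic pixels are brighter than a chosen gray-level threshold (both the threshold and the synthetic image are invented for the example):

    import numpy as np

    def metastasis_percentage(image, tissue_mask, threshold=0.7):
        """Fraction of tissue pixels classified as metastatic by a
        (hypothetical) gray-level threshold, expressed as a percentage."""
        metastatic = (image > threshold) & tissue_mask
        return 100.0 * metastatic.sum() / tissue_mask.sum()

    rng = np.random.default_rng(2)
    image = rng.random((256, 256))          # synthetic stand-in for a scan
    tissue = np.ones_like(image, dtype=bool)
    print(f"{metastasis_percentage(image, tissue):.1f} %")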
Abstract:
The post-fracture behavior of laminated glass is one of the research topics being studied to explain the remaining load capacity after the breakage of the first sheet. Previous experimental work has shown that, in the case of human impact on annealed glass, the load-bearing capacity can be up to 3 times higher, without a clear explanation of the structural behavior of the plate. To make an approximation to the post-fracture resistance, an experimental program was prepared to test annealed, heat-strengthened and toughened glass plates, with two additional series of annealed and heat-strengthened glass carrying a layer of polyvinyl butyral adhered just after the manufacturing process. The coaxial double ring test with large test surface areas is the standard that has been followed. To compare the Weibull distributions of the different fracture stresses, an iterative process based on the actual stress distribution, obtained with a finite element model updated with experimental results, has been used. The final comparisons show an appreciable increase in strength for the annealed glass plates (45%), and a smaller increase for the heat-strengthened ones (25%).
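As a minimal sketch of the statistical step described (fitting two-parameter Weibull distributions to fracture-stress samples before comparison), the data below are synthetic stand-ins, not the test results:

    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(3)
    # Synthetic fracture stresses (MPa) standing in for two test series.
    annealed = weibull_min.rvs(c=6.0, scale=60.0, size=50, random_state=rng)
    with_pvb = weibull_min.rvs(c=6.0, scale=87.0, size=50, random_state=rng)

    for name, data in [("annealed", annealed), ("annealed + PVB", with_pvb)]:
        shape, _, scale = weibull_min.fit(data, floc=0)  # two-parameter fit
        print(f"{name}: Weibull modulus m = {shape:.1f}, "
              f"characteristic stress = {scale:.1f} MPa")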
Abstract:
Overtopping is defined as the transport of a significant volume of water over the crest of a breakwater into the sheltered area. This phenomenon therefore determines the breakwater's crest level, depending on the volume of water admissible at the rear in view of the sheltered area's functional and structural conditioning factors. In general, the amount of overtopping that a breakwater can tolerate from the point of view of its structural integrity is much greater than the amount permissible from the point of view of its functionality; on the other hand, designing a breakwater with an overtopping probability that is too low, or zero, would lead to designs incompatible with other considerations, such as aesthetics or economics. The ways to assess overtopping range from the most traditional, such as semi-empirical or empirical equations and reduced-scale physical model tests, to less usual ones such as the instrumentation of actual breakwaters (prototypes), artificial neural networks and numerical models. Physical model tests are the most accurate and reliable tool for the specific study of each case, given the complexity of the overtopping process and the multitude of physical phenomena and parameters involved, especially when large models are adopted and wind is generated; they reveal the hydraulic and structural behaviour of the breakwater, identify possible design flaws before execution, and allow alternatives to be evaluated, with the consequent savings in construction costs. Nevertheless, the values obtained are affected to a greater or lesser degree by scale and model effects, so they can only be considered an approximation to what actually happens. Empirical expressions obtained from laboratory tests have been developed for calculating the overtopping rate; the resulting formulas depend not only on environmental conditions (wave height, wave period and water level) but also on the model's characteristics, and are only applicable within the range of validity of the tests performed in each case. The purpose of this Thesis is to contrast the overtopping formulations developed by different authors for different breakwater typologies. First, the existing equations for estimating the overtopping rate on sloping and vertical breakwaters were compiled and analysed. These equations were then compared with the results obtained in a number of tests performed in the Centre for Port and Coastal Studies of the CEDEX. In addition, the neural network tool NN-OVERTOPPING2, developed in the European overtopping project CLASH ("Crest Level Assessment of Coastal Structures by Full Scale Monitoring, Neural Network Prediction and Hazard Analysis on Permissible Wave Overtopping"), was applied to the selected sloping breakwater tests, thus contrasting the overtopping rates measured in the tests with this method based on neural network theory. The influence of wind on overtopping was then analysed by means of reduced-scale physical model tests, generating waves with and without wind, on the vertical section of the Levante Breakwater (Málaga). Finally, a critical analysis of the contrast of each of the formulations applied to the selected tests is presented, leading to the conclusions of this Thesis.
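Many of the semi-empirical formulations contrasted here share a generic exponential form, q* = a·exp(-b·Rc/Hm0) for the dimensionless mean discharge q* = q/sqrt(g·Hm0^3), with Rc the crest freeboard and Hm0 the spectral significant wave height. A minimal sketch of that generic form follows; the coefficients a and b are placeholders for the example, not values endorsed by the tests, since published values depend on the breakwater typology and each formula's validity range:

    import math

    def mean_overtopping(Hm0, Rc, a=0.2, b=2.6, g=9.81):
        """Generic exponential overtopping formula:
        q = a * exp(-b * Rc / Hm0) * sqrt(g * Hm0**3), in m^3/s per m.
        `a` and `b` are placeholder coefficients fitted per structure type."""
        return a * math.exp(-b * Rc / Hm0) * math.sqrt(g * Hm0 ** 3)

    # Illustrative case: 3 m significant wave height, 4 m crest freeboard.
    print(f"q = {mean_overtopping(Hm0=3.0, Rc=4.0) * 1000:.1f} l/s per m")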
Abstract:
This thesis, Eden: Tale, Image and Project. The Concept of Terrestrial Paradise as a Generator of Architectures, is carried out with the objective of studying the relations between the idea of Eden, or Paradise, and architecture, always working from the three levels of representation: tale, image and project. In the approximation to the object of study, the investigation centres on the tale itself, and in its very core are found implications that relate it to the world of mythology and archetype. These initial results consist of detecting that each of the parts that form the Edenic set had previously been an object of cult in religions of a pagan or shamanic nature since prehistoric times. Water, trees, animals, earth and the furrows of the orchard have all been objects of reverence since the dawn of time. Tracing the genealogy of these objects, an archetypal type of analysis is taken on, which relates the revered objects to the subconscious and to their spontaneous manifestation in reality. These studies also yield results with spatial and architectonic implications, and it is concluded that, more than an ideal or a concrete place, in myth or in reality, what it definitely seems (and shows itself) to be is an architectonic typology, related in its formal and theoretical structure to that of the enclosed garden. The manifestation in image of these results, and the investigation itself, lead to one of the most primitive sets of images of the Garden of Eden, which are in fact previous to the "invention" of the Hortus Conclusus as such: the Mozarabic representations of the Terrestrial Paradise as a concrete place on Earth, the mappaemundi included in the Codices of the Beatos. Through their study, the formal and theoretical structure of paradisal architectures is understood. Their importance in Western culture allows these documents to serve as exoskeletons of the paradisal project; moreover, by their variety, they yield a great number of results of a spatial nature. The results released by the study of the Edenic representations of the Beatos take the investigation to another moment, and another set of images, within the history of architecture, where the radicality of the approaches and the tabula rasa with respect to previous architectures make new languages of approximation to the subject of the project necessary, because of the globalizing ambition that these architectures imply: the period of modern architecture. The Poem of the Right Angle, by Le Corbusier, is used as the calibrating element. This graphic document not only gives us the key to what stirred in that period with respect to a new approximation to the surface of the earth and to the environment; it also serves as a catalyst between the real and the ideal, and it is a synthesis of architectural operations that, by comparison and/or opposition with the previous results of the archetypal study and of the Beatos, generates large groups of characteristics intertwined in paradisal projects. Thanks to these documents, the investigation can conclude with a synthesis of the characteristics shared by paradisal projects, which in any case are exactly that: projects, in the plural. There is no uniqueness; it can be inferred from this study that, taken together, they are a way of making certain architectures. They have measurable and reproducible characteristics, and typological and field-generating conditions that make it possible to produce many kinds of projects, all of a paradisal typology.