22 results for computer simulations

at Universidad Politécnica de Madrid


Relevância:

60.00%

Publicador:

Resumo:

Management of certain populations requires the preservation of their pure genetic background. When, for different reasons, undesired alleles are introduced, the original genetic conformation must be recovered. The present study tested, through computer simulations, the power of recovery (the ability to remove the foreign information) from genealogical data. Simulated scenarios comprised different numbers of exogenous individuals forming part of the founder population and different numbers of unmanaged generations before the removal program started. Strategies were based on variables arising from classical pedigree analyses, such as founders' contribution and partial coancestry. The efficiency of the different strategies was measured as the proportion of native genetic information remaining in the population. Consequences on the inbreeding and coancestry levels of the population were also evaluated. Minimisation of the exogenous founders' contributions was the most powerful method, removing the largest amount of foreign genetic information in just one generation. However, as a side effect, it led to the highest values of inbreeding. Scenarios with a large amount of initial exogenous alleles (i.e. a high percentage of non-native founders), or many generations of mixing, became very difficult to recover, pointing out the importance of being careful about introgression events in populations.
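The removal strategies above rank animals by their exogenous founders' contributions. As a sketch of how such contributions fall out of genealogical data (the pedigree structure and names below are hypothetical, not from the study), the expected contribution of each founder can be computed recursively, since an individual's genome is, in expectation, the average of its parents':

```python
def founder_contributions(pedigree):
    """Expected founder contributions from a pedigree.
    pedigree maps individual -> (sire, dam); founders have (None, None).
    Contributions follow the 1/2 + 1/2 averaging recursion."""
    memo = {}
    def contrib(ind):
        if ind not in memo:
            sire, dam = pedigree[ind]
            if sire is None and dam is None:
                memo[ind] = {ind: 1.0}   # a founder contributes 100% to itself
            else:
                c = {}
                for parent in (sire, dam):
                    for f, v in contrib(parent).items():
                        c[f] = c.get(f, 0.0) + 0.5 * v
                memo[ind] = c
        return memo[ind]
    return {ind: contrib(ind) for ind in pedigree}

def native_proportion(pedigree, native_founders, individual):
    """Share of the individual's expected genome tracing to native founders."""
    c = founder_contributions(pedigree)[individual]
    return sum(v for f, v in c.items() if f in native_founders)
```

Minimising the exogenous founders' contribution then amounts to selecting parents whose `native_proportion` is highest, which is why the method acts in a single generation but narrows the native genetic base.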

Relevância:

60.00%

Publicador:

Resumo:

To study the fluid motion-vehicle dynamics interaction, a model of four liquid-filled two-axle container freight wagons was set up. The railway vehicle was modelled as a multi-body system (MBS). To include fluid sloshing, an equivalent mechanical model was developed and incorporated. The influence of several factors was studied in computer simulations, such as track defects, curve negotiation, train velocity, wheel wear, liquid and solid wagonload, and container baffles. SIMPACK was used for the MBS analysis, and ANSYS for the liquid sloshing modelling and the validation of the equivalent mechanical systems. Acceleration and braking manoeuvres of the freight train set the liquid cargo into motion, and this longitudinal sloshing motion of the fluid cargo inside the tanks initiated a swinging motion of some components of the coupling gear. The coupling gear consists of UIC standard traction hooks and coupling screws that are located between the buffers. One of the coupling screws is placed in the traction hook of the opposite wagon, thus joining the two wagons, whereas the unused coupling screw rests on a hanger. Simulation results showed that, for certain combinations of type of liquid, filling level and container dimensions, the liquid cargo could provoke an undesirable, although not hazardous, release of the unused coupling screw from its hanger. The release occurred especially when a period of acceleration was followed by an abrupt braking manoeuvre at 1 m/s². It was shown that a resonance effect between the liquid's oscillation and the coupling screw's rotary motion could be the reason for the undesired release. Possible solutions to avoid the phenomenon are suggested, and directions for future research are given.
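The resonance mechanism described above can be illustrated with the simplest equivalent mechanical model of longitudinal sloshing: a pendulum driven by the wagon's acceleration. The parameters below are illustrative, not those of the study; the point is that driving at the sloshing natural frequency produces a far larger response than driving off-resonance:

```python
import math

def pendulum_response(L, f_drive, a0=1.0, t_end=20.0, dt=1e-3, g=9.81):
    """Peak angle of an equivalent sloshing pendulum
    th'' = -(g/L)*th - a(t)/L, driven by longitudinal acceleration
    a(t) = a0*sin(2*pi*f_drive*t). Semi-implicit Euler integration."""
    th = om = peak = t = 0.0
    w2 = g / L
    while t < t_end:
        a = a0 * math.sin(2 * math.pi * f_drive * t)
        om += (-w2 * th - a / L) * dt   # update angular velocity first
        th += om * dt                   # then the angle (symplectic scheme)
        peak = max(peak, abs(th))
        t += dt
    return peak

# Natural sloshing frequency of a 1 m equivalent pendulum: about 0.5 Hz.
f_n = math.sqrt(9.81 / 1.0) / (2 * math.pi)
```

Driving the model at `f_n` (resonance) yields a growing oscillation, while driving at a third of that frequency stays bounded, mirroring how an acceleration-then-braking sequence tuned to the liquid's period could lift the coupling screw off its hanger.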

Relevância:

60.00%

Publicador:

Resumo:

Virtual certification partially substitutes, by computer simulations, the experimental techniques required for rail vehicle certification. In this paper, several works where these techniques were used in the vehicle design and track maintenance processes are presented. Dynamic simulation of multibody systems was used to virtually apply the EN 14363 standard to certify the dynamic behaviour of vehicles. The works described are: assessment of a freight bogie design adapted to metre gauge, assessment of a railway track layout for a subway network, freight bogie design for higher speed and axle load, and processing of the data acquired by a track recording vehicle for track maintenance.
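EN 14363 assesses running quantities (measured or, here, simulated) statistically against limit values. As a rough, hypothetical stand-in for that processing (the actual quantities, percentiles and limits are defined in the standard and are not reproduced here), a percentile-based acceptance check might look like:

```python
def assessment_value(samples, upper_pct=99.85):
    """Upper-percentile evaluation of a simulated signal, with linear
    interpolation between order statistics. The percentile value is an
    illustrative assumption, not quoted from EN 14363."""
    s = sorted(samples)
    k = (upper_pct / 100.0) * (len(s) - 1)
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (k - lo) * (s[hi] - s[lo])

def passes(samples, limit):
    """Accept the simulated quantity if its assessment value stays
    within the (assumed) limit value."""
    return assessment_value(samples) <= limit
```

In a virtual-certification workflow, each simulated test zone would produce such a sample of, e.g., lateral force or acceleration, and the design is accepted only if every assessment value respects its limit.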

Relevância:

60.00%

Publicador:

Resumo:

A novel HCPV nonimaging concentrator concept with high concentration (>500×) is presented. It combines a commercial concentration GaInP/GaInAs/Ge 3J cell and a back-point-contact (BPC) concentration silicon cell for efficient spectral utilization, with external confinement techniques for recovering the 3J cell's reflection. The primary optical element (POE) is a flat Fresnel lens and the secondary optical element (SOE) is a free-form RXI-type concentrator with a band-pass filter embedded in it; both POE and SOE perform Köhler integration to produce light homogenization. The band-pass filter sends the IR photons in the 900-1200 nm band to the silicon cell. Computer simulations predict that four-terminal designs could achieve ~46% added cell efficiencies using commercial 39% 3J and 26% Si cells. A first proof-of-concept receiver prototype has been manufactured using a simpler optical architecture (with a lower concentration, ~100×, and lower simulated added efficiency), and experimental measurements have shown up to 39.8% 4J receiver efficiency using a 3J cell with a peak efficiency of 36.9%.
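The ~46% figure can be motivated with a crude energy split. Under the purely illustrative assumption that the Si cell simply adds the conversion of the diverted 900-1200 nm band on top of the 3J efficiency, and that this band carries roughly 27% of the usable incident power (a made-up value chosen only to reproduce the order of magnitude, not taken from the paper):

```python
def added_efficiency(eta_3j, eta_si, ir_band_fraction):
    """Four-terminal added efficiency under a crude energy-split assumption:
    the 3J converts the spectrum at eta_3j while the Si cell additionally
    converts the IR band diverted by the band-pass filter, which carries
    ir_band_fraction of the incident power. Illustrative only; the real
    accounting depends on the cells' spectral responses."""
    return eta_3j + eta_si * ir_band_fraction
```

With `added_efficiency(0.39, 0.26, 0.27)` the sum lands near 0.46, showing why splitting the sub-bandgap IR to a dedicated Si cell is worth the added optical complexity.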

Relevância:

60.00%

Publicador:

Resumo:

In this correspondence, the conditions to use any kind of discrete cosine transform (DCT) for multicarrier data transmission are derived. The symmetric convolution-multiplication property of each DCT implies that when symmetric convolution is performed in the time domain, an element-by-element multiplication is performed in the corresponding discrete trigonometric domain. Therefore, appending symmetric redundancy (as prefix and suffix) into each data symbol to be transmitted, and also enforcing symmetry for the equivalent channel impulse response, the linear convolution performed in the transmission channel becomes a symmetric convolution in those samples of interest. Furthermore, the channel equalization can be carried out by means of a bank of scalars in the corresponding discrete cosine transform domain. The expressions for obtaining the value of each scalar corresponding to these one-tap per subcarrier equalizers are presented. This study is completed with several computer simulations in mobile broadband wireless communication scenarios, considering the presence of carrier frequency offset (CFO). The obtained results indicate that the proposed systems outperform the standardized ones based on the DFT.
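The bank-of-scalars equalizer can be sketched numerically. The snippet below does not reproduce the symmetric prefix/suffix construction of the correspondence; it assumes the channel has already been reduced to per-subcarrier scalar gains in the orthonormal DCT-II domain (an assumption made for brevity) and shows the resulting one-tap-per-subcarrier structure:

```python
import math

def dct2(x):
    """Orthonormal DCT-II."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        out.append((math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)) * s)
    return out

def idct2(X):
    """Inverse of the orthonormal DCT-II (an orthonormal DCT-III)."""
    N = len(X)
    out = []
    for n in range(N):
        s = X[0] / math.sqrt(N)
        s += sum(math.sqrt(2.0 / N) * X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                 for k in range(1, N))
        out.append(s)
    return out

def one_tap_equalize(received, channel_gains):
    """Divide each DCT subcarrier by its scalar channel gain."""
    R = dct2(received)
    return idct2([r / h for r, h in zip(R, channel_gains)])
```

Because the transform is orthonormal, a channel acting as per-subcarrier gains is undone exactly by the scalar division, which is the appeal of the scheme once the symmetric redundancy has turned the channel convolution into a diagonal operation.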

Relevância:

60.00%

Publicador:

Resumo:

In order to achieve total selectivity in electrical distribution networks, it is of great importance to analyze the fault currents in ungrounded power systems. This information helps to grant selectivity, ensuring that only the faulted line or feeder is removed from service. In the present work a new selective and directional protection method for ungrounded power systems is evaluated. The new method measures only fault currents to detect earth faults and uses a directional criterion to determine the line under faulty conditions. The main contribution of this new technique is that it can detect earth faults in outgoing lines at any type of substation, avoiding the possible mismatch of traditional directional earth-fault relays. The detection technique is based on comparing the direction of a reference current with the direction of the earth-fault capacitive currents of all the feeders connected to the same busbars. The new method has been validated through computer simulations, and the results for the different cases studied confirm the validity and usefulness of the new method.
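The directional comparison at the heart of the method can be sketched with phasors represented as complex numbers. The 90-degree decision threshold and the feeder data below are illustrative assumptions, not the relay settings of the paper:

```python
import cmath
import math

def faulted_feeder(reference, feeder_currents):
    """Directional earth-fault selection: the feeder whose residual
    (zero-sequence) current points opposite to the reference current is
    declared faulty. Phasors are complex numbers; an angle difference
    beyond 90 degrees flips the direction test."""
    for name, i0 in feeder_currents.items():
        if abs(cmath.phase(i0 / reference)) > math.pi / 2:
            return name
    return None  # no feeder opposes the reference: no earth fault selected
```

In a healthy network all capacitive residual currents roughly align with the reference; during an earth fault, the faulted feeder's residual current reverses, so only that line is tripped, which is exactly the selectivity goal described above.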

Relevância:

60.00%

Publicador:

Resumo:

The goal of this work is the acoustic design of a multifunctional hall. It covers all aspects related to the acoustic conditioning of a room without the use of sound reinforcement. After briefly describing the specific needs of each type of hall according to the kind of performance it is intended for, this document presents an example of an adaptable design for such halls: conference room / lecture hall, theatre, opera house, chamber concert hall and orchestral concert hall. To achieve the necessary adaptability, a system of movable panels has been designed to adapt the shape and volume of the hall. This system adjusts, as far as possible, the characteristics of the room to those desired, in order to obtain a quality hall in each of the settings mentioned above. The entire design process was based on computer simulation of several models, each representing the final configuration of the hall once adapted to the intended performance. The simulations were carried out with the Odeon software, yielding a set of reliable results and auralisations that demonstrate the quality of the hall designed.
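The link between the panel-adjusted volume and the acoustic target can be illustrated with Sabine's classical formula, RT60 = 0.161·V/A. The volumes and absorption areas below are illustrative, not the designed hall's figures:

```python
def sabine_rt60(volume_m3, absorption_m2_sabins):
    """Sabine reverberation time: RT60 = 0.161 * V / A,
    with V the room volume in m^3 and A the total equivalent
    absorption area in m^2 sabins."""
    return 0.161 * volume_m3 / absorption_m2_sabins
```

For a fixed absorption area, halving the volume with the movable panels roughly halves the reverberation time, which is how one configuration can target the ~2 s preferred for orchestral music and another the ~1 s preferred for speech.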

Relevância:

60.00%

Publicador:

Resumo:

"Polymer crystallization is therefore assumed, and in theories often described, to be a multi-step process with many influencing aspects. Because of the chain structure, it is easy to understand that a process which is thermodynamically forced to increase local ordering but is geometrically hindered cannot proceed into a final equilibrium state. As a result, non-equilibrium structures with different characteristics are usually formed, which depend on temperature, pressure, shearing and other parameters." These words, recently written by Professor Bernhard Wunderlich, one of the most prominent physical chemists to have studied the physical state of macromolecules in recent decades, anticipate what this report makes explicit and constitute its leitmotiv.

The crystallization mechanism of polymers is still under debate in the polymer physics community, and most experimental findings are explained by invoking the LH theory. This classical theory, due to Lauritzen and Hoffman (LH), is a generalization of the crystallization theory for small molecules from the vapor phase. Even though it describes many experimental observations satisfactorily, it is far from explaining the complex phenomenon of polymer crystallization. The theory was first formulated at the National Bureau of Standards in the early 1970s and was reformulated several times along the 1980s to fit the experimental findings. Thus, crystallization regime III was introduced, which allows the creation of molecular niches on the crystal surface and led to the paradigm proposed by Sadler et al. to account for neutron scattering experiments and for techniques such as droplet or quenching experiments. Above all, the great success of the theory is its ability to explain the inverse dependence of the molecular fold length on the supercooling, the latter defined as the temperature interval between the equilibrium temperature and the crystallization temperature.

The specific problem addressed in this thesis is the study of the ordering processes of polyolefins with different degrees of branching by means of numerical simulations. The copolymers studied are considered model materials of high molecular homogeneity from the point of view of both the size and branching distributions of the polymer chain. These polyolefins were chosen because of the great experimental interest in understanding how the physical properties of the materials change with the type and amount of comonomer used, and because a vast amount of experimental information exists for them, which is essential when creating a virtual reality such as a simulation. The experience of the Biophym group is that simulation results should always have a more or less close experimental counterpart, and that argument is used throughout this report. Empirically, it is well known that the physical properties of polyolefins depend on the type and amount of branches present in the polymeric material; however, as explained, no suitable theoretical models exist that explain the underlying mechanisms of the effects of branching.

This report is extensive due to the complexity of the topic. It begins with a broad introduction to the basic concepts of a macromolecule that are relevant to understanding the rest of the document: flexible macromolecules, size distributions and moments, and the behaviour in solution and in the melt with the corresponding characteristic parameters. Special emphasis is placed on the concept of entanglement, considered key when dealing with macromolecules longer than the critical entanglement length. The introduction finishes with a review of the state of the art in the simulation of crystallization processes. A second chapter describes in detail the methodology used in each group of cases. In the first results chapter, simulation studies in dilute solution for linear and branched single-chain systems are discussed. This simplest case depends clearly on the chosen torsion potential, as discussed throughout the text: the formation of the "baby nuclei" proposed by Muthukumar seems to be a consequence of the torsion potential, since it favours the most stable torsional states; the analysis of other commonly used torsion potentials is therefore proposed, and the crystallization results obtained are discussed accordingly. Next, in a second results chapter, linear and branched long-chain alkane molecules in a melt are studied by atomistic simulations as a polyethylene model. Despite their great detail, the atomistic results cannot fully capture the experimental effects observed in supercooled melts in the stage preceding the ordered state. For this reason, results chapters 3 and 4 discuss short- and long-chain systems using two coarse-grained models (CG-PVA and CG-PE); the CG-PE model was developed during the thesis. The use of coarse-grained models guarantees greater computational efficiency than atomistic models and is sufficient to show the phenomena at the scale relevant for crystallization. In all these studies, the evolution of the ordering and melting processes is followed in isothermal and non-isothermal relaxation simulations. From the simulation models, different physical properties have been evaluated, such as the ordered segment (stem) length, the crystallinity and the melting/crystallization temperatures, allowing comparison with experimental results. It is clearly shown that branching delays and hinders the ordering of the polymer chain, so the ordered crystalline regions shrink as branching grows. As a general conclusion, there seems to be a tendency towards the formation of locally ordered structures that grow as blocks to fill the crystallization space attainable at a given temperature and time scale.

Finally, the observed effects are in agreement with other theoretical/simulation and experimental results discussed throughout this report. A summary is given in a chapter of conclusions and future research lines opened by this work. It should be mentioned that the pace of research has accelerated markedly in the last year, partly owing to the notable advantages of the coarse-grained methodology, which, despite being very important for this work, does not easily translate into publishable papers; this justifies that a large part of the results are still in the publication phase.
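The inverse fold-length/supercooling dependence that the LH theory explains can be written as the standard estimate l* ≈ 2·σe·Tm0/(ΔHf·ΔT), with σe the fold-surface free energy, Tm0 the equilibrium melting temperature and ΔT = Tm0 - Tc the supercooling. A sketch with order-of-magnitude polyethylene parameters (illustrative values, not fitted results from this thesis):

```python
def lamellar_thickness(sigma_e, T_m0, dH_f, T_c):
    """Initial fold length from the Lauritzen-Hoffman estimate
    l* ~ 2*sigma_e*T_m0 / (dH_f * (T_m0 - T_c)).
    Thickness is inversely proportional to the supercooling dT = T_m0 - T_c.
    Units: sigma_e in J/m^2, dH_f in J/m^3, temperatures in K; result in m."""
    dT = T_m0 - T_c
    return 2.0 * sigma_e * T_m0 / (dH_f * dT)
```

With σe ≈ 0.09 J/m², Tm0 ≈ 418 K and ΔHf ≈ 2.8e8 J/m³, a 10 K supercooling gives a lamella of a few tens of nanometres, and doubling the supercooling halves it, which is the experimental trend the theory was built to capture.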

Relevância:

60.00%

Publicador:

Resumo:

We study the evolution of a finite-size population formed by mutationally isolated lineages of error-prone replicators in a two-peak fitness landscape. Computer simulations are performed to gain a stochastic description of the system dynamics. More specifically, for different population sizes, we compute the probability of each lineage being selected in terms of their mutation rates and the amplification factors of the fittest phenotypes. We interpret the results as the compromise between the characteristic time a lineage takes to reach its fittest phenotype by crossing the neutral valley and the selective value of the sequences that form the lineages. A main conclusion is drawn: for finite population sizes, the survival probability of the lineage that arrives first at the fittest phenotype rises significantly.
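A toy version of such a stochastic lineage-competition simulation can be sketched with a plain Wright-Fisher scheme; the amplification factors, population size and generation counts below are illustrative stand-ins for the two-peak model, not the paper's parameters:

```python
import random

def lineage_selection_prob(amp_a, amp_b, pop_size=50, gens=150,
                           trials=200, seed=7):
    """Monte-Carlo estimate of the probability that lineage A (per-capita
    amplification factor amp_a) dominates lineage B after `gens`
    Wright-Fisher generations at fixed population size."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        n_a = pop_size // 2                      # lineages start at equal size
        for _ in range(gens):
            w_a = n_a * amp_a
            p = w_a / (w_a + (pop_size - n_a) * amp_b)
            n_a = sum(rng.random() < p for _ in range(pop_size))
            if n_a in (0, pop_size):             # one lineage has fixed
                break
        wins += n_a * 2 > pop_size               # A dominates at the end
    return wins / trials
```

Even this stripped-down model reproduces the qualitative point of the abstract: a modest amplification advantage translates, in a finite population, into a selection probability far above the neutral 1/2.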

Relevância:

60.00%

Publicador:

Resumo:

This PhD thesis contains a detailed investigation of the characteristics and performance of vision measuring machines. The main goal is to model their behaviour and to provide them with metrological traceability under any measurement conditions. To that end, a thorough analysis of the elements that form their measuring chain has been conducted, namely: lighting system, structure, lenses and objectives, camera, image processing software and coordinate metrology software. Self-developed physical-mathematical models, able to simulate reliably the behaviour of the above elements, have been defined; for the purpose of numerical analysis, the elements are grouped into two subsystems, called the vision subsystem and the mechanical subsystem. Genuine calibration procedures for both subsystems have been implemented using optical standards. In all cases it has been possible to determine the uncertainty associated with the different parameters involved, guaranteeing the metrological traceability of the results. The models developed have been implemented in Matlab®. Their validity has been verified using synthetic values obtained from computer simulations and also real images captured in the laboratory. The experimental study and final validation of the results were carried out in the Length Laboratory of the Centro Español de Metrología and in the Dimensional Metrology Laboratory of the Escuela Técnica Superior de Ingeniería y Diseño Industrial of the UPM. The developed models were applied to two vision measuring machines with different constructive and metrological characteristics. Using these machines, different parts from the mechanical and ophthalmic fields were measured. The results obtained allowed the complete dimensional characterization of these parts and the verification of compliance with the metrological specifications in all cases, including lengths, forms and angles.
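Traceability of this kind ultimately rests on propagating the optical standard's uncertainty into the image scale. A minimal sketch using the usual GUM quadrature rule for uncorrelated relative uncertainties (the numerical values below are illustrative, not the thesis's calibration data):

```python
import math

def pixel_scale_with_uncertainty(l_ref_mm, u_ref_mm, pixels, u_pixels):
    """Scale factor (mm/pixel) from imaging a calibrated line standard,
    with its standard uncertainty propagated by the GUM quadrature rule:
    (u_s/s)^2 = (u_L/L)^2 + (u_p/p)^2.
    l_ref_mm: certified length of the standard; u_ref_mm: its uncertainty;
    pixels: measured length in pixels; u_pixels: pixel-measurement uncertainty."""
    s = l_ref_mm / pixels
    u_s = s * math.sqrt((u_ref_mm / l_ref_mm) ** 2 + (u_pixels / pixels) ** 2)
    return s, u_s
```

Every subsequent image-based length then inherits this scale and its uncertainty, which is how the calibration against optical standards makes the machine's results traceable.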

Relevância:

60.00%

Publicador:

Resumo:

The Hall Effect Thruster (HET) is a type of satellite electric propulsion device initially developed in the 1960s, independently, by the USA and the former USSR. Development continued discreetly in the Soviet Union during the 1970s, reaching a technologically mature status in the 1980s. In the 1990s the advanced state of this Russian technology became known in Western countries, which rapidly restarted the analysis and development of modern Hall thrusters. Currently, several companies in the USA, Russia and Europe manufacture Hall thrusters for operational use. The main applications of these thrusters are low-thrust propulsion of interplanetary probes, orbit raising of satellites and stationkeeping of geostationary satellites. However, despite the well-proven in-flight experience, the physics of the Hall thruster is not yet completely understood. Over the last two decades large efforts have been dedicated to understanding the physics of Hall Effect thrusters. Nevertheless, the so-called anomalous diffusion, short for an excessive electron conductivity along the thruster, is not yet fully understood, as it cannot be explained with classical collisional theories. One commonly accepted explanation is the existence of azimuthal oscillations with correlated plasma density and electric field fluctuations. In fact, there is experimental evidence of an azimuthal oscillation in the low frequency range (a few kHz). This oscillation, usually called the spoke, was first detected empirically by Janes and Lowder in the 1960s. More recently, several experiments have shown the existence of this type of oscillation in various modern Hall thrusters. Given the frequency range, ionization is likely the cause of the spoke oscillation, as it is for the breathing-mode oscillation.
In the high frequency range (a few MHz), electron-drift azimuthal oscillations have been detected in recent experiments, in line with the oscillations measured by Esipchuk and Tilinin in the 1970s. Even though these low and high frequency azimuthal oscillations have been known for quite some time, the physics behind them is not yet clear and their possible relation with the anomalous diffusion process remains unknown. This work aims at analysing, from a theoretical point of view and via computer simulations, the possible relation between the azimuthal oscillations and the anomalous electron transport in HETs. To achieve this main objective, two approaches are considered: local linear stability analyses and global linear stability analyses. The local linear stability analyses allow the dominant terms driving the oscillations to be identified; however, they do not properly account for the axial variation of the plasma properties along the thruster. The global linear stability analyses, on the other hand, do account for these axial variations and allow determining how the azimuthal oscillations are promoted and their possible relation with the electron transport.
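A local linear stability analysis of the kind described above reduces, for each wavenumber, to solving a dispersion relation for a complex frequency: a mode is unstable when the imaginary part of the frequency is positive, since perturbations proportional to exp(-iωt) then grow exponentially. The sketch below solves a toy quadratic dispersion relation; its coefficients are purely illustrative and are not the plasma model used in the thesis:

```python
import numpy as np

def growth_rate(a, b, c):
    """Solve an illustrative quadratic dispersion relation
    a*w**2 + b*w + c = 0 for the complex frequency w and return the
    largest imaginary part, i.e. the linear growth rate."""
    roots = np.roots([a, b, c])
    return max(roots.imag)

# Toy wavenumber-dependent coefficients (not from the thesis): for each
# azimuthal wavenumber k, b gives a Doppler-like real frequency shift
# and c is chosen so the discriminant is negative, yielding a complex
# conjugate pair w = k +/- 2i -> growth rate 2 for every k.
for k in (1.0, 5.0, 10.0):
    b = -2.0 * k
    c = k**2 + 4.0
    print(k, growth_rate(1.0, b, c))
```

In the actual analyses, the coefficients come from the linearized fluid or kinetic equations evaluated at one axial location, and the sign of the growth rate across the (k, position) plane identifies which terms promote the azimuthal oscillation.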

Relevância:

60.00% 60.00%

Publicador:

Resumo:

In this thesis an original physical-mathematical model is developed that simulates the behaviour of vision measuring machines, in particular digital optical machines, when they receive information through the light reflected by the measurands. The model has been applied to determine the parameters involved in the characterization of basic geometrical features such as lines, circles and ellipses. The error sources arising along the metrological chain are also analysed, and models for estimating the measurement uncertainties are proposed through a new approach based on Bayesian statistics and subpixel resolution. The validity of the model has been verified by comparing the theoretical results, obtained from virtual models and computer simulations, with the real ones, obtained by measuring various measurands of submillimetre dimensions belonging to the electromechanical field. Using the proposed model, it is possible to properly characterize measurands through the filtering, segmentation and mathematical processing of the images. The experimental study and final validation of the results were carried out in the Dimensional Metrology Laboratory of the Escuela Técnica Superior de Ingeniería y Diseño Industrial (ETSIDI) at Universidad Politécnica de Madrid (UPM). The developed models have been implemented on images obtained with a TESA VISIO 300 vision measuring machine.
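A common building block of such subpixel image processing is locating an intensity edge with better-than-pixel resolution. The sketch below uses a generic textbook scheme, a parabola fitted to the discrete gradient around its strongest sample, not the Bayesian formulation proposed in the thesis; the profile values are invented for illustration:

```python
def subpixel_edge(profile):
    """Locate an intensity edge with subpixel resolution by fitting a
    parabola to the central-difference gradient around its strongest
    sample. Generic scheme, not the thesis's Bayesian method."""
    # Central-difference gradient; grad[j] corresponds to pixel j + 1.
    grad = [profile[i + 1] - profile[i - 1] for i in range(1, len(profile) - 1)]
    i = max(range(1, len(grad) - 1), key=lambda j: abs(grad[j]))
    gm, g0, gp = grad[i - 1], grad[i], grad[i + 1]
    # Vertex of the parabola through the three gradient samples.
    delta = 0.5 * (gm - gp) / (gm - 2.0 * g0 + gp)
    return (i + 1) + delta   # +1 restores the gradient array's offset

# Synthetic dark-to-bright profile whose true edge lies between pixels.
profile = [10, 10, 10, 30, 90, 110, 110, 110]
print(subpixel_edge(profile))   # -> 3.5
```

Feature characterization (lines, circles, ellipses) then proceeds by fitting the geometric primitive to many such subpixel edge points.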

Relevância:

60.00% 60.00%

Publicador:

Resumo:

The existing residential building stock in Spain and Europe faces deep renovation to meet the objectives set in the European strategy for 2050, which, for the building sector, call for a 90% reduction of greenhouse gas (GHG) emissions with respect to 1990 levels. This long-term plan establishes intermediate milestones, with partial objectives for 2020 and 2030. The ultimate aim is to exploit the energy demand reduction potential of the building sector, of which residential buildings represent an 85% share in Spain. Within these requirements, ventilation of dwellings becomes one of the challenges to solve: it is directly linked to the health and comfort of the occupants, and at the same time proportional to the building's energy demand for thermal conditioning. A large share of the thermal losses of residential buildings is caused by air renovation and by air infiltration through the envelope. The European Energy Performance of Buildings Directive (EPBD), which establishes the guidelines needed to reach the sector's objectives regarding CO2 and GHG emissions, sets ventilation with clean air as a fundamental requirement both for new construction and for the energy retrofitting of existing buildings. The Sick Building Syndrome, a set of complaints and symptoms associated with the low air quality of non-residential buildings that emerged after the 1973 oil crisis, originated in deficient ventilation and insufficient renovation of the indoor air of those buildings, a consequence of attempts to cut the energy bill.
Considering that, on average, we spend 58% of our time in our dwellings, it is essential to look after indoor air quality and not to worsen it by applying “energy efficiency” measures with unintended effects. To achieve this, it is fundamental to know in depth how ventilation takes place in Spanish apartment-block housing, both in terms of indoor air quality and of the energy demand associated with ventilation. The objective of this thesis is to establish a methodology for characterizing and optimizing the ventilation needs of existing residential spaces in Spain, combining the twofold goal of guaranteeing environmental quality and reducing their energy demand. The characterization of the Spanish residential building stock regarding ventilation is conclusive: more than 80% of all dwellings were built in three main periods: the period before the basic building regulations (Normas Básicas de la Edificación), from 1960 to 1980; the period from 1980 to 2005, with the largest total number of dwellings built, guided by the NTE ISV 75 recommendations; and the period of housing built after the Spanish Building Code (Código Técnico de la Edificación) came into force in 2006, whose basic document on health conditions (DB HS3) is the first mandatory regulation on the design and sizing of residential ventilation in Spain. Selecting a reference model of a housing block, a mean and representative case drawn from these periods but with qualities extending beyond any one of them, allows an intensive comparative analysis of indoor air quality and the associated energy demand, applying the different ventilation configurations found in dwellings depending on the constructive (or regulatory) period in which they were built.
This analysis rests on a twofold approach: numerical modelling through computer simulations, and the analysis of experimental data collected from dwellings in real conditions, used to check and refine the models and to observe the real situation of the dwellings in both respects. Based on the conclusions of this analysis, a ventilation optimization strategy is defined, resting on two measures: 1) the introduction of a mechanical extraction system with heat recovery, which reduces the energy demand due to air renovation while diluting the indoor pollutants more effectively, thereby improving indoor environmental quality; and 2) the rationalization of the operating schedule of these systems, avoiding energy waste during unoccupied periods and relying on a light background ventilation, due to infiltration, that does not cause significant energy losses. In addition to the previous analysis methodology regarding energy demand and air quality, an integrative and comparative economic assessment is applied to this optimization, based on the cost-optimal methodology of Delegated Regulation EU 244/2012. The main results of this thesis are: • A diagnosis of the indoor air quality of residential buildings in Spain and of their associated energy demand, essential to achieve deep energy retrofits while guaranteeing indoor air quality. • An indicator of the direct relation between air quality and energy demand, to evaluate the adequacy of ventilation systems with respect to the new energy efficiency and ventilation regulations. • An optimization strategy, offering an intervention alternative, and the application of a valuation method that allows the comparative payback of installing the systems to be assessed.
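The combined effect of the two measures on the sensible ventilation demand can be sketched with the usual airflow heat-loss relation, Q = ρ·cp·V̇·ΔT, integrated over the heating season as degree-hours. The flow rate, degree-hours, 80% heat-recovery efficiency and 58% operating schedule below are illustrative assumptions only, not results from the thesis (real dwellings need hourly simulation):

```python
RHO, CP = 1.2, 1005.0   # air density (kg/m3) and specific heat (J/(kg*K))

def ventilation_demand_kwh(flow_m3_h, degree_hours, hr_eff=0.0, schedule=1.0):
    """Seasonal sensible heat demand of the ventilation airflow, in kWh.
    degree_hours: sum of (T_indoor - T_outdoor) over the season, in K*h.
    hr_eff:      heat-recovery efficiency (0.0 = no recovery).
    schedule:    fraction of time the system runs (occupancy rationing)."""
    w_per_k = RHO * CP * (flow_m3_h / 3600.0)          # heat loss rate, W/K
    return w_per_k * degree_hours * (1.0 - hr_eff) * schedule / 1000.0

# Assumed dwelling: 120 m3/h of renovation air, 60,000 K*h per season.
base = ventilation_demand_kwh(120.0, 60000.0)                     # continuous, no HR
opt = ventilation_demand_kwh(120.0, 60000.0, hr_eff=0.8, schedule=0.58)
print(base, opt)   # the two measures combine multiplicatively
```

Note that infiltration during the unoccupied periods is not modelled here; it would add a small background term to the optimized case.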

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Glass is a highly valued material in architecture because of its transparency, a property that few materials share. It is, however, a brittle material: it breaks immediately upon reaching its elastic limit, without a plastic period that would give warning of failure and provide a safety margin. For both reasons, glass has been used in architecture since ancient times as an infill element, but not as a structural or load-bearing element, even though its transparency makes it an attractive material to architects for that use: structural glass would achieve the visual dematerialization of the structure, producing lighter spaces. Its mechanical properties, moreover, are well suited to structural use: its elastic modulus is similar to that of aluminium, a material widely used in architecture in recent years, mainly in façades, and its compressive strength is far higher than even that of reinforced concrete. Its main problem is its tensile strength, which is much lower than its compressive strength and therefore penalizes its bending resistance. Glass is now beginning to be used as a load-bearing or structural element, but because of its poor bending strength the elements used have large dimensions which, despite their transparency, have a strong visual presence. This research therefore aims to reduce the sections of these structural glass elements. Its development answers a series of fundamental questions, whose answers form the body of the work: 1. What is the purpose of the research? The optimization of structural glass elements for use in architecture. 2. How will this optimization be performed? What systems will be used? The optimization will be achieved by pre-stressing the structural glass elements. 3. Why use pre-compression? Because glass behaves well in compression and poorly in tension, which penalizes its use in bending. Pre-compression increases the effective tensile capacity: the first loads must cancel the initial compression before the element begins to work in tension, so its load capacity increases. 4. By what means will this behaviour be verified and justified? Through computer simulations with finite element programs (FEM). 5. Why will this method be used? Because it offers advantages over other methods, such as experimental testing, in reliability, economy, speed and the ease of setting up different cases. 6. How is its reliability guaranteed? By contrasting the simulation results with physical tests, thus ensuring the good performance of the software used. This study attempts to answer all these questions in order to obtain structural glass elements with smaller sections thanks to the introduction of pre-compression, all through finite element computer simulations. Within these simulations, checks and comparisons between different types of programs are also carried out to contrast the results obtained, analysing which of them is the most suitable for simulating structural glass elements.
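The capacity gain from pre-compression can be sketched with simple beam theory: the extreme-fibre stress of an axially pre-compressed beam in bending is σ = -σp + M/W, so net tension appears only once the bending stress exceeds the pre-compression. The section size, tensile strength and prestress level below are assumed values for illustration, not data from the thesis:

```python
def max_moment_knm(b_mm, h_mm, tensile_strength_mpa, precompression_mpa=0.0):
    """Bending moment (kN*m) at which the extreme-fibre stress of a
    rectangular section reaches the tensile strength:
    M = (f_t + sigma_p) * W, with W the elastic section modulus."""
    w_mm3 = b_mm * h_mm**2 / 6.0          # elastic section modulus, mm^3
    # MPa * mm^3 = N*mm; divide by 1e6 to obtain kN*m.
    return (tensile_strength_mpa + precompression_mpa) * w_mm3 / 1e6

# Assumed 12 x 300 mm glass fin, 45 MPa design tensile strength,
# 50 MPa of axial pre-compression.
plain = max_moment_knm(12.0, 300.0, 45.0)
prestressed = max_moment_knm(12.0, 300.0, 45.0, precompression_mpa=50.0)
print(plain, prestressed)   # the prestress roughly doubles the capacity here
```

Equivalently, for the same design moment the prestressed section can be made smaller, which is exactly the reduction this research pursues; the FEM simulations then verify the full stress state beyond this beam-theory estimate.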

Relevância:

60.00% 60.00%

Publicador:

Resumo:

This paper presents a new selective, non-directional protection method to detect ground faults in isolated-neutral power systems. The proposed method is based on comparing the rms value of the residual current of all the lines connected to a bus, and it is able to determine which line has the ground defect. Additionally, the method can be used for the protection of secondary substations. It avoids the unwanted trips caused by wrong settings or wiring errors that sometimes occur in existing directional ground fault protections. The new method has been validated through computer simulations and experimental laboratory tests.
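The comparison principle can be sketched in a few lines: compute the rms of each line's residual current over a measurement window and select the largest. This is an illustrative reduction only, with synthetic waveforms; the paper's actual relay logic (thresholds, security timing) is not reproduced:

```python
import math

def rms(samples):
    """Root-mean-square value of a sampled waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def faulted_line(residual_currents):
    """Index of the line whose residual current has the largest rms.
    In an isolated-neutral system the faulted feeder returns the
    capacitive current of the whole network, so its residual current
    is much larger than that of the healthy feeders."""
    values = [rms(w) for w in residual_currents]
    return max(range(len(values)), key=values.__getitem__)

# Synthetic residual currents for three feeders: one 50 Hz cycle sampled
# at 64 points, with the fault on feeder 1 (amplitudes are invented).
n = 64
def wave(amp):
    return [amp * math.sin(2 * math.pi * k / n) for k in range(n)]

lines = [wave(0.02), wave(1.5), wave(0.03)]
print(faulted_line(lines))   # -> 1
```

Because only magnitudes are compared, no current-polarity (direction) reference is needed, which is what makes the scheme immune to the wiring and settings errors that plague directional protections.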