43 results for High dynamic range
Abstract:
Collaborative efforts between the Neutronics and Target Design Group at the Instituto de Fusión Nuclear and the Molecular Spectroscopy Group at the ISIS Pulsed Neutron and Muon Source date back to 2012 in the context of the ESS-Bilbao project. The rationale for these joint activities was twofold: to assess the realm of applicability of the low-energy neutron source proposed by ESS-Bilbao, and to explore instrument capabilities for pulsed-neutron techniques in the range 0.05-3 ms, a time range where ESS-Bilbao and ISIS could offer a significant degree of synergy and complementarity. As part of this collaboration, J.P. de Vicente spent a three-month period within the ISIS Molecular Spectroscopy Group to gain hands-on experience of the practical aspects of neutron-instrument design and the requisite neutron-transport simulations. To date, these activities have resulted in a joint MEng thesis as well as a number of publications and contributions to national and international conferences. Building upon these previous works, the primary aim of this report is to provide a self-contained discussion of general criteria for instrument selection at ESS-Bilbao, the first accelerator-driven, low-energy neutron source designed in Spain. To this end, Chapter 1 provides a brief overview of the current design parameters of the accelerator and target station. Neutron moderation is covered in Chapter 2, where we take a closer look at two possible target-moderator-reflector configurations and pay special attention to the spectral and temporal characteristics of the resulting neutron pulses. This discussion provides a necessary starting point to assess the operation of ESSB in short- and long-pulse modes. These considerations are further explored in Chapter 3, dealing with the primary characteristics of ESS-Bilbao as a short- or long-pulse facility in terms of accessible dynamic range and spectral resolution. Other practical aspects, including background suppression and the use of fast choppers, are also discussed. The guiding principles introduced in the first three chapters are put to use in Chapter 4, where we analyse in some detail the capabilities of a small-angle scattering instrument, as well as how specific scientific requirements can be mapped onto the optimal use of ESS-Bilbao for condensed-matter research. Part 2 of the report contains additional supporting documentation, including a description of the ESSB McStas component, a detailed characterisation of moderator response and neutron pulses, and estimates of parameters associated with the design and operation of neutron choppers. In closing this brief foreword, we wish to thank both ESS-Bilbao and ISIS for their continuing encouragement and support along the way.
Abstract:
This thesis discusses correction methods that compensate for variations in lighting conditions in colour image and video applications. Such variations often cause the failure of computer vision algorithms that use colour features to describe objects. Three research questions are formulated that define the framework of the thesis. The first question addresses the similarities in photometric behaviour between images of adjacent surfaces. Based on an analysis of the image formation model in dynamic situations, the thesis proposes a model that predicts the colour variations of a given image region from the variations of the surrounding regions. This model is called the Quotient Relational Model of Regions. It is valid when the light sources illuminate all of the surfaces included in the model, when these surfaces are close to each other and have similar orientations, and when they are primarily Lambertian. Under these circumstances, the photometric response of a region can be related to the others through a linear combination. No previous work proposing this type of relational model was found in the scientific literature. The second question goes a step further and asks whether these similarities can be used to correct unknown photometric variations in an unknown region from known adjacent regions. A method called Linear Correction Mapping is proposed that provides an affirmative answer under the circumstances characterised previously. A training stage is required to estimate the parameters of the model. The method, initially formulated for a single camera, is extended to multi-camera architectures with non-overlapping fields of view; only a few image samples of the same object captured by all the cameras are needed. Furthermore, the correction mapping accounts for both illumination variations and changes in the camera exposure settings. Every image correction method fails when the image of the object to be corrected is overexposed or its signal-to-noise ratio is very low. The third question therefore asks whether the acquisition process can be controlled so as to obtain an optimal exposure under uncontrolled lighting conditions. A Camera Exposure Control method is proposed that maintains a suitable exposure provided that the illumination variations fall within the dynamic range of the camera. Each of the proposed methods was evaluated individually. The experimental methodology consisted of first selecting scenarios that cover representative situations in which the methods are theoretically valid. Linear Correction Mapping was validated in three object re-identification applications (vehicles, faces and persons) that use the colour distributions of the objects as features, while Camera Exposure Control was tested in an outdoor car park. In addition, several performance indicators were defined to compare the results objectively with other relevant correction and auto-exposure methods from the state of the art. The evaluation showed that the proposed methods outperform the compared ones in most situations. Based on these results, the answers to the research questions are affirmative, although under limited circumstances: the hypotheses concerning prediction, the correction based on it, and auto-exposure are feasible in the situations identified throughout the thesis, but they cannot be guaranteed in general. Finally, the work raises new questions and scientific challenges, which are highlighted as future research.
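A minimal sketch of the idea behind such a linear correction (an illustration under the stated assumptions, not the thesis implementation): the illumination quotient of the unknown region is predicted as a least-squares linear combination of the quotients of known adjacent regions learned in a training stage, and the observed region is then divided by the predicted quotient. All data and region descriptors below are synthetic placeholders.

```python
# Minimal sketch (not the thesis implementation): quotient-based linear correction of
# an unknown region from known adjacent regions, assuming per-region mean colours.
import numpy as np

def fit_linear_mapping(known_quotients, target_quotients):
    """Least-squares weights predicting the target region's illumination quotient
    from the quotients of adjacent known regions (one colour channel)."""
    A = np.hstack([known_quotients, np.ones((known_quotients.shape[0], 1))])
    w, *_ = np.linalg.lstsq(A, target_quotients, rcond=None)
    return w

def correct_region(region_pixels, known_quotients_now, w):
    """Divide the observed region by the predicted quotient to undo the change."""
    q_pred = np.dot(np.append(known_quotients_now, 1.0), w)
    return np.clip(region_pixels / max(q_pred, 1e-6), 0.0, 1.0)

# Hypothetical usage with synthetic training data (quotients of 3 adjacent regions):
rng = np.random.default_rng(0)
train_known = 0.5 + rng.random((50, 3))
train_target = train_known @ np.array([0.4, 0.35, 0.25]) + 0.01 * rng.standard_normal(50)
w = fit_linear_mapping(train_known, train_target)
corrected = correct_region(rng.random((16, 16)), np.array([0.9, 1.1, 1.0]), w)
```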
Abstract:
In recent years the railway sector has experienced spectacular growth, attracting the largest investments in the construction of new high-speed lines. Beyond the initial investment, the cost of maintaining and managing these lines must also be considered, and this requires a deeper understanding of the interaction between track and rolling stock. On the new high-speed alignments, which make rail a competitive mode of transport, there is a significant increase in speed, directly related to shorter travel times, and this produces high dynamic forces; a high track quality is therefore required to avoid rapid deterioration of the infrastructure. It is essential to control and minimise the maintenance costs generated by the operations needed to preserve the quality and safety parameters of the track. To reduce the dynamic loads that act on the track and deteriorate it as speeds progressively increase, the vertical stiffness of the track must be reduced; at the same time, the higher speeds demand a high resistance of the track panel and improvements to the platform, so a balance must be found in the elasticity of the track and its components. In this work, vertical accelerations measured at the axle box are analysed, identifying the vertical stiffness of the track from the vertical vibration frequencies of the unsprung masses and correlating it with the infrastructure. These accelerations come from two measurement campaigns carried out in the study area, in which several accelerometers were placed on the axle box; from the recorded accelerations, the variation of track stiffness from one zone to another is obtained. The track stiffness is analysed in relation to the different track typologies, observing how its value changes along the alignment. These changes appear where the infrastructure changes, from earthworks to engineering structures, whether viaducts or tunnels. The main aim of this work is to study these changes in vertical stiffness in depth, analysing their origin and causes and modelling their behaviour, in order to develop analysis methodologies for infrastructure design. The constituent elements of the track are also examined, looking at both the overall vertical stiffness and the stiffness of each element of the typical railway cross-section, for each characteristic section of the stretch under study. The work determines whether, and to what extent, there is longitudinal variation of track stiffness along the studied stretch for each of the selected earthwork and structure sections, analysing the trends of these changes and their homogeneity along the alignment. A new methodology is thus established for determining the vertical stiffness of the track from axle-box acceleration measurements, together with an application developed in the LabVIEW environment to analyse the recorded signals.
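A rough sketch of how a vertical track stiffness could be inferred from such records, assuming the unsprung mass behaves as a single-degree-of-freedom oscillator on the track so that k = m(2πf)², with f the dominant axle-box vibration frequency; the signal, sampling rate and unsprung mass below are hypothetical, and the LabVIEW application developed in the thesis is not reproduced here.

```python
# Illustrative sketch (not the thesis code): equivalent vertical track stiffness from
# the dominant axle-box vibration frequency, assuming k = m * (2*pi*f)^2.
import numpy as np

def dominant_frequency(signal, fs, fmin=20.0, fmax=200.0):
    """Return the peak frequency of the acceleration spectrum within [fmin, fmax] Hz."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spectrum[band])]

def track_stiffness(signal, fs, unsprung_mass_kg):
    f = dominant_frequency(signal, fs)
    return unsprung_mass_kg * (2.0 * np.pi * f) ** 2   # N/m

# Hypothetical usage: 1 kHz axle-box record, ~800 kg of unsprung mass per wheelset.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
accel = np.sin(2 * np.pi * 60.0 * t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
print(track_stiffness(accel, fs, unsprung_mass_kg=800.0) / 1e6, "MN/m")
```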
Abstract:
The Hall Effect Thruster (HET) is a type of satellite electric propulsion device initially developed in the 1960s independently by the USA and the former USSR. Development continued discreetly in the Soviet Union during the 1970s, reaching technological maturity in the 1980s. In the 1990s the advanced state of this Russian technology became known in western countries, which rapidly restarted the analysis and development of modern Hall thrusters. Currently, several companies in the USA, Russia and Europe manufacture Hall thrusters for operational use. The main applications of these thrusters are low-thrust propulsion of interplanetary probes, orbit raising of satellites, and station-keeping of geostationary satellites. However, despite the well-proven in-flight experience, the physics of the Hall thruster is not yet completely understood. Over the last two decades large efforts have been dedicated to understanding the physics of Hall Effect thrusters. However, the so-called anomalous diffusion, a short name for the excessive electron conductivity along the thruster, is still not fully understood, as it cannot be explained with classical collisional theories. One commonly accepted explanation is the existence of azimuthal oscillations with correlated plasma density and electric field fluctuations. In fact, there is experimental evidence of an azimuthal oscillation in the low-frequency range (a few kHz). This oscillation, usually called the spoke, was first detected empirically by Janes and Lowder in the 1960s. More recently, several experiments have shown the existence of this type of oscillation in various modern Hall thrusters. Given the frequency range, it is likely that ionization is the cause of the spoke oscillation, as it is for the breathing-mode oscillation. In the high-frequency range (a few MHz), electron-drift azimuthal oscillations have been detected in recent experiments, in line with the oscillations measured by Esipchuk and Tilinin in the 1970s. Even though these low- and high-frequency azimuthal oscillations have been known for quite some time, the physics behind them is not yet clear and their possible relation with the anomalous diffusion process remains unknown. This work aims to analyse, from a theoretical point of view and via computer simulations, the possible relation between the azimuthal oscillations and the anomalous electron transport in HETs. To achieve this main objective, two approaches are considered: local linear stability analyses and global linear stability analyses. Local linear stability analyses allow the dominant terms promoting the oscillations to be identified, but they do not properly take into account the axial variation of the plasma properties along the thruster. Global linear stability analyses, on the other hand, do account for these axial variations and make it possible to determine how the azimuthal oscillations are promoted and their possible relation with the electron transport.
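As a generic illustration of what a local linear stability analysis involves (not the formulation derived in this work), the sketch below assumes azimuthal perturbations proportional to exp(i(ky − ωt)), inserts them into a hypothetical quadratic local dispersion relation, and scans the azimuthal wavenumber for modes with positive growth rate Im(ω) > 0; all coefficients are made-up placeholders.

```python
# Generic sketch of a local linear stability scan (illustrative only; the coefficients
# below are hypothetical, not the dispersion relation derived in the thesis).
import numpy as np

def local_growth_rate(k, a, b, c):
    """Solve a(k)*w^2 + b(k)*w + c(k) = 0 and return the root with largest Im(w)."""
    roots = np.roots([a(k), b(k), c(k)])
    return max(roots, key=lambda w: w.imag)

# Hypothetical coefficient functions standing in for the plasma terms
# (density gradient, E x B drift, ionization source, ...).
a = lambda k: 1.0
b = lambda k: (5.0e4 - 2.0e5j) * k       # drift/collisional term (made up)
c = lambda k: -1.0e10 * k**2 + 5.0e9     # gradient-driven term (made up)

for k in np.linspace(1.0, 50.0, 6):      # azimuthal wavenumbers in 1/m
    w = local_growth_rate(k, a, b, c)
    print(f"k = {k:5.1f} 1/m -> Re(w) = {w.real:10.3e} rad/s, growth Im(w) = {w.imag:10.3e} 1/s")
```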
Abstract:
Ultrasound imaging systems are nowadays an indispensable tool in medical diagnosis and are increasingly used in industrial applications for non-destructive testing. The array is the primary element of these systems, and its design determines the characteristics of the beams that can be formed (shape and size of the main lobe, of the secondary and grating lobes, etc.), thus conditioning the quality of the images that can be obtained. In regular arrays, the maximum distance between elements is set at half a wavelength to avoid the formation of grating-lobe artefacts. At the same time, the image resolution of the objects in the scene increases with the total aperture size, so a small improvement in image quality translates into a significant increase in the number of transducer elements. This has, among others, the following consequences: manufacturing problems due to the high connection density (in typical medical imaging applications the wavelength is a few tenths of a millimetre); low signal-to-noise ratio and hence low dynamic range of the signals, because of the small element size; and complex equipment that must handle a large number of independent channels. For example, about 10,000 elements spaced at λ/2 would be needed for a 50λ square aperture. There are alternatives that reduce the number of active elements of a full array, sacrificing to some extent the image quality, the emitted energy, the dynamic range, the contrast, etc. We propose a different strategy: to develop an optimisation methodology able to find, in a systematic way, ultrasound array configurations adapted to specific applications. To do so, we propose the use of evolutionary algorithms to search the space of array configurations and select those that best fit the requirements of each application. The thesis addresses the encoding of array configurations so that they can be used as individuals of the population on which the evolutionary algorithms operate, as well as the definition of fitness functions that allow these configurations to be compared according to the requirements and constraints of each design problem. Finally, we propose to use the multi-objective NSGA-II algorithm as the primary optimisation tool, followed by single-objective Simulated Annealing algorithms to select and refine the solutions provided by NSGA-II. Many of the fitness functions that define the desired characteristics of the array are computed from one or more radiation patterns generated by each candidate solution, and obtaining these patterns with the usual wide-band acoustic field simulation methods requires very long computation times, which can make optimisation with evolutionary algorithms unfeasible in practice. As a solution, a narrow-band calculation method is proposed that reduces the required computation time by at least one order of magnitude. Finally, a series of examples with linear and two-dimensional arrays is presented to validate the proposed design methodology, experimentally comparing the actual characteristics of the manufactured designs with the predictions of the optimisation method.
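As a rough illustration of the kind of narrow-band figure of merit such an optimisation loop might evaluate for each candidate array (not the simulator or fitness functions developed in the thesis), the sketch below computes the single-frequency far-field array factor of a sparse linear aperture and scores it by its peak side-lobe level; the frequency, sound speed and element grid are placeholder values.

```python
# Illustrative narrow-band fitness evaluation for a sparse linear array
# (placeholder values; not the thesis simulator).
import numpy as np

def array_factor(element_positions_m, freq_hz, c_m_s, angles_rad):
    """Single-frequency far-field array factor of a linear array (uniform weights)."""
    k = 2.0 * np.pi * freq_hz / c_m_s
    phases = np.outer(np.sin(angles_rad), element_positions_m) * k
    return np.abs(np.exp(1j * phases).sum(axis=1)) / len(element_positions_m)

def fitness(element_positions_m, freq_hz=3e6, c_m_s=1540.0):
    """Return the peak side-lobe level in dB (lower is better) as a simple fitness term."""
    angles = np.linspace(-np.pi / 2, np.pi / 2, 2001)
    af = array_factor(element_positions_m, freq_hz, c_m_s, angles)
    af_db = 20.0 * np.log10(af / af.max() + 1e-12)
    main_lobe = np.abs(angles) < np.deg2rad(3.0)      # crude main-lobe exclusion
    return af_db[~main_lobe].max()

# Hypothetical sparse aperture: 16 active elements drawn from a 64-position lambda/2 grid.
rng = np.random.default_rng(2)
wavelength = 1540.0 / 3e6
grid = np.arange(64) * wavelength / 2.0
candidate = np.sort(rng.choice(grid, size=16, replace=False))
print("peak side-lobe level:", fitness(candidate), "dB")
```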
Abstract:
For the analysis of structural response, it has traditionally been assumed that joints are either fully rigid or pinned. This criterion greatly simplifies the calculations, but it is an idealisation of the real behaviour of connections. Between fully rigid and pinned joints there is, of course, an infinite range of stiffness values that could be adopted; joints with a stiffness other than these canonical values are generally referred to as semi-rigid. Considering this intermediate stiffness complicates the structural calculations considerably, but it changes the distribution of internal forces within the structure as well as the configuration of the connections themselves, which under certain circumstances can represent an economic advantage. Two questions arise from this approach, and they are the seed of the thesis: what happens when the concept of semi-rigid joints is applied to industrial buildings, and are there particular joint stiffness values for which the structure is optimised? The main objective of the thesis is therefore to determine the influence of joint stiffness on the cost of the gable steel portal frames typically used in agro-industrial buildings. To achieve this, a methodology is proposed that essentially consists of studying a representative sample of frames under three load levels: low, medium and high. Their spans range from 8 to 20 m and their column heights from 3.5 to 10 m, and their joints are allowed to take intermediate stiffness values. The combination of the different possible configurations yields 46,656 cases, which are the object of the study. Given the economic aim of the work, special attention has been paid to obtaining the execution costs of the different budget items that make up the structure, including those corresponding to the joints. The structural calculations required software support, both existing and purpose-built, allowing their automation in an optimisation context. The results of the study essentially consist of a large amount of data that must be processed before it can be interpreted; this processing is based on systematic ordering, statistical techniques and graphical representation. The outcome is a catalogue of graphs showing the total cost of the structure for the different joint stiffness values, summary matrices of results, and mathematical models representing the total cost versus joint stiffness function. The main conclusions are, on the one hand, that the total costs of the studied frames are lowest when the stiffness values of their joints are low, specifically from 5·10³ to 10·10³ kN·m/rad; and, on the other, that using semi-rigid joints with a suitable combination of stiffnesses in these structures yields an average economic advantage of 18% with respect to the two typologies normally used in this kind of building, namely rigid (fixed) and two-pinned portal frames.
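A minimal sketch of how such a parametric sweep could be organised; the span, height and load grids follow the ranges quoted above, but the cost function is a made-up placeholder standing in for the structural analysis and execution-cost breakdown actually used in the thesis.

```python
# Sketch of the parametric sweep structure (placeholder cost model, not the thesis software).
from itertools import product

spans_m = [8, 10, 12, 14, 16, 18, 20]                 # example grid within 8-20 m
heights_m = [3.5, 5.0, 6.5, 8.0, 10.0]                # example grid within 3.5-10 m
load_levels = ["low", "medium", "high"]
joint_stiffness_kNm_rad = [1e3, 5e3, 10e3, 50e3, 1e5, 1e9]  # 1e9 ~ "rigid"

def total_cost(span, height, load, k_joint):
    """Placeholder cost: in the real study this comes from the structural analysis
    and the execution-cost breakdown, including the joints themselves."""
    member_cost = 40.0 * span + 25.0 * height + {"low": 0, "medium": 300, "high": 700}[load]
    joint_cost = 0.02 * k_joint ** 0.5
    return member_cost + joint_cost

best = {}
for span, height, load in product(spans_m, heights_m, load_levels):
    costs = {k: total_cost(span, height, load, k) for k in joint_stiffness_kNm_rad}
    best[(span, height, load)] = min(costs, key=costs.get)

print(len(best), "frame cases swept; optimal stiffness of the first case:",
      best[(8, 3.5, "low")], "kN*m/rad")
```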
Abstract:
Nowadays a great number of applications use digital images: face recognition to detect and tag people in photographs, security control, and many smart-city applications such as speed control on roads and highways or traffic-light cameras that detect drivers running a red light. Digital images are also used in medicine, for example X-rays and scanners. These applications depend on the quality of the image obtained. A good camera is expensive, and the image obtained also depends on external factors such as light. For these applications to work properly, image enhancement is as important as, for example, a good face-detection algorithm. Image enhancement can also be used on ordinary photographs taken in poor light conditions, or simply to improve the contrast of an image; there are smartphone applications that let users apply filters or change the brightness, colour or contrast of their pictures. This project compares four different image-enhancement techniques; after applying one of them, an image makes better use of the whole available dynamic range. Some of the algorithms are designed for grey-scale images and others for colour images. Matlab is used to develop the methods and present the final results. The algorithms are the Successive Mean Quantization Transform (SMQT), histogram equalization (using both the built-in Matlab function and a function implemented in this project), and the V transform. As conclusions, the histogram equalization algorithm is the simplest of all, produces a wide spread of grey levels and is not suitable for colour images; the V transform is a good option for colour images, is linear and requires little computational power; and the SMQT algorithm is non-linear, insensitive to gain and bias, and can extract the structure of the data.
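A minimal grey-scale histogram equalization sketch in plain NumPy (the project itself uses Matlab): each intensity is mapped through the normalised cumulative histogram so that the output occupies the full 8-bit dynamic range.

```python
# Minimal grey-scale histogram equalization sketch (plain NumPy, not the project's Matlab code).
import numpy as np

def histogram_equalize(image_u8):
    hist = np.bincount(image_u8.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip(np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12)), 0, 255)
    return lut.astype(np.uint8)[image_u8]

# Hypothetical low-contrast image occupying only part of the intensity range:
rng = np.random.default_rng(3)
img = rng.integers(90, 160, size=(64, 64), dtype=np.uint8)
eq = histogram_equalize(img)
print("input range:", img.min(), img.max(), "-> output range:", eq.min(), eq.max())
```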
Abstract:
The use of fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. To produce implementations where cost is minimised without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which designers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, since it compensates for their lower clock frequencies and less efficient hardware utilisation with respect to ASICs. As FPGAs become common in scientific computing, designs grow larger and more complex, to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimisation techniques. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain very accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that exercise each of them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with such control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of interval-based techniques is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group separately, and finally combines the results. In this way the number of noise sources is kept under control at all times and the combinatorial explosion is minimised. We also present a multi-way partitioning algorithm aimed at minimising the deviation of the results caused by the loss of correlation between noise terms, so as to keep the results as accurate as possible. The thesis also covers the development of word-length optimisation methodologies based on Monte-Carlo simulations that run in reasonable times, through two new techniques that reduce the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimisation. Second, the incremental method builds on the fact that, although a given confidence interval must be guaranteed for the final results of the search, more relaxed confidence levels, and hence considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimised solutions. With these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small and medium-sized problems. Finally, this work presents HOPLITE, an automated, flexible and modular quantization framework that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common environment for easily prototyping and verifying new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation; we also show, through a simple example, how new extensions can be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
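A simple sketch of the baseline these techniques improve upon, namely a Monte-Carlo estimate of the round-off noise introduced by a given fractional word-length in a toy non-linear dataflow; this is an illustration only, not HOPLITE or the interval-based models described above.

```python
# Simple sketch (not HOPLITE): Monte-Carlo estimate of the round-off noise introduced
# by quantizing a small dataflow to a given fractional word-length. All values here
# are illustrative placeholders.
import numpy as np

def quantize(x, frac_bits):
    """Round-to-nearest fixed-point quantization with 'frac_bits' fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def system(x, y, quant=lambda v: v):
    """Toy non-linear dataflow: every intermediate result may be quantized."""
    a = quant(x * y)
    b = quant(np.sin(a))
    return quant(b + 0.5 * x)

def roundoff_noise_power(frac_bits, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_samples)
    y = rng.uniform(-1.0, 1.0, n_samples)
    ref = system(x, y)                                   # double-precision reference
    fxp = system(x, y, quant=lambda v: quantize(v, frac_bits))
    err = fxp - ref
    return err.var() + err.mean() ** 2                   # noise power E[e^2]

for bits in (8, 12, 16):
    print(f"{bits:2d} fractional bits -> noise power ~ {roundoff_noise_power(bits):.3e}")
```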
Abstract:
Culverts are very common in recent railway lines. Wildlife corridors and drainage conduits often fall into this category of partially buried structures. Their dynamic behaviour has received far less attention than that of other structures such as bridges, but their large number makes their study an interesting challenge from the point of view of safety and savings. In this paper a complete study of a culvert, including on-site measurements as well as numerical modelling, is presented. The structure belongs to the high-speed railway line linking Segovia and Valladolid, in Spain; the line was opened to traffic in 2004. Its dimensions (3 x 3 m) are the most frequent along the line, and other factors such as the reduced overburden (0.6 m) and an almost right angle with the track axis make it an interesting example from which to draw more general conclusions. On-site measurements were performed by recording the dynamic response at selected points of the structure during the passage of high-speed trains at speeds between 200 and 300 km/h. The measurements by themselves provide a good insight into the main features of the dynamic behaviour of the structure. A 3D finite element model representing the key features of the structure was also studied, as it allows a deeper understanding of the dynamic response to the train loads. The discrepancies between predicted and measured vibration levels are analysed and some advice on numerical modelling is proposed.
Abstract:
Traditional ballasted track structures are still used successfully in high-speed railway lines, although technical problems or performance requirements have led to non-ballasted track solutions in some cases. Ballasted tracks need considerable maintenance work due to track deterioration, so it is very important to understand the mechanism of track deterioration and to predict the track settlement or the growth rate of track irregularities in order to reduce maintenance costs and enable new track structures to be designed. The objective of this work is to develop the most adequate and efficient models for calculating the effects of dynamic traffic loads on the railway track infrastructure, and then to evaluate the dynamic effect on ballast track settlement using a settlement prediction model, which consists of the previously selected vehicle/track dynamic model and a track settlement law. The calculations are based on dynamic finite element models with direct time integration, wheel-rail contact and interaction with the railway cars. An initial irregularity profile is used in the prediction model. The track settlement law is considered to be a function of the number of loading cycles and the magnitude of the loading, which represents the long-term behaviour of ballast settlement. The results obtained include the growth of track irregularities and the contact force in the final iteration of the numerical simulation.
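The abstract does not state the specific settlement law adopted, so the sketch below only illustrates the general idea with an assumed empirical form in which the increment per cycle grows with the load magnitude and the cumulative settlement flattens with the number of cycles; all coefficients are placeholders.

```python
# Illustrative accumulation of a ballast settlement law (placeholder form and
# coefficients; the abstract does not specify the law actually used).
import numpy as np

def settlement_mm(load_amplitudes_kN, alpha=1e-4, beta=2.0, gamma=0.3):
    """Assumed empirical law: cumulative settlement s(N) ~ alpha * A^beta * N^gamma,
    accumulated cycle by cycle so that varying load amplitudes can be handled."""
    n = np.arange(1, len(load_amplitudes_kN) + 1)
    increments = alpha * load_amplitudes_kN ** beta * (n ** gamma - (n - 1) ** gamma)
    return np.cumsum(increments)

# Hypothetical sequence of dynamic wheel loads from the vehicle/track simulation:
rng = np.random.default_rng(4)
loads = 100.0 + 15.0 * rng.standard_normal(200_000)     # kN per loading cycle
s = settlement_mm(loads)
print(f"settlement after {len(loads):,} cycles: {s[-1]:.2f} mm")
```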
Abstract:
This paper reports the studies carried out to develop and calibrate the most suitable models for the objectives of this work. In particular, a quarter-bogie model for the vehicle, rail-wheel contact enforced with the Lagrange multiplier method, and a 2D spatial discretization were selected as the optimal choices. Furthermore, a 3D model of the coupled vehicle-track system has also been developed to contrast the results obtained with the 2D model. The calculations were carried out in the time domain and envelopes of the relevant results were obtained for several track profiles and speed ranges. Distributed elevation irregularities were generated based on power spectral density (PSD) distributions. The results obtained include the wheel-rail contact forces and the forces transmitted to the bogie by the primary suspension; the latter loads are relevant for evaluating the performance of the infrastructure.
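A rough sketch of how distributed elevation irregularities can be synthesised from a PSD by the spectral-representation method (a sum of cosines with random phases); the power-law spectrum and its constants below are assumed placeholders, not the PSD used in the paper.

```python
# Sketch: generating a random track elevation profile from an assumed one-sided PSD
# using the spectral-representation method (sum of cosines with random phases).
import numpy as np

def irregularity_profile(length_m=1000.0, dx=0.25, a=1e-6, w=3.0, seed=0):
    """Assumed power-law PSD S(Omega) = a / Omega^w over spatial frequencies covering
    wavelengths between 1 m and 100 m; constants are placeholders."""
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, length_m, dx)
    omega = np.linspace(2 * np.pi / 100.0, 2 * np.pi / 1.0, 2000)   # rad/m
    d_omega = omega[1] - omega[0]
    amp = np.sqrt(2.0 * (a / omega ** w) * d_omega)
    phase = rng.uniform(0.0, 2.0 * np.pi, omega.size)
    z = (amp[None, :] * np.cos(np.outer(x, omega) + phase[None, :])).sum(axis=1)
    return x, z

x, z = irregularity_profile()
print(f"std of elevation irregularity: {1000.0 * z.std():.3f} mm over {x[-1]:.0f} m")
```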
Abstract:
In this work a methodology for analysing the coupled lateral behaviour of large viaducts and high-speed trains is proposed. The finite element method is used for the structure, multibody techniques are applied for the vehicles, and the interaction between them is established by introducing non-linear wheel-rail contact forces. The methodology is applied to the analysis of the railway viaduct over the Río Barbantino, a very long and tall bridge on the north-west Spanish high-speed line.
Abstract:
Underpasses are common in modern railway lines. Wildlife corridors and drainage conduits often fall into this category of partially buried structures. Their dynamic behavior has received far less attention than that of other structures such as bridges, but their large number makes their study an interesting challenge from the viewpoint of safety and cost savings. Here, we present a complete study of a culvert, including on-site measurements and numerical modeling. The studied structure belongs to the high-speed railway line linking Segovia and Valladolid in Spain. The line was opened to traffic in 2004. On-site measurements were performed for the structure by recording the dynamic response at selected points of the structure during the passage of high-speed trains at speeds ranging between 200 and 300 km/h. The measurements provide not only reference values suitable for model fitting, but also a good insight into the main features of the dynamic behavior of this structure. Finite element techniques were used to model the dynamic behavior of the structure and its key features. Special attention is paid to vertical accelerations, the values of which should be limited to avoid track instability according to Eurocode. This study furthers our understanding of the dynamic response of railway underpasses to train loads.
Abstract:
In this paper, an AlN/free-standing nanocrystalline diamond (NCD) system is proposed for processing high-frequency surface acoustic wave (SAW) resonators for sensing applications. The main problem of synthetic diamond is its high surface roughness, which degrades the quality of the sputtered AlN and hence the device response. In order to study the feasibility of this structure, AlN films from 150 nm up to 1200 nm thick have been deposited on free-standing NCD, and the influence of the AlN layer thickness on its crystal quality and on the device response has been analysed. Optimised 300 nm thin films have been used to fabricate one-port SAW resonators operating in the 10-14 GHz frequency range. A SAW-based pressure sensor with a sensitivity of 0.33 MHz/bar has also been fabricated.
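As a back-of-the-envelope illustration of why a high-velocity AlN/diamond stack helps reach this band (the phase velocities and electrode pitches below are assumed values, not figures from the paper), the SAW resonant frequency follows f0 ≈ v/λ, with λ set by the interdigital-transducer period.

```python
# Back-of-the-envelope SAW frequency estimate (assumed values, not from the paper):
# f0 ~ v / lambda, with lambda taken as twice the interdigital-transducer pitch.
v_aln_diamond_m_s = 10_000.0      # assumed SAW phase velocity for thin AlN on diamond
v_aln_substrate_m_s = 5_600.0     # assumed velocity for AlN on a conventional substrate
for pitch_nm in (350, 400, 500):  # electrode pitch -> wavelength = 2 * pitch
    wavelength_m = 2 * pitch_nm * 1e-9
    f_diamond_ghz = v_aln_diamond_m_s / wavelength_m / 1e9
    f_conv_ghz = v_aln_substrate_m_s / wavelength_m / 1e9
    print(f"pitch {pitch_nm} nm: ~{f_diamond_ghz:.1f} GHz on diamond vs ~{f_conv_ghz:.1f} GHz otherwise")
```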
Abstract:
The vertical dynamic actions transmitted by railway vehicles to the ballasted track infrastructure are evaluated using models with different degrees of detail, ranging from a two-dimensional (2D) finite element model to a fully coupled three-dimensional (3D) multibody finite element model. The vehicle and track are coupled via a non-linear Hertz contact mechanism, and the method of Lagrange multipliers is used to enforce the contact constraint between wheel and rail. Distributed elevation irregularities, generated from power spectral density (PSD) distributions, are taken into account in the interaction. The numerical simulations are performed in the time domain, using a direct integration method to solve the transient problem arising from the contact non-linearities. The results obtained include contact forces, forces transmitted to the infrastructure (sleeper) by the railpads, and envelopes of the relevant results for several track irregularities and speed ranges. The main contribution of this work is to identify and discuss coincidences and differences between discrete 2D models and continuum 3D models, as well as to assess the validity of evaluating the dynamic loading on the track with simplified 2D models.
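A minimal sketch of the kind of non-linear Hertz contact law such couplings typically use, F = C_H·δ^(3/2) for interpenetration δ ≥ 0 and zero when contact is lost; the Hertz constant below is an assumed illustrative value, not the one used in the paper.

```python
# Minimal non-linear Hertz wheel-rail contact sketch (illustrative constant, not the
# value used in the paper): F = C_H * delta^(3/2) when the wheel penetrates the rail,
# zero when contact is lost.
import numpy as np

C_H = 1.0e11   # assumed Hertz constant, N/m^(3/2)

def hertz_force(penetration_m):
    """Normal contact force for a given wheel-rail interpenetration (clamped at zero)."""
    delta = np.maximum(penetration_m, 0.0)
    return C_H * delta ** 1.5

# Example: forces for a few penetration values around a typical static wheel-load level.
for delta_mm in (0.0, 0.05, 0.10, 0.15):
    print(f"delta = {delta_mm:.2f} mm -> F = {hertz_force(delta_mm * 1e-3) / 1e3:8.1f} kN")
```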