967 results for High-dynamic range images


Relevance: 100.00%

Abstract:

Collaborative efforts between the Neutronics and Target Design Group at the Instituto de Fusión Nuclear and the Molecular Spectroscopy Group at the ISIS Pulsed Neutron and Muon Source date back to 2012, in the context of the ESS-Bilbao project. The rationale for these joint activities was twofold: to assess the realm of applicability of the low-energy neutron source proposed by ESS-Bilbao, and to explore instrument capabilities for pulsed-neutron techniques in the range 0.05-3 ms, a time range where ESS-Bilbao and ISIS can offer a significant degree of synergy and complementarity. As part of this collaboration, J.P. de Vicente spent a three-month period within the ISIS Molecular Spectroscopy Group to gain hands-on experience of the practical aspects of neutron-instrument design and the requisite neutron-transport simulations. To date, these activities have resulted in a joint MEng thesis as well as a number of publications and contributions to national and international conferences. Building upon these previous works, the primary aim of this report is to provide a self-contained discussion of general criteria for instrument selection at ESS-Bilbao, the first accelerator-driven, low-energy neutron source designed in Spain. To this end, Chapter 1 provides a brief overview of the current design parameters of the accelerator and target station. Neutron moderation is covered in Chapter 2, where we take a closer look at two possible target-moderator-reflector configurations and pay special attention to the spectral and temporal characteristics of the resulting neutron pulses. This discussion provides the necessary starting point to assess the operation of ESS-Bilbao (ESSB) in short- and long-pulse modes. These considerations are further explored in Chapter 3, which deals with the primary characteristics of ESS-Bilbao as a short- or long-pulse facility in terms of accessible dynamic range and spectral resolution; other practical aspects, including background suppression and the use of fast choppers, are also discussed. The guiding principles introduced in the first three chapters are put to use in Chapter 4, where we analyse in some detail the capabilities of a small-angle scattering instrument and show how specific scientific requirements can be mapped onto the optimal use of ESS-Bilbao for condensed-matter research. Part 2 of the report contains additional supporting documentation, including a description of the ESSB McStas component, a detailed characterisation of moderator response and neutron pulses, and estimates of parameters associated with the design and operation of neutron choppers. In closing this brief foreword, we wish to thank both ESS-Bilbao and ISIS for their continuing encouragement and support along the way.
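For readers unfamiliar with how pulse timing maps onto an accessible wavelength band at a pulsed source, the sketch below illustrates the standard time-of-flight relation λ = h·t/(m_n·L) that underlies this kind of dynamic-range discussion. The flight path and flight times are illustrative assumptions, not ESS-Bilbao design values.

```python
# Minimal sketch: neutron wavelength reached after a flight time t over a
# path L, via the de Broglie / time-of-flight relation lambda = h*t/(m_n*L).
from scipy.constants import h, m_n, angstrom

def tof_wavelength(t_s: float, L_m: float) -> float:
    """Neutron wavelength in angstrom for flight time t_s (s) over L_m (m)."""
    return h * t_s / (m_n * L_m) / angstrom

L = 20.0                        # moderator-to-detector distance in m (assumed)
for t_ms in (1.0, 10.0, 50.0):  # illustrative flight times
    print(f"{t_ms:5.1f} ms -> {tof_wavelength(t_ms * 1e-3, L):6.2f} Angstrom")
```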

Relevance: 100.00%

Abstract:

In the context of aerial imagery, one of the first steps toward coherent processing of the information contained in multiple images is geo-registration, which consists of assigning geographic 3D coordinates to the pixels of the image. This enables accurate alignment and geo-positioning of multiple images, detection of moving objects, and fusion of data acquired from multiple sensors. Existing approaches to this problem require, in addition to a precise characterisation of the camera sensor, high-resolution referenced images or terrain elevation models, which are usually not publicly available or are out of date. Building upon the idea of developing technology that does not need a reference terrain elevation model, we propose a geo-registration technique that applies variational methods to obtain a dense and coherent surface elevation model that replaces the reference model. The surface elevation model is built by interpolation of scattered 3D points, which are obtained in a two-step process following a classical stereo pipeline: first, coherent disparity maps between image pairs of a video sequence are estimated, and then image point correspondences are back-projected. The proposed variational method enforces continuity of the disparity map not only along epipolar lines (as done by previous geo-registration techniques) but also across them, over the full 2D image domain. In the experiments, aerial images from synthetic video sequences have been used to validate the proposed technique.
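As an illustration of the back-projection step, here is a minimal sketch assuming a rectified image pair with known 3x4 projection matrices; the disparity convention (x2 = x1 - d) and all names are our assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the back-projection step: triangulate the pixel
# correspondences implied by a dense disparity map between a rectified pair.
import numpy as np
import cv2

def backproject(disparity: np.ndarray, P1: np.ndarray, P2: np.ndarray,
                min_disp: float = 1.0) -> np.ndarray:
    """Return an Nx3 array of 3D points for pixels with disparity >= min_disp.

    disparity : HxW horizontal disparity map of the rectified pair.
    P1, P2    : 3x4 projection matrices of the two views.
    """
    ys, xs = np.nonzero(disparity >= min_disp)
    pts1 = np.stack([xs, ys]).astype(np.float32)                      # 2xN, view 1
    pts2 = np.stack([xs - disparity[ys, xs], ys]).astype(np.float32)  # 2xN, view 2
    Xh = cv2.triangulatePoints(P1.astype(np.float32),
                               P2.astype(np.float32), pts1, pts2)     # 4xN homogeneous
    return (Xh[:3] / Xh[3]).T                                         # Nx3 Euclidean
```

The scattered points returned here are what the variational interpolation then turns into a dense, coherent surface elevation model.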

Relevance: 100.00%

Abstract:

Remote sensing information from spaceborne and airborne platforms continues to provide valuable data for different environmental monitoring applications. In this context, high spatial resolution imagery is an important source of information for land cover mapping. Conventional pixel-based methods, which use only spectral information for land cover classification, are inadequate for this type of image, so the object-based methodology is one of the most commonly used strategies for processing high spatial resolution images. This research presents a methodology to characterise Mediterranean land covers in high resolution aerial images by means of an object-oriented approach. It uses a self-calibrating multi-band region-growing approach optimised by pre-processing the image with a bilateral filter. The results obtained show promise in terms of both segmentation quality and computational efficiency.
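A minimal single-band illustration of the filter-then-grow idea follows, using scikit-image's bilateral filter and tolerance-based flood fill in place of the paper's self-calibrating multi-band grower; the file name, seed points and parameter values are assumptions.

```python
# Minimal sketch: bilateral pre-filtering followed by simple tolerance-based
# region growing from seed points (a stand-in for the paper's method).
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import denoise_bilateral
from skimage.segmentation import flood

image = img_as_float(io.imread("aerial_tile.tif", as_gray=True))  # hypothetical file
smoothed = denoise_bilateral(image, sigma_color=0.05, sigma_spatial=3)

labels = np.zeros(smoothed.shape, dtype=int)
for label, seed in enumerate([(120, 80), (400, 350)], start=1):   # assumed seeds
    region = flood(smoothed, seed, tolerance=0.02)  # grow while values stay similar
    labels[region & (labels == 0)] = label          # first region to claim a pixel wins
```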

Relevance: 100.00%

Abstract:

In recent years the railway sector has experienced spectacular growth, attracting the largest investments in the construction of new high-speed lines. Alongside this initial investment, the cost of maintaining and operating these lines must not be overlooked, which requires a deeper understanding of the interaction between track and rolling stock. The new high-speed alignments that make rail a competitive mode of transport bring a notable increase in speed, directly related to shorter travel times, and with it high dynamic forces that demand a high track quality to avoid rapid deterioration of the infrastructure. Controlling and minimising the maintenance costs generated by the operations that preserve the quality and safety parameters of the track is therefore essential. To reduce the dynamic loads that act on and deteriorate the track as speeds progressively increase, the vertical stiffness of the track must be reduced; at the same time, higher speeds demand high strength of the track grillage and improvements to the platforms, so a point of equilibrium must be found in the elasticity of the track and its components. This work analyses vertical accelerations measured at the axle box, identifying the vertical track stiffness from the vertical vibration frequencies of the unsprung masses and correlating it with the infrastructure. The accelerations come from two measurement campaigns carried out in the study area, in which several accelerometers were mounted on axle boxes; from the recorded signals, the variation of track stiffness from one zone to another is obtained. Track stiffness is correlated with the different track typologies, observing how its value changes along the alignment. These changes appear where the infrastructure changes, from earthworks to engineering works, whether viaducts or tunnels. The main objective of this work is to examine these changes in vertical stiffness, analysing their origin and causes and modelling their behaviour, in order to develop analysis methodologies for infrastructure design. The constituent elements of the track are also analysed, examining the intrinsic characteristics of the overall vertical stiffness and of the stiffness of each component of the standard railway cross-section, for each characteristic section of the stretch under study. The work determines whether, and to what extent, the track stiffness varies longitudinally along the studied stretch in each selected characteristic section of earthworks and engineering works, analysing the trends of these changes and their homogeneity along the alignment. A new methodology is thus established for determining the vertical track stiffness from axle-box vertical acceleration measurements, together with an application developed in LabVIEW to analyse the recorded signals.
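The identification principle (track stiffness from the dominant vibration frequency of the unsprung mass) can be sketched in a few lines. The single-degree-of-freedom relation k = m(2πf)² is the textbook simplification; the mass, sampling rate, search band and file name below are assumptions, not values from the thesis.

```python
# Minimal sketch: find the dominant vertical vibration frequency of the
# unsprung mass in an axle-box acceleration record and convert it to a
# vertical track stiffness via k = m * (2*pi*f)^2.
import numpy as np
from scipy.signal import welch

fs = 2000.0                            # sampling rate in Hz (assumed)
acc = np.load("axlebox_vertical.npy")  # acceleration record in m/s^2 (hypothetical)
m_unsprung = 900.0                     # unsprung mass per wheel in kg (assumed)

freqs, psd = welch(acc, fs=fs, nperseg=4096)
band = (freqs > 20) & (freqs < 200)    # assumed band for the wheel-on-track mode
f_dom = freqs[band][np.argmax(psd[band])]

k_track = m_unsprung * (2 * np.pi * f_dom) ** 2   # N/m
print(f"dominant frequency {f_dom:.1f} Hz -> stiffness {k_track / 1e6:.1f} kN/mm")
```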

Relevance: 100.00%

Abstract:

The Hall Effect Thruster (HET) is a type of satellite electric propulsion device initially developed in the 1960s, independently, by the USA and the former USSR. Development continued quietly in the Soviet Union through the 1970s, reaching technological maturity in the 1980s. In the 1990s the advanced state of this Russian technology became known in Western countries, which rapidly restarted the analysis and development of modern Hall thrusters. Currently, several companies in the USA, Russia and Europe manufacture Hall thrusters for operational use. The main applications of these thrusters are low-thrust propulsion of interplanetary probes, orbit raising of satellites, and station-keeping of geostationary satellites. However, despite well-proven in-flight experience, the physics of the Hall thruster is not yet completely understood. Over the last two decades large efforts have been devoted to understanding the physics of Hall Effect thrusters. Nevertheless, the so-called anomalous diffusion, short for an excessive electron conductivity along the thruster, is not yet fully understood, as it cannot be explained with classical collisional theories. One commonly accepted explanation is the existence of azimuthal oscillations with correlated plasma density and electric field fluctuations. In fact, there is experimental evidence of an azimuthal oscillation in the low frequency range (a few kHz). This oscillation, usually called the spoke, was first detected empirically by Janes and Lowder in the 1960s, and several more recent experiments have shown the existence of this type of oscillation in various modern Hall thrusters. Given the frequency range, it is likely that ionization is the cause of the spoke oscillation, as for the breathing-mode oscillation. In the high frequency range (a few MHz), electron-drift azimuthal oscillations have been detected in recent experiments, in line with the oscillations measured by Esipchuk and Tilinin in the 1970s. Even though these low and high frequency azimuthal oscillations have been known for quite some time, the physics behind them is not yet clear and their possible relation with the anomalous diffusion process remains unknown. This work aims at analysing, from a theoretical point of view and via computer simulations, the possible relation between the azimuthal oscillations and the anomalous electron transport in HETs. To achieve this main objective, two approaches are considered: local and global linear stability analyses. Local linear stability analyses allow the dominant terms promoting the oscillations to be identified, but they do not properly account for the axial variation of the plasma properties along the thruster. Global linear stability analyses, on the other hand, do account for these axial variations and allow determining how the azimuthal oscillations are promoted and their possible relation with the electron transport.
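To make the local linear stability step concrete: at a fixed axial location one typically assumes perturbations proportional to exp(i(ky - ωt)), reduces the linearized equations to a dispersion relation D(ω; k) = 0, and looks for roots with Im(ω) > 0 (growing modes). The sketch below shows only these mechanics with placeholder coefficients, not an actual HET dispersion relation.

```python
# Minimal sketch: scan azimuthal wavenumbers and report the largest growth
# rate Im(w) among the roots of a polynomial dispersion relation D(w; k) = 0.
import numpy as np

def growth_rate(coeffs: np.ndarray) -> float:
    """Largest Im(w) among the roots of D(w) = sum_n c_n * w^n (highest power first)."""
    return float(np.roots(coeffs).imag.max())

for k in (10.0, 50.0, 100.0):                        # azimuthal wavenumbers, 1/m
    coeffs = np.array([1.0, -2.0e5j, -k * 3.0e9])    # placeholder D(w; k), not a HET model
    print(f"k = {k:6.1f} 1/m -> growth rate {growth_rate(coeffs):.3e} 1/s")
```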

Relevance: 100.00%

Abstract:

Ultrasound imaging systems are today an indispensable tool for diagnostic applications in medicine and are increasingly used in industrial applications in the area of non-destructive testing. The array is the primary element of these systems, and its design determines the characteristics of the beams that can be formed (shape and size of the main lobe, of the secondary and grating lobes, etc.), conditioning the quality of the images that can be obtained. In regular arrays the maximum distance between elements is set at half a wavelength to avoid the formation of artefacts. At the same time, the image resolution of the objects in the scene increases with the total aperture size, so a small improvement in image quality translates into a significant increase in the number of transducer elements. This has, among others, the following consequences: manufacturing problems due to the high connection density (in typical medical imaging applications the wavelength is a few tenths of a millimetre); low signal-to-noise ratio, and hence low dynamic range, because of the small element size; and complex equipment that must handle a large number of independent channels. For example, 10,000 elements at λ/2 spacing would be needed for a square aperture of 50λ. A simple way to mitigate these problems is to use one of the existing alternatives that reduce the number of active elements of a full array, sacrificing to some extent image quality, emitted energy, dynamic range, contrast, etc. We propose a different strategy: to develop an optimisation methodology capable of systematically finding ultrasound array configurations adapted to specific applications. To this end we propose the use of evolutionary algorithms to search the space of array configurations and select those that best meet the requirements set by each application.

The thesis addresses the encoding of array configurations so that they can serve as individuals of the population on which the evolutionary algorithms operate, and the definition of fitness functions that allow such configurations to be compared according to the requirements and constraints of each design problem. Finally, we propose using the multi-objective algorithm NSGA-II as the primary optimisation tool and then applying mono-objective algorithms of the simulated annealing type to select and refine the solutions provided by NSGA-II. Many of the fitness functions that define the desired characteristics of the array are computed from one or more radiation patterns generated by each candidate solution. Obtaining these patterns with the usual wide-band acoustic field simulation methods requires very long computation times, which can make optimisation with evolutionary algorithms impractical. As a solution, a narrow-band calculation method is proposed that reduces the required computation time by at least an order of magnitude. Finally, a series of examples with linear and two-dimensional arrays is presented to validate the proposed design methodology, experimentally comparing the actual characteristics of the manufactured designs with the predictions of the optimisation method.
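To make the narrow-band evaluation concrete, the sketch below computes the far-field array factor of a sparse linear array and the peak side-lobe level that a fitness function could penalise; the geometry, element count and main-lobe window are assumptions, not the thesis' actual fitness definitions.

```python
# Minimal sketch: narrow-band array factor of a sparse linear array and the
# peak side-lobe level, a typical ingredient of a beam-pattern fitness function.
import numpy as np

def array_factor(active: np.ndarray, pitch_wl: float, theta: np.ndarray) -> np.ndarray:
    """`active` is a 0/1 mask over element slots, `pitch_wl` the element pitch
    in wavelengths, `theta` the observation angles in radians."""
    n = np.arange(active.size)
    phase = 2j * np.pi * pitch_wl * np.outer(np.sin(theta), n)
    return np.abs(np.exp(phase) @ active) / active.sum()

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
mask = np.random.default_rng(0).random(64) < 0.5     # one random sparse candidate
af_db = 20 * np.log10(array_factor(mask, 0.5, theta) + 1e-12)
main = np.abs(theta) < np.deg2rad(5)                 # assumed main-lobe region
print(f"peak side-lobe level: {af_db[~main].max():.1f} dB")
```

In a genetic search, masks like `mask` would be the individuals, and values like this side-lobe level would feed the multi-objective fitness evaluated thousands of times, which is why the narrow-band shortcut matters.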

Relevance: 100.00%

Abstract:

Traditionally, structural analysis has started from the assumption that joints are either fully rigid or pinned. This criterion greatly simplifies the calculations, but it remains an idealisation of the real behaviour of connections. Between fully rigid and pinned joints there is naturally an infinite range of stiffness values that could be adopted; joints with a stiffness other than these canonical values are generically called semi-rigid joints. Considering this intermediate stiffness complicates the structural calculations considerably, but it changes the distribution of internal forces within the structure, as well as the configuration of the joints themselves, which under certain circumstances can bring an economic advantage. Two questions arise from this, and they are the seed of this thesis: what happens when the concept of semi-rigid joints is applied to industrial buildings, and are there particular joint stiffness values for which the structure is optimised? The main objective of the thesis is therefore to determine the influence of joint stiffness on the cost of the gabled steel portal frames typically used in agro-industrial buildings.

To achieve this objective, the methodology consists essentially of studying a representative sample of frames under three load levels: low, medium and high. The spans range from 8 to 20 m and the column heights from 3.5 to 10 m, and the joints are allowed to take intermediate stiffness values. Combining the possible configurations yields 46,656 cases to be studied. Given the economic aim of the work, particular attention has been paid to the execution costs of the different items making up the structure, including those corresponding to the joints. The structural calculations required software support, both existing and purpose-built, that allowed their automation in an optimisation context. The results essentially consist of a large volume of data whose interpretation requires prior processing, based on systematic ordering, statistical techniques and graphical representation. This yields a catalogue of graphs representing the total cost of the structure as a function of the stiffness of its joints, summary matrices of results, and mathematical models of the total cost versus joint stiffness function. Two conclusions stand out: first, the total costs of the frames studied are minimal when the stiffness values of their joints are low, specifically from 5·10³ to 10·10³ kN·m/rad; and second, using semi-rigid joints with a suitable combination of stiffnesses in these structures brings an average economic advantage of 18% over the two typologies normally used in this type of building, namely fully fixed and two-pinned portal frames.
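The parametric study can be pictured as a sweep of total cost over joint stiffness. In the sketch below, `total_cost` is a purely hypothetical stand-in for the thesis' structural model and bill of quantities, included only to show the shape of the search, not its actual cost data.

```python
# Minimal sketch: sweep joint stiffness over a grid, record total cost, and
# locate the minimum, as done (with a real structural model) in the thesis.
import numpy as np

def total_cost(k_joint: float, span: float, height: float) -> float:
    """Hypothetical cost model in euros, k_joint in kN*m/rad (illustrative only)."""
    steel = 50.0 * span * height / (1.0 + k_joint / 2.0e4)  # stiffer joints, lighter frame
    joints = 0.08 * k_joint ** 0.7                          # stiffer joints cost more
    return steel + joints

stiffness = np.logspace(2, 6, 200)  # 1e2 .. 1e6 kN*m/rad
costs = [total_cost(k, span=20.0, height=6.0) for k in stiffness]
k_best = stiffness[int(np.argmin(costs))]
print(f"cheapest joint stiffness ~ {k_best:.0f} kN*m/rad")
```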

Relevance: 100.00%

Abstract:

Pavilions of the Universal Exhibitions are usually classed as ephemeral architecture, although it should be noted that every building has its lifetime and its period of extinction, and both may be indefinite: the permanent within the ephemeral. Many of the mythical works of the twentieth century existed for only a few months, on ephemeral stages, changing the course of architecture with a handful of images. This invites the question of whether the circumstances under which they did not survive, or survived in unusual circumstances, are due not so much to an ephemeral condition as to their experimental character. Certain Universal Exhibitions were the platform on which pavilions, the landmarks with which a significant part of the history of contemporary architecture has been built, became myths: because of their distance in time, because they no longer exist, and because sometimes only an outdated and limited imagery of them remains. The various histories of architecture attest to the importance of some pavilions and to the role that those built for certain Universal Exhibitions have played, play and will play; they are testimony that these works remain alive, enduring in time, each performing a function, whether as a basis for new technological or constructive advances, as an experiment in new ways of living, as education, or as the consecration of authors until then scarcely known. Those that have remained standing, those that were dismantled and rebuilt on a new site, and even those that met their fate and became absent architectures have all, through what they contributed in innovation and experimentation, remained alive in the architecture of today.

This thesis studies the set of factors that contributed to conferring that landmark status on them: what kinds of publications speak of them, in what terms they are treated and to what extent they are related to the production of their time and/or of their author, which aspects are highlighted, and which iconic values have become established with the passage of time; in short, what remains. It also considers to what extent their condition as ephemeral constructions, with their inherent need to disappear physically, has favoured an absence that lives on in memory and endowed them with representativeness. This may seem contradictory today, given the value placed on the image in contemporary society, to the point that it has become an essential component of representativeness: the image replaces memory, and whatever lacks physical manifestation seems not to exist, losing all capacity for representation. Yet, considering the image as the essential element of the iconic, the reconstruction of pavilions after the exhibitions closed has in many cases only strengthened their value as ephemeral architectures, since, stripped of their temporary character, exhibition pavilions lose their reason for being. The Spanish Pavilion by Corrales and Molezún for EXPO Brussels '58 is a clear example, as shown in the development of the thesis. The thesis examines the selected pavilions, tracing, mainly in periodicals, the role played in each case by its final fate, which, although not the objective of this thesis, may in some cases have contributed to granting it landmark status in the history of architecture. The aim, in short, is to trace the vicissitudes that led them to their condition as architectural references, as landmarks of the history of architecture.

The study focuses on pavilions of the Universal Exhibitions of Brussels '58, Montreal '67 and Osaka '70, for two main reasons: first, their classification by the Bureau International des Expositions (BIE) as first-category Universal Exhibitions; and second, the period in which they were held, between 1945 and 1970, years of profound and decisive change in architecture in which the development and subsequent revision of modernity after the Second World War took place. The bibliographic trajectory of the most frequently cited pavilions of these three exhibitions is analysed: from Brussels '58, the Pavilion of the Federal Republic of Germany by Egon Eiermann and Sep Ruf, the Philips Pavilion by Le Corbusier, and the Spanish Pavilion by José Antonio Corrales and Ramón Molezún; from Montreal '67, the Pavilion of the Federal Republic of Germany by Frei Otto and the United States Pavilion by Richard Buckminster Fuller; and from Osaka '70, the Theme Pavilion by Kenzo Tange, the Takara Beautilion by Kisho Kurokawa, and the Fuji Group Pavilion by Yutaka Murata. The analysis shows that journals contemporary with the exhibitions already singled out these pavilions as buildings important for the future history of architecture, a fact confirmed by their appearance in the histories, including the most recent ones, which demonstrates their now consolidated status as landmarks of the history of architecture.

Relevance: 100.00%

Abstract:

Fixed-point arithmetic is one of the most common design choices for systems with tight area, power or throughput constraints. To produce implementations where costs are minimised without adversely affecting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is an NP-hard combinatorial problem to which designers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, since it compensates for their lower clock frequencies and less efficient hardware utilisation compared with ASICs. As FPGAs become popular for scientific computing, designs grow in size and complexity to the point where they can no longer be handled efficiently by current signal and quantisation-noise modelling and word-length optimisation techniques. This thesis explores different aspects of the quantisation problem and presents new methodologies for each of them.

Techniques based on interval extensions have yielded very accurate models of signal and quantisation-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) technique in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that exercise each of them, and extracts the statistical moments of the system from these partial solutions. We use this technique to estimate both the dynamic range and the round-off noise in systems with such control-flow structures, and we show the accuracy of the approach, which in certain case studies with non-linear operators deviates by only 0.04% from reference values obtained by simulation. A known drawback of interval-extension techniques is the combinatorial explosion of terms as the size of the system under study grows, which leads to scalability problems. To address this we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group separately, and finally combines the results; the number of noise sources is thus kept under control at all times, minimising the combinatorial explosion. We also present a multi-way partitioning algorithm aimed at minimising the deviation of the results caused by the loss of correlation between noise terms, so as to keep the results as accurate as possible.

The thesis also addresses the development of word-length optimisation methodologies based on Monte-Carlo simulations that run in reasonable times, presenting two new techniques that attack the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is then used during the optimisation stage. Second, the incremental method builds on the fact that, although a given confidence interval must be guaranteed for the final results of the search, more relaxed confidence levels, and hence far fewer samples per simulation, can be used in the early stages of the search, when we are still far from the optimised solutions. With these two approaches we show that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small and medium-sized problems. Finally, the thesis presents HOPLITE, an automated, flexible and modular quantisation framework that implements the above techniques and is publicly available. Its aim is to offer developers and researchers a common environment in which to prototype and verify new quantisation methodologies easily. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation; we also show, through a simple example, how new extensions can be connected to the tool through the existing interfaces in order to expand and improve the capabilities of HOPLITE.
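The incremental idea is easy to picture: the same Monte-Carlo noise estimate is run with growing sample counts as the required confidence tightens. The sketch below does this for a toy quantised datapath; neither the datapath nor the sample counts come from HOPLITE, they are illustrative assumptions.

```python
# Minimal sketch: Monte-Carlo estimate of the output round-off noise power of
# a toy fixed-point datapath, repeated with increasing sample counts as the
# incremental method would do across search stages.
import numpy as np

def q(x: np.ndarray, frac_bits: int) -> np.ndarray:
    """Round to a fixed-point grid with `frac_bits` fractional bits."""
    s = 2.0 ** frac_bits
    return np.round(x * s) / s

def noise_power(frac_bits: int, n_samples: int, rng) -> float:
    x = rng.uniform(-1, 1, n_samples)
    ref = 0.75 * x * x + 0.25 * x                 # double-precision reference
    xq = q(x, frac_bits)                          # quantised input
    fix = q(0.75 * xq * xq + 0.25 * xq, frac_bits)  # quantised datapath output
    return float(np.mean((fix - ref) ** 2))

rng = np.random.default_rng(1)
for n in (1_000, 10_000, 100_000):                # relaxed -> tight confidence
    print(f"{n:7d} samples -> noise power {noise_power(8, n, rng):.3e}")
```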

Relevance: 100.00%

Abstract:

In the visual cortex, as elsewhere, N-methyl-d-aspartate receptors (NMDARs) play a critical role in triggering long-term, experience-dependent synaptic plasticity. Modifications of NMDAR subunit composition alter receptor function, and could have a large impact on the properties of synaptic plasticity. We have used immunoblot analysis to investigate the effects of age and visual experience on the expression of different NMDAR subunits in synaptoneurosomes prepared from rat visual cortices. NMDARs at birth are comprised of NR2B and NR1 subunits, and, over the first 5 postnatal weeks, there is a progressive inclusion of the NR2A subunit. Dark rearing from birth attenuates the developmental increase in NR2A. Levels of NR2A increase rapidly (in <2 hr) when dark-reared animals are exposed to light, and decrease gradually over the course of 3 to 4 days when animals are deprived of light. These data reveal that NMDAR subunit composition in the visual cortex is remarkably dynamic and bidirectionally regulated by sensory experience. We propose that NMDAR subunit regulation is a mechanism for experience-dependent modulation of synaptic plasticity in the visual cortex, and serves to maintain synaptic strength within an optimal dynamic range.

Relevance: 100.00%

Abstract:

A hierarchical order of gene expression has been proposed to control developmental events in hematopoiesis, but direct demonstration of the temporal relationships between regulatory gene expression and differentiation has been difficult to achieve. We modified a single-cell PCR method to detect 2-fold changes in mRNA copies per cell (dynamic range, 250–250,000 copies/cell) and used it to sequentially quantitate gene expression levels as single primitive (CD34+,CD38−) progenitor cells underwent differentiation to become erythrocytes, granulocytes, or monocyte/macrophages. Markers of differentiation such as CD34 or cytokine receptor mRNAs and transcription factors associated with their regulation were assessed. All transcription factors tested were expressed in multipotent progenitors. During lineage-specific differentiation, however, distinct patterns of expression emerged. SCL, GATA-2, and GATA-1 expression sequentially extinguished during erythroid differentiation. PU.1, AML1B, and C/EBPα expression profiles and their relationship to cytokine receptor expression in maturing granulocytes could be distinguished from similar profiles in monocytic cells. These data characterize the dynamics of gene expression accompanying blood cell development and define a signature gene expression pattern for specific stages of hematopoietic differentiation.

Relevance: 100.00%

Abstract:

A single mossy fiber input contains several release sites and is located on the proximal portion of the apical dendrite of CA3 neurons. It is, therefore, well suited to exert a strong influence on pyramidal cell excitability. Accordingly, the mossy fiber synapse has been referred to as a detonator or teacher synapse in autoassociative network models of the hippocampus. The very low firing rates of granule cells [Jung, M. W. & McNaughton, B. L. (1993) Hippocampus 3, 165–182], which give rise to the mossy fibers, raise the question of how the mossy fiber synapse temporally integrates synaptic activity. We have therefore addressed the frequency dependence of mossy fiber transmission and compared it to associational/commissural synapses in the CA3 region of the hippocampus. Paired pulse facilitation had a similar time course, but was 2-fold greater for mossy fiber synapses. Frequency facilitation, during which repetitive stimulation causes a reversible growth in synaptic transmission, was markedly different at the two synapses. At associational/commissural synapses facilitation occurred only at frequencies greater than once every 10 s and reached a magnitude of about 125% of control. At mossy fiber synapses, facilitation occurred at frequencies as low as once every 40 s and reached a magnitude of 6-fold. Frequency facilitation was dependent on a rise in intraterminal Ca2+ and activation of Ca2+/calmodulin-dependent kinase II, and was greatly reduced at synapses expressing mossy fiber long-term potentiation. These results indicate that the mossy fiber synapse is able to integrate granule cell spiking activity over a broad range of frequencies, and this dynamic range is substantially reduced by long-term potentiation.

Relevance: 100.00%

Abstract:

Laser-polarized gases (3He and 129Xe) are currently being used in magnetic resonance imaging as strong signal sources that can be safely introduced into the lung. Recently, researchers have been investigating other tissues using 129Xe. These studies use xenon dissolved in a carrier such as lipid vesicles or blood. Since helium is much less soluble than xenon in these materials, 3He has been used exclusively for imaging air spaces. However, considering that the signal of 3He is more than 10 times greater than that of 129Xe for presently attainable polarization levels, this work has focused on generating a method to introduce 3He into the vascular system. We addressed the low solubility issue by producing suspensions of 3He microbubbles. Here, we provide the first vascular images obtained with laser-polarized 3He. The potential increase in signal and absence of background should allow this technique to produce high-resolution angiographic images. In addition, quantitative measurements of blood flow velocity and tissue perfusion will be feasible.

Relevance: 100.00%

Abstract:

Cochlear outer hair cells (OHCs) are responsible for the exquisite sensitivity, dynamic range, and frequency-resolving capacity of the mammalian hearing organ. These unique cells respond to an electrical stimulus with a cycle-by-cycle change in cell length that is mediated by molecular motors in the cells' basolateral membrane. Recent work identified prestin, a protein with similarity to pendrin-related anion transporters, as the OHC motor molecule. Here we show that heterologously expressed prestin from rat OHCs (rprestin) exhibits reciprocal electromechanical properties as known for the OHC motor protein. Upon electrical stimulation in the microchamber configuration, rprestin generates mechanical force with constant amplitude and phase up to a stimulus frequency of at least 20 kHz. Mechanical stimulation of rprestin in excised outside-out patches shifts the voltage dependence of the nonlinear capacitance characterizing the electrical properties of the molecule. The results indicate that rprestin is a molecular motor that displays reciprocal electromechanical properties over the entire frequency range relevant for mammalian hearing.

Relevance: 100.00%

Abstract:

In this study, we implement chronic optical imaging of intrinsic signals in rat barrel cortex and repeatedly quantify the functional representation of a single whisker over time. The success of chronic imaging for more than 1 month enabled an evaluation of the normal dynamic range of this sensory representation. In individual animals for a period of several weeks, we found that: (i) the average spatial extent of the quantified functional representation of whisker C2 is surprisingly large, 1.71 mm² (area at half-height); (ii) the location of the functional representation is consistent; and (iii) there are ongoing but nonsystematic changes in spatiotemporal characteristics such as the size, shape, and response amplitude of the functional representation. These results support a modified description of the functional organization of barrel cortex, where although a precisely located module corresponds to a specific whisker, this module is dynamic, large, and overlaps considerably with the modules of many other whiskers.
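A minimal sketch of the "area at half-height" measure follows, under the standard reading of that phrase: threshold the response map at half its peak amplitude and report the suprathreshold area. The map file and pixel size are assumptions.

```python
# Minimal sketch: area at half-height of an intrinsic-signal response map.
import numpy as np

resp = np.load("c2_response_map.npy")   # hypothetical 2D response map
pixel_mm = 0.02                         # assumed pixel size in mm
mask = resp >= 0.5 * resp.max()         # half-height criterion
area_mm2 = mask.sum() * pixel_mm ** 2   # suprathreshold pixel count * pixel area
print(f"area at half-height: {area_mm2:.2f} mm^2")
```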