291 results for wavefront steepness


Relevance: 10.00%

Abstract:

PURPOSE: To compare the performance of dynamic contour tonometry (DCT) and Goldmann applanation tonometry (GAT) in measuring intraocular pressure in eyes with irregular corneas. METHODS: GAT and DCT measures were taken in 30 keratoconus and 29 postkeratoplasty eyes of 35 patients after pachymetry and corneal topography. Regression and correlation analyses were performed between both tonometry methods and between tonometry methods and corneal parameters. Bland-Altman plots were constructed. RESULTS: DCT values were significantly higher than GAT values in both study groups: +4.1 +/- 2.3 mm Hg (mean +/- SD) in keratoconus and +3.1 +/- 2.5 mm Hg after keratoplasty. In contrast to DCT, GAT values were significantly higher in postkeratoplasty eyes than in keratoconus. The correlation between the 2 tonometry methods was moderate in keratoconus (Kendall correlation coefficient, tau = 0.34) as well as in postkeratoplasty eyes (tau = 0.66). The +/-1.96 SD span of the DCT-GAT differences showed a considerable range: -0.42 to +8.70 mm Hg in keratoconus and -1.87 to +7.98 mm Hg in postkeratoplasty eyes. In the keratoconus group, neither DCT nor GAT correlated significantly with any of the corneal parameters. In the postkeratoplasty group, both DCT and GAT measures showed a moderate positive correlation with corneal steepness, but only DCT had a significant negative correlation with the central corneal thickness (tau = -0.33). CONCLUSIONS: DCT measured significantly higher intraocular pressures than GAT in keratoconus and postkeratoplasty eyes. DCT and GAT measures varied considerably, and DCT was not less dependent on biomechanical properties of irregular corneas than GAT.
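The limits of agreement reported above come from a standard Bland-Altman analysis: the mean of the paired differences plus or minus 1.96 times their standard deviation. A minimal sketch, using made-up IOP readings rather than the study data:

```python
import numpy as np

def bland_altman(a, b):
    """Return the bias (mean difference) and the +/-1.96 SD limits of
    agreement between two paired measurement methods (e.g. DCT vs GAT)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b                 # method A minus method B, per eye
    bias = diff.mean()           # systematic offset between the methods
    sd = diff.std(ddof=1)        # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired IOP readings in mm Hg (assumed, not the study data)
dct = [18.2, 21.5, 17.9, 24.1, 19.6]
gat = [15.0, 17.2, 14.8, 19.9, 16.1]
bias, lo, hi = bland_altman(dct, gat)
```

A wide span between `lo` and `hi`, as reported in the abstract, means the two tonometers cannot be used interchangeably even if the bias itself is modest.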

Relevance: 10.00%

Abstract:

In many regions, tectonic uplift is the main driver of erosion over million-year (Myr) timescales, but climate changes can markedly affect the link between tectonics and erosion, causing transient variations in erosion rates. Here we study the driving forces of millennial to Myr-scale erosion rates in the French Western Alps, as estimated from in situ produced cosmogenic 10Be and a newly developed approach integrating detrital and bedrock apatite fission-track thermochronology. Millennial erosion rates from 10Be analyses vary between ~0.27 and ~1.33 m/kyr, similar to rates measured in adjacent areas of the Alps. Significant positive correlations of millennial erosion rates with geomorphic measures, in particular with the LGM ice thickness, reveal a strong transient morphological and erosional perturbation caused by repeated Quaternary glaciations. The perturbation appears independent of Myr-scale uplift and erosion gradients, with the effect that millennial erosion rates exceed Myr-scale erosion rates only in the internal Alps where the latter are low (<0.4 km/Myr). These areas, moreover, exhibit channels that clearly plot above a general linear positive relation between Myr-scale erosion rates and normalized steepness index. Glacial erosion acts irrespective of rock uplift and thus not only leads to an overall increase in erosion rates but also regulates landscape morphology and erosion rates in regions with considerable spatial gradients in Myr-scale tectonic uplift. Our study demonstrates that climate change, e.g., through occurrence of major glaciations, can markedly perturb landscape morphology and related millennial erosion rate patterns, even in regions where Myr-scale erosion rates are dominantly controlled by tectonics.
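The normalized steepness index mentioned above is conventionally defined through the stream-power relation S = ksn * A^(-theta_ref). A minimal sketch with an assumed reference concavity of 0.45 and illustrative slope/area values (not data from this study):

```python
import numpy as np

def normalized_steepness(slope, drainage_area, theta_ref=0.45):
    """Normalized channel steepness index ksn from local channel slope
    (m/m) and upstream drainage area (m^2), using a reference concavity.
    Rearranged from the stream-power relation S = ksn * A**(-theta_ref)."""
    slope = np.asarray(slope, float)
    area = np.asarray(drainage_area, float)
    return slope * area ** theta_ref

# Two illustrative channel reaches with the same drainage area:
# doubling the slope doubles ksn at fixed area.
ksn = normalized_steepness(slope=[0.05, 0.10], drainage_area=[1e7, 1e7])
```

Plotting Myr-scale erosion rates against ksn computed this way is what reveals the outlier channels of the internal Alps described in the abstract.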

Relevance: 10.00%

Abstract:

Landscape evolution and surface morphology in mountainous settings are a function of the relative importance between sediment transport processes acting on hillslopes and in channels, modulated by climate variables. The Niesen nappe in the Swiss Penninic Prealps presents a unique setting in which opposite facing flanks host basins underlain by identical lithologies, but contrasting litho-tectonic architectures where lithologies either dip parallel to the topographic slope or in the opposite direction (i.e. dip slope and non-dip slope). The north-western facing Diemtigen flank represents such a dip slope situation and is characterized by a gentle topography, low hillslope gradients, poorly dissected channels, and it hosts large landslides. In contrast, the south-eastern facing Frutigen side can be described as a non-dip slope flank with deeply incised bedrock channels, high mean hillslope gradients and high relief topography. Results from morphometric analysis reveal that noticeable differences in morphometric parameters can be related to the contrasts in the relative importance of the internal hillslope-channel system between both valley flanks. While the contrasting dip orientations of the underlying flysch bedrock have promoted hillslope and channelized processes to contrasting extents, and particularly the occurrence of large landslides on the dip slope flank, the flank-averaged beryllium-10 (10Be)-derived denudation rates are very similar and range between 0.20 and 0.26 mm yr−1. In addition, our denudation rates show no direct relationship to basin slope, area, steepness or concavity index, but reveal a positive correlation to mean basin elevation that we interpret as having been controlled by climatically driven factors such as frost-induced processes and orographic precipitation.
Our findings illustrate that while the landscape properties in this part of the northern Alpine border can mainly be related to the tectonic architecture of the underlying bedrock, the denudation rates have a strong orographic control through elevation-dependent mean annual temperature and precipitation.
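Flank-averaged 10Be denudation rates of this kind are typically derived from the steady-state relation N = P / (lambda + rho*E/Lambda). A sketch under assumed production-rate and concentration values (the attenuation length, density and decay constant are standard, but the sample numbers below are illustrative, not this study's data):

```python
def denudation_rate(N, P, att_length=160.0, rho=2.7, decay=4.99e-7):
    """Steady-state denudation rate E (cm/yr) from a 10Be concentration
    N (atoms/g quartz), surface production rate P (atoms/g/yr), neutron
    attenuation length (g/cm^2), rock density (g/cm^3) and the 10Be
    decay constant (1/yr). Solves N = P / (decay + rho*E/att_length)."""
    return (att_length / rho) * (P / N - decay)

# Assumed illustrative values: P = 10 atoms/g/yr, N = 2e4 atoms/g
E_cm_per_yr = denudation_rate(N=2e4, P=10.0)
E_m_per_kyr = E_cm_per_yr * 10.0   # convert cm/yr -> m/kyr
```

With these assumed inputs the rate lands near the lower end of the millennial rates quoted in the Alpine abstracts above (a few tenths of a metre per thousand years).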

Relevance: 10.00%

Abstract:

Plasma lasing across a hot plasma medium yields short-wavelength emission, promising for "turnkey" tabletop nano-imaging setups. A systematic study of the illumination characteristics, combined with design-adapted objectives, is presented. It is shown how the ultimate nano-scale feature size is dictated by either the diffraction-limited or the wavefront-limited resolution, which imposes a combined study of both the source and the optics. For nano-imaging, the spatial homogeneity of the illumination (spot noise) was shown to be critical. Plasma lasing from a triple grazing-incidence pumping scheme compensated for the spot homogeneity missing in classical schemes. We demonstrate that a collimating mirror pre-conditions both the pointing stability and the divergence to below half a mrad.
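The distinction between diffraction-limited and wavefront-limited resolution can be illustrated with the Rayleigh criterion and the Marechal approximation. The 18.9 nm wavelength and the numerical aperture below are assumptions chosen for illustration, not values taken from the paper:

```python
import math

def rayleigh_resolution(wavelength_nm, NA):
    """Diffraction-limited two-point resolution (Rayleigh criterion)."""
    return 0.61 * wavelength_nm / NA

def strehl(rms_wavefront_error, wavelength):
    """Marechal approximation: Strehl ratio from the RMS wavefront
    error; S >= 0.8 is the usual 'diffraction-limited' threshold."""
    return math.exp(-(2 * math.pi * rms_wavefront_error / wavelength) ** 2)

# Assumed illustrative numbers: 18.9 nm line, objective NA = 0.12
d = rayleigh_resolution(18.9, 0.12)   # diffraction limit in nm
S = strehl(18.9 / 14, 18.9)           # lambda/14 RMS error -> S ~ 0.8
```

When the optics' wavefront error pushes the Strehl ratio below ~0.8, the achievable feature size is set by the wavefront rather than by diffraction, which is why source and optics must be studied together.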

Relevance: 10.00%

Abstract:

Individuals differ widely in how steeply they discount future rewards. The sources of these stable individual differences in delay discounting (DD) are largely unknown. One candidate is the COMT Val158Met polymorphism, known to modulate prefrontal dopamine levels and affect DD. To identify possible neural mechanisms by which this polymorphism may contribute to stable individual DD differences, we measured 73 participants' neural baseline activation using resting electroencephalogram (EEG). Such neural baseline activation measures are highly heritable and stable over time, and thus an ideal endophenotype candidate to explain how genes may influence behavior via individual differences in neural function. After EEG recording, participants made a series of incentive-compatible intertemporal choices to determine the steepness of their DD. We found that COMT significantly affected DD and that this effect was mediated by baseline activation level in the left dorsal prefrontal cortex (DPFC): (i) COMT had a significant effect on DD such that the number of Val alleles was positively correlated with steeper DD (a higher number of Val alleles means greater COMT activity and thus lower dopamine levels). (ii) A whole-brain search identified a cluster in left DPFC where baseline activation was correlated with DD; lower activation was associated with steeper DD. (iii) COMT had a significant effect on the baseline activation level in this left DPFC cluster such that a higher number of Val alleles was associated with lower baseline activation. (iv) The effect of COMT on DD was explained by the mediating effect of neural baseline activation in the left DPFC cluster. Our study thus establishes baseline activation level in left DPFC as a salient neural signature in the form of an endophenotype that mediates the link between COMT and DD.
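The mediation logic of steps (i)-(iv), COMT affecting DD via DPFC baseline activation, can be sketched with a simple product-of-coefficients analysis on synthetic data. The generative parameters below only mimic the reported sign pattern (more Val alleles, lower activation, steeper discounting); they are not the study data:

```python
import numpy as np

def mediation(x, m, y):
    """Product-of-coefficients mediation for x -> m -> y.
    Returns (a, b, indirect, direct): a from regressing m on x,
    b and the direct effect from regressing y on x and m jointly."""
    x, m, y = (np.asarray(v, float) for v in (x, m, y))
    a = np.polyfit(x, m, 1)[0]                         # path a: x -> m
    X = np.column_stack([x, m, np.ones_like(x)])
    direct, b, _ = np.linalg.lstsq(X, y, rcond=None)[0]  # y ~ x + m
    return a, b, a * b, direct

# Synthetic data mimicking the reported pattern (assumed, illustrative)
rng = np.random.default_rng(0)
val = rng.integers(0, 3, 200).astype(float)       # 0, 1 or 2 Val alleles
dpfc = -0.8 * val + rng.normal(0, 0.5, 200)       # a < 0: Val lowers activation
dd = -0.9 * dpfc + rng.normal(0, 0.5, 200)        # b < 0: low activation, steep DD
a, b, indirect, direct = mediation(val, dpfc, dd)
```

Two negative paths compose into a positive indirect effect, matching the direction of finding (i): more Val alleles, steeper discounting.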

Relevance: 10.00%

Abstract:

Altered gap junctional coupling potentiates slow conduction and arrhythmias. To better understand how heterogeneous connexin expression affects conduction at the cellular scale, we investigated conduction in tissue consisting of two cardiomyocyte populations expressing different connexin levels. Conduction was mapped using microelectrode arrays in cultured strands of foetal murine ventricular myocytes with predefined contents of connexin 43 knockout (Cx43KO) cells. Corresponding computer simulations were run in randomly generated two-dimensional tissues mimicking the cellular architecture of the strands. In the cultures, the relationship between conduction velocity (CV) and Cx43KO cell content was nonlinear. CV first decreased significantly when Cx43KO content was increased from 0 to 50%. When the Cx43KO content was ≥60%, CV became comparable to that in 100% Cx43KO strands. Co-culturing Cx43KO and wild-type cells also resulted in significantly more heterogeneous conduction patterns and in frequent conduction blocks. The simulations replicated this behaviour of conduction. For Cx43KO contents of 10-50%, conduction was slowed due to wavefront meandering between Cx43KO cells. For Cx43KO contents ≥60%, clusters of remaining wild-type cells acted as electrical loads that impaired conduction. For Cx43KO contents of 40-60%, conduction exhibited fractal characteristics, was prone to block, and was more sensitive to changes in ion currents compared to homogeneous tissue. In conclusion, conduction velocity and stability behave in a nonlinear manner when cardiomyocytes expressing different connexin amounts are combined. This behaviour results from heterogeneous current-to-load relationships at the cellular level. Such behaviour is likely to be arrhythmogenic in various clinical contexts in which gap junctional coupling is heterogeneous.
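As a toy illustration (not the authors' two-dimensional simulations), even a 1D strand model in which any junction touching a Cx43KO cell conducts slowly reproduces a nonlinear CV-versus-KO-content curve: at 50% KO content, three quarters of the junctions are already slow. The per-junction delays below are assumed values chosen only to give plausible magnitudes:

```python
import numpy as np

def strand_cv(ko_fraction, n_cells=1000, cell_len_um=100.0,
              delay_wt_us=250.0, delay_ko_us=1000.0, seed=1):
    """Toy 1D strand: CV from summed junctional delays. A junction is
    slow if either neighbouring cell lacks Cx43 (assumed delays)."""
    rng = np.random.default_rng(seed)
    ko = rng.random(n_cells) < ko_fraction        # True = Cx43KO cell
    slow = ko[:-1] | ko[1:]                       # junction touches a KO cell
    total_us = np.where(slow, delay_ko_us, delay_wt_us).sum()
    length_cm = (n_cells - 1) * cell_len_um * 1e-4  # junction-to-junction span
    return length_cm / (total_us * 1e-6)          # conduction velocity, cm/s

cv_wt = strand_cv(0.0)     # all wild-type
cv_half = strand_cv(0.5)   # 50% knockout
cv_ko = strand_cv(1.0)     # all knockout
```

Because mixed junctions are already slow, `cv_half` sits far below the midpoint of `cv_wt` and `cv_ko`, echoing (in a much cruder way) the nonlinearity the cultures showed.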

Relevance: 10.00%

Abstract:

This study considers, in recent sediments on the sides of the Stromboli Canyon, features of the mineral and grain-size composition, redox conditions, and the behavior of Fe, Mn, organic carbon, Mo, and W in an environment with active input of pyroclastic material. Differences in the conditions of sedimentation and early diagenesis between the eastern and western sides of the canyon, depending on the prevailing drift direction and the steepness of the slopes, are specified, as are the types of differentiation of detrital material in sediments under conditions of permanent vibration.

Relevance: 10.00%

Abstract:

Plankton pump samples and plankton tows (size fractions between 0.04 mm and 1.01 mm) from the eastern North Atlantic Ocean contain the following shell- and skeleton-producing planktonic and nektonic organisms, which can be fossilized in the sediments: diatoms, radiolarians, foraminifers, pteropods, heteropods, larvae of benthic gastropods and bivalves, ostracods, and fish. The abundance of these components has been mapped quantitatively in the eastern North Atlantic surface waters in October - December 1971. More ash (after ignition of the organic matter, consisting mostly of these components) per cubic meter of water is found close to land masses (continents and islands) and above shallow submarine elevations than in the open ocean. Preferred biotopes of planktonic diatoms in the region described are temperate shallow water and tropical coastal upwelling areas. Radiolarians rarely occur close to the continent, but are abundant in pelagic warm water masses, even near islands. Foraminifers are similar to the radiolarians, rarer in the coastal water mass of the continent than in the open ocean or off oceanic islands. Their abundance is highest outside the upwelling area off NW Africa. Molluscs generally outnumber planktonic foraminifers, implying that the carbonate cycle of the ocean might be influenced considerably by these animals. The molluscs include heteropods, pteropods, and larvae of benthic bivalves and gastropods. Larvae of benthic molluscs occur more frequently close to continental and island margins and above submarine shoals (in this case mostly guyots) than in the open ocean. Their size increases, but they decrease in number with increasing distance from their area of origin. Ostracods and fish have only been found in small numbers concentrated off NW Africa. All of the above-mentioned components occur in higher abundances in the surface water than in subsurface waters.
They are closely related to the hydrography of the sampled water masses (here defined through temperature measurements). Relatively warm water masses of the southeastern branches of the Gulf Stream system transport subtropical and southern temperate species to the Bay of Biscay; relatively cool water masses of the Portugal and Canary Currents carry transitional faunal elements along the NW African coast southwards to tropical regions. These mix in the northwest African upwelling area with tropical faunal elements which are generally assumed to live in the subsurface water masses and which probably have been transported northwards to this area by a subsurface counter current. The faunas typical for tropical surface water masses are not only reduced due to the tongue of cool water extending southwards along the coast, but they are also removed from the coastal zone by the upwelling subsurface water masses carrying their own shell and skeleton assemblages. Tropical water masses contain many more shell- and skeleton-producing plankters than subtropical and temperate ones. The climatic conditions found at different latitudes control the development and intensity of a separate continental coastal water mass with its own plankton assemblages. The extent of this water mass and the steepness of gradients between the pelagic and coastal environment limit the occurrence of pelagic plankton close to the continental coast. A similar water mass is only weakly developed off oceanic islands.

Relevance: 10.00%

Abstract:

Submarine slope failures of various types and sizes are common along the tectonically and seismically active Ligurian margin, northwestern Mediterranean Sea, primarily because of seismicity up to ~M6, rapid sediment deposition in the Var fluvial system, and the steepness of the continental slope (average 11°). We present geophysical, sedimentological and geotechnical results for two distinct slides in water depths >1,500 m: one located on the flank of the Upper Var Valley, called the Western Slide (WS), the other located at the base of the continental slope, called the Eastern Slide (ES). WS is a superficial slide characterized by a slope angle of ~4.6° and a shallow scar (~30 m), whereas ES is a deep-seated slide with a lower slope angle (~3°) and a deep scar (~100 m). Both areas mainly comprise clayey silt with intermediate plasticity, low water content (30-75 %), and states ranging from underconsolidation to strong overconsolidation. Upslope undeformed sediments have low undrained shear strength (0-20 kPa) increasing gradually with depth, whereas an abrupt increase in strength up to 200 kPa occurs at a depth of ~3.6 m in the headwall of WS and ~1.0 m in the headwall of ES. These boundaries are interpreted as earlier failure planes that have been covered by hemipelagite or talus from upslope after landslide emplacement. Infinite slope stability analyses indicate both sites are stable under static conditions; however, slope failure may occur under undrained earthquake conditions. Peak earthquake accelerations of 0.09 g at WS and 0.12 g at ES, i.e. M5-5.3 earthquakes in the immediate vicinity, would be required to induce slope instability. The inferred failure styles differ: rapid sedimentation on steep canyon flanks with undercutting caused superficial slides in the west, whereas an earthquake on the adjacent Marcel fault likely triggered the deep-seated slide in the east.
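The infinite-slope analysis mentioned above, in its undrained pseudo-static textbook form, can be sketched as follows. The strength, unit weight, failure depth and slope angle are assumed values loosely in the reported ranges, so the resulting critical seismic coefficient is illustrative only:

```python
import math

def fs_undrained(su_kpa, gamma_kn_m3, depth_m, beta_deg, k=0.0):
    """Undrained infinite-slope factor of safety with a pseudo-static
    horizontal seismic coefficient k (simplified textbook form):
    FS = su / (gamma*z*cos(b)*(sin(b) + k*cos(b)))."""
    b = math.radians(beta_deg)
    tau = gamma_kn_m3 * depth_m * math.cos(b) * (math.sin(b) + k * math.cos(b))
    return su_kpa / tau

def critical_k(su_kpa, gamma_kn_m3, depth_m, beta_deg):
    """Seismic coefficient at which FS drops to 1 (incipient failure)."""
    b = math.radians(beta_deg)
    return su_kpa / (gamma_kn_m3 * depth_m * math.cos(b) ** 2) - math.tan(b)

# Assumed illustrative inputs: su = 15 kPa, unit weight 16 kN/m3,
# failure plane at 3.6 m depth, slope angle 4.6 degrees (cf. WS headwall)
fs_static = fs_undrained(15.0, 16.0, 3.6, 4.6)   # > 1: statically stable
kc = critical_k(15.0, 16.0, 3.6, 4.6)            # acceleration (in g) for FS = 1
```

The pattern matches the abstract's conclusion: a static factor of safety well above 1, with failure only once the pseudo-static acceleration reaches the critical value.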

Relevance: 10.00%

Abstract:

While multichannel configurations are well established for non-imaging applications, they have not yet been used for imaging applications. In this paper we present, for the first time, several multichannel designs for imaging systems. A multichannel system comprises discontinuous optical sections, called channels. The phase-space representation of the bundle of rays going from the object to the image is discontinuous between channels. This phase-space ray-bundle flow is divided into as many paths as there are channels, but it forms a single wavefront at both the source and the target. Typically, these multichannel systems are formed by at least three optical surfaces: two of them have discontinuities (either in the shape or in its derivative), while the last one is smooth. Discontinuities in the optical surfaces cause the wavefront to split into separate paths in phase space. The number of discontinuities is the same in the first two surfaces: each channel is defined by the smooth surface sections between discontinuities, so the surfaces forming each separate channel are all smooth. Aplanatic multichannel designs are also shown and used to explain the design procedure.

Relevance: 10.00%

Abstract:

We envision that dynamic multiband transmissions taking advantage of receiver diversity (even for collocated antennas with different polarization or radiation patterns) will create a new paradigm for these links, guaranteeing high quality and reliability. However, many challenges arise with broadband reception, where several strong interferers out of band (with respect to the multiband transmission), but still within the acquisition band, may dramatically limit the expected performance. In this paper we address this problem by introducing a specific capability of the communication system that mitigates these interferences using analog beamforming principles. Indeed, Higher Order Crossings (HOC) joint statistics of the Single Input - Multiple Output (SIMO) system are shown to effectively determine the angle of arrival of the wavefront, even when operating over highly distorted signals.
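For a narrowband two-element array, the angle of arrival of a plane wavefront follows from the inter-element phase difference via delta_phi = 2*pi*d*sin(theta)/lambda. The sketch below uses this classic relation, not the paper's HOC-statistics estimator; the carrier frequency and spacing are assumed values:

```python
import math

def aoa_from_phase(delta_phi_rad, spacing_m, freq_hz, c=3e8):
    """Angle of arrival (rad) for a narrowband plane wavefront on a
    two-element array, from the inter-element phase difference.
    Inverts delta_phi = 2*pi*d*sin(theta)/lambda, clipping to [-1, 1]."""
    lam = c / freq_hz
    s = delta_phi_rad * lam / (2 * math.pi * spacing_m)
    return math.asin(max(-1.0, min(1.0, s)))

# Assumed setup: half-wavelength spacing at 2.4 GHz, wavefront from 30 deg
f = 2.4e9
d = (3e8 / f) / 2
dphi = 2 * math.pi * d * math.sin(math.radians(30)) / (3e8 / f)
theta = math.degrees(aoa_from_phase(dphi, d, f))
```

Half-wavelength spacing keeps the mapping from phase to angle unambiguous over the full front hemisphere, which is why it is the conventional choice.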

Relevance: 10.00%

Abstract:

La optimización de parámetros tales como el consumo de potencia, la cantidad de recursos lógicos empleados o la ocupación de memoria ha sido siempre una de las preocupaciones principales a la hora de diseñar sistemas embebidos. Esto es debido a que se trata de sistemas dotados de una cantidad de recursos limitados, y que han sido tradicionalmente empleados para un propósito específico, que permanece invariable a lo largo de toda la vida útil del sistema. Sin embargo, el uso de sistemas embebidos se ha extendido a áreas de aplicación fuera de su ámbito tradicional, caracterizadas por una mayor demanda computacional. Así, por ejemplo, algunos de estos sistemas deben llevar a cabo un intenso procesado de señales multimedia o la transmisión de datos mediante sistemas de comunicaciones de alta capacidad. Por otra parte, las condiciones de operación del sistema pueden variar en tiempo real. Esto sucede, por ejemplo, si su funcionamiento depende de datos medidos por el propio sistema o recibidos a través de la red, de las demandas del usuario en cada momento, o de condiciones internas del propio dispositivo, tales como la duración de la batería. Como consecuencia de la existencia de requisitos de operación dinámicos es necesario ir hacia una gestión dinámica de los recursos del sistema. Si bien el software es inherentemente flexible, no ofrece una potencia computacional tan alta como el hardware. Por lo tanto, el hardware reconfigurable aparece como una solución adecuada para tratar con mayor flexibilidad los requisitos variables dinámicamente en sistemas con alta demanda computacional. La flexibilidad y adaptabilidad del hardware requieren de dispositivos reconfigurables que permitan la modificación de su funcionalidad bajo demanda. 
En esta tesis se han seleccionado las FPGAs (Field Programmable Gate Arrays) como los dispositivos más apropiados, hoy en día, para implementar sistemas basados en hardware reconfigurable De entre todas las posibilidades existentes para explotar la capacidad de reconfiguración de las FPGAs comerciales, se ha seleccionado la reconfiguración dinámica y parcial. Esta técnica consiste en substituir una parte de la lógica del dispositivo, mientras el resto continúa en funcionamiento. La capacidad de reconfiguración dinámica y parcial de las FPGAs es empleada en esta tesis para tratar con los requisitos de flexibilidad y de capacidad computacional que demandan los dispositivos embebidos. La propuesta principal de esta tesis doctoral es el uso de arquitecturas de procesamiento escalables espacialmente, que son capaces de adaptar su funcionalidad y rendimiento en tiempo real, estableciendo un compromiso entre dichos parámetros y la cantidad de lógica que ocupan en el dispositivo. A esto nos referimos con arquitecturas con huellas escalables. En particular, se propone el uso de arquitecturas altamente paralelas, modulares, regulares y con una alta localidad en sus comunicaciones, para este propósito. El tamaño de dichas arquitecturas puede ser modificado mediante la adición o eliminación de algunos de los módulos que las componen, tanto en una dimensión como en dos. Esta estrategia permite implementar soluciones escalables, sin tener que contar con una versión de las mismas para cada uno de los tamaños posibles de la arquitectura. De esta manera se reduce significativamente el tiempo necesario para modificar su tamaño, así como la cantidad de memoria necesaria para almacenar todos los archivos de configuración. En lugar de proponer arquitecturas para aplicaciones específicas, se ha optado por patrones de procesamiento genéricos, que pueden ser ajustados para solucionar distintos problemas en el estado del arte. 
A este respecto, se proponen patrones basados en esquemas sistólicos, así como de tipo wavefront. Con el objeto de poder ofrecer una solución integral, se han tratado otros aspectos relacionados con el diseño y el funcionamiento de las arquitecturas, tales como el control del proceso de reconfiguración de la FPGA, la integración de las arquitecturas en el resto del sistema, así como las técnicas necesarias para su implementación. Por lo que respecta a la implementación, se han tratado distintos aspectos de bajo nivel dependientes del dispositivo. Algunas de las propuestas realizadas a este respecto en la presente tesis doctoral son un router que es capaz de garantizar el correcto rutado de los módulos reconfigurables dentro del área destinada para ellos, así como una estrategia para la comunicación entre módulos que no introduce ningún retardo ni necesita emplear recursos configurables del dispositivo. El flujo de diseño propuesto se ha automatizado mediante una herramienta denominada DREAMS. La herramienta se encarga de la modificación de las netlists correspondientes a cada uno de los módulos reconfigurables del sistema, y que han sido generadas previamente mediante herramientas comerciales. Por lo tanto, el flujo propuesto se entiende como una etapa de post-procesamiento, que adapta esas netlists a los requisitos de la reconfiguración dinámica y parcial. Dicha modificación la lleva a cabo la herramienta de una forma completamente automática, por lo que la productividad del proceso de diseño aumenta de forma evidente. Para facilitar dicho proceso, se ha dotado a la herramienta de una interfaz gráfica. El flujo de diseño propuesto, y la herramienta que lo soporta, tienen características específicas para abordar el diseño de las arquitecturas dinámicamente escalables propuestas en esta tesis. 
Entre ellas está el soporte para el realojamiento de módulos reconfigurables en posiciones del dispositivo distintas a donde el módulo es originalmente implementado, así como la generación de estructuras de comunicación compatibles con la simetría de la arquitectura. El router has sido empleado también en esta tesis para obtener un rutado simétrico entre nets equivalentes. Dicha posibilidad ha sido explotada para aumentar la protección de circuitos con altos requisitos de seguridad, frente a ataques de canal lateral, mediante la implantación de lógica complementaria con rutado idéntico. Para controlar el proceso de reconfiguración de la FPGA, se propone en esta tesis un motor de reconfiguración especialmente adaptado a los requisitos de las arquitecturas dinámicamente escalables. Además de controlar el puerto de reconfiguración, el motor de reconfiguración ha sido dotado de la capacidad de realojar módulos reconfigurables en posiciones arbitrarias del dispositivo, en tiempo real. De esta forma, basta con generar un único bitstream por cada módulo reconfigurable del sistema, independientemente de la posición donde va a ser finalmente reconfigurado. La estrategia seguida para implementar el proceso de realojamiento de módulos es diferente de las propuestas existentes en el estado del arte, pues consiste en la composición de los archivos de configuración en tiempo real. De esta forma se consigue aumentar la velocidad del proceso, mientras que se reduce la longitud de los archivos de configuración parciales a almacenar en el sistema. El motor de reconfiguración soporta módulos reconfigurables con una altura menor que la altura de una región de reloj del dispositivo. Internamente, el motor se encarga de la combinación de los frames que describen el nuevo módulo, con la configuración existente en el dispositivo previamente. El escalado de las arquitecturas de procesamiento propuestas en esta tesis también se puede beneficiar de este mecanismo. 
Se ha incorporado también un acceso directo a una memoria externa donde se pueden almacenar bitstreams parciales. Para acelerar el proceso de reconfiguración se ha hecho funcionar el ICAP por encima de la máxima frecuencia de reloj aconsejada por el fabricante. Así, en el caso de Virtex-5, aunque la máxima frecuencia del reloj deberían ser 100 MHz, se ha conseguido hacer funcionar el puerto de reconfiguración a frecuencias de operación de hasta 250 MHz, incluyendo el proceso de realojamiento en tiempo real. Se ha previsto la posibilidad de portar el motor de reconfiguración a futuras familias de FPGAs. Por otro lado, el motor de reconfiguración se puede emplear para inyectar fallos en el propio dispositivo hardware, y así ser capaces de evaluar la tolerancia ante los mismos que ofrecen las arquitecturas reconfigurables. Los fallos son emulados mediante la generación de archivos de configuración a los que intencionadamente se les ha introducido un error, de forma que se modifica su funcionalidad. Con el objetivo de comprobar la validez y los beneficios de las arquitecturas propuestas en esta tesis, se han seguido dos líneas principales de aplicación. En primer lugar, se propone su uso como parte de una plataforma adaptativa basada en hardware evolutivo, con capacidad de escalabilidad, adaptabilidad y recuperación ante fallos. En segundo lugar, se ha desarrollado un deblocking filter escalable, adaptado a la codificación de vídeo escalable, como ejemplo de aplicación de las arquitecturas de tipo wavefront propuestas. El hardware evolutivo consiste en el uso de algoritmos evolutivos para diseñar hardware de forma autónoma, explotando la flexibilidad que ofrecen los dispositivos reconfigurables. En este caso, los elementos de procesamiento que componen la arquitectura son seleccionados de una biblioteca de elementos presintetizados, de acuerdo con las decisiones tomadas por el algoritmo evolutivo, en lugar de definir la configuración de las mismas en tiempo de diseño. 
De esta manera, la configuración del core puede cambiar cuando lo hacen las condiciones del entorno, en tiempo real, por lo que se consigue un control autónomo del proceso de reconfiguración dinámico. Así, el sistema es capaz de optimizar, de forma autónoma, su propia configuración. El hardware evolutivo tiene una capacidad inherente de auto-reparación. Se ha probado que las arquitecturas evolutivas propuestas en esta tesis son tolerantes ante fallos, tanto transitorios, como permanentes y acumulativos. La plataforma evolutiva se ha empleado para implementar filtros de eliminación de ruido. La escalabilidad también ha sido aprovechada en esta aplicación. Las arquitecturas evolutivas escalables permiten la adaptación autónoma de los cores de procesamiento ante fluctuaciones en la cantidad de recursos disponibles en el sistema. Por lo tanto, constituyen un ejemplo de escalabilidad dinámica para conseguir un determinado nivel de calidad, que puede variar en tiempo real. Se han propuesto dos variantes de sistemas escalables evolutivos. El primero consiste en un único core de procesamiento evolutivo, mientras que el segundo está formado por un número variable de arrays de procesamiento. La codificación de vídeo escalable, a diferencia de los codecs no escalables, permite la decodificación de secuencias de vídeo con diferentes niveles de calidad, de resolución temporal o de resolución espacial, descartando la información no deseada. Existen distintos algoritmos que soportan esta característica. En particular, se va a emplear el estándar Scalable Video Coding (SVC), que ha sido propuesto como una extensión de H.264/AVC, ya que este último es ampliamente utilizado tanto en la industria, como a nivel de investigación. Para poder explotar toda la flexibilidad que ofrece el estándar, hay que permitir la adaptación de las características del decodificador en tiempo real. El uso de las arquitecturas dinámicamente escalables es propuesto en esta tesis con este objetivo. 
El deblocking filter es un algoritmo que tiene como objetivo la mejora de la percepción visual de la imagen reconstruida, mediante el suavizado de los "artefactos" de bloque generados en el lazo del codificador. Se trata de una de las tareas más intensivas en procesamiento de datos de H.264/AVC y de SVC, y además, su carga computacional es altamente dependiente del nivel de escalabilidad seleccionado en el decodificador. Por lo tanto, el deblocking filter ha sido seleccionado como prueba de concepto de la aplicación de las arquitecturas dinámicamente escalables para la compresión de video. La arquitectura propuesta permite añadir o eliminar unidades de computación, siguiendo un esquema de tipo wavefront. La arquitectura ha sido propuesta conjuntamente con un esquema de procesamiento en paralelo del deblocking filter a nivel de macrobloque, de tal forma que cuando se varía del tamaño de la arquitectura, el orden de filtrado de los macrobloques varia de la misma manera. El patrón propuesto se basa en la división del procesamiento de cada macrobloque en dos etapas independientes, que se corresponden con el filtrado horizontal y vertical de los bloques dentro del macrobloque. Las principales contribuciones originales de esta tesis son las siguientes: - El uso de arquitecturas altamente regulares, modulares, paralelas y con una intensa localidad en sus comunicaciones, para implementar cores de procesamiento dinámicamente reconfigurables. - El uso de arquitecturas bidimensionales, en forma de malla, para construir arquitecturas dinámicamente escalables, con una huella escalable. De esta forma, las arquitecturas permiten establecer un compromiso entre el área que ocupan en el dispositivo, y las prestaciones que ofrecen en cada momento. Se proponen plantillas de procesamiento genéricas, de tipo sistólico o wavefront, que pueden ser adaptadas a distintos problemas de procesamiento. 
- A design flow, and a tool supporting it, for the design of dynamically reconfigurable systems, focused on the highly parallel, modular and regular architectures proposed in this thesis. - A communication scheme between reconfigurable modules that introduces no delay and requires no logic resources of its own. - A flexible router, capable of solving the routing conflicts associated with the design of dynamically reconfigurable systems. - An optimization algorithm for systems composed of multiple scalable cores that optimizes the parameters of such a system by means of a genetic algorithm. It is based on a model known as the knapsack problem. - A reconfiguration engine adapted to the requirements of highly regular and modular architectures. It combines a high reconfiguration speed with the ability to relocate modules in real time, including support for the reconfiguration of regions smaller than a clock region, as well as the replication of a reconfigurable module in multiple positions of the device. - A fault injection mechanism which, using the system's reconfiguration engine, allows the effects of permanent and transient faults on reconfigurable architectures to be evaluated. - The demonstration of the potential of the architectures proposed in this thesis for the implementation of evolvable hardware systems with a high data processing capacity. - The implementation of scalable evolvable hardware systems, which are able to cope autonomously with fluctuations in the amount of resources available in the system. - A parallel processing strategy for the deblocking filter, compatible with the H.264/AVC and SVC standards, that reduces the number of macroblock cycles needed to process a video frame. 
- A dynamically scalable architecture that allows the implementation of a novel deblocking filter, fully compliant with the H.264/AVC and SVC standards, which exploits macroblock-level parallelism. This document is organized in seven chapters. The first provides an introduction to the technological framework of this thesis, with particular focus on the dynamic and partial reconfiguration of FPGAs, and motivates the need for the dynamically scalable architectures proposed here. Chapter 2 describes the dynamically scalable architectures; this description includes most of the architectural contributions of this thesis. The design flow tailored to these architectures is proposed in chapter 3. The reconfiguration engine is proposed in chapter 4, while the use of these architectures to implement evolvable hardware systems is addressed in chapter 5. The scalable deblocking filter is described in chapter 6, and the final conclusions of this thesis, together with the description of future work, are addressed in chapter 7. ABSTRACT The optimization of system parameters, such as power dissipation, the amount of hardware resources and the memory footprint, has always been a main concern when dealing with the design of resource-constrained embedded systems. This situation is even more demanding nowadays. Embedded systems can no longer be considered merely specific-purpose computers, designed for a particular functionality that remains unchanged during their lifetime. Instead, they are now required to deal with more demanding and complex functions, such as multimedia data processing and high-throughput connectivity. In addition, system operation may depend on external data, the user requirements or internal variables of the system, such as the battery lifetime. All these conditions may vary at run time, leading to adaptive scenarios. 
As a consequence of both the growing computational complexity and the existence of dynamic requirements, dynamic resource management techniques for embedded systems are needed. Software is inherently flexible, but it cannot match the computing power offered by hardware solutions. Therefore, reconfigurable hardware emerges as a suitable technology to deal with the run-time variable requirements of complex embedded systems. Adaptive hardware requires the use of reconfigurable devices, whose functionality can be modified on demand. In this thesis, Field Programmable Gate Arrays (FPGAs) have been selected as the most appropriate commercial technology available today for implementing adaptive hardware systems. There are different ways of exploiting reconfigurability in reconfigurable devices. Among them is dynamic and partial reconfiguration, a technique which consists in replacing part of the FPGA logic on demand, while the rest of the device continues working. The strategy followed in this thesis is to exploit the dynamic and partial reconfiguration of commercial FPGAs to meet the flexibility and complexity demands of state-of-the-art embedded systems. The proposal of this thesis for dealing with run-time variable system conditions is the use of spatially scalable processing hardware IP cores, which are able to adapt their functionality or performance at run time, trading them off against the amount of logic resources they occupy in the device. This is referred to as a scalable footprint in the context of this thesis. The distinguishing characteristic of the proposed cores is that they rely on highly parallel, modular and regular architectures, arranged in one or two dimensions. These architectures can be scaled by adding or removing the composing blocks. 
This strategy avoids implementing a full version of the core for each possible size, with the corresponding benefits in terms of scaling and adaptation time, as well as bitstream storage memory requirements. Instead of providing specific-purpose architectures, this thesis proposes generic architectural templates, which can be tuned to solve different problems. Architectures following both systolic and wavefront templates have been selected. Together with the proposed scalable architectural templates, other issues needed to ensure the proper design and operation of the scalable cores, such as device reconfiguration control, run-time management of the architecture and the implementation techniques, have also been addressed in this thesis. With regard to the implementation of dynamically reconfigurable architectures, device-dependent low-level details are addressed. Some of the aspects covered are area-constrained routing for reconfigurable modules, and an inter-module communication strategy which introduces neither extra delay nor logic overhead. The system implementation, from the hardware description to the device configuration bitstream, has been fully automated by modifying the netlists corresponding to each of the system modules, which are previously generated using the vendor tools; this modification is therefore envisaged as a post-processing step. Based on these implementation proposals, a design tool called DREAMS (Dynamically Reconfigurable Embedded and Modular Systems) has been created, including a graphical user interface. The tool has specific features to cope with modular and regular architectures, including support for module relocation and an inter-module communication scheme based on the symmetry of the architecture. 
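The scalable-footprint idea can be sketched abstractly. The following toy Python model (an illustration of the concept, not the thesis implementation) shows a two-dimensional mesh whose occupied area and idealized throughput scale together as whole rows or columns of processing elements are added or removed, instead of loading a full new core for each size:

```python
class ScalableMesh:
    """Toy model of a 2-D mesh of processing elements (PEs)."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols

    def area(self):
        # Resource footprint grows with the number of composing blocks.
        return self.rows * self.cols

    def throughput(self):
        # Idealized assumption: one result per PE per cycle.
        return self.rows * self.cols

    def scale(self, d_rows=0, d_cols=0):
        # Scaling adds or removes whole rows/columns of the mesh.
        self.rows = max(1, self.rows + d_rows)
        self.cols = max(1, self.cols + d_cols)


mesh = ScalableMesh(2, 2)
mesh.scale(d_rows=1)   # grow by one row at run time
print(mesh.area())     # 6
```

The point of the model is the trade-off: a single parameterized structure covers every size, so only the delta (the added or removed blocks) needs reconfiguring.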
The core of the tool is a custom router, which has also been exploited in this thesis to obtain symmetrically routed nets, with the aim of enhancing the protection of critical reconfigurable circuits against side-channel attacks; this is achieved by duplicating the logic with exactly equal routing. In order to control the reconfiguration process of the FPGA, a Reconfiguration Engine suited to the specific requirements set by the proposed architectures was also developed. In addition to controlling the reconfiguration port, the Reconfiguration Engine has been enhanced with an online relocation capability, which allows a single configuration bitstream to be employed for all the positions where the module may be placed in the device. Unlike existing relocation solutions, which are based on bitstream parsers, the proposed approach is based on the online composition of bitstreams. This strategy increases the speed of the process and also reduces the length of the partial bitstreams. The height of the reconfigurable modules can be lower than the height of a clock region; the Reconfiguration Engine manages the merging of the new and the existing configuration frames within each clock region, and the process of scaling the hardware cores up and down also benefits from this technique. A direct link to an external memory where partial bitstreams can be stored has also been implemented. In order to accelerate the reconfiguration process, the ICAP has been overclocked beyond the speed reported by the manufacturer: in the case of Virtex-5, even though the maximum frequency of the ICAP is reported to be 100 MHz, valid operations at 250 MHz have been achieved, including the online relocation process. Portability of the reconfiguration solution to today's and, probably, future FPGAs has also been considered. 
The reconfiguration engine can also be used to inject faults into real hardware devices, making it possible to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by introducing partial bitstreams intentionally modified to provide erroneous functionality. To prove the validity and the benefits offered by the proposed architectures, two demonstration application lines have been pursued. First, scalable architectures have been employed to develop an evolvable hardware platform with adaptability, fault tolerance and scalability properties. Second, they have been used to implement a scalable deblocking filter suited to scalable video coding. Evolvable hardware is the use of evolutionary algorithms to design hardware autonomously, exploiting the flexibility offered by reconfigurable devices. In this case, the processing elements composing the architecture are selected from a presynthesized library of processing elements, according to the decisions taken by the algorithm, instead of being fixed at design time. This way, the configuration of the array may change as run-time environmental conditions do, achieving autonomous control of the dynamic reconfiguration process. Thus, the self-optimization property is added to the native self-configurability of the dynamically scalable architectures. In addition, evolvable hardware adaptability inherently offers self-healing features: the proposal has proved to be fault tolerant, since it is able to self-recover from both transient and cumulative permanent faults. The proposed evolvable architecture has been used to implement noise removal image filters. Scalability has also been exploited in this application: scalable evolvable hardware architectures allow the autonomous adaptation of the processing cores to a fluctuating amount of resources available in the system, and thus constitute an example of the dynamic quality scalability tackled in this thesis. 
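The evolutionary loop described above can be illustrated with a minimal Python sketch. This is a hedged toy (the library names, grid size and fitness function are invented stand-ins, not the thesis platform): a candidate configuration is a list of indices into a presynthesized library of processing elements, and a simple (1+λ) loop mutates it, keeping the best candidate; on the real hardware, evaluating fitness would mean reconfiguring the array and measuring filtering quality.

```python
import random

# Hypothetical presynthesized library of processing elements.
LIBRARY = ["identity", "min3x3", "max3x3", "median3x3", "mean3x3"]
GRID = 4  # number of PE positions in the toy array


def fitness(genome):
    # Stand-in for the hardware quality measurement: pretend an
    # all-median array (index 3 everywhere) is the optimum.
    target = [3] * GRID
    return -sum(abs(g - t) for g, t in zip(genome, target))


def evolve(generations=200, lam=4, seed=1):
    """(1+lambda) evolutionary loop: mutate, evaluate, keep the best."""
    rng = random.Random(seed)
    parent = [rng.randrange(len(LIBRARY)) for _ in range(GRID)]
    for _ in range(generations):
        children = []
        for _ in range(lam):
            child = parent[:]
            child[rng.randrange(GRID)] = rng.randrange(len(LIBRARY))
            children.append(child)
        # Elitist survivor selection: the parent never gets worse.
        parent = max(children + [parent], key=fitness)
    return parent


print(evolve())  # converges to [3, 3, 3, 3] for this toy fitness
```

Because the parent is kept in the selection pool, faults or bad mutations can never make the retained configuration worse, which is the same property that gives the hardware platform its self-healing behavior.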
Two variants have been proposed. The first consists in a single dynamically scalable evolvable core, while the second contains a variable number of processing cores. Scalable video is a flexible approach to video compression, which offers scalability at different levels. Unlike non-scalable codecs, a scalable video bitstream can be decoded with different levels of quality, spatial or temporal resolution, by discarding the undesired information. Interest in this technology has been fostered by the development of the Scalable Video Coding (SVC) standard, an extension of H.264/AVC. In order to exploit all the flexibility offered by the standard, it is necessary to adapt the characteristics of the decoder to the requirements of each client at run time. The use of dynamically scalable architectures is proposed in this thesis with this aim. The deblocking filter algorithm is responsible for improving the visual perception of a reconstructed image by smoothing the blocking artifacts generated in the encoding loop. It is one of the most computationally intensive tasks of the standard and, furthermore, it is highly dependent on the scalability level selected at the decoder. The deblocking filter has therefore been selected as a proof of concept of the implementation of dynamically scalable architectures for video compression. The proposed architecture allows the run-time addition or removal of computational units working in parallel, changing its level of parallelism while following a wavefront computational pattern. The scalable architecture is proposed together with a scalable parallelization strategy at the macroblock level, such that when the size of the architecture changes, the macroblock filtering order is modified accordingly. The proposed pattern is based on the division of the macroblock processing into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock. 
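The wavefront pattern can be sketched in a few lines of Python. This is a simplified illustration (classic left/top macroblock dependencies, simpler than the two-stage scheme the thesis proposes): all macroblocks on the same anti-diagonal x + y are mutually independent and can be filtered in parallel, bounded by the number of computing units currently instantiated, so adding units directly shortens the schedule.

```python
def wavefront_schedule(mb_cols, mb_rows, units):
    """Per-cycle batches of (x, y) macroblocks under a wavefront order.

    Each macroblock depends on its left and top neighbours, so the
    macroblocks of anti-diagonal d = x + y become ready once diagonal
    d - 1 is done. `units` bounds how many are filtered per cycle.
    """
    diagonals = {}
    for y in range(mb_rows):
        for x in range(mb_cols):
            diagonals.setdefault(x + y, []).append((x, y))
    cycles = []
    for d in sorted(diagonals):
        ready = diagonals[d]
        # Split a diagonal across cycles if it exceeds the unit count;
        # dependencies stay satisfied since diagonal d - 1 is complete.
        for i in range(0, len(ready), units):
            cycles.append(ready[i:i + units])
    return cycles


# A 4x4 macroblock frame: 2 units need 10 cycles, 1 unit needs 16.
print(len(wavefront_schedule(4, 4, 2)))  # 10
print(len(wavefront_schedule(4, 4, 1)))  # 16
```

When the architecture is scaled at run time, recomputing the schedule with the new `units` value yields the modified filtering order the text refers to.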
The main contributions of this thesis are: - The use of highly parallel, modular, regular and local architectures to implement dynamically reconfigurable processing IP cores, for data-intensive applications with flexibility requirements. - The use of two-dimensional mesh-type arrays as architectural templates to build dynamically reconfigurable IP cores with a scalable footprint. The proposal consists in generic architectural templates, which can be tuned to solve different computational problems. - A design flow and a tool targeting the design of DPR systems, focused on highly parallel, modular and local architectures. - An inter-module communication strategy, named Virtual Borders, which does not introduce delay or area overhead. - A custom and flexible router to solve the routing conflicts, as well as the inter-module communication problems, that appear during the design of DPR systems. - An algorithm addressing the optimization of systems composed of multiple scalable cores, whose sizes can be decided individually, to optimize the system parameters. It is based on a model known as the multi-dimensional multi-choice knapsack problem. - A reconfiguration engine tailored to the requirements of highly regular and modular architectures. It combines a high reconfiguration throughput with run-time module relocation capabilities, including support for reconfigurable regions smaller than a clock region and for module replication in multiple positions. - A fault injection mechanism which takes advantage of the system reconfiguration engine, as well as the modularity of the proposed reconfigurable architectures, to evaluate the effects of transient and permanent faults in these architectures. - The demonstration of the possibilities of the architectures proposed in this thesis to implement evolvable hardware systems, while keeping a high processing throughput. 
- The implementation of scalable evolvable hardware systems, which are able to adapt autonomously to fluctuations in the amount of resources available in the system. - A parallelization strategy for the H.264/AVC and SVC deblocking filter, which reduces the number of macroblock cycles needed to process a whole frame. - A dynamically scalable architecture that permits the implementation of a novel deblocking filter module, fully compliant with the H.264/AVC and SVC standards, which exploits the macroblock-level parallelism of the algorithm. This document is organized in seven chapters. In the first, an introduction to the technological framework of this thesis, especially focused on dynamic and partial reconfiguration, is provided, and the need for the dynamically scalable processing architectures proposed in this work is motivated. In chapter 2, the dynamically scalable architectures are described; the description includes most of the architectural contributions of this work. The design flow tailored to the scalable architectures, together with the DREAMS tool supporting it, is described in chapter 3. The reconfiguration engine is described in chapter 4. The use of the proposed scalable architectures to implement evolvable hardware systems is described in chapter 5, while the scalable deblocking filter is described in chapter 6. The final conclusions of this thesis, and the description of future work, are addressed in chapter 7.
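The multi-choice knapsack formulation behind the multi-core optimization can be made concrete with a small Python sketch (exhaustive search over invented example data; the thesis uses a genetic algorithm and a multi-dimensional variant for realistic system sizes): each scalable core offers several sizes, each with an area cost and a quality value, and one size must be chosen per core so that total area fits the device while total quality is maximized.

```python
from itertools import product

def best_configuration(cores, area_budget):
    """Pick one (area, quality) option per core, maximizing total
    quality subject to the total area budget. Brute force: fine for
    a handful of cores, exponential in general (hence the GA)."""
    best_choice, best_quality = None, -1
    for choice in product(*cores):  # one option per core
        area = sum(a for a, _ in choice)
        quality = sum(q for _, q in choice)
        if area <= area_budget and quality > best_quality:
            best_choice, best_quality = choice, quality
    return best_choice, best_quality


# Hypothetical size options (area, quality) for two scalable cores.
cores = [
    [(1, 2), (2, 5), (4, 8)],  # core 0: small / medium / large
    [(1, 1), (3, 6)],          # core 1: small / large
]
config, quality = best_configuration(cores, area_budget=5)
print(config, quality)  # ((2, 5), (3, 6)) 11
```

Shrinking `area_budget` forces some cores down to smaller sizes, which is exactly the run-time scenario of a fluctuating amount of reconfigurable resources.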


Magnetic excitation of whistlers by a square array of electrodynamic tethers is discussed. The array is made of perpendicular rows of tethers that carry equal, uniform, and time-modulated currents at equal frequency with a 90° phase shift. The array would fly vertically in the orbital equatorial plane, which is perpendicular to the geomagnetic field B0 when its tilt is ignored. The array radiates a whistler wave along B0. A parametric instability due to pumping by the background magnetic field through the radiated wave gives rise to two unstable coupled whistler perturbations. The growth rate is maximum for perturbations with wave vector at angles 38.36° and 75.93° from B0. For an experiment involving a wavefront that moves with the orbiting array, which might serve to study nonlinear wave interactions and turbulence in space plasmas, characteristic values of the growth rate and of parameters such as the number of tethers and their dimensions and distances in the array are discussed for low Earth orbit ambient conditions.


A novel tunable liquid crystal microaxicon array is proposed and experimentally demonstrated. The proposed structure is capable of generating thousands of tunable axicons of micrometric size with a simple, low-voltage control scheme (four control voltages), and is fully reconfigurable. Depending on the applied voltages, both the diameter and the effective wedge angle can be controlled. Control over the diameter, from 107 to 77 μm, has been demonstrated, as well as control over the phase profile, from 12π to 24π radians, which modifies the effective cone angle. Tuning the diameter and the effective cone angle provides control over the non-diffractive Bessel beam distance. The RMS wavefront deviation from the ideal axicon is only λ∕3. The proposed device has several advantages over existing microaxicon arrays, including its simplicity and low cost. It could contribute to the development of new applications and to reducing the fabrication costs of current devices.
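As a back-of-the-envelope illustration (not a calculation from the paper: the 633 nm wavelength and the linear-phase assumption are ours), the non-diffracting Bessel zone length can be estimated from the phase depth. For an axicon whose phase ramps linearly to phi_max at radius R, the radial wavevector is k_r = phi_max / R, and geometrically the zone extends to roughly z_max = k R / k_r = 2 pi R^2 / (phi_max * lambda):

```python
import math

def bessel_zone_length(radius_m, phi_max_rad, wavelength_m):
    """Estimated non-diffracting zone of a linear-phase axicon:
    z_max = 2*pi*R**2 / (phi_max * lambda)."""
    return 2 * math.pi * radius_m ** 2 / (phi_max_rad * wavelength_m)


# Assumed values: 107 um element diameter, 633 nm illumination.
R = 107e-6 / 2
for phi_max in (12 * math.pi, 24 * math.pi):
    z = bessel_zone_length(R, phi_max, 633e-9)
    print(f"phi_max = {phi_max / math.pi:.0f}*pi  ->  z_max = {z * 1e3:.2f} mm")
```

Doubling the phase depth halves the estimated zone length, consistent with the statement that the phase-profile tunability controls the Bessel beam distance.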


Beaches supported by a submerged sill (perched beaches) are an attractive beach nourishment design alternative, especially when the physical conditions of the site, or the characteristics of the native and borrow sand, produce non-intersecting nourishment profiles. The observation and proposal of this type of solution dates back to the 1960s, as does the international experience in the construction of this type of beach. However, despite its use and the field and laboratory studies performed, no engineering criteria are available to support its design. This thesis consists of an experimental analysis of the profile of beaches supported by a submerged sill, leading to a proposal of general design guidelines that allow the location and geometric characteristics of the submerged sill to be estimated for a given wave climate and beach material. The thesis describes the two-dimensional experiment carried out on a movable-bed physical model, in which five wave conditions are combined with three configurations of the submerged sill ("No structure", low configuration or "Structure 1", and high configuration or "Structure 2"); the results obtained are presented, and a detailed discussion of their hydrodynamic implications is carried out using dimensionless parameters. A detailed state-of-the-art review of perched beaches has been performed, presenting the concept and construction experiences in different countries. A careful review of the published literature on experimental studies of perched beaches, theoretical models and other auxiliary topics, necessary for the formulation of the methodology of this thesis, has also been completed. The study has been structured in two phases. 
In the first phase, experiments were performed on a movable-bed physical model built at the facilities of the Centro de Estudios de Puertos y Costas (CEPYC) of the Centro de Estudios y Experimentación de Obras Públicas (CEDEX), consisting of a flume 36 m long, 3 m wide and 1.5 m high, equipped with a piston-type wave generator. A campaign of 15 tests was designed, obtained by subjecting three different perched beach configurations to five wave conditions. During the tests, the beach profile was surveyed at different instants until equilibrium was reached, and the shoreline retreat and the volume of sediment lost were determined from these data. The total effective test time amounts to almost 650 hours, and the number of beach evolution profiles obtained totals 229. In the second phase, the results were analyzed with the aim of understanding the phenomenon, identifying the governing variables and proposing engineering design guidelines. The effects of the wave height, the wave period, the dimensionless freeboard and the Dean parameter have been studied, confirming the difficulty of understanding the behavior of these structures, since they can be beneficial, harmful or neutral depending on the case. The response of the beach profile has also been studied as a function of other dimensionless parameters, such as the Reynolds and Froude numbers. In this analysis, the "plunger" parameter has been chosen as the most significant, and relationships have been found between it and the wave steepness, the dimensionless crest width, the dimensionless sill height and the Dean parameter. Finally, a four-step design method is proposed that allows a first functional design of the perched beach for a given wave climate. 
The most significant contributions from the scientific point of view are: - The acquisition of the set of experimental results. - The characterization of the behavior of perched beaches. - The proposed relationships between the plunger parameter and the selected explanatory variables, which allow the behavior of the structure to be predicted. - The proposed four-step design method for this type of coastal defense scheme. Perched beaches are an attractive beach nourishment design alternative, especially when either the site conditions or the characteristics of both the native and the borrow sand lead to a non-intersecting profile. The observation and suggestion of the use of this type of coastal defence scheme dates back to the 1960s, as does the international experience in the construction of this type of beach. However, in spite of its use and the field and laboratory studies performed to date, no engineering design guidance is available to support it. This dissertation presents an experimental analysis of perched beaches formed by a submerged sill, based on work performed on a movable-bed physical model and on the use of dimensionless parameters in analyzing the results, leading to general functional design guidance that allows the designer, at a particular stretch of coast, to estimate the location and geometric characteristics of the submerged sill, as well as the suitable sand size to be used in the nourishment, when facing a given wave condition. 
The experimental work performed on a two-dimensional movable-bed physical model, where five wave conditions are combined with three configurations of the submerged sill ("No structure", low structure or "Structure 1", and high structure or "Structure 2"), is described; results are presented, and a detailed discussion of the hydrodynamic implications of the results, using dimensionless parameters, is carried out. A detailed state-of-the-art analysis of perched beaches has been performed, presenting the perched beach concept and case studies from different countries. Besides, a careful revision of the literature on experimental studies of perched beaches, theoretical models, and other topics deemed necessary to formulate the methodology of this work has been completed. The study has been divided into two phases. Within the first phase, experiments on a movable-bed physical model have been developed. The physical model has been built in the Centro de Estudios de Puertos y Costas (CEPYC) facilities, Centro de Estudios y Experimentación de Obras Públicas (CEDEX). The wave flume is 36 m long, 3 m wide and 1.5 m high, and has a piston-type regular wave generator available. The test plan consisted of 15 tests resulting from five wave conditions attacking three different configurations of the perched beach. During the development of the tests, the beach profile has been surveyed at different intervals until equilibrium has been reached according to these measurements. Retreat of the shoreline and relative loss of sediment in volume have been obtained from the measurements. The total effective test time reaches nearly 650 hours, whereas the total number of beach evolution profiles measured amounts to 229. In the second phase, attention is focused on the analysis of results with the aim of understanding the phenomenon, identifying the governing variables and proposing engineering design guidelines. 
The effects of the wave height, the wave period, the dimensionless freeboard and the Dean parameter have been analyzed. The difficulty of understanding the way perched beaches work has been pointed out, since they turned out to be beneficial, neutral or harmful depending on the wave conditions and structure configuration. Besides, the beach profile response as a function of other dimensionless parameters, such as the Reynolds and Froude numbers, has been studied. In this analysis, the "plunger" parameter has been selected as the most representative, and relationships between the plunger parameter and the wave steepness, the dimensionless crest width, the dimensionless crest height, and the Dean parameter have been identified. Finally, a four-step engineering design method has been proposed that allows for the preliminary functional design of the perched beach for a given wave condition. The most relevant contributions from the scientific point of view have been: - The acquisition of a consistent set of experimental results. - The characterization of the behavior of perched beaches. - The proposed relationships between the plunger parameter and the different explanatory variables selected, which allow for the prediction of the beach behavior. - The proposed four-step design method for this type of coastal defense scheme.
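Two of the dimensionless tools mentioned above are standard in coastal engineering and easy to state explicitly (a hedged sketch with illustrative values, not data from the experiments): the Dean parameter, Omega = H / (w_s * T), which classifies the surf zone from wave height, sediment fall velocity and period, and the Dean equilibrium profile h(x) = A * x**(2/3), whose shape parameter A grows with grain size and governs whether native and fill profiles intersect:

```python
def dean_parameter(H, w_s, T):
    """Dean parameter Omega = H / (w_s * T).
    H: wave height [m], w_s: sediment fall velocity [m/s], T: period [s]."""
    return H / (w_s * T)


def equilibrium_depth(A, x):
    """Dean equilibrium profile depth h = A * x**(2/3) at a distance
    x [m] offshore; A [m**(1/3)] increases with grain size."""
    return A * x ** (2.0 / 3.0)


# Illustrative values (not from the tests described above):
omega = dean_parameter(H=1.5, w_s=0.03, T=8.0)
print(round(omega, 2))  # 6.25

# Borrow sand finer than the native (A_fill < A_native) gives a gentler
# fill profile, everywhere shallower at a given x: the non-intersecting
# case where a submerged sill becomes attractive.
print(equilibrium_depth(0.09, 100) < equilibrium_depth(0.12, 100))  # True
```

The sill truncates the fill profile seaward, so the nourishment can reach equilibrium with far less sand than the non-intersecting geometry would otherwise demand.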