943 results for Two dimensional pattern optical transfer


Relevance:

100.00%

Publisher:

Abstract:

SRI is unique among known photoreceptors in that it produces opposite signals depending on the color of light stimuli. Absorption of orange light (587 nm) triggers an attractant response by the cell, whereas absorption of orange light followed by near-UV light (373 nm) triggers a repellent response. Using behavioral mutants that exhibit aberrant color-sensing ability, we tested a two-conformation equilibrium model, using FRET and EPR spectroscopy. The essence of the model applied to SRI-HtrI is that the complex exists in a metastable two-conformer equilibrium which is shifted in one direction by orange light absorption (producing an attractant signal) and in the opposite direction by a second UV-violet photon (producing a repellent signal). First, by FRET we found that the E-F cytoplasmic loop of SRI moves toward the HAMP domain of the HtrI transducer during formation of the orange-light-activated signaling state of the complex. This is the first localization of a change in the physical relationship between the receptor and transducer subunits of the complex, and it provides a structural property of the two proposed conformers that we can monitor. Second, EPR spectra of a spin-label probe at this cytoplasmic position showed shifts in the dark in the mutants toward shorter or longer E-F loop-HAMP distances, explaining their behavior in terms of their mutations causing pre-stimulus shifts into one or the other conformer. Next, we applied a novel electrophysiological method for monitoring the directionality of proton movement during photoactivation of SRI to investigate the process of proton transfer in the photoactive site, from the chromophore to proton acceptors, in both the wild-type and the aberrant color-response mutants. We observed an unexpected and critical difference between the two signaling conformations of the SRI-HtrI complex: the vectoriality (i.e. movement away from or toward the cytoplasm) of the light-induced proton transfer from the chromophore to the protein is opposite in the formation of the two conformations. Retinylidene proton transfer is a common critical process in rhodopsins, and these results are the first to show differences in vectoriality in a rhodopsin receptor and to demonstrate the functional importance of the direction of proton transfer.

Relevance:

100.00%

Publisher:

Abstract:

Visual cortex of macaque monkeys consists of a large number of cortical areas that span the occipital, parietal, temporal, and frontal lobes and occupy more than half of the cortical surface. Although considerable progress has been made in understanding the contributions of many occipital areas to visual perceptual processing, much less is known concerning the specific functional contributions of higher areas in the temporal and frontal lobes. Previous behavioral and electrophysiological investigations have demonstrated that the inferotemporal cortex (IT) is essential to the animal's ability to recognize and remember visual objects. While it is generally recognized that IT consists of a number of anatomically and functionally distinct visual-processing areas, there remains considerable controversy concerning the precise number, size, and location of these areas. Therefore, the precise delineation of the cortical subdivisions of inferotemporal cortex is critical for any significant progress in understanding the specific contributions of inferotemporal areas to visual processing. In this study, anterograde and/or retrograde neuroanatomical tracers were injected into two visual areas in the ventral posterior and central portions of IT (areas PITv and CITvp) to elucidate the corticocortical connections of these areas with well-known areas of occipital cortex and with less well understood regions of inferotemporal cortex. The locations of injection sites and the delineation of the borders of many occipital areas were aided by the pattern of interhemispheric connections, revealed following callosal transection and subsequent labeling with HRP. The resultant patterns of connections were represented on two-dimensional computational (CARET) and manual cortical maps, and the laminar characteristics and density of the projection fields were quantified.
The laminar and density features of these corticocortical connections demonstrate thirteen anatomically distinct subdivisions or areas distributed within the superior temporal sulcus and across the inferotemporal gyrus. These results serve to refine previous descriptions of inferotemporal areas, validate recently identified areas, and provide a new description of the hierarchical relationships among occipitotemporal cortical areas in macaques.

Relevance:

100.00%

Publisher:

Abstract:

My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It includes three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses to make an early stopping decision. Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), which is defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combinational agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate the possible non-monotonic pattern for the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships. Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. 
To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Our design is based on formulating the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the trial, we use the current values of the posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial trial design: it yields a significantly higher probability of selecting the best treatment while allocating substantially more patients to efficacious treatments, and it provides higher power to identify the best treatment at the end of the trial. The proposed design is most appropriate for trials that combine multiple agents and screen for the efficacious combinations to be investigated further. Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and to decide whether an agent is promising enough to move to a phase III trial. Interim monitoring is employed to stop the trial early for futility, to avoid assigning an unacceptable number of patients to inferior treatments. We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug.
To address the issue of late-onset responses, we use a piece-wise exponential model to estimate the hazard function of time to response data and handle the missing responses using the multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies. We show that the proposed method reduces the total length of the trial duration and yields desirable operating characteristics for different physician-specified lower bounds of response rate with different true response rates.
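The interim futility rule described above can be sketched with a conjugate Beta-binomial model. This is a deliberate simplification of the abstract's design (which handles late-onset responses with a piece-wise exponential model and multiple imputation); the uniform prior, the null response rate `p0`, and the stopping threshold below are illustrative assumptions only:

```python
from math import comb

def post_prob_exceeds(responses, n, p0, a=1, b=1):
    """Pr(p > p0 | data) under a Beta(a, b) prior and binomial likelihood.

    Uses the identity linking the Beta CDF to a binomial tail sum,
    valid when the (posterior) shape parameters are integers."""
    a_post, b_post = a + responses, b + (n - responses)
    m = a_post + b_post - 1
    return sum(comb(m, k) * p0**k * (1 - p0)**(m - k) for k in range(a_post))

def futility_stop(responses, n, p0=0.2, threshold=0.05):
    """Stop early if the posterior chance that the true response rate
    beats the uninteresting level p0 has become negligible."""
    return post_prob_exceeds(responses, n, p0) < threshold
```

For example, with a Beta(1, 1) prior and 0 responses in 15 patients, Pr(p > 0.2 | data) = 0.8^16 ≈ 0.028, which falls below the 0.05 threshold and triggers a futility stop; 0 responses in 10 patients (0.8^11 ≈ 0.086) does not.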

Relevance:

100.00%

Publisher:

Abstract:

Ocean acidification (OA) is beginning to have a noticeable negative impact on the calcification rate, shell structure, and physiological energy budgeting of several marine organisms; these effects alter the growth of many economically important shellfish, including oysters. Early life stages of oysters may be particularly vulnerable to OA-driven low-pH conditions because their shell is made of aragonite, a highly soluble form of the calcium carbonate (CaCO3) mineral. Our long-term CO2 perturbation experiment showed that the larval shell growth rate of the oyster species Crassostrea hongkongensis was significantly reduced at pH < 7.9 compared to the control (pH 8.2). To gain new insights into the underlying mechanisms of low-pH-induced delays in larval growth, we examined the effect of pH on the protein expression pattern, including protein phosphorylation status, at the pediveliger larval stage. Using two-dimensional electrophoresis and mass spectrometry, we demonstrated that the larval proteome was significantly altered by the two low-pH treatments (7.9 and 7.6) compared to the control pH (8.2). In general, the number of expressed proteins and their phosphorylation levels decreased at low pH. Proteins involved in larval energy metabolism and calcification appeared to be down-regulated in response to low pH, whereas cell motility and the production of cytoskeletal proteins increased. This study of larval growth coupled with proteome change is the first step in the search for novel Protein Expression Signatures indicative of low pH, which may help in understanding the mechanisms involved in low-pH tolerance.

Relevance:

100.00%

Publisher:

Abstract:

In low-accumulation regions, the reliability of d18O-derived temperature signals from ice cores within the Holocene is unclear, primarily because the climate changes are small relative to the intrinsic noise of the isotopic signal. In order to learn about the representativeness of single ice cores and to optimise future ice-core-based climate reconstructions, we studied the stable-water isotope composition of firn at Kohnen Station, Dronning Maud Land, Antarctica. Analysing d18O in two 50 m long snow trenches allowed us to create an unprecedented two-dimensional image characterising the isotopic variations from the centimetre to the hundred-metre scale. This data set includes the complete trench oxygen isotope record together with the metadata used in the study.

Relevance:

100.00%

Publisher:

Abstract:

Increasing amounts of data are collected in most areas of research and application. The degree to which these data can be accessed, analyzed, and retrieved is decisive for progress in fields such as scientific research or industrial production. We present a novel methodology supporting content-based retrieval and exploratory search in repositories of multivariate research data. In particular, our methods are able to describe two-dimensional functional dependencies in research data, e.g. the relationship between inflation and unemployment in economics. Our basic idea is to use feature vectors based on the goodness of fit of a set of regression models to describe the data mathematically. We denote this approach Regressional Features and use it for content-based search and, since our approach motivates an intuitive definition of interestingness, for exploring the most interesting data. We apply our method to sizable real-world research datasets, showing the usefulness of our approach for user-centered access to research data in a Digital Library system.

Relevance:

100.00%

Publisher:

Abstract:

A new method is presented to generate reduced-order models (ROMs) in fluid dynamics problems of industrial interest. The method is based on the expansion of the flow variables in a Proper Orthogonal Decomposition (POD) basis, calculated from a limited number of snapshots obtained via Computational Fluid Dynamics (CFD). The POD-mode amplitudes are then calculated as minimizers of a properly defined overall residual of the equations and boundary conditions. The method includes various ingredients that are new in this field. The residual can be calculated using only a limited number of points in the flow field, which can be scattered either over the whole computational domain or over a smaller projection window. The resulting ROM is both computationally efficient (reconstructed flow fields require, in cases that do not present shock waves, less than 1% of the time needed to compute a full CFD solution) and flexible (the projection window can avoid regions of large localized CFD errors). Also, for problems related to aerodynamics, POD modes are obtained from a set of snapshots calculated by a CFD method based on the compressible Navier-Stokes equations and a turbulence model (which furthermore includes some unphysical stabilizing terms added for purely numerical reasons), but projection onto the POD manifold is made using the inviscid Euler equations, which makes the method independent of the CFD scheme. In addition, shock waves are treated specifically in the POD description, to avoid the need for an excessively large number of snapshots. Various definitions of the residual are also discussed, along with the number and distribution of snapshots, the number of retained modes, and the effect of CFD errors.
The method is checked and discussed on several test problems that describe (i) heat transfer in the recirculation region downstream of a backward-facing step, (ii) the flow past a two-dimensional airfoil in both the subsonic and transonic regimes, and (iii) the flow past a three-dimensional horizontal tail plane. The method is both efficient and numerically robust, in the sense that the computational effort is quite small compared to CFD and the results are both reasonably accurate and largely insensitive to the definition of the residual, to CFD errors, and to the CFD method itself, which may contain artificial stabilizing terms. Thus, the method is amenable to practical engineering applications.
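The amplitude-calculation step can be illustrated in miniature. The sketch below assumes the POD modes are already available and recovers two mode amplitudes by minimizing the squared reconstruction residual over a scattered subset of points (the "projection window" of the abstract), via closed-form 2x2 normal equations. Note the simplification: the actual method minimizes a residual of the governing equations and boundary conditions, not of the field itself:

```python
def pod_amplitudes(modes, field, points):
    """Least-squares amplitudes c1, c2 for a two-mode POD reconstruction
    c1*phi1 + c2*phi2 of `field`, evaluated only at the indices in
    `points` (a scattered 'projection window'), by solving the 2x2
    normal equations in closed form."""
    p1, p2 = modes
    a11 = sum(p1[j] * p1[j] for j in points)
    a12 = sum(p1[j] * p2[j] for j in points)
    a22 = sum(p2[j] * p2[j] for j in points)
    b1 = sum(p1[j] * field[j] for j in points)
    b2 = sum(p2[j] * field[j] for j in points)
    det = a11 * a22 - a12 * a12  # assumes the modes are independent on the window
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

Even when the residual is sampled at only a few of the field's points, the amplitudes of a field that lies in the span of the modes are recovered exactly, which is the property that lets the projection window skip regions of large localized CFD error.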

Relevance:

100.00%

Publisher:

Abstract:

Linear three-dimensional modal instability of steady laminar two-dimensional states developing in a lid-driven cavity of isosceles triangular cross-section is investigated theoretically and experimentally for the case in which the equal sides form a rectangular corner. An asymmetric steady two-dimensional motion is driven by the steady motion of one of the equal sides. If the side moves away from the rectangular corner, a stationary three-dimensional instability is found. If the motion is directed towards the corner, the instability is oscillatory. The respective critical Reynolds numbers are identified both theoretically and experimentally. The neutral curves pertinent to the two configurations and the properties of the respective leading eigenmodes are documented and analogies to instabilities in rectangular lid-driven cavities are discussed.

Relevance:

100.00%

Publisher:

Abstract:

The two-dimensional analytic optics design method presented in a previous paper [Opt. Express 20, 5576–5585 (2012)] is extended in this work to the three-dimensional case, enabling the coupling of three ray sets with two free-form lens surfaces. Fermat’s principle is used to deduce additional sets of functional differential equations which make it possible to calculate the lens surfaces. Ray tracing simulations demonstrate the excellent imaging performance of the resulting free-form lenses described by more than 100 coefficients.

Relevance:

100.00%

Publisher:

Abstract:

This thesis deals with the study of the so-called mat buildings, which emerged between the fifties and sixties of the last century. Mat, or carpet, buildings appeared largely as a result of the CIAM's disagreement and dissatisfaction with functionalist reductionism and the principles of functional compartmentalisation. These new models replaced the model of the city, seen as a collection of individual buildings, with the concept of an urban pattern. It is not the sum of the length, height and width but rather a dense, two-dimensional mat with a configuration of forms offering both a repetitive order and an infinite diversity of sequences with endless possibilities for adaptation, where man lives and circulates. These characteristics, which appeared in the works of many of the architects who formed part of Team X, are those that Alison Smithson started to reveal in her article with the aim of manifesting a new sensibility and a new way of understanding and seeing architecture. Mat buildings and clusters were the codes used by different members of Team X to plan an alternative architecture and urbanism to that proposed by the CIAM. With them, they found the path for a new aesthetic of connection, with a shift from a deterministic concept of the architectural form (closed and generally defined a priori) towards a more free, more open attitude based not so much on the integrity of the overall form but on the intensity of its internal networks and different levels of association.
The end purpose of this thesis is to question whether this type of building, the basic principle of which is always an open geometric matrix (grid, reticle, network) with unlimited growth, can redefine the boundary between city and building and, thus, between public and private, individual and collective, structural and infrastructural, and permanent and variable. To this end, an in-depth historical and critical study of mat buildings is presented, analysing carefully and in chronological order five of the most paradigmatic works of this style: the Orphanage in Amsterdam, by Aldo Van Eyck; the Free University of Berlin, by Candilis, Josic and Woods; Venice Hospital, by Le Corbusier and Guillermo Jullián de la Fuente; the Centraal Beheer administration building in Apeldoorn, by Herman Hertzberger; and lastly, the MUSAC (Contemporary Art Museum) in León, designed by Mansilla and Tuñon. The first four works are from the Team X period and were the precursors to many other projects that would appear later. The last work analysed, the MUSAC, is studied together with some works by Japanese architect Sou Fujimoto and other contemporary cases to show how architects with very different perspectives revert to these models of limitless growth. Through the study of several contemporary examples we examine the repercussions, transformations and evolutions these models have had. The contrasted research allows us to properly appreciate the importance of these models and the changes in forms and processes that came with the emergence of the idea of field in the contemporary arena and the paradigm shifts it entailed. These new models opened up new processes and a way of approaching architecture based on relationships, flows, movements and associations characterised by different patterns that feed the entire process of the architectural project.
The study of these new models shows us the benefits that a review of these methods can contribute to addressing new issues that today appear to be a permanent part of the urban condition.

Relevance:

100.00%

Publisher:

Abstract:

La optimización de parámetros tales como el consumo de potencia, la cantidad de recursos lógicos empleados o la ocupación de memoria ha sido siempre una de las preocupaciones principales a la hora de diseñar sistemas embebidos. Esto es debido a que se trata de sistemas dotados de una cantidad de recursos limitados, y que han sido tradicionalmente empleados para un propósito específico, que permanece invariable a lo largo de toda la vida útil del sistema. Sin embargo, el uso de sistemas embebidos se ha extendido a áreas de aplicación fuera de su ámbito tradicional, caracterizadas por una mayor demanda computacional. Así, por ejemplo, algunos de estos sistemas deben llevar a cabo un intenso procesado de señales multimedia o la transmisión de datos mediante sistemas de comunicaciones de alta capacidad. Por otra parte, las condiciones de operación del sistema pueden variar en tiempo real. Esto sucede, por ejemplo, si su funcionamiento depende de datos medidos por el propio sistema o recibidos a través de la red, de las demandas del usuario en cada momento, o de condiciones internas del propio dispositivo, tales como la duración de la batería. Como consecuencia de la existencia de requisitos de operación dinámicos es necesario ir hacia una gestión dinámica de los recursos del sistema. Si bien el software es inherentemente flexible, no ofrece una potencia computacional tan alta como el hardware. Por lo tanto, el hardware reconfigurable aparece como una solución adecuada para tratar con mayor flexibilidad los requisitos variables dinámicamente en sistemas con alta demanda computacional. La flexibilidad y adaptabilidad del hardware requieren de dispositivos reconfigurables que permitan la modificación de su funcionalidad bajo demanda. 
En esta tesis se han seleccionado las FPGAs (Field Programmable Gate Arrays) como los dispositivos más apropiados, hoy en día, para implementar sistemas basados en hardware reconfigurable De entre todas las posibilidades existentes para explotar la capacidad de reconfiguración de las FPGAs comerciales, se ha seleccionado la reconfiguración dinámica y parcial. Esta técnica consiste en substituir una parte de la lógica del dispositivo, mientras el resto continúa en funcionamiento. La capacidad de reconfiguración dinámica y parcial de las FPGAs es empleada en esta tesis para tratar con los requisitos de flexibilidad y de capacidad computacional que demandan los dispositivos embebidos. La propuesta principal de esta tesis doctoral es el uso de arquitecturas de procesamiento escalables espacialmente, que son capaces de adaptar su funcionalidad y rendimiento en tiempo real, estableciendo un compromiso entre dichos parámetros y la cantidad de lógica que ocupan en el dispositivo. A esto nos referimos con arquitecturas con huellas escalables. En particular, se propone el uso de arquitecturas altamente paralelas, modulares, regulares y con una alta localidad en sus comunicaciones, para este propósito. El tamaño de dichas arquitecturas puede ser modificado mediante la adición o eliminación de algunos de los módulos que las componen, tanto en una dimensión como en dos. Esta estrategia permite implementar soluciones escalables, sin tener que contar con una versión de las mismas para cada uno de los tamaños posibles de la arquitectura. De esta manera se reduce significativamente el tiempo necesario para modificar su tamaño, así como la cantidad de memoria necesaria para almacenar todos los archivos de configuración. En lugar de proponer arquitecturas para aplicaciones específicas, se ha optado por patrones de procesamiento genéricos, que pueden ser ajustados para solucionar distintos problemas en el estado del arte. 
A este respecto, se proponen patrones basados en esquemas sistólicos, así como de tipo wavefront. Con el objeto de poder ofrecer una solución integral, se han tratado otros aspectos relacionados con el diseño y el funcionamiento de las arquitecturas, tales como el control del proceso de reconfiguración de la FPGA, la integración de las arquitecturas en el resto del sistema, así como las técnicas necesarias para su implementación. Por lo que respecta a la implementación, se han tratado distintos aspectos de bajo nivel dependientes del dispositivo. Algunas de las propuestas realizadas a este respecto en la presente tesis doctoral son un router que es capaz de garantizar el correcto rutado de los módulos reconfigurables dentro del área destinada para ellos, así como una estrategia para la comunicación entre módulos que no introduce ningún retardo ni necesita emplear recursos configurables del dispositivo. El flujo de diseño propuesto se ha automatizado mediante una herramienta denominada DREAMS. La herramienta se encarga de la modificación de las netlists correspondientes a cada uno de los módulos reconfigurables del sistema, y que han sido generadas previamente mediante herramientas comerciales. Por lo tanto, el flujo propuesto se entiende como una etapa de post-procesamiento, que adapta esas netlists a los requisitos de la reconfiguración dinámica y parcial. Dicha modificación la lleva a cabo la herramienta de una forma completamente automática, por lo que la productividad del proceso de diseño aumenta de forma evidente. Para facilitar dicho proceso, se ha dotado a la herramienta de una interfaz gráfica. El flujo de diseño propuesto, y la herramienta que lo soporta, tienen características específicas para abordar el diseño de las arquitecturas dinámicamente escalables propuestas en esta tesis. 
Among them are the support for relocating reconfigurable modules to positions of the device other than the one where the module was originally implemented, as well as the generation of communication structures compatible with the symmetry of the architecture. The router has also been used in this thesis to obtain symmetric routing between equivalent nets. This possibility has been exploited to increase the protection of circuits with high security requirements against side-channel attacks, by implementing complementary logic with identical routing. To control the FPGA reconfiguration process, this thesis proposes a reconfiguration engine specially adapted to the requirements of the dynamically scalable architectures. Besides controlling the reconfiguration port, the reconfiguration engine has been given the ability to relocate reconfigurable modules to arbitrary positions of the device at run-time. In this way, it suffices to generate a single bitstream per reconfigurable module of the system, regardless of the position where it will finally be reconfigured. The strategy followed to implement the module relocation process differs from the existing proposals in the state of the art, since it consists in composing the configuration files at run-time. This increases the speed of the process while reducing the length of the partial configuration files to be stored in the system. The reconfiguration engine supports reconfigurable modules whose height is smaller than the height of a clock region of the device. Internally, the engine combines the frames describing the new module with the configuration previously present in the device. The scaling of the processing architectures proposed in this thesis can also benefit from this mechanism. 
A direct link to an external memory where partial bitstreams can be stored has also been incorporated. To accelerate the reconfiguration process, the ICAP has been operated above the maximum clock frequency recommended by the manufacturer. Thus, in the case of Virtex-5, although the maximum clock frequency should be 100 MHz, the reconfiguration port has been made to work at frequencies of up to 250 MHz, including the run-time relocation process. The possibility of porting the reconfiguration engine to future FPGA families has also been considered. In addition, the reconfiguration engine can be used to inject faults into the hardware device itself, making it possible to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by generating configuration files into which an error has been intentionally introduced, so that their functionality is modified. In order to verify the validity and the benefits of the architectures proposed in this thesis, two main application lines have been pursued. First, their use is proposed as part of an adaptive platform based on evolvable hardware, with scalability, adaptability and fault-recovery capabilities. Second, a scalable deblocking filter, suited to scalable video coding, has been developed as an application example of the proposed wavefront-type architectures. Evolvable hardware is the use of evolutionary algorithms to design hardware autonomously, exploiting the flexibility offered by reconfigurable devices. In this case, the processing elements that compose the architecture are selected from a library of pre-synthesized elements according to the decisions taken by the evolutionary algorithm, instead of defining their configuration at design time. 
In this way, the configuration of the core can change at run-time as the environmental conditions do, so autonomous control of the dynamic reconfiguration process is achieved. Thus, the system is able to optimize its own configuration autonomously. Evolvable hardware has an inherent self-healing capability. The evolvable architectures proposed in this thesis have been shown to tolerate transient, permanent and cumulative faults. The evolvable platform has been used to implement noise-removal filters. Scalability has also been exploited in this application. The scalable evolvable architectures allow the autonomous adaptation of the processing cores to fluctuations in the amount of resources available in the system. They therefore constitute an example of dynamic scalability used to reach a given quality level, which may vary at run-time. Two variants of scalable evolvable systems have been proposed. The first one consists of a single evolvable processing core, while the second one is formed by a variable number of processing arrays. Scalable video coding, unlike non-scalable codecs, allows decoding video sequences with different levels of quality, temporal resolution or spatial resolution, by discarding the unwanted information. Several algorithms support this feature. In particular, the Scalable Video Coding (SVC) standard, proposed as an extension of H.264/AVC, is used here, since the latter is widely adopted both in industry and in research. In order to exploit all the flexibility offered by the standard, the characteristics of the decoder must be adaptable at run-time. The use of dynamically scalable architectures is proposed in this thesis with this aim. 
The deblocking filter is an algorithm aimed at improving the visual perception of the reconstructed image by smoothing the blocking artifacts generated in the encoder loop. It is one of the most data-intensive tasks of H.264/AVC and SVC and, moreover, its computational load is highly dependent on the scalability level selected in the decoder. The deblocking filter has therefore been selected as a proof of concept of the application of the dynamically scalable architectures to video compression. The proposed architecture allows computation units to be added or removed, following a wavefront scheme. The architecture has been proposed together with a macroblock-level parallel processing scheme for the deblocking filter, such that when the size of the architecture changes, the filtering order of the macroblocks changes accordingly. The proposed pattern is based on dividing the processing of each macroblock into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock. The main original contributions of this thesis are the following:
- The use of highly regular, modular, parallel architectures with intense communication locality to implement dynamically reconfigurable processing cores.
- The use of two-dimensional, mesh-shaped architectures to build dynamically scalable architectures with a scalable footprint. In this way, the architectures make it possible to trade off the area they occupy in the device against the performance they offer at any given time. Generic processing templates, of systolic or wavefront type, which can be adapted to different processing problems, are proposed. 
- A design flow, and a tool that supports it, for the design of dynamically reconfigurable systems, focused on the design of the highly parallel, modular and regular architectures proposed in this thesis.
- A communication scheme between reconfigurable modules that introduces no delay and requires no dedicated logic resources.
- A flexible router, capable of solving the routing conflicts associated with the design of dynamically reconfigurable systems.
- An optimization algorithm for systems formed by multiple scalable cores, which optimizes the parameters of such a system by means of a genetic algorithm. It is based on a model known as the knapsack problem.
- A reconfiguration engine adapted to the requirements of highly regular and modular architectures. It combines a high reconfiguration speed with the ability to relocate modules at run-time, including support for reconfiguring regions smaller than a clock region, as well as the replication of a reconfigurable module in multiple positions of the device.
- A fault-injection mechanism that, using the reconfiguration engine of the system, allows the effects of permanent and transient faults on reconfigurable architectures to be evaluated.
- The demonstration of the possibilities of the architectures proposed in this thesis for the implementation of evolvable hardware systems with a high data-processing capacity.
- The implementation of scalable evolvable hardware systems, which are able to cope autonomously with fluctuations in the amount of resources available in the system.
- A parallel processing strategy for the deblocking filter, compatible with the H.264/AVC and SVC standards, which reduces the number of macroblock cycles needed to process a video frame. 
- A dynamically scalable architecture that enables the implementation of a novel deblocking filter, fully compliant with the H.264/AVC and SVC standards, which exploits macroblock-level parallelism.

This document is organized in seven chapters. The first one provides an introduction to the technological framework of this thesis, with special focus on the dynamic and partial reconfiguration of FPGAs. The need for the dynamically scalable architectures proposed in this thesis is also motivated there. Chapter 2 describes the dynamically scalable architectures; this description includes most of the architectural contributions of this thesis. The design flow adapted to these architectures is proposed in chapter 3. The reconfiguration engine is proposed in chapter 4, while the use of these architectures to implement evolvable hardware systems is addressed in chapter 5. The scalable deblocking filter is described in chapter 6, and the final conclusions of this thesis, together with the description of future work, are addressed in chapter 7.

ABSTRACT

The optimization of system parameters, such as power dissipation, the amount of hardware resources and the memory footprint, has always been a main concern when dealing with the design of resource-constrained embedded systems. This situation is even more demanding nowadays. Embedded systems can no longer be considered only as special-purpose computers, designed for a particular functionality that remains unchanged during their lifetime. Instead, embedded systems are now required to deal with more demanding and complex functions, such as multimedia data processing and high-throughput connectivity. In addition, system operation may depend on external data, user requirements or internal variables of the system, such as the battery lifetime. All these conditions may vary at run-time, leading to adaptive scenarios. 
As a consequence of both the growing computational complexity and the existence of dynamic requirements, dynamic resource management techniques for embedded systems are needed. Software is inherently flexible, but it cannot match the computing power offered by hardware solutions. Therefore, reconfigurable hardware emerges as a suitable technology to deal with the run-time variable requirements of complex embedded systems. Adaptive hardware requires the use of reconfigurable devices, whose functionality can be modified on demand. In this thesis, Field Programmable Gate Arrays (FPGAs) have been selected as the most appropriate commercial technology existing nowadays to implement adaptive hardware systems. There are different ways of exploiting reconfigurability in reconfigurable devices. Among them is dynamic and partial reconfiguration, a technique which consists in substituting part of the FPGA logic on demand, while the rest of the device continues working. The strategy followed in this thesis is to exploit the dynamic and partial reconfiguration of commercial FPGAs to deal with the flexibility and complexity demands of state-of-the-art embedded systems. The proposal of this thesis to deal with run-time variable system conditions is the use of spatially scalable processing hardware IP cores, which are able to adapt their functionality or performance at run-time, trading them off against the amount of logic resources they occupy in the device. This is referred to as a scalable footprint in the context of this thesis. The distinguishing characteristic of the proposed cores is that they rely on highly parallel, modular and regular architectures, arranged in one or two dimensions. These architectures can be scaled by means of the addition or removal of the composing blocks. 
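The footprint/performance trade-off of such a scalable array can be sketched as a toy model. This is not the thesis implementation; the per-module cost and throughput figures below are invented for illustration only:

```python
# Hypothetical model of a two-dimensional scalable array: area and
# throughput both grow with the number of composing modules, so scaling
# the array trades device logic for performance. All figures are invented.

def footprint(rows: int, cols: int, slices_per_module: int = 400) -> int:
    """Logic occupied by a rows x cols array of identical modules."""
    return rows * cols * slices_per_module

def throughput(rows: int, cols: int, per_module: float = 1.0) -> float:
    """Aggregate throughput of the array, one unit per processing element."""
    return rows * cols * per_module

# Scaling up from 2x2 to 4x4 quadruples both area and performance; only
# the added modules need to be reconfigured, the rest keep working.
assert footprint(4, 4) == 4 * footprint(2, 2)
assert throughput(4, 4) == 4 * throughput(2, 2)
```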
This strategy avoids implementing a full version of the core for each possible size, with the corresponding benefits in terms of scaling and adaptation time, as well as bitstream storage requirements. Instead of providing specific-purpose architectures, generic architectural templates, which can be tuned to solve different problems, are proposed in this thesis. Architectures following both systolic and wavefront templates have been selected. Together with the proposed scalable architectural templates, other issues needed to ensure the proper design and operation of the scalable cores, such as the device reconfiguration control, the run-time management of the architecture and the implementation techniques, have also been addressed in this thesis. With regard to the implementation of dynamically reconfigurable architectures, device-dependent low-level details are addressed. Some of the aspects covered in this thesis are the area-constrained routing of reconfigurable modules, and an inter-module communication strategy which introduces neither extra delay nor logic overhead. The system implementation, from the hardware description to the device configuration bitstream, has been fully automated by modifying the netlists corresponding to each of the system modules, which are previously generated using the vendor tools. This modification is therefore envisaged as a post-processing step. Based on these implementation proposals, a design tool called DREAMS (Dynamically Reconfigurable Embedded and Modular Systems) has been created, including a graphical user interface. The tool has specific features to cope with modular and regular architectures, including the support for module relocation and an inter-module communication scheme based on the symmetry of the architecture. 
The core of the tool is a custom router, which has also been exploited in this thesis to obtain symmetrically routed nets, with the aim of enhancing the protection of critical reconfigurable circuits against side-channel attacks. This is achieved by duplicating the logic with exactly equal routing. In order to control the reconfiguration process of the FPGA, a Reconfiguration Engine suited to the specific requirements set by the proposed architectures is also proposed. In addition to controlling the reconfiguration port, the Reconfiguration Engine has been enhanced with an online relocation capability, which allows a single configuration bitstream to be employed for all the positions where the module may be placed in the device. Unlike existing relocation solutions, which are based on bitstream parsers, the proposed approach is based on the online composition of bitstreams. This strategy increases the speed of the process, while the length of the partial bitstreams is also reduced. The height of the reconfigurable modules can be lower than the height of a clock region. The Reconfiguration Engine manages the merging of the new and the existing configuration frames within each clock region. The process of scaling the hardware cores up and down also benefits from this technique. A direct link to an external memory where partial bitstreams can be stored has also been implemented. In order to accelerate the reconfiguration process, the ICAP has been overclocked beyond the maximum frequency reported by the manufacturer. In the case of Virtex-5, even though the maximum frequency of the ICAP is reported to be 100 MHz, valid operation at 250 MHz has been achieved, including the online relocation process. Portability of the reconfiguration solution to current and, probably, future FPGAs has also been considered. 
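The relocation-by-composition idea can be hedged into a small sketch: a module's configuration frames are paired with a new frame address at reconfiguration time, and each new frame is merged with what is already in the device (needed when the module is shorter than a clock region). The frame-address layout and the mask-based merge below are invented for illustration; real Xilinx frame addressing is device-specific:

```python
# Hedged sketch of online bitstream composition. A module is stored once
# as a list of frames; relocation just re-targets the frame addresses.
# FRAME_WORDS and the linear address scheme are simplifying assumptions.

FRAME_WORDS = 41  # words per configuration frame in Virtex-5

def relocate(frames: list, base_addr: int) -> list:
    """Pair each module frame with a frame address at the target position."""
    return [(base_addr + i, frame) for i, frame in enumerate(frames)]

def merge(existing: list, new: list, mask: list) -> list:
    """Combine a sub-clock-region module frame with the configuration already
    in the device: where the mask is set, bits come from the new module;
    elsewhere the existing configuration is preserved."""
    return [(e & ~m) | (n & m) for e, n, m in zip(existing, new, mask)]

# The same stored frames can be placed at two different base addresses.
assert relocate([[1], [2]], 100) == [(100, [1]), (101, [2])]
assert relocate([[1], [2]], 200) == [(200, [1]), (201, [2])]
```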
The reconfiguration engine can also be used to inject faults into real hardware devices, thus making it possible to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by introducing partial bitstreams that have been intentionally modified to provide erroneous functionality. To prove the validity and the benefits offered by the proposed architectures, two demonstration application lines have been envisaged. First, scalable architectures have been employed to develop an evolvable hardware platform with adaptability, fault tolerance and scalability properties. Second, they have been used to implement a scalable deblocking filter suited to scalable video coding. Evolvable hardware is the use of evolutionary algorithms to design hardware in an autonomous way, exploiting the flexibility offered by reconfigurable devices. In this case, the processing elements composing the architecture are selected from a presynthesized library of processing elements, according to the decisions taken by the algorithm, instead of being decided at design time. This way, the configuration of the array may change as run-time environmental conditions do, achieving autonomous control of the dynamic reconfiguration process. Thus, the self-optimization property is added to the native self-configurability of the dynamically scalable architectures. In addition, evolvable hardware adaptability inherently offers self-healing features. The proposal has proved to be fault-tolerant, since it is able to self-recover from both transient and cumulative permanent faults. The proposed evolvable architecture has been used to implement noise-removal image filters. Scalability has also been exploited in this application. Scalable evolvable hardware architectures allow the autonomous adaptation of the processing cores to a fluctuating amount of resources available in the system. Thus, it constitutes an example of the dynamic quality scalability tackled in this thesis. 
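The fault-emulation mechanism described above can be illustrated with a minimal sketch: flip one configuration bit of a partial bitstream before it is written to the device, so that the reconfigured logic misbehaves in a controlled way. The byte/bit layout is hypothetical, not a real bitstream format:

```python
# Illustrative sketch of fault emulation by bitstream corruption: a single
# configuration bit is flipped, emulating an SEU-like fault in the logic
# described by the partial bitstream. The payload below is invented.

def inject_fault(bitstream: bytes, byte_index: int, bit: int) -> bytes:
    """Return a copy of the bitstream with one configuration bit flipped."""
    corrupted = bytearray(bitstream)
    corrupted[byte_index] ^= 1 << bit
    return bytes(corrupted)

original = bytes([0b10110001, 0b00000000])
faulty = inject_fault(original, 0, 3)
assert faulty != original
# Flipping the same bit again restores the original configuration, which is
# how a transient fault can be "repaired" by reconfiguring the module.
assert inject_fault(faulty, 0, 3) == original
```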
Two variants have been proposed. The first one consists of a single dynamically scalable evolvable core, while the second one contains a variable number of processing cores. Scalable video is a flexible approach to video compression, which offers scalability at different levels. Unlike non-scalable codecs, a scalable video bitstream can be decoded at different levels of quality, spatial or temporal resolution, by discarding the undesired information. The interest in this technology has been fostered by the development of the Scalable Video Coding (SVC) standard, an extension of H.264/AVC. In order to exploit all the flexibility offered by the standard, it is necessary to adapt the characteristics of the decoder to the requirements of each client at run-time. The use of dynamically scalable architectures is proposed in this thesis with this aim. The deblocking filter algorithm is responsible for improving the visual perception of a reconstructed image, by smoothing the blocking artifacts generated in the encoding loop. It is one of the most computationally intensive tasks of the standard and, furthermore, its load is highly dependent on the scalability level selected in the decoder. Therefore, the deblocking filter has been selected as a proof of concept of the implementation of dynamically scalable architectures for video compression. The proposed architecture allows the run-time addition or removal of computational units working in parallel to change its level of parallelism, following a wavefront computational pattern. The scalable architecture is offered together with a scalable parallelization strategy at the macroblock level, such that when the size of the architecture changes, the macroblock filtering order is modified accordingly. The proposed pattern is based on the division of the macroblock processing into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock. 
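The macroblock-level wavefront idea can be sketched in a few lines. This is a generic wavefront scheduler, not the thesis design: macroblock (r, c) is assumed to depend on its left and upper neighbours, so all macroblocks on the same anti-diagonal can be filtered in parallel, one per processing unit:

```python
# Hedged sketch of wavefront scheduling at macroblock level. Dependencies
# (left and upper neighbours) are a common simplification of deblocking
# order, not the exact H.264/AVC dependency set.

def wavefront_schedule(rows: int, cols: int, units: int) -> list:
    """Group macroblocks into time steps; at most `units` run per step."""
    diagonals = [[] for _ in range(rows + cols - 1)]
    for r in range(rows):
        for c in range(cols):
            diagonals[r + c].append((r, c))  # same anti-diagonal: independent
    steps = []
    for diag in diagonals:
        # A diagonal wider than the array is split across extra steps.
        for i in range(0, len(diag), units):
            steps.append(diag[i:i + units])
    return steps

# Adding units reduces the number of macroblock cycles needed per frame,
# which is the scalability knob exploited by the proposed architecture.
assert len(wavefront_schedule(4, 4, 1)) == 16
assert len(wavefront_schedule(4, 4, 4)) == 7
```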
The main contributions of this thesis are:
- The use of highly parallel, modular, regular and local architectures to implement dynamically reconfigurable processing IP cores, for data-intensive applications with flexibility requirements.
- The use of two-dimensional mesh-type arrays as architectural templates to build dynamically reconfigurable IP cores with a scalable footprint. The proposal consists in generic architectural templates, which can be tuned to solve different computational problems.
- A design flow and a tool targeting the design of DPR systems, focused on highly parallel, modular and local architectures.
- An inter-module communication strategy, named Virtual Borders, which introduces neither delay nor area overhead.
- A custom and flexible router to solve the routing conflicts, as well as the inter-module communication problems, that appear during the design of DPR systems.
- An algorithm addressing the optimization of systems composed of multiple scalable cores, whose sizes can be decided individually, to optimize the system parameters. It is based on a model known as the multi-dimensional multiple-choice knapsack problem.
- A reconfiguration engine tailored to the requirements of highly regular and modular architectures. It combines a high reconfiguration throughput with run-time module relocation capabilities, including support for reconfigurable regions smaller than a clock region and for replication in multiple positions.
- A fault-injection mechanism which takes advantage of the system reconfiguration engine, as well as of the modularity of the proposed reconfigurable architectures, to evaluate the effects of transient and permanent faults in these architectures.
- The demonstration of the possibilities of the architectures proposed in this thesis to implement evolvable hardware systems, while keeping a high processing throughput. 
- The implementation of scalable evolvable hardware systems, which are able to adapt to fluctuations in the amount of resources available in the system, in an autonomous way.
- A parallelization strategy for the H.264/AVC and SVC deblocking filter, which reduces the number of macroblock cycles needed to process a whole frame.
- A dynamically scalable architecture that permits the implementation of a novel deblocking filter module, fully compliant with the H.264/AVC and SVC standards, which exploits the macroblock-level parallelism of the algorithm.

This document is organized in seven chapters. In the first one, an introduction to the technological framework of this thesis, specially focused on dynamic and partial reconfiguration, is provided. The need for the dynamically scalable processing architectures proposed in this work is also motivated in this chapter. In chapter 2, the dynamically scalable architectures are described; the description includes most of the architectural contributions of this work. The design flow tailored to the scalable architectures, together with the DREAMS tool provided to implement them, is described in chapter 3. The reconfiguration engine is described in chapter 4. The use of the proposed scalable architectures to implement evolvable hardware systems is described in chapter 5, while the scalable deblocking filter is described in chapter 6. The final conclusions of this thesis, and the description of future work, are addressed in chapter 7.
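The multiple-choice knapsack formulation mentioned among the contributions can be illustrated with a toy instance: each scalable core offers several sizes (choices) with an area cost and a performance value, and the optimizer picks one size per core without exceeding the device area. The tiny exhaustive solver below stands in for the genetic algorithm used in the thesis; all sizes and budgets are invented:

```python
# Hedged sketch of the (single-constraint) multiple-choice knapsack model
# behind the multi-core optimization. The thesis uses a genetic algorithm
# on the multi-dimensional variant; brute force is only viable for toys.

from itertools import product

def best_configuration(cores, area_budget):
    """cores: one list of (area, performance) choices per scalable core.
    Returns the feasible choice tuple maximizing total performance."""
    best, best_perf = None, -1.0
    for choice in product(*cores):
        area = sum(a for a, _ in choice)
        perf = sum(p for _, p in choice)
        if area <= area_budget and perf > best_perf:
            best, best_perf = choice, perf
    return best, best_perf

cores = [
    [(100, 1.0), (200, 1.8), (400, 3.0)],  # possible sizes of core 0
    [(150, 1.2), (300, 2.5)],              # possible sizes of core 1
]
choice, perf = best_configuration(cores, area_budget=500)
assert choice == ((200, 1.8), (300, 2.5))  # uses the whole 500-slice budget
```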

Resumo:

Output bits from an optical logic cell present noise due to the type of technique used to obtain the Boolean functions of two input data bits. We have simulated the behavior of an optically programmable logic cell working with Fabry-Perot laser diodes of the same type employed in optical communications (1550 nm) but working here as amplifiers. We report in this paper a study of the bit noise generated by the optical non-linearity process that allows the Boolean operation on two optical input data signals. Two types of optical logic cells are analyzed. The first shows a classical "on-off" behavior, with the LD amplifier operating in transmission; the second is a more complex configuration with two LD amplifiers, one working in transmission and the other in reflection mode. This last configuration has a nonlinear behavior emulating SEED-like properties. In both cases, depending on the value of the input data signals to be processed, a different logic function can be obtained. In addition, a CW signal, known as the control signal, may be applied to fix the type of logic function. The signal-to-noise ratio is analyzed for different parameters, such as the signal wavelengths and the hysteresis-cycle regions associated with the device, in relation to the applied signal power levels. With this study we aim to obtain a better understanding of the possible noise effects present in an optical logic gate based on laser diodes.
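The figure of merit analysed in the abstract can be stated as a one-line computation. The "1"/"0" power levels below are invented numbers, used only to show the kind of signal-to-noise ratio being studied:

```python
# Illustrative SNR computation for a binary optical output level; all
# power figures are hypothetical, not values from the simulation.

import math

def snr_db(signal_power_mw: float, noise_power_mw: float) -> float:
    """Signal-to-noise ratio in decibels."""
    return 10.0 * math.log10(signal_power_mw / noise_power_mw)

# A 1 mW "1" level over 0.01 mW of bit noise gives a 20 dB margin.
assert round(snr_db(1.0, 0.01), 6) == 20.0
```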

Resumo:

Use of a spherical grid as electron collector at the anodic end of a tether, as recently proposed, is considered. The standard analysis of space-charge-limited current to a solid sphere (with neither magnetic nor plasma-motion effects), which has been shown to best fit TSS1R in-orbit results at very high bias, is used to determine the effects of grid transparency on the collected current; the analysis is first reformulated in the formalism recently introduced in the two-dimensional analysis of bare tethers. A discussion of the electric potential created by a spherical grid in vacuum is then carried out; it is shown that each grid wire collects current well below its maximum OML current, the effective grid transparency being close to its optical value. Formulae for the current to a spherical grid, showing the effects of grid transparency, are determined. A fully consistent analysis of electric potential and electron density, outside and inside the grid, is completed.
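The notion of optical transparency used above can be sketched numerically. The square-mesh geometry and the crude open-area estimate below are assumptions for illustration, not the paper's derivation:

```python
# Hedged sketch: optical transparency of a mesh of round wires, taken as
# the open-area fraction of a square cell. Real grid geometry and the
# paper's effective-transparency analysis are more involved.

def optical_transparency(wire_radius: float, spacing: float) -> float:
    """Open-area fraction of a square mesh cell (crude geometric estimate)."""
    blocked = min(1.0, 2.0 * wire_radius / spacing)  # wire diameter / pitch
    return max(0.0, 1.0 - blocked) ** 2

# No wires at all: fully transparent; thin wires: transparency near 1.
assert optical_transparency(0.0, 1.0) == 1.0
```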

Resumo:

SMS 3D (simultaneous multiple surfaces in their three-dimensional version) is a well-known design method comprising two freeform surfaces that allow the perfect coupling of two wavefronts with another two. The design algorithm provides a collection of line pairs on both surfaces (called SMS spines), whose three-dimensional shape seems arbitrary at first sight. This paper shows that the shapes of the spines are partially governed by applying the étendue conservation theorem to the biparametric bundle of rays linking the paired spines, which is one of the lesser-known étendue invariants found by Poincaré. The resulting formulae for the spines in three-dimensional space happen to coincide with the conventional étendue formulas of two-dimensional geometry, such as, for instance, the Hottel formula.
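For reference, the two-dimensional Hottel (crossed-strings) formula mentioned above can be written as follows; the segment labels AB and A'B' are generic, not taken from the paper:

```latex
% Hottel's crossed-strings formula for the 2D etendue of the ray bundle
% joining two segments AB and A'B', with n the refractive index: crossed
% string lengths minus uncrossed string lengths.
E = n\left(\overline{AB'} + \overline{A'B} - \overline{AA'} - \overline{BB'}\right)
```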

Resumo:

Crystallization and grain-growth techniques for thin-film silicon are among the most promising methods for improving the efficiency and lowering the cost of solar cells. A major advantage of laser crystallization and annealing over conventional heating methods is its ability to limit rapid heating and cooling to thin surface layers. Laser energy is used to heat the amorphous silicon thin film, melting it and changing the microstructure to polycrystalline silicon (poly-Si) as it cools. Depending on the laser power density, the vaporization temperature can be reached at the center of the irradiated area. In such cases ablation effects are expected and the annealing process becomes ineffective. The heating process in the a-Si thin film is governed by the general heat transfer equation. The two-dimensional non-linear heat transfer equation with a moving heat source is solved numerically using the finite element method (FEM), in particular COMSOL Multiphysics. The numerical model helps to establish the power density and process speed ranges needed to ensure melting and crystallization without damage or ablation of the silicon surface. The a-Si samples, obtained by physical vapour deposition, were irradiated with a cw green laser source (Millennia Prime from Newport-Spectra) that delivers up to 15 W of average power. The morphology of the irradiated area was characterized by confocal laser scanning microscopy (Leica DCM3D) and scanning electron microscopy (SEM Hitachi 3000N). The structural properties were studied by micro-Raman spectroscopy (Renishaw inVia Raman microscope).
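The governing equation can be illustrated with a minimal explicit finite-difference sketch of the 2D heat equation with a moving Gaussian source, T_t = alpha (T_xx + T_yy) + q(x - v t, y). The paper solves the full non-linear problem with FEM (COMSOL); this constant-property toy model only shows the moving-source structure, and all material and laser parameters below are invented:

```python
# Toy explicit finite-difference model of 2D heating by a moving Gaussian
# source on a uniform grid with fixed-temperature boundaries. Not the
# paper's FEM model; all parameters are hypothetical.

import math

N, dx, dt = 41, 1e-5, 1e-7                     # grid size, spacing (m), step (s)
alpha, speed, q0, w = 1e-5, 0.05, 5e4, 4e-5    # diffusivity, scan speed, source
# Explicit stability check: alpha*dt/dx^2 = 0.01 <= 0.25, so the scheme is stable.

T = [[300.0] * N for _ in range(N)]            # initial temperature field (K)

def step(T, t):
    xc = speed * t                             # current source centre along x
    Tn = [row[:] for row in T]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            lap = (T[i+1][j] + T[i-1][j] + T[i][j+1] + T[i][j-1]
                   - 4.0 * T[i][j]) / dx**2
            x, y = (i - N // 2) * dx, (j - N // 2) * dx
            q = q0 * math.exp(-((x - xc) ** 2 + y ** 2) / w ** 2)
            Tn[i][j] = T[i][j] + dt * (alpha * lap + q)
    return Tn

for k in range(50):
    T = step(T, k * dt)
assert max(map(max, T)) > 300.0                # the moving source heats the film
```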