979 results for first order transition system


Relevance: 100.00%

Publisher:

Abstract:

The geoid, defined as the equipotential surface that best fits (in the least-squares sense) the mean sea level at a given epoch, is the surface used as the reference for determining orthometric heights. If a precise local geoid is available as an altimetric datum, orthometric heights can be determined efficiently from the ellipsoidal heights provided by the Global Navigation Satellite System (GNSS).

One of the most important unsolved problems in geodesy is the lack of a global altimetric datum with adequate precision (Sjoberg, 2011). In its absence, geopotential models must be used to obtain absolute values of the geoid undulation. The recently published EGM2008 model benefits from a marked improvement in its three data sources; it contains additional coefficients up to degree 2190 and order 2159 and offers a substantial improvement in accuracy (Pavlis et al., 2008). When quality gravity values and Digital Terrain Models (DTM) are available for a region, it is possible to obtain geopotential models that are more accurate and of higher resolution than the global models.

Although the National Geodetic Survey (NGS) of the United States has been developing geoid models for the continental United States and all its territories since the 1990s, Puerto Rico and the U.S. Virgin Islands have lagged behind in applying these regional geoid models and obtaining high-precision results from them. The current regional geopotential model for Puerto Rico and the U.S. Virgin Islands is GEOID12A (Roman and Weston, 2012). Given this need, and the uncertainty about how a geoid model developed exclusively from local gravity data would behave, we undertook the development of a gravimetric geoid model to serve as a reference system for orthometric heights.

Developing a gravimetric geoid model for the island of Puerto Rico required implementing a methodology for analyzing and validating the existing terrestrial gravity data. Using altimetric validation with geographic information systems and mathematical validation by collocation with the Gravsoft suite (Tscherning et al., 1994) in its Python version (Nielsen et al., 2012), 1673 free-air anomaly observations were validated out of a total of 1894 obtained from the International Gravimetric Bureau (BGI) database. Applying these methodologies yielded a reliable gravity-anomaly database that can be used for many applications in science and engineering. Given the low density of the existing gravity data, an alternative method was needed to densify the existing free-air anomaly set. Following the methodology proposed by Jekeli et al. (2009b), free-air anomalies were derived from a DTM. These anomalies were adjusted against the validated free-air anomalies and, after a least-squares fit by geographic zone, a uniform grid of DTM-derived free-air anomalies was obtained. After applying the topographic corrections and determining the indirect effect of the topography and the contribution of the EGM2008 geopotential model, a grid of residual anomalies was obtained. These residual anomalies were used to determine the gravimetric geoid with several techniques, including the planar approximation of the Stokes function and the modifications of the Stokes kernel proposed by Wong and Gore (1969), Vanicek and Kleusberg (1987), and Featherstone et al. (1998).

Once the different gravimetric geoid models were computed, they were validated against a series of permanent stations of the Puerto Rico Vertical Datum of 2002 (PRVD02) leveling network, which had published values of ellipsoidal height and elevation. In the absence of orthometric heights at these stations, the elevations obtained from first-order leveling were used to determine the geometric geoid undulation (Roman et al., 2013). After establishing a total of 990 baselines, two analyses were performed to assess the 'precision' of the geoid models.

The first analysis compared the differences between the increments of the geometric geoid undulation and the increments of the geoid undulation of the different models (gravimetric models, EGM2008, and GEOID12A) as a function of the distance between the validation stations; the model using the Stokes-kernel modification of Wong and Gore showed the best 'precision' in 91.1% of the analyzed baselines. The second analysis, covering all 990 baselines, compared the same undulation increments and again found that the most 'precise' model was the geoid with the Wong and Gore kernel modification: it achieved a 'precision' of 0.027 m, compared with 0.031 m for the global model EGM2008 and 0.057 m for the regional model GEOID12A. We conclude that the methodology presented here is adequate, since it produced a gravimetric geoid model that is more 'precise' than the available geopotential models, even surpassing the global geopotential model EGM2008.
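The validation described above rests on the basic relation between ellipsoidal height h, orthometric height H, and geoid undulation N (H = h - N, so the geometric undulation at a benchmark is N = h - H). A minimal sketch of comparing model undulations against geometric undulations, with invented benchmark values rather than data from the study:

```python
import math

# Hypothetical benchmarks: (GNSS ellipsoidal height h, leveled height H), metres.
benchmarks = [(-18.75, 24.30), (-10.12, 35.88), (5.40, 52.01)]

# Geometric geoid undulation at each benchmark: N_geom = h - H.
n_geom = [h - H for h, H in benchmarks]

# Hypothetical undulations interpolated from a gravimetric geoid model at the
# same benchmarks (invented numbers, chosen only for illustration).
n_model = [-43.02, -46.03, -46.58]

# 'Precision' in the sense used above: RMS of model-minus-geometric differences.
diffs = [m - g for m, g in zip(n_model, n_geom)]
rms = math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

In the study the comparison is done over undulation increments along 990 baselines rather than pointwise, but the figure of merit is the same kind of RMS statistic.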


SRAM-based FPGAs are reconfigurable in the field an unlimited number of times. This characteristic, together with their high performance and high logic density, makes them very convenient for a number of ground- and space-level applications. One drawback of this technology is its susceptibility to ionizing radiation, a sensitivity that increases with technology scaling. This is a first-order concern for applications in harsh radiation environments, and it is starting to be a concern for high-reliability ground applications. Several techniques exist for coping with radiation effects at the user application level. To be effective, they need to be complemented with configuration memory scrubbing, which mitigates errors and prevents failures due to error accumulation. Depending on the radiation environment and on the system's dependability requirements, the configuration scrubber design can be more or less complex. This paper classifies and presents current and novel design methodologies and architectures for configuration memory scrubbers for SRAM-based FPGAs, and in particular for the Xilinx Virtex-4QV/5QV.
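The readback-scrubbing idea can be illustrated with a toy model; the frame contents, the golden reference copy, and the injected upset below are all invented, and a real scrubber would read frames back through a configuration port such as JTAG, SelectMAP, or ICAP:

```python
# Toy model of configuration memory scrubbing against a golden reference.
GOLDEN = [0b10110010, 0b01101100, 0b11110000]  # reference configuration frames
config = list(GOLDEN)                          # live configuration memory

config[1] ^= 0b00000100                        # inject a single-event upset

def scrub(frames, golden):
    """One scrub cycle: compare every frame against the golden copy and
    rewrite any frame that differs; return the number of repairs made."""
    repairs = 0
    for i, (frame, ref) in enumerate(zip(frames, golden)):
        if frame != ref:
            frames[i] = ref    # repair by rewriting the corrupted frame
            repairs += 1
    return repairs

repaired = scrub(config, GOLDEN)  # detects and repairs the injected upset
```

Running such a cycle periodically prevents the error accumulation mentioned above, since an upset is repaired before a second one can land in the same protected region.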


In this paper, multiple regression analysis is used to model the top of descent (TOD) location of user-preferred descent trajectories computed by the flight management system (FMS) on over 1000 commercial flights into Melbourne, Australia. In addition to recording TOD, the cruise altitude, final altitude, cruise Mach, descent speed, wind, and engine type were also identified for use as the independent variables in the regression analysis. Both first-order and second-order models are considered, where cross-validation, hypothesis testing, and additional analysis are used to compare models. This identifies the models that should give the smallest errors if used to predict TOD location for new data in the future. A model that is linear in TOD altitude, final altitude, descent speed, and wind gives an estimated standard deviation of 3.9 nmi for TOD location given the trajectory parameters, which means about 80% of predictions would have an error of less than 5 nmi in absolute value. This accuracy is better than that demonstrated by other ground automation predictions using kinetic models. Furthermore, this approach would enable online learning of the model. Additional data or further knowledge of algorithms is necessary to conclude definitively that no second-order terms are appropriate. Possible applications of the linear model are described, including enabling arriving aircraft to fly optimized descents computed by the FMS even in congested airspace.
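The first-order (linear) regression idea can be sketched in miniature. The flight data below are invented, and only one predictor (descent speed) is used for clarity, whereas the study fits several predictors jointly:

```python
# Invented flight data: (descent speed in knots, TOD distance in nmi).
data = [(250, 98.0), (260, 101.5), (270, 105.0), (280, 108.5), (290, 112.0)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Ordinary least squares for the first-order model y = a + b * x.
b = sum((x - mean_x) * (y - mean_y) for x, y in data) \
    / sum((x - mean_x) ** 2 for x, _ in data)
a = mean_y - b * mean_x

def predict_tod(speed):
    """Predicted TOD distance (nmi) for a given descent speed (knots)."""
    return a + b * speed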


Programming languages are the medium programmers use to tell computers what we want them to do. From assembly language, which translates one by one the instructions a computer interprets, up to high-level languages, the goal has been to develop languages closer to the way humans think and express themselves. Logic programming languages such as Prolog use the language of first-order logic, so that the programmer can state the premises of the problem to be solved without worrying about how that problem will be solved. Solving the problem amounts to finding a deduction of the goal from the premises, which corresponds to what we understand as the execution of a program.

Ciao is an implementation of Prolog (http://www.ciao-lang.org) and uses SLD resolution, which traverses the decision trees depth-first; this can lead the execution down an infinite search branch (an infinite loop) without ever producing answers. Since Ciao is a modular system, it allows extensions that implement alternative resolution strategies, such as tabling (OLDT). Tabling is an alternative method based on memoizing the calls made and their answers, so that calls are not repeated and stored answers can be reused without recomputation. Some programs that enter an infinite loop under SLD produce all their answers and terminate thanks to tabling. The tabling package is an implementation of tabling based on the CHAT algorithm. This implementation is a beta version that lacks a memory manager.

Memory management in the tabling package is highly important, since resolution with tabling reduces computation time (by not repeating calls) at the cost of increased memory requirements (to store the calls and answers). The objective of this work is therefore to implement a memory management mechanism in Ciao with the tabling package loaded. To that end, the following were implemented: an error-capture mechanism that detects when the computer runs out of memory and triggers reinitialization of the system; a procedure that adjusts the pointers of the tabling package that point into the WAM after some of the WAM memory areas have been reallocated; and a memory manager for the tabling package that detects when its memory areas need to be enlarged, requests more memory, and adjusts the pointers. To help readers unfamiliar with this topic, we describe the data that Ciao and the tabling package allocate in the dynamic memory areas we want to manage. The test cases developed to evaluate the implementation of the memory manager show that: having a dynamic memory manager allows programs to execute in a larger number of cases, and the memory management policy affects program execution speed.
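The core idea of tabling, memoizing calls and their answers so that nothing is recomputed, can be sketched outside Prolog with plain memoization. The recursive relation below is a hypothetical stand-in for a tabled predicate, not taken from the Ciao sources:

```python
table = {}   # call -> answer: the analogue of the tabling call/answer area
calls = 0    # counts actual computations, to show how few are needed

def subgoal(n):
    """Hypothetical recursive relation standing in for a tabled predicate.
    Without the table, the same subgoals would be recomputed exponentially often."""
    global calls
    if n in table:          # answer already tabled: reuse it
        return table[n]
    calls += 1              # otherwise compute it once...
    answer = 1 if n <= 1 else subgoal(n - 1) + subgoal(n - 2)
    table[n] = answer       # ...and record the answer for later calls
    return answer

result = subgoal(20)
```

The memory trade-off discussed above is visible even here: the table grows with every distinct call, which is exactly why the tabling areas need a memory manager.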


We consider a simplified system describing a growing colony of cells as a free boundary problem. The system consists of two first-order hyperbolic equations coupled to an ODE that describes the behavior of the boundary. The system for the cell populations includes non-local terms of integral type in the coefficients. By introducing a comparison with solutions of a system of ODEs, we show that there exists a unique homogeneous steady state which is globally asymptotically stable for a range of parameters, under the assumption of radially symmetric initial data.
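A first-order hyperbolic equation of the kind this model is built from can be sketched with the simplest example, linear transport u_t + a*u_x = 0, discretized with a first-order upwind scheme; the grid, speed, and initial profile are illustrative choices, not taken from the paper:

```python
# Linear transport u_t + a * u_x = 0 on a uniform grid, first-order upwind.
a = 1.0              # transport speed (a > 0: the upwind neighbor is on the left)
dx, dt = 0.1, 0.05   # grid spacing and time step (CFL number a*dt/dx = 0.5)
u = [1.0 if 0.2 <= i * dx <= 0.4 else 0.0 for i in range(21)]  # square pulse

for _ in range(4):   # advance four time steps
    u = [u[0]] + [u[i] - a * dt / dx * (u[i] - u[i - 1])
                  for i in range(1, len(u))]

# Discrete 'mass' is conserved while the pulse stays away from the boundaries.
mass = sum(u) * dx
```

The profile simply translates to the right (with some numerical smearing); the scheme is monotone at this CFL number, so the solution stays within its initial bounds.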


We consider a simple mathematical model of tumor growth based on cancer stem cells. The model consists of four first-order hyperbolic equations describing the evolution of different subpopulations of cells: cancer stem cells, progenitor cells, differentiated cells, and dead cells. A fifth equation is introduced to model the evolution of the moving boundary. The system includes non-local terms of integral type in the coefficients. Under some restrictions on the parameters, we show that there exists a unique homogeneous steady state, which is stable.


SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on a Static RAM (SRAM) configuration memory. They present a number of features that make them very convenient for building complex embedded systems. First of all, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their connections). Also, unlike other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs feature Dynamic Partial Reconfiguration (DPR), which allows part of the FPGA to be reconfigured without disrupting the application. Finally, they offer high logic density, high processing capability, and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages, and higher frequencies). This is a first-order concern for applications in harsh radiation environments that require high dependability. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage.

In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can cause a system failure, which may be more or less tolerable depending on the dependability requirements. In the general case, such faults must be managed in some way.

This thesis is about managing SRAM-based FPGA faults at the system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications; the main differences between them are the radiation level and the possibility of maintenance. The different techniques for A-Layer and C-Layer fault management are classified, and their implications for system dependability are assessed. Several architectures are proposed, both for single-layer and dual-layer Fault Managers. For the latter, a novel, flexible, and versatile architecture is proposed: it manages both layers concurrently in a coordinated way, and allows the redundancy level and the dependability to be balanced.

To validate dynamic fault management techniques, two different solutions were developed. The first is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and event-driven simulator. This framework and its associated methodology allow the Fault Manager design space to be explored while decoupling its design from the development of the target FPGA. The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at the configuration-frame level and at the JTAG or SelectMAP physical level). The framework is configurable, scalable, and versatile, and includes fault injection capabilities. Simulation results for some scenarios are presented and discussed.

The second is a validation platform for Xilinx Virtex FPGA Fault Managers. The hardware platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow software-based C-Layer and A-Layer Fault Managers to be prototyped. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. As in the simulation framework, fault injection capabilities are implemented. Test results for some scenarios are also presented and discussed.

In summary, this thesis covers the whole process: describing the problem of radiation-induced faults in SRAM-based FPGAs, identifying and classifying fault management techniques, proposing Fault Manager architectures, and finally validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.


In motion standstill, a quickly moving object appears to stand still, and its details are clearly visible. It is proposed that motion standstill can occur when the spatiotemporal resolution of the shape and color systems exceeds that of the motion systems. For moving red-green gratings, the first- and second-order motion systems fail when the grating is isoluminant. The third-order motion system fails when the green/red saturation ratio produces isosalience (equal distinctiveness of red and green). When a variety of high-contrast red-green gratings, with different spatial frequencies and speeds, were made isoluminant and isosalient, the perception of motion standstill was so complete that motion direction judgments were at chance levels. Speed ratings also indicated that, within a narrow range of luminance contrasts and green/red saturation ratios, moving stimuli were perceived as absolutely motionless. The results provide further evidence that isoluminant color motion is perceived only by the third-order motion system, and they have profound implications for the nature of shape and color perception.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Multiple brain maps are commonly found in virtually every vertebrate sensory system. Although their functional significance remains poorly understood, they appear to specialize in processing distinct sensory parameters. Nevertheless, to yield the stimulus features that ultimately elicit the adaptive behavior, it appears that information streams have to be combined across maps. Results from recent lesion experiments in the electrosensory system, however, suggest an alternative possibility. Inactivations of different maps of the first-order electrosensory nucleus in electric fish, the electrosensory lateral line lobe, resulted in markedly different behavioral deficits. The centromedial map is both necessary and sufficient for a particular electrolocation behavior, the jamming avoidance response, whereas it does not affect the communicative response to external electric signals. Conversely, the lateral map does not affect the jamming avoidance response but is necessary and sufficient to evoke communication behavior. Because the premotor pathways controlling the two behaviors in these fish appear to be separated as well, this system illustrates that sensory–motor control of different behaviors can occur in strictly segregated channels, from the sensory input of the brain all the way through to its motor output. This might reflect an early evolutionary stage in which multiplication of brain maps can satisfy the demand of processing a wider range of sensory signals ensuing from an enlarged behavioral repertoire, and bridging across maps is not yet required.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The intracellular Ca2+ receptor calmodulin (CaM) coordinates responses to extracellular stimuli by modulating the activities of its various binding proteins. Recent reports suggest that, in addition to its familiar functions in the cytoplasm, CaM may be directly involved in rapid signaling between cytoplasm and nucleus. Here we show that Ca2+-dependent nuclear accumulation of CaM can be reconstituted in permeabilized cells. Accumulation was blocked by M13, a CaM antagonist peptide, but did not require cytosolic factors or an ATP-regenerating system. Ca2+-dependent influx of CaM into nuclei was not blocked by inhibitors of nuclear localization signal-mediated nuclear import in either permeabilized or intact cells. Fluorescence recovery after photobleaching studies of CaM in intact cells showed that influx is a first-order process with a rate constant similar to that of a freely diffusible control molecule (20-kDa dextran). Studies of CaM efflux from preloaded nuclei in permeabilized cells revealed the existence of three classes of nuclear binding sites that are distinguished by their Ca2+ dependence and affinity. At high [Ca2+], efflux was enhanced by addition of a high-affinity CaM-binding protein outside the nucleus. These data suggest that CaM diffuses freely through nuclear pores and that CaM-binding proteins in the nucleus act as a sink for Ca2+-CaM, resulting in accumulation of CaM in the nucleus on elevation of intracellular free Ca2+.
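The first-order influx described above implies a single-exponential recovery curve, F(t) = F_inf·(1 − e^(−kt)), whose rate constant can be read off by linearization. A minimal sketch with purely illustrative parameters (not values from the study):

```python
import numpy as np

# First-order influx: F(t) = F_inf * (1 - exp(-k * t)).
# k and F_inf are hypothetical, chosen only to illustrate the fit.
k = 0.5          # rate constant (1/s)
F_inf = 1.0      # plateau fluorescence (normalized)
t = np.linspace(0, 10, 200)
F = F_inf * (1.0 - np.exp(-k * t))

# Recover k by linearizing: ln(1 - F/F_inf) = -k * t.
mask = F < 0.99 * F_inf          # avoid log of values near zero
k_est = -np.polyfit(t[mask], np.log(1.0 - F[mask] / F_inf), 1)[0]
print(round(k_est, 3))  # → 0.5
```

In a FRAP experiment the same linearization (or a nonlinear fit) applied to the recovery curve yields the rate constant that is compared against the freely diffusible dextran control.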

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Immobilized single horseradish peroxidase enzymes were observed by confocal fluorescence spectroscopy during catalysis of the oxidation of the nonfluorescent substrate dihydrorhodamine 6G into the highly fluorescent product rhodamine 6G. By extracting only the non-Markovian behavior of the spectroscopic two-state process of enzyme-product complex formation and release, memory landscapes were generated for single-enzyme molecules. The memory landscapes can be used to discriminate between different origins of the stretched exponential kinetics that are found in the first-order correlation analysis. Memory landscapes of single-enzyme data show oscillations that are expected in a single-enzyme system that possesses a set of transient states. Alternative origins of the oscillations may not, however, be ruled out. The data and analysis indicate that substrate interaction with the enzyme selects a set of conformational substates for which the enzyme is active.
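The stretched exponential kinetics mentioned above differ from a single exponential in a characteristic way: for a stretching exponent β < 1, the decay exp(−(t/τ)^β) is faster at short times but has a much heavier tail at long times. A minimal sketch with illustrative parameters (τ and β are not values from the study):

```python
import numpy as np

# Stretched exponential vs. single exponential decay, as seen in
# first-order correlation functions. Parameters are illustrative.
t = np.linspace(0.01, 10, 500)
tau = 1.0
beta = 0.6   # hypothetical stretching exponent; beta = 1 is pure exponential

single = np.exp(-t / tau)
stretched = np.exp(-(t / tau) ** beta)

# Short times: stretched decays faster.  Long times: heavier tail.
print(stretched[0] < single[0])    # True at t = 0.01
print(stretched[-1] > single[-1])  # True at t = 10
```

In correlation analysis this heavy tail is the signature that a distribution of rates (for example, a set of conformational substates) rather than a single rate governs the decay.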

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The study of the large-sample distribution of the canonical correlations and variates in cointegrated models is extended from the first-order autoregression model to autoregression of any (finite) order. The cointegrated process considered here is nonstationary in some directions and stationary in others, but the first difference (the “error-correction form”) is stationary. The asymptotic distribution of the canonical correlations between the first differences and the predictor variables, as well as of the corresponding canonical variables, is obtained under the assumption that the process is Gaussian. The method of analysis is similar to that used for the first-order process.
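For the first-order case, the canonical correlations in question can be computed from sample moment matrices of the first differences and the lagged levels, in the style of a Johansen eigenproblem. A minimal simulation sketch under assumed settings (bivariate system, one cointegrating relation; the data-generating process is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate cointegrated first-order system:
# y1 is a random walk; y2 = y1 + stationary noise (one cointegrating relation).
T = 5000
y1 = np.cumsum(rng.normal(size=T))
y2 = y1 + rng.normal(size=T)
Y = np.column_stack([y1, y2])

dY = np.diff(Y, axis=0)   # first differences (stationary)
L = Y[:-1]                # lagged levels (predictor variables)

# Squared canonical correlations between dY and L via the eigenproblem
# of S00^{-1} S01 S11^{-1} S10 built from sample moment matrices.
S00 = dY.T @ dY / len(dY)
S11 = L.T @ L / len(L)
S01 = dY.T @ L / len(dY)
eig = np.linalg.eigvals(np.linalg.solve(S00, S01) @ np.linalg.solve(S11, S01.T))
eig = np.sort(eig.real)[::-1]
print(eig)  # one eigenvalue well above zero (cointegrating direction), one near zero
```

The largest squared canonical correlation reflects the stationary error-correction direction (here y2 − y1), while the eigenvalue associated with the nonstationary direction shrinks toward zero as the sample grows.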

Relevância:

100.00% 100.00%

Publicador:

Resumo:

With the advent of the new extragalactic deuterium observations, Big Bang nucleosynthesis (BBN) is on the verge of undergoing a transformation. In the past, the emphasis has been on demonstrating the concordance of the BBN model with the abundances of the light isotopes extrapolated back to their primordial values by using stellar and galactic evolution theories. As a direct measure of primordial deuterium is converged upon, the nature of the field will shift to using the much more precise primordial D/H to constrain the more flexible stellar and galactic evolution models (although the question of potential systematic error in 4He abundance determinations remains open). The remarkable success of the theory to date in establishing the concordance has led to the very robust conclusion of BBN regarding the baryon density. This robustness remains even through major model variations such as an assumed first-order quark-hadron phase transition. The BBN constraints on the cosmological baryon density are reviewed and demonstrate that the bulk of the baryons are dark and also that the bulk of the matter in the universe is nonbaryonic. Comparisons are made with baryonic density arguments from Lyman-α clouds, x-ray gas in clusters, and the microwave anisotropy.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

A dedicated mission to investigate exoplanetary atmospheres represents a major milestone in our quest to understand our place in the universe by placing our Solar System in context and by addressing the suitability of planets for the presence of life. EChO—the Exoplanet Characterisation Observatory—is a mission concept specifically geared for this purpose. EChO will provide simultaneous, multi-wavelength spectroscopic observations on a stable platform that will allow very long exposures. The use of passive cooling, few moving parts and well established technology gives a low-risk and potentially long-lived mission. EChO will build on observations by Hubble, Spitzer and ground-based telescopes, which discovered the first molecules and atoms in exoplanetary atmospheres. However, EChO’s configuration and specifications are designed to study a number of systems in a consistent manner that will eliminate the ambiguities affecting prior observations. EChO will simultaneously observe a broad enough spectral region—from the visible to the mid-infrared—to constrain from one single spectrum the temperature structure of the atmosphere, the abundances of the major carbon and oxygen bearing species, the expected photochemically-produced species and magnetospheric signatures. The spectral range and resolution are tailored to separate bands belonging to up to 30 molecules and retrieve the composition and temperature structure of planetary atmospheres. The target list for EChO includes planets ranging from Jupiter-sized with equilibrium temperatures T_eq up to 2,000 K, to those of a few Earth masses, with T_eq ∼ 300 K. The list will include planets with no Solar System analog, such as the recently discovered planet GJ 1214b, whose density lies between that of terrestrial and gaseous planets, or the rocky-iron planet 55 Cnc e, with day-side temperature close to 3,000 K.
As the number of detected exoplanets grows rapidly each year, and the mass and radius of those detected steadily decrease, the target list will be constantly adjusted to include the most interesting systems. We have baselined a dispersive spectrograph design continuously covering the 0.4–16 μm spectral range in 6 channels (1 in the visible, 5 in the infrared), which allows the spectral resolution to be adapted from several tens to several hundreds, depending on the target brightness. The instrument will be mounted behind a 1.5 m class telescope, passively cooled to 50 K, with the instrument structure and optics passively cooled to ∼45 K. EChO will be placed in a grand halo orbit around L2. This orbit, in combination with an optimised thermal shield design, provides a highly stable thermal environment and a high degree of visibility of the sky to observe repeatedly several tens of targets over the year. Both the baseline and alternative designs have been evaluated and no critical items with Technology Readiness Level (TRL) less than 4–5 have been identified. We have also undertaken a first-order cost and development plan analysis and find that EChO is easily compatible with the ESA M-class mission framework.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The fact that fast oscillating homogeneous scalar fields behave as perfect fluids on average, together with their intrinsic isotropy, has made these models very fruitful in cosmology. In this work we will analyse the perturbation dynamics in these theories assuming general power law potentials V(ϕ) = λ|ϕ|^n/n. At leading order in the wavenumber expansion, a simple expression for the effective sound speed of perturbations is obtained, c_eff^2 = ω = (n − 2)/(n + 2), with ω the effective equation of state. We also obtain the first-order correction in k^2/ω_eff^2, when the wavenumber k of the perturbations is much smaller than the background oscillation frequency ω_eff. For the standard massive case we have also analysed general anharmonic contributions to the effective sound speed. These results are reached through a perturbed version of the generalized virial theorem and also by studying the exact system, both in the super-Hubble limit, deriving the natural ansatz for δϕ, and for sub-Hubble modes, exploiting Floquet’s theorem.
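The leading-order result quoted above, c_eff^2 = ω = (n − 2)/(n + 2), reproduces two familiar limits directly. A minimal sketch evaluating it (the function name is ours, chosen for illustration):

```python
# Effective equation of state / sound speed for V(phi) = lambda * |phi|^n / n,
# using the leading-order result c_eff^2 = w = (n - 2)/(n + 2) from the text.
def c_eff_squared(n):
    return (n - 2) / (n + 2)

# n = 2 (massive field): w = 0, dust-like behaviour.
print(c_eff_squared(2))  # → 0.0
# n = 4 (quartic potential): w = 1/3, radiation-like behaviour.
print(c_eff_squared(4))  # → 0.3333333333333333
```

These two limits (dust for the harmonic potential, radiation for the quartic one) match the classic time-averaged results for oscillating scalar fields, which is a useful sanity check on the general formula.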