914 results for first order transition system
Abstract:
We consider a simplified system describing a growing colony of cells as a free boundary problem. The system consists of two first-order hyperbolic equations coupled to an ODE describing the behavior of the boundary. The system for the cell populations includes non-local terms of integral type in the coefficients. By comparison with the solutions of a system of ODEs, we show that there exists a unique homogeneous steady state which is globally asymptotically stable for a range of parameters, under the assumption of radially symmetric initial data.
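The comparison argument mentioned here follows the spirit of the classical scalar ODE comparison lemma; a minimal illustration of that lemma (ours, not the paper's actual system, which compares a PDE system with an ODE system) is:

```latex
% Scalar comparison lemma: ordering of the data propagates in time.
% (Illustrative only; not the paper's full free boundary system.)
\[
u'(t) \le f\bigl(u(t)\bigr), \quad v'(t) = f\bigl(v(t)\bigr), \quad u(0) \le v(0)
\;\Longrightarrow\; u(t) \le v(t) \quad \text{for all } t \ge 0,
\]
\[
\text{for locally Lipschitz } f. \quad
\text{If moreover } f(\bar u) = 0 \text{ and } f'(\bar u) < 0,
\text{ trajectories starting near } \bar u \text{ converge to } \bar u.
\]
```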
Abstract:
We consider a simple mathematical model of tumor growth based on cancer stem cells. The model consists of four first-order hyperbolic equations describing the evolution of different subpopulations of cells: cancer stem cells, progenitor cells, differentiated cells and dead cells. A fifth equation is introduced to model the evolution of the moving boundary. The system includes non-local terms of integral type in the coefficients. Under some restrictions on the parameters, we show that there exists a unique homogeneous steady state, which is stable.
Abstract:
SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on a Static RAM (SRAM) configuration memory. They offer a number of features that make them very attractive for building complex embedded systems. First, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their interconnection). Also, unlike other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs support Dynamic Partial Reconfiguration (DPR), which allows the FPGA to be partially reconfigured without disrupting the application. Finally, they feature high logic density, high processing capability and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in harsh radiation environments with high dependability requirements. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die: the Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can cause a system failure, which may be more or less tolerable depending on the dependability requirements. In the general case, such faults must be managed in some way.

This thesis is about managing SRAM-based FPGA faults at system level, in the context of autonomous, dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles apply to ground applications; the main differences between the two are the radiation level and the possibility of maintenance. The different techniques for A-Layer and C-Layer fault management are classified, and their implications for system dependability are assessed. Several architectures are proposed, both for single-layer and for dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed: it manages both layers concurrently in a coordinated way, and allows the redundancy level and dependability to be balanced.

To validate dynamic fault management techniques, two different solutions are developed. The first is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and event-driven simulator. This framework and its associated methodology allow the Fault Manager design space to be explored while decoupling its design from the development of the target FPGA. The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at configuration-frame level and at the JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault injection capabilities. Simulation results for some scenarios are presented and discussed. The second is a validation platform for Xilinx Virtex FPGA Fault Managers. The hardware platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow software-based C-Layer and A-Layer Fault Managers to be prototyped. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. As in the simulation framework, fault injection capabilities are included. Test results for some scenarios are also presented and discussed.

In summary, this thesis covers the whole process from describing radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault management techniques and proposing Fault Manager architectures, to validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.
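As a rough illustration of the C-Layer fault management idea discussed in this abstract, the following Python sketch models a readback-compare-writeback scrubbing loop over configuration frames with random bit-flip injection. It is a toy model written for this summary, not the thesis's SystemC framework; the frame count, upset probability and scheduling are arbitrary assumptions.

```python
import random

FRAME_BITS = 1312   # assumed frame size in bits (a Virtex-4 frame is 41 x 32-bit words)
NUM_FRAMES = 64     # arbitrary toy configuration size
FLIP_PROB = 0.01    # assumed per-frame upset probability per scrub cycle

# Golden (fault-free) configuration and the live configuration memory.
golden = [random.getrandbits(FRAME_BITS) for _ in range(NUM_FRAMES)]
config = list(golden)

def inject_upsets(memory):
    """Flip one random bit in some frames, emulating radiation-induced SEUs."""
    upsets = 0
    for i in range(len(memory)):
        if random.random() < FLIP_PROB:
            memory[i] ^= 1 << random.randrange(FRAME_BITS)
            upsets += 1
    return upsets

def scrub(memory, reference):
    """Readback-compare-writeback scrubbing: repair every corrupted frame."""
    repaired = 0
    for i, (frame, ref) in enumerate(zip(memory, reference)):
        if frame != ref:
            memory[i] = ref  # writeback of the golden frame
            repaired += 1
    return repaired

injected = repaired = 0
for _ in range(1000):  # scrub cycles
    injected += inject_upsets(config)
    repaired += scrub(config, golden)

residual = sum(f != g for f, g in zip(config, golden))
print(f"injected={injected} repaired={repaired} residual={residual}")
```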
Abstract:
In motion standstill, a quickly moving object appears to stand still, and its details are clearly visible. It is proposed that motion standstill can occur when the spatiotemporal resolution of the shape and color systems exceeds that of the motion systems. For moving red-green gratings, the first- and second-order motion systems fail when the grating is isoluminant. The third-order motion system fails when the green/red saturation ratio produces isosalience (equal distinctiveness of red and green). When a variety of high-contrast red-green gratings, with different spatial frequencies and speeds, were made isoluminant and isosalient, the perception of motion standstill was so complete that motion direction judgments were at chance levels. Speed ratings also indicated that, within a narrow range of luminance contrasts and green/red saturation ratios, moving stimuli were perceived as absolutely motionless. The results provide further evidence that isoluminant color motion is perceived only by the third-order motion system, and they have profound implications for the nature of shape and color perception.
Abstract:
Multiple brain maps are commonly found in virtually every vertebrate sensory system. Although their functional significance is generally poorly understood, they seem to specialize in processing distinct sensory parameters. Nevertheless, to yield the stimulus features that ultimately elicit adaptive behavior, information streams apparently have to be combined across maps. Results from recent lesion experiments in the electrosensory system, however, suggest an alternative possibility. Inactivations of different maps of the first-order electrosensory nucleus in electric fish, the electrosensory lateral line lobe, resulted in markedly different behavioral deficits. The centromedial map is both necessary and sufficient for a particular electrolocation behavior, the jamming avoidance response, whereas it does not affect the communicative response to external electric signals. Conversely, the lateral map does not affect the jamming avoidance response but is necessary and sufficient to evoke communication behavior. Because the premotor pathways controlling the two behaviors in these fish appear to be separated as well, this system illustrates that sensory–motor control of different behaviors can occur in strictly segregated channels, from the sensory input of the brain all the way through to its motor output. This might reflect an early evolutionary stage in which multiplication of brain maps can satisfy the demand of processing a wider range of sensory signals ensuing from an enlarged behavioral repertoire, and bridging across maps is not yet required.
Abstract:
The intracellular Ca2+ receptor calmodulin (CaM) coordinates responses to extracellular stimuli by modulating the activities of its various binding proteins. Recent reports suggest that, in addition to its familiar functions in the cytoplasm, CaM may be directly involved in rapid signaling between cytoplasm and nucleus. Here we show that Ca2+-dependent nuclear accumulation of CaM can be reconstituted in permeabilized cells. Accumulation was blocked by M13, a CaM antagonist peptide, but did not require cytosolic factors or an ATP regenerating system. Ca2+-dependent influx of CaM into nuclei was not blocked by inhibitors of nuclear localization signal-mediated nuclear import in either permeabilized or intact cells. Fluorescence recovery after photobleaching studies of CaM in intact cells showed that influx is a first-order process with a rate constant similar to that of a freely diffusible control molecule (20-kDa dextran). Studies of CaM efflux from preloaded nuclei in permeabilized cells revealed the existence of three classes of nuclear binding sites that are distinguished by their Ca2+ dependence and affinity. At high [Ca2+], efflux was enhanced by the addition of a high-affinity CaM-binding protein outside the nucleus. These data suggest that CaM diffuses freely through nuclear pores and that CaM-binding proteins in the nucleus act as a sink for Ca2+-CaM, resulting in accumulation of CaM in the nucleus upon elevation of intracellular free Ca2+.
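A first-order influx process of the kind measured here is conventionally fit to FRAP recovery data with a single-exponential model, F(t) = F_inf − (F_inf − F_0)·exp(−kt). A minimal Python sketch of such a fit (the synthetic data and parameter values are our assumptions, not the authors' measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_recovery(t, f0, f_inf, k):
    """Single-exponential (first-order) fluorescence recovery after photobleaching."""
    return f_inf - (f_inf - f0) * np.exp(-k * t)

# Synthetic recovery trace (arbitrary fluorescence units) standing in for data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 120)                       # time after bleach [s]
f = first_order_recovery(t, 0.2, 1.0, 0.15) + rng.normal(0.0, 0.02, t.size)

(f0, f_inf, k), _ = curve_fit(first_order_recovery, t, f, p0=(0.1, 0.9, 0.1))
print(f"fitted rate constant k = {k:.3f} 1/s (half-time {np.log(2) / k:.1f} s)")
```

The fitted k can then be compared with that of a freely diffusible control, as the study does with the 20-kDa dextran.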
Abstract:
Immobilized single horseradish peroxidase enzymes were observed by confocal fluorescence spectroscopy during catalysis of the oxidation of the nonfluorescent substrate dihydrorhodamine 6G into the highly fluorescent product rhodamine 6G. By extracting only the non-Markovian behavior of the spectroscopic two-state process of enzyme-product complex formation and release, memory landscapes were generated for single-enzyme molecules. The memory landscapes can be used to discriminate between different origins of the stretched exponential kinetics found in the first-order correlation analysis. Memory landscapes of the single-enzyme data show oscillations that are expected in a single-enzyme system possessing a set of transient states; alternative origins of the oscillations cannot, however, be ruled out. The data and analysis indicate that substrate interaction with the enzyme selects a set of conformational substates for which the enzyme is active.
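For reference, one textbook origin of stretched-exponential decay in a first-order correlation analysis is a broad distribution of exponential rates (static heterogeneity). The Python sketch below builds a correlation decay from a mixture of first-order relaxations and fits the stretched form exp[−(t/τ)^β]; the rate spread and weights are illustrative assumptions, not the authors' memory-landscape method:

```python
import numpy as np
from scipy.optimize import curve_fit

# Correlation decay built as a broad mixture of first-order (exponential)
# relaxations: one candidate origin of stretched-exponential kinetics.
t = np.linspace(1e-4, 0.5, 400)          # lag time [s]
rates = np.logspace(0.5, 2.5, 40)        # assumed spread of rate constants [1/s]
c = np.exp(-np.outer(rates, t)).mean(axis=0)

def stretched(t, tau, beta):
    """Kohlrausch (stretched-exponential) decay."""
    return np.exp(-(t / tau) ** beta)

(tau, beta), _ = curve_fit(stretched, t, c, p0=(0.05, 0.8))
print(f"tau = {tau:.4f} s, beta = {beta:.2f} (beta < 1: stretched decay)")
```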
Abstract:
The study of the large-sample distribution of the canonical correlations and variates in cointegrated models is extended from the first-order autoregression model to autoregression of any (finite) order. The cointegrated process considered here is nonstationary in some directions and stationary in others, but the first difference (the "error-correction form") is stationary. The asymptotic distribution of the canonical correlations between the first differences and the predictor variables, as well as of the corresponding canonical variables, is obtained under the assumption that the process is Gaussian. The method of analysis is similar to that used for the first-order process.
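For orientation, the squared canonical correlations between the first differences and the lagged levels can be computed from sample covariance matrices, in the style of a reduced-rank (Johansen-type) analysis. A small Python sketch on a simulated first-order cointegrated pair (the data-generating process and sample size are our assumptions, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000

# Cointegrated pair: x is a random walk, y tracks x with a stationary error,
# so the system has one unit root and one cointegrating relation (y - x).
x = np.cumsum(rng.normal(size=T))
y = x + rng.normal(size=T)
z = np.column_stack([x, y])

dz = np.diff(z, axis=0)    # first differences (the "error-correction form")
lvl = z[:-1]               # lagged levels as the predictor variables

# Squared canonical correlations = eigenvalues of S00^{-1} S01 S11^{-1} S10.
d0 = dz - dz.mean(axis=0)
l1 = lvl - lvl.mean(axis=0)
S00, S11, S01 = d0.T @ d0 / T, l1.T @ l1 / T, d0.T @ l1 / T
rho2 = np.linalg.eigvals(np.linalg.solve(S00, S01) @ np.linalg.solve(S11, S01.T))
print("squared canonical correlations:", np.sort(rho2.real)[::-1])
```

One canonical correlation is sizeable (the cointegrating direction) while the other is near zero (the unit-root direction).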
Abstract:
With the advent of the new extragalactic deuterium observations, Big Bang nucleosynthesis (BBN) is on the verge of a transformation. In the past, the emphasis has been on demonstrating the concordance of the BBN model with the abundances of the light isotopes extrapolated back to their primordial values using stellar and galactic evolution theories. As a direct measure of primordial deuterium is converged upon, the focus of the field will shift to using the much more precise primordial D/H to constrain the more flexible stellar and galactic evolution models (although the question of potential systematic error in 4He abundance determinations remains open). The remarkable success of the theory to date in establishing the concordance has led to the very robust conclusion of BBN regarding the baryon density. This robustness persists even under major model variations such as an assumed first-order quark-hadron phase transition. The BBN constraints on the cosmological baryon density are reviewed; they demonstrate that the bulk of the baryons are dark and also that the bulk of the matter in the universe is nonbaryonic. Comparisons are made with baryonic density arguments from Lyman-α clouds, x-ray gas in clusters, and the microwave anisotropy.
Abstract:
A dedicated mission to investigate exoplanetary atmospheres represents a major milestone in our quest to understand our place in the universe by placing our Solar System in context and by addressing the suitability of planets for the presence of life. EChO—the Exoplanet Characterisation Observatory—is a mission concept specifically geared for this purpose. EChO will provide simultaneous, multi-wavelength spectroscopic observations on a stable platform that will allow very long exposures. The use of passive cooling, few moving parts and well established technology gives a low-risk and potentially long-lived mission. EChO will build on observations by Hubble, Spitzer and ground-based telescopes, which discovered the first molecules and atoms in exoplanetary atmospheres. However, EChO's configuration and specifications are designed to study a number of systems in a consistent manner that will eliminate the ambiguities affecting prior observations. EChO will simultaneously observe a broad enough spectral region—from the visible to the mid-infrared—to constrain from one single spectrum the temperature structure of the atmosphere, the abundances of the major carbon- and oxygen-bearing species, the expected photochemically-produced species and magnetospheric signatures. The spectral range and resolution are tailored to separate bands belonging to up to 30 molecules and to retrieve the composition and temperature structure of planetary atmospheres. The target list for EChO includes planets ranging from Jupiter-sized with equilibrium temperatures T_eq up to 2,000 K, to those of a few Earth masses, with T_eq ∼ 300 K. The list will include planets with no Solar System analog, such as the recently discovered planet GJ 1214b, whose density lies between that of terrestrial and gaseous planets, or the rocky-iron planet 55 Cnc e, with a day-side temperature close to 3,000 K. As the number of detected exoplanets grows rapidly each year, and the mass and radius of those detected steadily decrease, the target list will be constantly adjusted to include the most interesting systems. We have baselined a dispersive spectrograph design covering the 0.4–16 μm spectral range continuously in 6 channels (1 in the visible, 5 in the infrared), which allows the spectral resolution to be adapted from several tens to several hundreds, depending on the target brightness. The instrument will be mounted behind a 1.5 m class telescope, passively cooled to 50 K, with the instrument structure and optics passively cooled to ∼45 K. EChO will be placed in a grand halo orbit around L2. This orbit, in combination with an optimised thermal shield design, provides a highly stable thermal environment and a high degree of visibility of the sky, allowing several tens of targets to be observed repeatedly over the year. Both the baseline and alternative designs have been evaluated and no critical items with Technology Readiness Level (TRL) less than 4–5 have been identified. We have also undertaken a first-order cost and development plan analysis and find that EChO is easily compatible with the ESA M-class mission framework.
Abstract:
The fact that fast oscillating homogeneous scalar fields behave on average as perfect fluids, together with their intrinsic isotropy, has made these models very fruitful in cosmology. In this work we analyse the dynamics of perturbations in these theories, assuming general power-law potentials V(φ) = λ|φ|^n/n. At leading order in the wavenumber expansion, a simple expression for the effective sound speed of perturbations is obtained: c_eff^2 = ω = (n − 2)/(n + 2), with ω the effective equation of state. We also obtain the first-order correction in k^2/ω_eff^2, valid when the wavenumber k of the perturbations is much smaller than the background oscillation frequency ω_eff. For the standard massive case we have also analysed general anharmonic contributions to the effective sound speed. These results are reached through a perturbed version of the generalized virial theorem and also by studying the exact system, both in the super-Hubble limit, deriving the natural ansatz for δφ, and for sub-Hubble modes, exploiting Floquet's theorem.
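For concreteness, evaluating the abstract's leading-order relation at the two most familiar exponents recovers the standard limiting behaviors:

```latex
% Direct evaluation of the abstract's formula c_eff^2 = w = (n-2)/(n+2):
\[
c_{\mathrm{eff}}^{2} = \omega = \frac{n-2}{n+2}:
\qquad
n = 2 \ \Rightarrow\ \omega = 0 \ \text{(matter-like, massive field)},
\qquad
n = 4 \ \Rightarrow\ \omega = \tfrac{1}{3} \ \text{(radiation-like)}.
\]
```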
Abstract:
The concept of hybrid control is applied to the offloading operation between an FPWSO and a shuttle tanker. Both vessels keep their positions and headings through the action of their Dynamic Positioning (DP) systems. The offloading operation takes about 24 hours to complete; during this period the sea state may change, and the drafts of the vessels are constantly changing. A hybrid controller is designed to allow the control/observation parameters to be modified whenever a significant change in the sea state and/or in the vessels' drafts occurs. The main objective of the controllers is to maintain the relative positioning between the two vessels, in order to avoid dangerous proximity or excessive tension in the hawser. With this in mind, a new control strategy acting on both vessels in an integrated way is proposed, based on differential geometry. Passivity-based nonlinear observers are applied to estimate position, velocity and external forces from calm to extreme seas. The criterion for switching the control/observation is based on the variation of the draft and on the sea state; the draft is assumed known, and the sea state is estimated from the peak frequency of the spectrum of the first-order motions of the vessels. A perturbation model is proposed to determine the number of controllers of the hybrid system. The equivalence between the geometric control and a controller based on Lagrange multipliers is demonstrated; under some hypotheses, the equivalence between the geometric and PD controllers is also presented. The performance of the new strategy is evaluated by means of numerical simulations and compared with a PD controller. The results show very good performance with respect to the proposed objective, and the comparison between the geometric approach and the PD controller indicates very similar performance.
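As a point of reference for the PD baseline used in the comparison, here is a minimal Python sketch of a PD station-keeping law on a one-degree-of-freedom surge model (the mass, gains and disturbance are illustrative assumptions; the thesis's actual designs are the geometric and hybrid controllers):

```python
import numpy as np

# Toy 1-DOF (surge) station-keeping model: m * a = thrust + disturbance.
m, dt = 1.0e7, 0.5       # assumed vessel mass [kg] and integration step [s]
kp, kd = 2.0e5, 4.0e6    # assumed PD gains
x_ref = 0.0              # desired relative position [m]

x, v = 5.0, 0.0          # initial offset [m] and velocity [m/s]
for step in range(int(600 / dt)):                    # 10-minute simulation
    disturbance = 2.0e4 * np.sin(0.1 * step * dt)    # stand-in wave/current load [N]
    thrust = -kp * (x - x_ref) - kd * v              # PD law on the position error
    v += (thrust + disturbance) / m * dt
    x += v * dt

print(f"final offset: {x:.3f} m, final speed: {v:.4f} m/s")
```

The gains above give an overdamped response, so the offset decays without overshoot and only a small residual oscillation driven by the disturbance remains.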
Abstract:
Impure systems contain Objects and Subjects: Subjects are human beings. We can distinguish a person acting as an observer (subjectively outside the system), who by definition is the Subject proper, from a person who is part of the system, in which case he acquires the category of object. Objects (relative beings) are significances, which are the consequence of perceptual beliefs on the part of the Subject about material or energetic objects (absolute beings) with certain characteristics. The IS (Impure System) approach is as follows: Objects are perceptual significances (relative beings) of material or energetic objects (absolute beings). The set of these objects forms an impure set of the first order. The relations existing between these relative objects are of two classes: transactions of matter and/or energy, and inferential relations. Transactions can have alethic modality: necessity, possibility, impossibility and contingency. The ontic existence of possibility entails that inferential relations have deontic modality: obligation, permission, prohibition, faculty and analogy. We distinguish between theorems (natural laws) and norms (ethical, legislative and customary rules of conduct).
Abstract:
In this paper, the authors extend and generalize the methodology based on the dynamics of systems that uses differential equations as state equations, allowing first-order transformed functions to be applied not only to the primitive or original variables but also to more complex expressions derived from them, and extending the rules that determine the generation of transforms of order higher than zero (the zero-order transforms being the variables or primitives). It is also shown that for every model of a complex reality there exists a model that is complex from both the syntactic and the semantic point of view. The theory is illustrated with a concrete model: the MARIOLA model.
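Read in terms of standard calculus, applying a first-order transform to an expression derived from the primitive variables amounts to the chain rule; a minimal illustration (our reading, not necessarily the authors' formalism):

```latex
% If the state equation is x' = f(x) and y = g(x) is a derived expression,
% the first-order transform of y follows from the chain rule:
\[
\dot x = f(x), \qquad y = g(x)
\;\Longrightarrow\;
\dot y = g'(x)\,\dot x = g'(x)\,f(x),
\qquad \text{e.g. } g(x) = x^{2} \Rightarrow \dot y = 2x\,f(x).
\]
```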
Abstract:
This Policy Contribution...discusses how Europe's financial system could and should be reshaped. It starts from two basic points: First, the banking system needs to be credibly de-linked from the sovereigns and banks should operate across borders. Europe needs fewer national champions. Second, other forms of financial intermediation need to be developed. Both steps require a significant stepping up of the policy system, including a single resolution mechanism. Together, this will render Europe’s financial system more stable, more efficient and more conducive to growth.