Abstract:
In this work, a methodology is proposed to find the dynamic poles of a capacitive pressure transmitter in order to enhance and extend the on-line surveillance of this type of sensor based on response time measurement, by applying noise analysis techniques and the Dynamic Data System. Several measurements taken from a pressurized water reactor have been analyzed. The methodology proposes an autoregressive fit whose order is determined by the sensor's dynamic poles. However, the analyzed signals could not be properly filtered to remove the plant noise, so the plant noise was modelled as an additional pair of complex conjugate poles. With this methodology we have obtained the numerical value of the sensor's second real pole, despite its weak influence on the sensor's dynamic response. This enables more accurate on-line sensor surveillance, since previous methods considered only one real pole.
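The core of the method is an autoregressive fit whose poles contain the sensor dynamics (plus an extra complex-conjugate pair absorbing unfiltered plant noise). A minimal Python sketch of that step, assuming a uniformly sampled noise record; `ar_poles` is an illustrative helper, not the paper's code:

```python
import numpy as np

def ar_poles(signal, order):
    """Fit an AR(order) model by least squares and return its poles.

    Illustrative helper: 'order' would be chosen from the expected
    number of sensor poles plus an extra complex-conjugate pair for
    the unfiltered plant noise.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    # Regression matrix of lagged samples: column k holds x[t-(k+1)].
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Poles are the roots of z^order - a1*z^(order-1) - ... - a_order.
    return np.roots(np.concatenate(([1.0], -coeffs)))
```

The discrete-time poles z map to continuous-time poles through s = ln(z)/Δt, from which a response time estimate can then be derived.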
Abstract:
The gust wind tunnel at IDR, Universidad Politécnica de Madrid (UPM), has been enhanced and the impact of the modification has been characterized. Several flow quality configurations have been tested. The problems of measuring gusty winds with Pitot tubes have been considered. Experimental results have been obtained and compared with theoretical results based on potential flow theory. A theoretical correction term has been proposed for unsteady flow measurements obtained with Pitot tubes. The effect of unsteady flow on structures and on bodies lying on the ground has also been considered. A theoretical model has been proposed for a semi-circular cylinder, and experimental tests have been performed to study the unsteady flow effects, which can help clarify the phenomenon.
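The abstract does not reproduce the correction term itself; as orientation only, the unsteady Bernoulli equation along the stagnation streamline suggests a correction of this general shape (an assumption about the form, not the paper's expression):

$$p_0(t) - p_\infty(t) = \tfrac{1}{2}\,\rho\,U^2(t) + \rho \int \frac{\partial u}{\partial t}\,\mathrm{d}s,$$

so a Pitot reading interpreted with the steady formula folds the flow-acceleration integral into the apparent dynamic pressure.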
Abstract:
In the current uncertain context affecting both the world economy and the energy sector, with rapidly increasing oil and gas prices and a very unstable political situation in some of the largest raw-material-producing countries, there is a need for efficient and powerful quantitative tools to model and forecast fossil fuel prices, CO2 emission allowance prices and electricity prices. This will improve decision making for all the agents involved in energy issues. Although there are papers focused on modelling fossil fuel prices, CO2 prices and electricity prices separately, the literature contains few attempts to consider all of them together. This paper focuses on building a multivariate model for the aforementioned prices and comparing its results with those of univariate models in terms of prediction accuracy (the univariate and multivariate models are compared over a large span of days, covering the first four months of 2011), as well as on extracting common features in the volatilities of all these relevant prices. The common features in volatility are extracted by means of a conditionally heteroskedastic dynamic factor model, which avoids the curse-of-dimensionality problem that commonly arises when estimating multivariate GARCH models. Additionally, the common volatility factors obtained are useful for improving forecasting intervals and have a clear economic interpretation. Moreover, the results obtained and the methodology proposed can serve as a starting point for risk management or portfolio optimization under uncertainty in the current context of energy markets.
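The factor idea can be conveyed with a rough two-step recipe: extract a common factor from the returns, then fit a univariate GARCH to it, instead of estimating a full multivariate GARCH. A hedged Python sketch using the third-party `arch` package; this simplification is not the paper's estimator:

```python
import numpy as np
from arch import arch_model  # pip install arch

def common_volatility_factor(returns):
    """Two-step sketch of a volatility factor model.

    'returns' is a (T, N) array of daily returns (fuels, CO2,
    electricity). Step 1: take the first principal component as a
    common factor, sidestepping the curse of dimensionality of a full
    multivariate GARCH. Step 2: fit a univariate GARCH(1, 1) to it.
    """
    r = returns - returns.mean(axis=0)
    # First right singular vector gives the first principal component.
    _, _, vt = np.linalg.svd(r, full_matrices=False)
    factor = r @ vt[0]
    # Univariate GARCH(1,1) on the common factor (scaled to percent).
    res = arch_model(100 * factor, vol="GARCH", p=1, q=1).fit(disp="off")
    return factor, res.conditional_volatility
```

The conditional volatility of the common factor is the kind of object the paper uses to widen or narrow forecasting intervals.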
Abstract:
Delamination reduces the strength of composites, mainly in compression. Several methods exist to overcome this problem, but they are either not feasible for large-scale production or too expensive. 3D composites are a promising solution.
Abstract:
The new reactor concepts proposed in the Generation IV International Forum (GIF) are conceived to improve the use of natural resources, reduce the amount of high-level radioactive waste and excel in reliability and safe operation. Among these novel designs, sodium fast reactors (SFRs) stand out due to their technological feasibility, as demonstrated in several countries during the last decades. As part of the contribution of EURATOM to GIF, the CP-ESFR is a collaborative project whose objectives include performing extensive analysis of safety issues for renewed SFR demonstrator designs. The verification, by code-to-code comparison, of computational tools able to simulate the plant behaviour under postulated accident conditions was identified as a key point in ensuring reactor safety. Along these lines, several organizations employed coupled neutronic and thermal-hydraulic system codes able to simulate the complex, specific, multi-physics phenomena of this particular fast reactor technology. The Introduction of this paper discusses the framework of this study; the second section describes the envisaged plant design and the commonly agreed modelling guidelines. The third section presents a comparative analysis of the calculations performed by each organisation applying its models and codes to a commonly agreed transient, with the objective of harmonizing the models and validating the implementation of all relevant physical phenomena in the different system codes.
Abstract:
In the smart building control industry, creating a platform that integrates different communication protocols and eases the interaction between users and devices is becoming increasingly important. BATMP is a platform designed to achieve this goal. In this paper, the authors describe a novel mechanism for information exchange, which introduces a new concept, the Parameter, and uses it as the common object among all the BATMP components: Gateway Manager, Technology Manager, Application Manager, Model Manager and Data Warehouse. A Parameter is an object that represents a physical quantity and contains information about its presentation, available actions, access type, etc. Each component of BATMP holds a copy of the parameters. In the Technology Manager, three drivers for different communication protocols, KNX, CoAP and Modbus, are implemented to expose devices as parameters. In the Gateway Manager, users can control the parameters directly or by defining a scenario. In the Application Manager, applications can subscribe to parameters and decide their values by negotiation. Finally, a Negotiator is implemented in the Model Manager to notify the other components about changes taking place in any component. Through this mechanism, BATMP enables simultaneous, concurrent communication among users, applications and devices.
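A minimal Python sketch of what such a shared Parameter object could look like; every name and field below is an illustrative guess at the concept, not BATMP's actual API:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Parameter:
    """Hypothetical sketch of BATMP's Parameter concept: one object
    standing for one physical quantity, shared by all components."""
    name: str    # e.g. "room1.temperature" (illustrative)
    unit: str    # presentation information
    access: str  # "read", "write" or "read/write"
    value: Any = None
    _subscribers: List[Callable[["Parameter"], None]] = field(default_factory=list)

    def subscribe(self, callback):
        # Applications register for change notifications.
        self._subscribers.append(callback)

    def update(self, value):
        # A protocol driver (KNX/CoAP/Modbus) or a user writes a new
        # value; a Negotiator-like loop notifies every subscriber.
        self.value = value
        for cb in self._subscribers:
            cb(self)
```

In this picture, a KNX driver would call `update()` when a telegram arrives, and a subscribed application would receive the notification and, if permitted by `access`, write back a negotiated value.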
Abstract:
Nowadays, HTTP adaptive streaming (HAS) has become a reliable distribution technology offering significant advantages in terms of both user-perceived Quality of Experience (QoE) and resource utilization for content and network service providers. By trading off video quality, HAS is able to adapt to the available bandwidth and display requirements so that it can deliver video content to a variety of devices over the Internet. However, until now there has not been enough knowledge of how the adaptation techniques affect the end user's visual experience. Therefore, this paper presents a comparative analysis of different bitrate adaptation strategies in adaptive streaming of monoscopic and stereoscopic video. This has been done through a subjective experiment testing the end-user response to video quality variations, taking visual comfort into account. The experimental outcomes give good insight into the factors that can influence the QoE of the different adaptation strategies.
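For context, the simplest family of strategies under comparison is throughput-based rate selection. A toy Python version of such a rule; this is illustrative only, not one of the strategies tested in the paper, and all thresholds are assumptions:

```python
def pick_bitrate(throughput_kbps, ladder, safety=0.8, buffer_s=0.0, min_buffer_s=10.0):
    """Toy throughput-based HAS adaptation rule.

    Choose the highest rung of the bitrate ladder that fits within a
    safety margin of the measured throughput; fall back to the lowest
    rung when the playout buffer runs low, to avoid stalls.
    """
    if buffer_s < min_buffer_s:
        return min(ladder)
    feasible = [b for b in ladder if b <= safety * throughput_kbps]
    return max(feasible) if feasible else min(ladder)
```

How aggressively such a rule switches rungs is exactly the kind of behaviour whose QoE impact, including visual comfort in stereoscopic video, the subjective experiment measures.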
Design and Simulation of Deep Nanometer SRAM Cells under Energy, Mismatch, and Radiation Constraints
Abstract:
Reliability is becoming the main concern in integrated circuits as technology scales below 22 nm. Small imperfections in device manufacturing now result in significant random differences in the devices' electrical characteristics, which must be taken into account during design. The new processes and materials required to fabricate such extremely small devices give rise to new effects that ultimately result in increased static power consumption or higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half of the chip area of today's SoCs and microprocessors, but process variations affect them critically, with the failure of a single cell making the whole memory fail. This thesis addresses the different challenges of SRAM design in the smallest technology nodes. In a scenario of increasing variability, it considers issues such as energy consumption, design aware of low-level technology effects, and radiation hardening. First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability that appear with the introduction of new devices and the reduction of their dimensions, accurate modeling of this variability is crucial.
We propose to extend the injectors method, which models variability at circuit level by abstracting its physical sources, with two new injectors that model the sub-threshold slope and drain-induced barrier lowering (DIBL), both of growing importance in FinFET technology. The two new injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate and circuit level. The mean square error when estimating performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the yield estimation improves by orders of magnitude. Low-power design is a major constraint given the fast-growing market of battery-powered mobile devices. It is also relevant because of the high power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of lowering the supply voltage to reduce energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that uses a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line voltage, the design achieves an energy reduction of up to 20% compared with a conventional cell. A new metric, the hold trip point, is introduced to capture the new failure modes of cells using a negative bit-line voltage, together with an alternative method to estimate cell speed that requires fewer simulations. With the continuous reduction of device sizes, new mechanisms are being introduced to ease the fabrication process and to meet the performance targets of successive nodes. One example is the compressive or tensile strain applied to the fins in FinFET technology, which alters the mobility of the transistors built from those fins. The effects of these mechanisms are strongly layout-dependent: transistors are affected by their neighbours, and different transistor types are affected in different ways. We propose complementary SRAM cells with pMOS pass-gates in order to shorten the fins of the nMOS devices and obtain long, uncut fins for the pMOS devices when the cell is placed in its array. Once shallow trench isolation (STI) and SiGe stressors are considered, the proposed design improves both kinds of transistor, boosting the performance of complementary SRAM cells by more than 10% for the same failure probability and static power consumption, with no area overhead. Finally, while radiation has been a traditional concern in space electronics, the small currents and voltages of the latest nodes are making them vulnerable to radiation-induced transient noise even at ground level. Even though SOI and FinFET technologies reduce the amount of energy transferred from a striking particle to the circuit, the large process variations of the smallest nodes will affect their radiation-hardening capabilities. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7 nm node compared with the nominal case. This increase is larger than the improvement achieved by radiation-hardened cell designs, suggesting that reducing process variability would bring a greater benefit.
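The flavour of injector-based statistical analysis can be conveyed with a toy Monte Carlo in Python; the distribution, the margin proxy and every constant below are illustrative assumptions, not the thesis' models:

```python
import numpy as np

rng = np.random.default_rng(0)

def sram_failure_probability(n_cells=100_000, vth_sigma=0.03, margin_v=0.12):
    """Toy Monte Carlo in the spirit of the injectors method.

    Per-transistor threshold-voltage shifts are drawn as random
    'injections', and a cell is counted as failing when a crude static
    noise margin proxy drops below zero. Sigma, nominal margin and the
    6-transistor cell are all assumptions for illustration.
    """
    # One Vth shift per transistor of a 6T cell, in volts.
    dvth = rng.normal(0.0, vth_sigma, size=(n_cells, 6))
    # Crude proxy: the margin degrades with the worst mismatch in the cell.
    margin = margin_v - np.abs(dvth).max(axis=1)
    return np.mean(margin < 0.0)
```

The thesis' point is that the accuracy of such failure estimates hinges on how faithfully the injected distributions capture effects like sub-threshold slope and DIBL variation.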
Abstract:
This article presents the first musculoskeletal model and simulation of upper brachial plexus injury. From this model it is possible to analyse forces and movement ranges in order to develop a robotic exoskeleton to improve rehabilitation. The software that currently exists for musculoskeletal modeling is varied, and most packages have advanced features for proper analysis and study of motion simulations. While the more powerful packages are usually expensive, there are free and open-source packages available that offer tools to perform animations and simulations and to obtain forces and moments of inertia. Among them, Musculoskeletal Modeling Software was selected to construct a model of the upper limb with 7 degrees of freedom and 10 muscles. These muscles are important for two of the movements simulated in this article, which are part of the post-surgery rehabilitation protocol. We produced several movement animations using an inertial measurement unit to capture real motion data from a human subject. We also simulated the forces produced in elbow flexion-extension and arm abduction-adduction for a healthy subject and for a subject with an upper brachial plexus injury in a postoperative state, to compare the force that can be produced in each case.
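As a simple illustration of the kind of quantity such simulations output, the static gravitational torque the elbow flexors must balance with the forearm held at an angle $\theta$ above the horizontal is (a textbook estimate with assumed symbols, not the article's model):

$$\tau_{\text{elbow}} = \left(m_{\text{forearm}}\,\ell_{\text{cm}} + m_{\text{hand}}\,\ell_{\text{hand}}\right) g \cos\theta,$$

where $\ell_{\text{cm}}$ and $\ell_{\text{hand}}$ are the distances from the elbow joint to the centres of mass of the forearm and hand, respectively. Comparing such torques for healthy and injured subjects is what drives the exoskeleton force requirements.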
Abstract:
Mouse CD1 (mCD1) molecules have been reported to present two types of antigens: peptides or proteins, and the glycolipid α-galactosylceramide. Here, we demonstrate that a protein antigen, chicken ovalbumin (Ova), must be processed to generate peptides presented by mCD1 to CD8+ T cells. The processing and mCD1-mediated presentation of chicken Ova depend on endosomal localization, because inhibitors of endosomal acidification and of endosomal recycling pathways block T cell reactivity. Furthermore, a cytoplasmic tail mutant of mCD1, which disrupts endosomal localization, has a greatly reduced capacity to present Ova to mCD1-restricted cells. Newly synthesized mCD1 molecules, however, are not required for Ova presentation, suggesting that molecules recycling from the cell surface are needed. Because these data show that mCD1 traffics to endosomes, where it can bind peptides derived from exogenous proteins, we conclude that peptide antigen presentation by mCD1 is likely to be a naturally occurring phenomenon. In competition assays, α-galactosylceramide did not inhibit Ova presentation, and presentation of the glycolipid was not inhibited by excess Ova or by the peptide epitope derived from it. This suggests that, although both lipid and peptide presentation may occur naturally, mCD1 may interact differently with these two types of antigens.
Abstract:
The functional specialization and hierarchical organization of multiple areas in rhesus monkey auditory cortex were examined with various types of complex sounds. Neurons in the lateral belt areas of the superior temporal gyrus were tuned to the best center frequency and bandwidth of band-passed noise bursts. They were also selective for the rate and direction of linear frequency modulated sweeps. Many neurons showed a preference for a limited number of species-specific vocalizations (“monkey calls”). These response selectivities can be explained by nonlinear spectral and temporal integration mechanisms. In a separate series of experiments, monkey calls were presented at different spatial locations, and the tuning of lateral belt neurons to monkey calls and spatial location was determined. Of the three belt areas, the anterolateral area shows the highest degree of specificity for monkey calls, whereas neurons in the caudolateral area display the greatest spatial selectivity. We conclude that the cortical auditory system of primates is divided into at least two processing streams: a spatial stream that originates in the caudal part of the superior temporal gyrus and projects to the parietal cortex, and a pattern or object stream originating in the more anterior portions of the lateral belt. A similar division of labor can be seen in human auditory cortex using functional neuroimaging.
Abstract:
The MKC7 gene was isolated as a multicopy suppressor of the cold-sensitive growth phenotype of a yeast kex2 mutant, which lacks the protease that cleaves pro-alpha-factor and other secretory proproteins at pairs of basic residues in a late Golgi compartment in yeast. MKC7 encodes an aspartyl protease most closely related to the product of the YAP3 gene, a previously isolated multicopy suppressor of the pro-alpha-factor processing defect of a kex2 null. Multicopy MKC7 suppressed the alpha-specific mating defect of a kex2 null as well as multicopy YAP3 did, but multicopy YAP3 was a relatively weak suppressor of kex2 cold sensitivity. Overexpression of MKC7 resulted in production of a membrane-associated proteolytic activity that cleaved an internally quenched fluorogenic peptide substrate on the carboxyl side of a Lys-Arg site. Treatment with phosphatidylinositol-specific phospholipase C shifted Mkc7 activity from the detergent to the aqueous phase in a Triton X-114 phase separation, indicating that membrane attachment of Mkc7 is mediated by a glycosyl-phosphatidylinositol anchor. Although disruption of MKC7 or YAP3 alone resulted in no observable phenotype, mkc7 yap3 double disruptants exhibited impaired growth at 37°C. Disruption of MKC7 and YAP3 in a kex2 null mutant resulted in profound temperature sensitivity and more generalized cold sensitivity. The synergism of the mkc7, yap3, and kex2 null mutations argues that Mkc7 and Yap3 are authentic processing enzymes whose functions overlap those of Kex2 in vivo.
Abstract:
The primary purpose of the paper is to analyze the conditional correlations, conditional covariances, and co-volatility spillovers between international crude oil and associated financial markets. The paper investigates co-volatility spillovers, namely the delayed effect of a returns shock in one physical or financial asset on the subsequent volatility or co-volatility of another physical or financial asset, between the oil and financial markets. The oil industry has four major producing regions, namely the North Sea, the USA, the Middle East, and South-East Asia. Associated with these regions are two major financial centers, the UK and the USA. For these reasons, the data to be used are the returns on alternative crude oil markets, the returns on crude oil derivatives, specifically futures, and stock index returns in the UK and USA. The paper will also analyze the Chinese financial markets, for which the data are more recent. The empirical analysis will be based on the diagonal BEKK model, from which the conditional covariances will be used for testing co-volatility spillovers and deriving policy recommendations. Based on these results, dynamic hedging strategies will be suggested to manage fluctuations in crude oil prices and associated financial markets.
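For reference, the diagonal BEKK recursion for the conditional covariance matrix $H_t$ of $m$ asset returns takes the standard form (the paper's exact specification may differ in detail):

$$H_t = C\,C' + A\,\varepsilon_{t-1}\,\varepsilon_{t-1}'\,A' + B\,H_{t-1}\,B',$$

with $C$ lower triangular and $A$, $B$ diagonal $m \times m$ matrices; the diagonal restriction keeps the parameter count manageable and guarantees a positive-definite $H_t$. Co-volatility spillovers are then read off how the off-diagonal elements $h_{ij,t}$ respond to the lagged shocks $\varepsilon_{t-1}$.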
Abstract:
We have measured experimental adsorption isotherms of water in zeolite LTA4A, and studied the regeneration process by performing subsequent adsorption cycles after degassing at different temperatures. We observed incomplete desorption at low temperatures, and cation rearrangement over successive adsorption cycles. We also developed a new molecular simulation force field able to reproduce the experimental adsorption isotherms in the temperature range between 273 K and 374 K. Small deviations observed at high pressures are attributed to the change in the water dipole moment at high loadings. The force field correctly describes the preferential adsorption sites of water at different pressures. We tested the influence of the zeolite structure, framework flexibility, and cation mobility on the adsorption and diffusion of water. Finally, we checked force-field transferability between different hydrophilic zeolite types, concluding that classical, non-polarizable water force fields are not transferable.
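Force fields for water in zeolites typically combine Lennard-Jones and Coulomb pair terms; the generic functional form fitted in such work is (the form is standard, while the fitted parameters are this paper's contribution):

$$U(r_{ij}) = 4\varepsilon_{ij}\!\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12} - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right] + \frac{q_i q_j}{4\pi\epsilon_0 r_{ij}},$$

where $\varepsilon_{ij}$, $\sigma_{ij}$ and the partial charges $q_i$ are tuned to reproduce the measured isotherms.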
Abstract:
Numerical modelling methodologies are important for engineering and scientific problems in which no analytical mathematical expression can be obtained to model the process. When the only available information is a set of experimental values of the variables that determine the state of the system, the modelling problem is equivalent to determining the hyper-surface that best fits the data. This paper presents a methodology based on the Galerkin formulation of the finite element method to obtain representations of relationships, defined a priori, between a set of variables: y = z(x1, x2, ..., xd). These representations are generated from the values of the variables in the experimental data. The piecewise approximation is an element of a Sobolev space and has derivatives defined in the weak sense in this space. This approach requires solving a linear system whose structure allows a fast solver algorithm. The algorithm can be used in a variety of fields, making it a multidisciplinary tool. The validity of the methodology is studied in two real applications: a problem in hydrodynamics and an engineering problem related to fluids, heat and transport in an energy generation plant. The predictive capacity of the methodology is also tested using cross-validation.
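A one-dimensional Python sketch of the ingredients the paper combines (the paper handles d dimensions): a piecewise-linear hat-function basis, a least-squares fit to scattered data, and a banded matrix that admits a fast solver. Basis size and smoothing weight are assumptions for illustration:

```python
import numpy as np

def fem_fit_1d(x, y, n_nodes=20, smooth=1e-3):
    """Galerkin-flavoured data fit: z_h(x) = sum_j c_j * phi_j(x).

    The coefficients are fitted to (x, y) by least squares, with a
    small first-derivative penalty so the system stays well posed in
    elements that hold no data points.
    """
    nodes = np.linspace(x.min(), x.max(), n_nodes)
    h = nodes[1] - nodes[0]
    # Evaluate every hat function at every sample point: (N, n_nodes).
    B = np.maximum(0.0, 1.0 - np.abs(x[:, None] - nodes[None, :]) / h)
    # Stiffness matrix of the derivative penalty: tridiagonal, i.e.
    # banded, which is what enables the fast solver the paper mentions.
    K = (np.diag(np.full(n_nodes, 2.0)) - np.diag(np.ones(n_nodes - 1), 1)
         - np.diag(np.ones(n_nodes - 1), -1)) / h
    K[0, 0] = K[-1, -1] = 1.0 / h
    coeffs = np.linalg.solve(B.T @ B + smooth * K, B.T @ y)
    return nodes, coeffs
```

Evaluating the fitted hyper-surface at a new point reuses the same hat functions, and cross-validation of the kind the paper performs amounts to refitting on subsets of (x, y) and scoring the held-out residuals.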