829 results for Mobile Robots Dynamic and Kinematic Modelling and Simulation


Relevance: 100.00%

Abstract:

Dynamical systems theory is used here as a theoretical language and tool to design a distributed control architecture for a team of two mobile robots that must transport a long object and simultaneously avoid obstacles. In this approach, modeling takes place at the level of behaviors. A “dynamics” of behavior is defined over a state space of behavioral variables (heading direction and path velocity). The environment is also modeled in these terms, by representing task constraints as attractors (i.e., asymptotically stable states) or repellers (i.e., unstable states) of the behavioral dynamics. For each robot, attractors and repellers are combined into a vector field that governs its behavior. The resulting dynamical systems that generate the behavior of the robots may be nonlinear. By design, the systems are tuned so that the behavioral variables are always very close to one attractor; the behavior of each robot is thus controlled by a time series of asymptotically stable states. Computer simulations support the validity of our dynamic model architectures.
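
The attractor/repeller idea can be sketched in a few lines. This is only an illustrative sketch, not the authors' controller: the functional forms and every parameter value (attractor strength, repeller strength and range) are assumptions.

```python
import math

# Heading direction phi evolves under one target attractor and one
# obstacle repeller; all parameter values are illustrative assumptions.

def dphi_dt(phi, psi_target, psi_obs, a=2.0, r=1.5, sigma=0.4):
    """Rate of change of heading: attractor at psi_target (stable),
    repeller at psi_obs (unstable) with limited angular range sigma."""
    attract = -a * math.sin(phi - psi_target)
    repel = r * (phi - psi_obs) * math.exp(-(phi - psi_obs) ** 2 / (2 * sigma ** 2))
    return attract + repel

def simulate(phi0, psi_target, psi_obs, dt=0.01, steps=2000):
    phi = phi0
    for _ in range(steps):
        phi += dt * dphi_dt(phi, psi_target, psi_obs)
    return phi

# Starting near the obstacle direction, the heading is pushed away from the
# repeller and relaxes close to the target direction: the behavioral
# variable ends up sitting in an attractor, as the abstract describes.
final = simulate(phi0=1.2, psi_target=0.0, psi_obs=1.0)
```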

Relevance: 100.00%

Abstract:

Pultrusion is an industrial process used to produce glass-fibre-reinforced polymer profiles. These materials are used worldwide where performance characteristics such as excellent electrical and magnetic insulation, a high strength-to-weight ratio, corrosion and weather resistance, long service life and minimal maintenance are required. In this study, we present the results of modelling and simulating the heat flow through a pultrusion die by means of Finite Element Analysis (FEA). The numerical simulation was calibrated against temperature profiles computed from thermographic measurements carried out during the pultrusion manufacturing process. The results obtained show a maximum deviation of 7%, which is considered acceptable for this type of analysis and is below the 10% value previously specified as the maximum deviation. © 2011, Advanced Engineering Solutions.
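
As a hedged illustration of the kind of transient heat-flow calculation such a die model performs, here is a 1D explicit finite-difference slab. The study itself used a full finite-element model; the geometry, material properties and boundary temperatures below are assumed round numbers, not data from the paper.

```python
# 1D transient conduction through a steel-like die wall: heated face at
# x = 0, insulated far face. Explicit scheme with a stable time step.

def heat_slab(t_heater=180.0, t_init=25.0, k=50.0, rho=7800.0, cp=490.0,
              length=0.05, nx=51, t_end=60.0):
    """Return the mid-thickness temperature (deg C) after t_end seconds."""
    alpha = k / (rho * cp)               # thermal diffusivity [m^2/s]
    dx = length / (nx - 1)
    dt = 0.4 * dx * dx / alpha           # satisfies the explicit stability limit
    T = [t_init] * nx
    T[0] = t_heater                      # fixed heater temperature
    for _ in range(int(t_end / dt)):
        Tn = T[:]
        for i in range(1, nx - 1):
            Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
        Tn[-1] = Tn[-2]                  # insulated far face
        T = Tn
    return T[nx // 2]

mid = heat_slab()   # between the initial and heater temperatures
```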

Relevance: 100.00%

Abstract:

Digital Businesses have become a major driver of economic growth and have seen an explosion of new startups. At the same time, the field also includes mature enterprises that have become global giants in a relatively short period of time. Digital Businesses have unique characteristics that make running and managing a Digital Business very different from running a traditional offline business. Digital businesses respond to online users who are highly interconnected and networked, which enables a rapid flow of word of mouth at a pace far greater than ever envisioned for traditional products and services. The relatively low cost of adding an incremental user has led to a variety of innovations in the pricing of digital products, including various forms of free and freemium pricing models. This thesis explores the unique characteristics and complexities of Digital Businesses and their implications for the design of Digital Business Models and Revenue Models. The thesis proposes an Agent-Based Modeling Framework that can be used to develop Simulation Models of the complex dynamics of Digital Businesses and the interactions between users of a digital product. Such Simulation Models can be used for a variety of purposes, such as simple forecasting, analysing the impact of market disturbances, analysing the impact of changes in pricing models, and optimising pricing for maximum revenue generation or for a balance between growth in usage and revenue generation. These models can be developed for a mature enterprise with a long historical record of user growth as well as for early-stage enterprises without much historical data. Through three case studies, the thesis demonstrates the applicability of the Framework and its potential applications.
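
A minimal agent-based word-of-mouth model gives the flavor of what such a framework generates. This is a generic sketch, not the thesis framework: the random contact network, the adoption rates and all parameter values are invented for illustration.

```python
import random

# Each agent adopts either through external influence ("innovation") or
# through word of mouth from already-adopted contacts ("imitation").

def simulate_adoption(n=2000, contacts=8, p_innovate=0.01, p_imitate=0.05,
                      steps=52, seed=1):
    rng = random.Random(seed)
    network = [[rng.randrange(n) for _ in range(contacts)] for _ in range(n)]
    adopted = [False] * n
    history = []                         # cumulative adopters per step
    for _ in range(steps):
        new = []
        for i in range(n):
            if adopted[i]:
                continue
            if rng.random() < p_innovate:                       # external influence
                new.append(i)
                continue
            exposure = sum(adopted[j] for j in network[i])
            if rng.random() < 1 - (1 - p_imitate) ** exposure:  # word of mouth
                new.append(i)
        for i in new:
            adopted[i] = True
        history.append(sum(adopted))
    return history

history = simulate_adoption()   # S-shaped cumulative adoption curve
```

Varying `p_innovate` and `p_imitate` against historical user-growth data is the kind of calibration and pricing what-if analysis the abstract mentions.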

Relevance: 100.00%

Abstract:

This work presents a newly developed test setup for dynamic out-of-plane loading using underwater Blast Wave Generators (WBWG) as the loading source. Underwater blasting has been, during the last decades, the subject of research and development on maritime blasting operations (including torpedo studies), aquarium tests for measuring the blasting energy of industrial explosives, and confined underwater blast wave generators. WBWG allow a wide range of blast impulses and surface-area distributions, avoid the generation of high-velocity fragments and reduce the atmospheric sound wave. A first objective of this work is to study the behavior of masonry infill walls subjected to blast loading. Three different masonry walls are to be studied, namely unreinforced masonry infill walls and two different reinforcement solutions; these solutions have previously been studied for seismic action mitigation. Subsequently, the walls will be simulated using an explicit finite-element code for validation and parametric studies. Finally, a tool to help designers make informed decisions on the use of infills under blast loading will be presented.

Relevance: 100.00%

Abstract:

BACKGROUND: In a simulation based on a pharmacokinetic model, we demonstrated that increasing the half-life of erythropoiesis-stimulating agents (ESAs) or shortening their administration interval decreases hemoglobin variability. The benefit of reducing the administration interval was, however, lessened by the variability induced by more frequent dose adjustments. The purpose of this study was to analyze the reticulocyte and hemoglobin kinetics and variability under different ESAs and administration intervals in a population of chronic hemodialysis patients. METHODS: The study was designed as an open-label, randomized, four-period cross-over investigation including 30 patients under chronic hemodialysis at the regional hospital of Locarno (Switzerland), starting in February 2010 and lasting 2 years. Four subcutaneous treatment strategies (C.E.R.A. every 4 weeks (Q4W) and every 2 weeks (Q2W); Darbepoetin alfa Q4W and Q2W) were compared with each other. The mean square successive difference of hemoglobin, reticulocyte count and ESA dose was used to quantify variability. We distinguished short- and long-term variability based on the weekly and monthly successive differences, respectively. RESULTS: No difference was found in the mean values of the biological parameters (hemoglobin, reticulocytes, and ferritin) between the 4 strategies. The ESA type did not affect hemoglobin and reticulocyte variability, but C.E.R.A. induced a more sustained reticulocyte response over time and increased the risk of hemoglobin overshooting (OR 2.7, p = 0.01). Shortening the administration interval lessened the amplitude of reticulocyte count fluctuations but resulted in more frequent ESA dose adjustments and in amplified reticulocyte and hemoglobin variability. The Q2W administration interval was, however, more favorable in terms of ESA dose, allowing a 38% C.E.R.A. dose reduction and no increase of Darbepoetin alfa.
CONCLUSIONS: The reticulocyte dynamic was a more sensitive marker of time instability of the hemoglobin response under ESA therapy. The ESA administration interval had a greater impact on hemoglobin variability than the ESA type. The more protracted reticulocyte response induced by C.E.R.A. could explain both the observed higher risk of overshoot and the significant increase in efficacy when shortening its administration interval. Trial registration: ClinicalTrials.gov NCT01666301.
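
The variability metric named in the methods, the mean square successive difference (MSSD), is straightforward to compute. The hemoglobin values below are made-up sample numbers, not study data.

```python
# Mean square successive difference: mean of the squared differences
# between consecutive measurements; larger values mean more fluctuation.

def mssd(values):
    diffs = [(b - a) ** 2 for a, b in zip(values, values[1:])]
    return sum(diffs) / len(diffs)

hemoglobin = [11.2, 11.6, 11.1, 11.9, 11.4, 11.5]   # g/dL, e.g. weekly values
weekly_mssd = mssd(hemoglobin)
```

Applying the same function to weekly versus monthly series is what separates the short-term from the long-term variability described in the abstract.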

Relevance: 100.00%

Abstract:

The subject of this research is forecasting the capacity requirements of the Fenix information system developed by TietoEnator Oy. The goal of the work is to become familiar with the different subsystems of the Fenix system, to find a way to separate and model the effect of each subsystem on the system load, and to determine, on a preliminary level, which parameters affect the load created by those subsystems. Part of this work is to examine different alternatives for simulation and to assess their suitability for modelling complex systems. Based on the collected information, a simulation model describing the load on the system's data warehouse is created. Using information obtained from the model together with measurements from the production system, the model is developed to correspond ever more closely to the behavior of the real system. From the model, the simulated system load and queue behavior, for example, are examined; from the production system, changes in the behavior of the different load sources are measured with respect to, for example, the number of users and the time of day. The results of this work are intended to serve as the basis for later follow-up research in which the parameterisation of the subsystems is further refined, the model's ability to describe the real system is enhanced, and the scope of the model is extended.
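
The queue behavior mentioned above is typically studied with a discrete-event simulation. Below is a minimal single-server queue with Poisson arrivals as an illustration of the technique; the arrival and service rates are invented, not measured Fenix parameters.

```python
import random

# Event-driven simulation of a single-server queue (M/M/1-style):
# we track the time-average number of requests in the system.

def simulate_queue(arrival_rate=8.0, service_rate=10.0, horizon=10_000.0, seed=7):
    rng = random.Random(seed)
    t, queue, area = 0.0, 0, 0.0
    next_arrival = rng.expovariate(arrival_rate)
    next_done = float("inf")
    while t < horizon:
        t_next = min(next_arrival, next_done, horizon)
        area += queue * (t_next - t)          # accumulate queue-length integral
        t = t_next
        if t == horizon:
            break
        if next_arrival <= next_done:         # arrival event
            queue += 1
            next_arrival = t + rng.expovariate(arrival_rate)
            if queue == 1:
                next_done = t + rng.expovariate(service_rate)
        else:                                 # departure event
            queue -= 1
            next_done = (t + rng.expovariate(service_rate)) if queue else float("inf")
    return area / horizon                     # mean number in system

mean_in_system = simulate_queue()   # theory predicts rho/(1-rho) = 4 here
```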

Relevance: 100.00%

Abstract:

This study puts forward a method to model and simulate the complex system of a hospital on the basis of multi-agent technology. Hospital agents with intelligent and coordinative characteristics were designed, the message object was defined, and the operating mechanism of autonomous activities and the coordination mechanism of the model were designed. In addition, an ontology library, a norm library, etc. were introduced using semiotic methods and theory to enrich the system-modelling method. Swarm was used to develop the multi-agent-based simulation system, which helps provide guidelines for hospitals to improve their organization and management, optimize working procedures, improve the quality of medical care, and reduce medical costs.
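
The agent-plus-message-object pattern the abstract describes can be sketched generically. This assumes nothing about the original Swarm implementation: the agent roles, message fields and the simple request/confirm coordination rule are all invented for illustration.

```python
from collections import deque

# Agents exchange message objects through a shared dispatcher and react
# autonomously to the messages they receive.

class Message:
    def __init__(self, sender, receiver, performative, content):
        self.sender, self.receiver = sender, receiver
        self.performative, self.content = performative, content

class Agent:
    def __init__(self, name, dispatcher):
        self.name, self.dispatcher = name, dispatcher
        self.log = []
    def send(self, receiver, performative, content):
        self.dispatcher.post(Message(self.name, receiver, performative, content))
    def receive(self, msg):
        self.log.append((msg.sender, msg.performative, msg.content))
        if msg.performative == "request":      # toy coordination rule
            self.send(msg.sender, "confirm", msg.content)

class Dispatcher:
    def __init__(self):
        self.agents, self.queue = {}, deque()
    def register(self, agent):
        self.agents[agent.name] = agent
    def post(self, msg):
        self.queue.append(msg)
    def run(self):
        while self.queue:
            msg = self.queue.popleft()
            self.agents[msg.receiver].receive(msg)

bus = Dispatcher()
reception, lab = Agent("reception", bus), Agent("lab", bus)
bus.register(reception)
bus.register(lab)
reception.send("lab", "request", "blood test for patient 17")
bus.run()   # lab confirms the request back to reception
```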

Relevance: 100.00%

Abstract:

We have incorporated a semi-mechanistic isoprene emission module into the JULES land-surface scheme, as a first step towards a modelling tool that can be applied to studies of vegetation–atmospheric chemistry interactions, including chemistry–climate feedbacks. Here, we evaluate the coupled model against local above-canopy isoprene emission flux measurements from six flux-tower sites, as well as satellite-derived estimates of isoprene emission over tropical South America and east and south Asia. The model simulates diurnal variability well: correlation coefficients are significant (at the 95% level) for all flux-tower sites. The model reproduces day-to-day variability with significant correlations (at the 95% confidence level) at four of the six flux-tower sites. At the UMBS site, a complete set of seasonal observations is available for two years (2000 and 2002); the model reproduces the seasonal pattern of emission during 2002, but does less well in 2000. The model overestimates observed emissions at all sites, partly because it does not include isoprene loss through the canopy. Comparison with the satellite-derived isoprene-emission estimates suggests that the model simulates the main spatial patterns as well as the seasonal and inter-annual variability over tropical regions. The model yields a global annual isoprene emission of 535 ± 9 TgC yr−1 during the 1990s, 78% of which comes from forested areas.
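
Semi-mechanistic isoprene modules are typically built around light and temperature activity factors of the Guenther type. The sketch below uses the commonly cited 1993 coefficient values; treat the exact numbers as illustrative assumptions rather than the JULES module's parameters.

```python
import math

# Guenther-type light (CL) and temperature (CT) activity factors for
# isoprene emission; coefficient values are assumptions for illustration.

R = 8.314                     # J mol-1 K-1
ALPHA, CL1 = 0.0027, 1.066
CT1, CT2 = 95_000.0, 230_000.0
T_S, T_M = 303.0, 314.0       # standard and optimum temperatures [K]

def light_factor(par):
    """PAR in umol m-2 s-1; increases with light, saturating near 1."""
    return ALPHA * CL1 * par / math.sqrt(1.0 + ALPHA**2 * par**2)

def temp_factor(t_leaf):
    """Rises with leaf temperature, peaks near T_M, then declines."""
    num = math.exp(CT1 * (t_leaf - T_S) / (R * T_S * t_leaf))
    den = 1.0 + math.exp(CT2 * (t_leaf - T_M) / (R * T_S * t_leaf))
    return num / den

def emission(base_rate, par, t_leaf):
    """Emission = basal rate at standard conditions times both factors."""
    return base_rate * light_factor(par) * temp_factor(t_leaf)
```

The strong light and temperature dependence of these factors is what drives the diurnal and seasonal variability evaluated in the abstract.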

Relevance: 100.00%

Abstract:

The research activity described in this thesis focuses mainly on the study of finite-element techniques applied to thermo-fluid dynamic problems of plant components, and on the study of dynamic simulation techniques applied to integrated building design in order to enhance the energy performance of the building. The first part of this doctoral thesis is a broad dissertation on the second-law analysis of thermodynamic processes, with the purpose of placing the issue of the energy efficiency of buildings within a wider cultural context that is usually not considered by professionals in the energy sector. In particular, the first chapter includes a rigorous scheme for the deduction of the expressions for the molar exergy and molar flow exergy of pure chemical fuels. The study shows that molar exergy and molar flow exergy coincide when the temperature and pressure of the fuel are equal to those of the environment in which the combustion reaction takes place. A simple method to determine the Gibbs free energy for non-standard values of the temperature and pressure of the environment is then presented. For hydrogen, carbon dioxide and several hydrocarbons, the dependence of the molar exergy on the temperature and relative humidity of the environment is reported, together with an evaluation of the molar exergy and molar flow exergy when the temperature and pressure of the fuel differ from those of the environment. As an application of second-law analysis, a comparison of the thermodynamic efficiency of a condensing boiler and of a heat pump is also reported. The second chapter presents a study of borehole heat exchangers, that is, polyethylene piping networks buried in the soil which allow a ground-coupled heat pump to exchange heat with the ground. After a brief overview of low-enthalpy geothermal plants, an apparatus designed and assembled by the author to carry out thermal response tests is presented.
Data obtained by means of in situ thermal response tests are reported and evaluated with a finite-element simulation method implemented in the software package COMSOL Multiphysics. The simulation method allows the precise determination of the effective thermal properties of the ground and of the grout, which are essential for the design of borehole heat exchangers. Beyond the study of a single plant component, namely the borehole heat exchanger, the third chapter presents a complete plant-design process for a zero-carbon building complex. The plant is composed of: 1) a ground-coupled heat pump system for space heating and cooling, with electricity supplied by photovoltaic solar collectors; 2) air dehumidifiers; 3) thermal solar collectors to cover 70% of the domestic hot water energy use, and a wood pellet boiler for the remaining domestic hot water energy use and for exceptional winter peaks. The chapter describes the design methodology adopted: 1) dynamic simulation of the building complex with the software package TRNSYS to evaluate its energy requirements; 2) ground-coupled heat pumps modelled by means of TRNSYS; and 3) evaluation of the total length of the borehole heat exchanger by an iterative method developed by the author. An economic feasibility study and an exergy analysis of the proposed plant, compared with two other plants, are reported. The exergy analysis was performed by considering the embodied energy of the components of each plant and the exergy loss during the operation of the plants.
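
For context, thermal response tests are classically evaluated with the infinite-line-source approximation (the thesis uses a finite-element model instead): for large times, the mean fluid temperature grows as T(t) ≈ (Q / (4·π·k·H))·ln(t) + const, so the ground conductivity k follows from the slope of T versus ln(t). The sketch below demonstrates this on synthetic data; all input values are invented.

```python
import math

# Estimate ground thermal conductivity from thermal-response-test data
# via the line-source slope: k = Q / (4*pi*H*slope).

def conductivity_from_trt(times_s, temps_c, heat_w, depth_m):
    """Least-squares slope of T vs ln(t), then k = Q / (4*pi*H*slope)."""
    x = [math.log(t) for t in times_s]
    n = len(x)
    mx, my = sum(x) / n, sum(temps_c) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, temps_c)) \
            / sum((xi - mx) ** 2 for xi in x)
    return heat_w / (4.0 * math.pi * depth_m * slope)

# Synthetic hourly data generated with k = 2.5 W/(m K), Q = 5 kW, H = 100 m;
# the estimator should recover the conductivity used to generate it.
Q, H, k_true = 5000.0, 100.0, 2.5
times = [3600.0 * h for h in range(10, 73)]
temps = [15.0 + Q / (4 * math.pi * k_true * H) * math.log(t) for t in times]
k_est = conductivity_from_trt(times, temps, Q, H)
```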

Relevance: 100.00%

Abstract:

Free radicals are present in cigarette smoke and can have a negative effect on human health by attacking lipids, nucleic acids, proteins and other biologically important species. However, because of the complexity of the tobacco-smoke system and the dynamic nature of radicals, little is known about the identity of the radicals, and debate continues on the mechanisms by which they are produced. In this study, acetyl radicals were trapped from the gas phase using 3-amino-2,2,5,5-tetramethyl-proxyl (3AP) on a solid support to form stable 3AP adducts for later analysis by high-performance liquid chromatography (HPLC), mass spectrometry and tandem mass spectrometry (MS and MS/MS), and liquid chromatography–mass spectrometry (LC-MS). Simulations of acetyl radical generation were performed using MATLAB and the Master Chemical Mechanism (MCM) programs. A range of 10–150 nmol/cigarette of acetyl radical was measured from the gas phase of tobacco smoke of both commercial and research cigarettes under several different smoking conditions. More radicals were detected with the puff smoking method than with continuous-flow sampling. Approximately twice as many acetyl radicals were trapped when a GF/F particle filter was placed before the trapping zone. Computational simulations show that NO/NO2 reacts with isoprene, initiating chain reactions that produce hydroxyl radicals, which abstract hydrogen from acetaldehyde to generate acetyl radicals. With initial concentrations of NO, acetaldehyde, and isoprene typical of a real-world cigarette-smoke scenario, these mechanisms can account for the full amount of acetyl radical detected experimentally. This study contributes to the overall understanding of free radical generation in gas-phase cigarette smoke.
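
The hydrogen-abstraction step named above (OH + CH3CHO → CH3CO + H2O) can be integrated numerically as a toy version of the MCM-style simulation. The rate constant is the commonly tabulated room-temperature value (~1.5e-11 cm³ molecule⁻¹ s⁻¹), the concentrations are invented round numbers, and holding [OH] constant is a deliberate simplification, so treat the whole block as illustrative.

```python
# Euler integration of d[CH3CO]/dt = k*[OH]*[CH3CHO] with constant [OH].
# All concentrations in molecules cm-3; all inputs are assumed values.

K_OH_CH3CHO = 1.5e-11          # cm3 molecule-1 s-1 (assumed literature value)

def acetyl_produced(oh, ch3cho0, t_end, dt=1e-3):
    ch3cho, acetyl, t = ch3cho0, 0.0, 0.0
    while t < t_end:
        rate = K_OH_CH3CHO * oh * ch3cho
        ch3cho -= rate * dt    # acetaldehyde consumed
        acetyl += rate * dt    # acetyl radical produced
        t += dt
    return acetyl

# 1e7 OH and 1e14 CH3CHO molecules cm-3, over 10 s of smoke residence time.
produced = acetyl_produced(oh=1.0e7, ch3cho0=1.0e14, t_end=10.0)
```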

Relevance: 100.00%

Abstract:

Mobile ad-hoc networks (MANETs) and wireless sensor networks (WSNs) have been attracting increasing attention for decades due to their broad civilian and military applications. Basically, a MANET or WSN is a network of nodes connected by wireless communication links. Due to the limited transmission range of the radio, many pairs of nodes in MANETs or WSNs may not be able to communicate directly, so they need other, intermediate nodes to forward packets for them. Routing in such networks is an important issue, and it poses great challenges due to the dynamic nature of MANETs and WSNs. On the one hand, the open-air nature of wireless environments brings many difficulties when an efficient routing solution is required. The wireless channel is unreliable due to fading and interference, which makes it impossible to maintain a quality path from a source node to a destination node. Additionally, node mobility aggravates network dynamics, causing frequent topology changes and bringing significant overheads for maintaining and recalculating paths. Furthermore, mobile devices and sensors are usually constrained by battery capacity and by computing and communication resources, which impose limitations on the functionality of routing protocols. On the other hand, the wireless medium possesses inherent unique characteristics which can be exploited to enhance transmission reliability and routing performance. Opportunistic routing (OR) is one promising technique that takes advantage of the spatial diversity and broadcast nature of the wireless medium to improve packet-forwarding reliability in multihop wireless communication. OR combats unreliable wireless links by involving multiple neighboring nodes (forwarding candidates) in the choice of packet forwarders. In opportunistic routing, a source node does not require an end-to-end path to transmit packets; the packet-forwarding decision is made hop-by-hop in a fully distributed fashion.
Motivated by the deficiencies of existing opportunistic routing protocols in dynamic environments such as mobile ad-hoc networks and wireless sensor networks, this thesis proposes a novel context-aware adaptive opportunistic routing scheme. Our proposal selects packet forwarders by simultaneously exploiting multiple types of cross-layer context information about nodes and their environment, and it significantly outperforms routing protocols that rely solely on a single metric. The adaptivity feature of our proposal enables network nodes to adjust their behavior at run-time according to network conditions. To accommodate the strict energy constraints in WSNs, this thesis also integrates an adaptive duty-cycling mechanism into opportunistic routing for wireless sensor nodes. Our approach dynamically adjusts the sleeping intervals of sensor nodes according to the monitored traffic load and the estimated energy-consumption rate. Through the integration of sensor-node duty cycling and opportunistic routing, our protocol provides a satisfactory balance between good routing performance and energy efficiency for WSNs.
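
The multi-metric candidate selection idea can be sketched as a simple ranking function. This is not the thesis protocol: the metric names, the weights and the linear combination are all assumptions chosen to show the pattern.

```python
# Rank forwarding candidates by a utility combining several cross-layer
# context metrics (all normalized to [0, 1]; weights are assumptions).

def rank_candidates(neighbors, weights=(0.5, 0.3, 0.2)):
    """neighbors: dicts with link_quality, progress (geographic advance
    toward the destination) and energy (residual battery)."""
    w_lq, w_pr, w_en = weights
    def utility(n):
        return w_lq * n["link_quality"] + w_pr * n["progress"] + w_en * n["energy"]
    return sorted(neighbors, key=utility, reverse=True)

neighbors = [
    {"id": "A", "link_quality": 0.9, "progress": 0.20, "energy": 0.8},
    {"id": "B", "link_quality": 0.6, "progress": 0.95, "energy": 0.5},
    {"id": "C", "link_quality": 0.4, "progress": 0.70, "energy": 0.9},
]
best = rank_candidates(neighbors)[0]["id"]
```

A single-metric protocol would pick A (best link) or C (most energy); combining the metrics favors B, which is the kind of trade-off the abstract argues for. Changing the weights at run-time is one simple way to realize the adaptivity the thesis describes.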

Relevance: 100.00%

Abstract:

Web surveys are becoming increasingly popular in survey research, including stated-preference surveys. Compared with face-to-face, telephone and mail surveys, web surveys may contain a different and new source of measurement error and bias: the type of device that respondents use to answer the survey questions. This is the first study that tests whether the use of mobile devices (tablets or smartphones) affects survey characteristics and stated preferences in a web-based choice experiment. The web survey on expanding renewable energy production in Germany was carried out with 3182 respondents, of whom 12% used a mobile device. Propensity score matching is used to account for selection bias in the use of mobile devices for survey completion. We find that mobile device users spent more time than desktop/laptop users to answer the survey. Yet, desktop/laptop users and mobile device users do not differ in acquiescence tendency as an indicator of extreme response patterns. For mobile device users only, we find a negative correlation between screen size and interview length and a positive correlation between screen size and acquiescence tendency. In the choice experiment data, we do not find significant differences in the tendency to choose the status quo option or in scale between the two subsamples. However, some of the estimates of implicit prices differ, albeit not in a unidirectional fashion. Model results for mobile device users indicate a U-shaped relationship between error variance and screen size. Together, the results suggest that using mobile devices is not detrimental to survey quality.
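
The matching step of propensity score matching can be illustrated independently of how the scores are estimated (typically by a logit model, which is outside this snippet). The sketch below performs greedy one-to-one nearest-neighbour matching on already-computed scores; the scores and unit ids are invented.

```python
# Greedy 1:1 nearest-neighbour matching without replacement on a
# previously estimated propensity score.

def nearest_neighbor_match(treated, control):
    """treated/control: lists of (unit_id, propensity_score) pairs.
    Returns a list of (treated_id, matched_control_id) pairs."""
    pool = list(control)
    pairs = []
    for t_id, t_score in sorted(treated, key=lambda t: t[1]):
        if not pool:
            break
        best = min(pool, key=lambda c: abs(c[1] - t_score))
        pairs.append((t_id, best[0]))
        pool.remove(best)                  # without replacement
    return pairs

mobile = [("m1", 0.31), ("m2", 0.74)]                  # "treated": mobile users
desktop = [("d1", 0.28), ("d2", 0.55), ("d3", 0.70)]   # "control": desktop users
pairs = nearest_neighbor_match(mobile, desktop)
```

Comparing outcomes (e.g. interview length) within the matched pairs is what removes the selection bias on observables that the abstract refers to.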

Relevance: 100.00%

Abstract:

The mechanisms of growth of a circular void by plastic deformation were studied by means of molecular dynamics in two dimensions (2D). While previous molecular dynamics (MD) simulations in three dimensions (3D) have been limited to small voids (up to ≈10 nm in radius), the 2D approach allows us to study the behavior of voids of up to 100 nm in radius. The MD simulations showed that plastic deformation was triggered by the nucleation of dislocations at the atomic steps of the void surface over the whole range of void sizes studied. The yield stress, defined as the stress necessary to nucleate stable dislocations, decreased with temperature, but the void growth rate was not very sensitive to this parameter. Simulations under uniaxial tension, uniaxial deformation and biaxial deformation showed that the void growth rate increased very rapidly with multiaxiality but did not depend on the initial void radius. These results were compared with previous 3D MD and 2D dislocation-dynamics simulations to establish a map of mechanisms and size effects for plastic void growth in crystalline solids.

Relevance: 100.00%

Abstract:

Reliability is becoming the main problem of integrated circuits as technology scales below 22 nm. Small imperfections in device fabrication now give rise to important random differences in their electrical characteristics, which must be taken into account during the design phase. The new processes and materials required to fabricate devices of such reduced dimensions are giving rise to effects that ultimately result in increased static power consumption or greater vulnerability to radiation. SRAM memories are already the most vulnerable part of an electronic system, not only because they represent more than half the area of current SoCs and microprocessors, but also because process variations affect them critically, with the failure of a single cell making the whole memory fail. This thesis addresses the different challenges posed by SRAM design in the smallest technologies. In a scenario of increasing variability, problems such as energy consumption, design that accounts for low-level technology effects, and radiation hardening are considered. First, given the increased variability of devices in the smallest technology nodes, as well as the appearance of new sources of variability due to the inclusion of new devices and the reduction of their dimensions, accurate modelling of that variability is crucial. The thesis proposes to extend the injector method, which models variability at circuit level while abstracting its physical causes, by adding two new sources that model the sub-threshold slope and DIBL, of growing importance in FinFET technology.
The two new proposed injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate and circuit level. The mean square error when simulating stability and performance metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the estimation of the failure probability improves by several orders of magnitude. Low-power design is one of today's main applications, given the growing importance of battery-powered mobile devices. It is equally necessary because of the high power densities of current systems, in order to reduce their thermal dissipation and its consequences for aging. The traditional method of lowering the supply voltage to reduce consumption is problematic for SRAM memories given the growing impact of variability at low voltages. A cell design is proposed that uses negative bit-line values to reduce write failures as the main supply voltage is lowered. Despite using a second power supply for the negative bit-line voltage, the proposed design reduces consumption by up to 20% compared with a conventional cell. A new metric, the hold trip point, is proposed to prevent new types of failure caused by the use of negative voltages, together with an alternative method to estimate read speed which reduces the number of simulations required. As the size of electronic devices continues to shrink, new mechanisms are introduced to ease the fabrication process or to reach the performance required by each new technology generation.
One example is the compressive or tensile stress applied to the fins in FinFET technologies, which alters the mobility of the transistors fabricated from those fins. The effects of these mechanisms depend strongly on the layout: the position of some transistors affects neighbouring transistors, and the effect can differ between transistor types. The thesis proposes a complementary SRAM cell that uses pMOS devices as pass-gate transistors, thereby shortening the fins of the nMOS transistors and lengthening those of the pMOS ones, extending them into neighbouring cells and up to the limits of the cell array. Considering the effects of STI and SiGe stressors, the proposed design improves both transistor types, boosting the performance of the complementary SRAM cell by more than 10% for the same failure probability and static power consumption, without requiring extra area. Finally, radiation has been a recurrent problem in electronics for space applications, but the reduced currents and voltages of current devices are making them vulnerable to radiation-induced noise even at ground level. Although technologies such as SOI or FinFET reduce the amount of energy collected by the circuit during a particle strike, the large process variations of the smallest nodes will affect their radiation immunity. It is shown that radiation-induced errors can increase by up to 40% at the 7 nm node when process variations are considered, compared with the nominal case. This increase is larger than the improvement obtained by designing radiation-hardened memory cells, suggesting that reducing variability would bring the greater improvement.
ABSTRACT: Reliability is becoming the main concern in integrated circuits as technology scales beyond 22 nm. Small imperfections in device manufacturing now result in important random differences between devices at the electrical level, which must be dealt with during design. New processes and materials, required to allow the fabrication of extremely short devices, are introducing new effects that ultimately result in increased static power consumption or higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half the chip area of today's SoCs and microprocessors, but they are also critical as soon as the different variation sources are considered, with failures in a single cell making the whole memory fail. This thesis addresses the different challenges that SRAM design faces in the smallest technologies. In a common scenario of increasing variability, issues like energy consumption, technology-aware design and radiation hardening are considered. First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability that appear as a consequence of new devices and shortened lengths, accurate modeling of the variability is crucial. We propose to extend the injector method, which models variability at circuit level while abstracting its physical sources, to better model the sub-threshold slope and drain-induced barrier lowering (DIBL), which are gaining importance in FinFET technology. The two new proposed injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate and circuit level. The mean square error when estimating performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the yield estimation is improved by orders of magnitude. Low-power design is a major constraint given the fast-growing market of battery-powered mobile devices.
It is also relevant because of the increased power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower the energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that uses a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line, the design achieves an energy reduction of up to 20% compared to a conventional cell. A new metric, the hold trip point, is introduced to deal with the new sources of failure that affect cells using a negative bit-line voltage, together with an alternative method to estimate cell speed that requires fewer simulations. With the continuous reduction of device sizes, new mechanisms need to be included to ease the fabrication process and to meet the performance targets of the successive nodes. As an example, consider the compressive or tensile strain included in FinFET technology, which alters the mobility of the transistors made out of the strained fins. The effects of these mechanisms are highly layout-dependent, with transistors being affected by their neighbors, and different types of transistors being affected in different ways. We propose to use complementary SRAM cells with pMOS pass-gates in order to reduce the fin length of the nMOS devices and achieve long uncut fins for the pMOS devices when the cell is placed in its array. Once shallow trench isolation and SiGe stressors are considered, the proposed design improves both kinds of transistors, boosting the performance of complementary SRAM cells by more than 10% for the same failure probability and static power consumption, with no area overhead.
While radiation has been a traditional concern in space electronics, the small currents and voltages used in the latest nodes are making them more vulnerable to radiation-induced transient noise, even at ground level. Even if SOI and FinFET technologies reduce the amount of energy transferred from a striking particle to the circuit, the large process variations that the smallest nodes will present will affect their radiation-hardening capabilities. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7 nm node compared to the nominal case. This increase is larger than the improvement achieved by radiation-hardened cells, suggesting that a reduction of process variations would bring a greater improvement.
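
The statistical core of variability-aware SRAM analysis is Monte Carlo yield estimation. The toy sketch below is only that statistical idea, not the thesis's calibrated injector models: the margin, the threshold-voltage sigma and the failure criterion are all invented numbers for illustration.

```python
import random

# Toy Monte Carlo failure-probability estimate for a 6T SRAM cell: each
# transistor gets a Gaussian threshold-voltage shift, and the cell "fails"
# when the worst shift eats up an assumed noise margin.

def failure_probability(n_cells=200_000, margin_mv=60.0, sigma_vt_mv=25.0,
                        seed=3):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_cells):
        # Worst-case |Vt shift| over the 6 transistors of the cell.
        worst = max(abs(rng.gauss(0.0, sigma_vt_mv)) for _ in range(6))
        if worst > margin_mv:          # margin exhausted -> cell fails
            failures += 1
    return failures / n_cells

p_fail = failure_probability()   # a few percent with these assumed numbers
```

In a real flow the per-transistor distributions come from calibrated variability injectors and the failure criterion from circuit simulation; the Monte Carlo loop around them is the same.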

Relevance: 100.00%

Abstract:

To foster ongoing international cooperation beyond ACES (APEC Cooperation for Earthquake Simulation) on the simulation of solid-earth phenomena, agreement was reached to work towards the establishment of a frontier international research institute for simulating the solid earth: iSERVO, the International Solid Earth Research Virtual Observatory institute (http://www.iservo.edu.au). This paper outlines a key Australian contribution to the iSERVO institute seed project, namely the construction of: (1) a typical intraplate fault-system model using practical fault-system data from South Australia (i.e., an SA interacting-fault model), which includes data management and editing, geometrical modeling and mesh generation; and (2) a finite-element-based software tool, built on our long-term and ongoing effort to develop the R-minimum-strategy-based finite-element computational algorithm and software tool for modelling three-dimensional nonlinear frictional contact behavior between multiple deformable bodies with the arbitrarily-shaped contact element strategy. A numerical simulation of the SA fault system is carried out using this software tool to demonstrate its capability and our efforts towards seeding the iSERVO Institute.