963 results for "simulation result"
Abstract:
In the present work, molecular dynamics simulations are carried out to investigate the static properties of amorphous silica (silicon dioxide) surfaces. Since the so-called BKS potential proposed by van Beest, Kramer, and van Santen was optimized for bulk systems, and since the charge distributions at surfaces differ markedly from those in the bulk, the applicability of this potential to surface systems is questionable. For this reason, we investigated to what extent the surface properties of systems equilibrated with the BKS potential change when they are subsequently relaxed with an ab initio simulation (Car-Parrinello method). Using the combination of the BKS and Car-Parrinello (CPMD) methods, we found that the systems expand further in the z direction as a result of this relaxation. Furthermore, especially for small rings (which occur only at the surface), there are clear deviations in the geometries (interatomic distances, angles, etc.) between the pure BKS method and the combined BKS-CPMD method. Using CPMD simulations, we showed that the interaction of a water molecule with a two-membered ring breaks up this ring structure and forms two silanol groups (SiOH). We also found that this is an exothermic reaction (energy difference 1.6 eV) for which an energy barrier of 1.1 eV must be overcome. Finally, the strongly deformed tetrahedra involved in forming the two-membered ring adopt a nearly ideal tetrahedral shape after the ring structure is broken up.
Abstract:
This thesis presents new methods to simulate systems with hydrodynamic and electrostatic interactions. Part 1 is devoted to computer simulations of Brownian particles with hydrodynamic interactions. The main influence of the solvent on the dynamics of Brownian particles is that it mediates hydrodynamic interactions. In the method, this is simulated by numerical solution of the Navier-Stokes equation on a lattice. To this end, the Lattice-Boltzmann method is used, namely its D3Q19 version. This model is capable of simulating compressible flow, which gives us the advantage of treating dense systems, in particular away from thermal equilibrium. The Lattice-Boltzmann equation is coupled to the particles via a friction force. In addition to this force, acting on point particles, we construct another coupling force, which comes from the pressure tensor. The coupling is purely local, i.e. the algorithm scales linearly with the total number of particles. In order to be able to map the physical properties of the Lattice-Boltzmann fluid onto a Molecular Dynamics (MD) fluid, the case of an almost incompressible flow is considered. The Fluctuation-Dissipation theorem for the hybrid coupling is analyzed, and a geometric interpretation of the friction coefficient in terms of a Stokes radius is given. Part 2 is devoted to the simulation of charged particles. We present a novel method for obtaining Coulomb interactions as the potential of mean force between charges which are dynamically coupled to a local electromagnetic field. This algorithm also scales linearly. We focus on the Molecular Dynamics version of the method and show that it is intimately related to the Car-Parrinello approach, while being equivalent to solving Maxwell's equations with a freely adjustable speed of light. The Lagrangian formulation of the coupled particle-field system is derived. The quasi-Hamiltonian dynamics of the system is studied in great detail. For implementation on the computer, the equations of motion are discretized with respect to both space and time. The discretization of the electromagnetic fields on a lattice, as well as the interpolation of the particle charges onto the lattice, is given. The algorithm is as local as possible: only nearest-neighbor lattice sites interact with a charged particle. Unphysical self-energies arise as a result of the lattice interpolation of charges and are corrected by a subtraction scheme based on the exact lattice Green's function. The method allows easy parallelization using standard domain decomposition. Some benchmarking results of the algorithm are presented and discussed.
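As an illustration of the friction coupling described above, here is a minimal Python sketch of the force on a point particle, with a random term chosen to satisfy the fluctuation-dissipation relation for a discrete time step. The function name, the interpolated fluid velocity `u_fluid`, and all parameter values are assumptions for illustration; the Lattice-Boltzmann update itself is not shown.

```python
import numpy as np

def friction_coupling_force(v_particle, u_fluid, gamma, kT, dt, rng):
    """Frictional coupling of a point particle to a lattice fluid (sketch).

    v_particle is the particle velocity and u_fluid the fluid velocity
    interpolated to the particle position (interpolation not shown).
    The noise amplitude is chosen so that the discrete-time
    fluctuation-dissipation relation <F_r F_r> = 2 gamma kT / dt holds.
    """
    drag = -gamma * (v_particle - u_fluid)
    noise = rng.normal(0.0, np.sqrt(2.0 * gamma * kT / dt), size=3)
    return drag + noise

# example: a particle at rest in a fluid moving with u = (0.01, 0, 0)
rng = np.random.default_rng(0)
F = friction_coupling_force(np.zeros(3), np.array([0.01, 0.0, 0.0]),
                            gamma=1.0, kT=1.0, dt=0.01, rng=rng)
print(F)
```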
Abstract:
During a project, managers encounter numerous contingencies and face the challenging task of making decisions that will effectively keep the project on track. This task is very challenging because construction projects are non-prototypical and their processes are irreversible. It is therefore critical to apply a methodological approach to develop a few alternative management decision strategies during the planning phase, which can be deployed to manage alternative scenarios resulting from expected and unexpected disruptions to the as-planned schedule. Such a methodology should have the following features, which are missing from the existing research: (1) examining the effects of local decisions on global project outcomes; (2) studying how a schedule responds to decisions and disruptive events, because the risk in a schedule is a function of the decisions made; (3) establishing a method to assess and improve management decision strategies; and (4) developing project-specific decision strategies, because each construction project is unique and the lessons from a particular project cannot easily be applied to projects with different contexts. The objective of this dissertation is to develop a schedule-based simulation framework to design, assess, and improve sequences of decisions for the execution stage. The contribution of this research is the introduction of decision strategies to manage a project and the establishment of an iterative methodology to continuously assess and improve decision strategies and schedules. Project managers or schedulers can implement the methodology to develop and identify schedules accompanied by suitable decision strategies at the planning stage. The developed methodology also lays the foundation for an algorithm that continuously and automatically generates satisfactory schedules and strategies throughout the construction life of a project. Different from studying isolated daily decisions, the proposed framework introduces the notion of decision strategies to manage the construction process. A decision strategy is a sequence of interdependent decisions determined by resource allocation policies, such as labor, material, equipment, and space policies. The schedule-based simulation framework consists of two parts, experiment design and result assessment. The core of the experiment design is an iterative method to test and improve decision strategies and schedules, based on the introduction of decision strategies and the development of a schedule-based simulation testbed. The simulation testbed used is the Interactive Construction Decision Making Aid (ICDMA). ICDMA, developed previously, has an emulator that duplicates the construction process and a random event generator that allows the decision-maker to respond to disruptions in the emulation. It is used to study how the schedule responds to these disruptions and to the corresponding decisions made over the duration of the project, while accounting for cascading impacts and dependencies between activities. The dissertation is organized into two parts. The first part presents the existing research, identifies the departure points of this work, and develops a schedule-based simulation framework to design, assess, and improve decision strategies. In the second part, the proposed schedule-based simulation framework is applied to investigate specific research problems.
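A decision strategy, as used above, maps the current project state to a resource decision at each step. The following toy Python loop is only a hedged sketch of that idea under invented disruption probabilities and costs; it is not the ICDMA emulator or the dissertation's actual framework.

```python
import random

def simulate_strategy(n_days, policy, p_disruption=0.15, seed=1):
    """Toy schedule-based simulation: 'policy' maps (day, current delay)
    to a resource decision.  Disruption rates, recovery and costs are
    illustrative placeholders only."""
    random.seed(seed)
    delay, cost = 0, 0.0
    for day in range(n_days):
        if random.random() < p_disruption:      # random disruptive event
            delay += random.choice([1, 2, 3])   # an activity slips 1-3 days
        action = policy(day, delay)             # strategy reacts to the state
        if action == "add_crew" and delay > 0:  # recover delay at extra cost
            delay -= 1
            cost += 1000.0
    return delay, cost

# two alternative strategies to compare over a 120-day schedule
aggressive = lambda day, delay: "add_crew" if delay > 0 else "hold"
passive    = lambda day, delay: "hold"

print(simulate_strategy(120, aggressive))
print(simulate_strategy(120, passive))
```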
Abstract:
This technical report discusses the application of the Lattice Boltzmann Method (LBM) and Cellular Automata (CA) simulation to fluid flow and particle deposition. The current work focuses on simulating incompressible flow past cylinders, in which the LBM D2Q9 model and CA techniques are used to simulate the fluid flow and particle loading, respectively. For the LBM part, the theory of boundary conditions is studied and verified using a Poiseuille flow test. For the CA part, several models for simulating particles are explained, and a new Digital Differential Analyzer (DDA) algorithm is introduced to simulate particle motion in the Boolean model. The numerical results are compared with a previous probability-velocity model by Masselot [Masselot 2000] and show satisfactory agreement.
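The DDA idea referenced above can be sketched as follows: the particle's straight-line motion over one time step is rasterized into the sequence of lattice cells it visits. This is a generic DDA in Python, not the report's exact implementation.

```python
def dda_cells(x0, y0, x1, y1):
    """Digital Differential Analyzer: list the lattice cells visited when a
    particle moves in a straight line from (x0, y0) to (x1, y1).
    Illustrative sketch of Boolean-model particle motion (positive
    coordinates assumed, cells identified by truncation)."""
    dx, dy = x1 - x0, y1 - y0
    steps = int(max(abs(dx), abs(dy))) or 1
    x_inc, y_inc = dx / steps, dy / steps
    cells, x, y = [], x0, y0
    for _ in range(steps + 1):
        cells.append((int(x), int(y)))   # cell containing the current position
        x += x_inc
        y += y_inc
    return cells

# particle advected by the local D2Q9 velocity over one time step
print(dda_cells(2.0, 3.0, 7.0, 5.5))
```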
Abstract:
The impact of the systematic variation of either ΔpKa or mobility of 140 biprotic carrier ampholytes on the conductivity profile of a pH 3-10 gradient was studied by dynamic computer simulation. A configuration with the greatest ΔpKa in the pH 6-7 range and uniform mobilities produced a conductivity profile consistent with that observed experimentally. A similar result was obtained when the neutral (pI = 7) ampholyte is assigned the lowest mobility and the mobilities of the other carriers are systematically increased as their pI values recede from 7. When equal ΔpKa values and mobilities are assigned to all ampholytes, a conductivity plateau is produced in the pH 5-9 region, which does not reflect what is seen experimentally. The variation in ΔpKa values is considered to most accurately reflect the electrochemical parameters of commercially available mixtures of carrier ampholytes. Simulations with unequal mobilities of the cationic and anionic species of the carrier ampholytes show either cathodic (greater mobility of the cationic species) or anodic (greater mobility of the anionic species) drifts of the pH gradient. The simulated cationic drifts compare well with those observed experimentally in a capillary in which the focusing of three dyes was followed by whole-column optical imaging. The cathodic drift flattens the acidic portion of the gradient and steepens the basic part. This phenomenon is an additional argument against the notion that focused zones of carrier ampholytes have no electrophoretic flux.
Abstract:
The fatality risk caused by avalanches on road networks can be analysed using a long-term approach, resulting in a mean value of risk, or with emphasis on short-term fluctuations due to the temporal variability of both the hazard potential and the damage potential. In this study, the approach for analysing the long-term fatality risk was adapted to model the highly variable short-term risk. The emphasis was on the temporal variability of the damage potential and the related risk peaks. For defined hazard scenarios resulting from classified amounts of snow accumulation, the fatality risk was calculated by modelling the hazard potential and observing the traffic volume. The avalanche occurrence probability was calculated using a statistical relationship between new snow height and observed avalanche releases. The number of persons at risk was determined from the recorded traffic density. The method yielded a value for the fatality risk within the observed time frame for the studied road segment. Both the long-term and the short-term fatality risk due to snow avalanches were compared to the average fatality risk due to traffic accidents. The application of the method showed that the long-term avalanche risk is lower than the fatality risk due to traffic accidents, whereas the analysis of the short-term avalanche-induced fatality risk revealed risk peaks 50 times higher than the statistical accident risk. Apart from situations with a high hazard level and high traffic density, risk peaks result both from a high hazard level combined with a low traffic density and from a high traffic density combined with a low hazard level. This provides evidence for the importance of the temporal variability of the damage potential for risk simulations on road networks. The assumed dependence of the risk calculation on the three-day sum of precipitation is a simplified model; further research is therefore needed for an improved determination of the diurnal avalanche probability. Nevertheless, the presented approach may contribute a conceptual step towards risk-based decision-making in risk management.
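The short-term risk calculation described above can be illustrated with a toy formula: release probability times the number of persons exposed on the endangered road stretch times a lethality factor. All parameter names and values below are placeholders, not the ones calibrated in the study.

```python
def fatality_risk(p_release, traffic_density_veh_per_km, track_width_km,
                  occupants_per_vehicle=1.7, lethality=0.3):
    """Short-term fatality risk for one hazard scenario on a road segment:
    avalanche release probability x persons inside the endangered stretch
    x probability of death given a hit.  All factor values are illustrative
    placeholders."""
    persons_exposed = (traffic_density_veh_per_km * track_width_km
                       * occupants_per_vehicle)
    return p_release * persons_exposed * lethality

# same hazard level, rush-hour versus night-time traffic density
print(fatality_risk(0.2, traffic_density_veh_per_km=40.0, track_width_km=0.1))
print(fatality_risk(0.2, traffic_density_veh_per_km=2.0,  track_width_km=0.1))
```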
Abstract:
An interim analysis is usually applied in later phase II or phase III trials to find convincing evidence of a significant treatment difference that may lead to trial termination earlier than originally planned. This can save patient resources and shorten drug development and approval time; ethics and economics are additional reasons to stop a trial early. In clinical trials of eyes, ears, knees, arms, kidneys, lungs, and other clustered treatments, the data may include distribution-free random variables with matched and unmatched subjects in one study. It is important to properly include both kinds of subjects in the interim and final analyses so that maximum efficiency of statistical and clinical inference can be obtained at the different stages of the trial. So far, no publication has applied a statistical method for distribution-free data with matched and unmatched subjects to the interim analysis of clinical trials. In this simulation study, the hybrid statistic was used to estimate the empirical powers and empirical type I errors among simulated datasets with different sample sizes, effect sizes, correlation coefficients for matched pairs, and data distributions, in the interim and final analyses, with four different group sequential methods. Empirical powers and empirical type I errors were also compared to those estimated using the meta-analysis t-test on the same simulated datasets. The results of this simulation study show that, compared to the meta-analysis t-test commonly used for normally distributed observations, the hybrid statistic has greater power for data from normally, log-normally, and multinomially distributed random variables with matched and unmatched subjects and with outliers. Power rose with increasing sample size, effect size, and correlation coefficient for the matched pairs. In addition, lower type I errors were observed with the hybrid statistic, indicating that this test is also conservative for data with outliers in the interim analysis of clinical trials.
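The kind of simulation described above can be sketched as an empirical power loop over simulated matched and unmatched data. Since the hybrid statistic itself is not given here, the sketch combines a paired and a two-sample t-test with Stouffer's method purely as a stand-in; the sample sizes, effect size, and correlation are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

def empirical_power(n_pairs=30, n_unmatched=30, effect=0.5, rho=0.5,
                    alpha=0.05, n_sim=2000, seed=42):
    """Empirical power (or type I error, when effect=0) of a combined
    analysis of matched and unmatched subjects.  Stouffer's combination of
    the paired and two-sample tests is only an illustration of the workflow,
    not the abstract's hybrid statistic."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    hits = 0
    for _ in range(n_sim):
        pairs = rng.multivariate_normal([0.0, effect], cov, size=n_pairs)
        t1, p1 = stats.ttest_rel(pairs[:, 1], pairs[:, 0])
        x = rng.normal(0.0, 1.0, n_unmatched)
        y = rng.normal(effect, 1.0, n_unmatched)
        t2, p2 = stats.ttest_ind(y, x)
        # signed z-scores from the two-sided p-values, Stouffer-combined
        z = (stats.norm.isf(p1 / 2) * np.sign(t1) +
             stats.norm.isf(p2 / 2) * np.sign(t2)) / np.sqrt(2)
        hits += 2 * stats.norm.sf(abs(z)) < alpha
    return hits / n_sim

print(empirical_power())           # power under the alternative
print(empirical_power(effect=0.0)) # empirical type I error under the null
```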
Abstract:
Multi-center clinical trials are very common in the development of new drugs and devices. One concern in such trials is the effect on the overall result of individual investigational sites enrolling small numbers of patients: can the presence of small centers cause an ineffective treatment to appear effective when the treatment-by-center interaction is not statistically significant? In this research, simulations are used to study the effect that centers enrolling few patients may have on the analysis of clinical trial data. A multi-center clinical trial with 20 sites is simulated to investigate the effect of a new treatment in comparison to a placebo. Twelve of these 20 investigational sites are considered small, each enrolling fewer than four patients per treatment group. Three clinical trials are simulated, with sample sizes of 100, 170, and 300. The simulated data are generated with various characteristics, one in which the treatment should be considered effective and another in which it is not. Qualitative interactions are also produced within the small sites to further investigate the effect of small centers under various conditions. Standard analysis-of-variance methods and the "sometimes-pool" testing procedure are applied to the simulated data. One model investigates treatment and center effects and the treatment-by-center interaction; another investigates the treatment effect alone. These analyses are used to determine the power to detect treatment-by-center interactions and the probability of type I error. We find that it is difficult to detect treatment-by-center interactions when only a few investigational sites enrolling a limited number of patients participate in the interaction. However, we find no increased risk of type I error in these situations. In a pooled analysis, when the treatment is not effective, the probability of finding a significant treatment effect in the absence of a significant treatment-by-center interaction is well within standard limits of type I error.
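A single replicate of such a simulation might look like the following sketch: 20 centers, 12 of them small, two arms per center, followed by a two-way ANOVA with a treatment-by-center interaction term. The center sizes and effect size are illustrative placeholders rather than the dissertation's exact design, and the `statsmodels` formula interface is used for the ANOVA.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def simulate_trial(n_large=6, n_small=3, trt_effect=0.0, seed=0):
    """One simulated 20-center trial: 8 larger and 12 small centers,
    two treatment arms per center.  Sizes and effects are placeholders."""
    rng = np.random.default_rng(seed)
    rows = []
    for c in range(20):
        n = n_large if c < 8 else n_small
        for trt in (0, 1):
            y = rng.normal(trt * trt_effect, 1.0, n)
            rows += [{"center": c, "trt": trt, "y": v} for v in y]
    return pd.DataFrame(rows)

df = simulate_trial(trt_effect=0.4, seed=1)
model = smf.ols("y ~ C(trt) * C(center)", data=df).fit()
print(anova_lm(model, typ=2))   # treatment, center and interaction terms
```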
Abstract:
The Benguela Current, located off the west coast of southern Africa, is tied to a highly productive upwelling system [1]. Over the past 12 million years, the current has cooled and upwelling has intensified [2,3,4]. These changes have been variously linked to atmospheric and oceanic changes associated with the glaciation of Antarctica and global cooling [5], the closure of the Central American Seaway [1,6], or the further restriction of the Indonesian Seaway [3]. The upwelling intensification also occurred during a period of substantial uplift of the African continent [7,8]. Here we use a coupled ocean-atmosphere general circulation model to test the effect of African uplift on Benguela upwelling. In our simulations, uplift in the East African Rift system and in southern and southwestern Africa induces an intensification of coastal low-level winds, which leads to increased oceanic upwelling of cool subsurface waters. We compare the effect of African uplift with the simulated impact of the Central American Seaway closure [9], Indonesian Throughflow restriction [10] and Antarctic glaciation [11], and find that African uplift has at least as strong an influence as each of the three other factors. We therefore conclude that African uplift was an important factor in driving the cooling and strengthening of the Benguela Current and coastal upwelling during the late Miocene and Pliocene epochs.
Abstract:
An efficient approach for the simulation of ion scattering from solids is proposed. For every atom encountered, we take multiple samples of its thermal displacements among those which result in scattering with a high probability of finally reaching the detector. As a result, the detector is illuminated by intensive “showers,” where each detection event must be weighted according to the actual probability of the atom displacement. The computational cost of such a simulation is orders of magnitude lower than in the direct approach, and a comprehensive analysis of multiple and plural scattering effects becomes possible. We use this method for two purposes. First, the accuracy of approximate approaches, developed mainly for ion-beam structural analysis, is verified. Second, the possibility of reproducing a wide class of experimental conditions is used to analyze some basic features of ion-solid collisions: the role of double violent collisions in low-energy ion scattering; the origin of the “surface peak” in scattering from amorphous samples; the low-energy tail in the energy spectra of scattered medium-energy ions due to plural scattering; and the degradation of blocking patterns in two-dimensional angular distributions with increasing depth of scattering. As an example of simulation for ions of MeV energies, we verify the time reversibility for channeling and blocking of 1-MeV protons in a W crystal. The possibilities for analysis that our approach offers may be very useful for various applications, in particular for structural analysis with atomic resolution.
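The weighting scheme described above is essentially importance sampling over thermal displacements: displacements are drawn from a biased distribution concentrated on values that reach the detector, and each detected event carries the weight p_thermal/p_biased. The one-dimensional "hit window" in this Python sketch stands in for the full scattering geometry and is purely illustrative.

```python
import numpy as np

def detector_yield(n_atoms=1000, sigma=0.05, bias_mean=0.12, bias_sigma=0.02,
                   hit_window=(0.10, 0.15), seed=3):
    """Importance-sampling sketch of the 'shower' idea: thermal displacements
    u are drawn from a biased Gaussian centred on displacements that send the
    ion towards the detector, and every detected event is weighted by
    p_thermal(u) / p_biased(u).  Geometry reduced to a 1-D hit window."""
    rng = np.random.default_rng(seed)
    u = rng.normal(bias_mean, bias_sigma, n_atoms)            # biased sampling
    p_true = np.exp(-u**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    p_bias = (np.exp(-(u - bias_mean)**2 / (2 * bias_sigma**2))
              / (bias_sigma * np.sqrt(2 * np.pi)))
    weights = p_true / p_bias
    hits = (u > hit_window[0]) & (u < hit_window[1])
    return weights[hits].sum() / n_atoms                      # weighted yield

print(detector_yield())
```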
Abstract:
Purpose: In this work, we present the analysis, design, and optimization of an experimental device recently developed in the UK, called the 'GP' Thrombus Aspiration Device (GPTAD). This device has been designed to remove blood clots without the need to make contact with the clot itself, thereby potentially reducing the risk of problems such as downstream embolisation. Method: To obtain the minimum pressure necessary to extract the clot and to optimize the device, we have simulated the performance of the GPTAD, analysing the effects of resistances, compliances, and inertances. We model a range of diameters for the GPTAD, considering different forces of adhesion of the blood clot to the artery wall and different lengths of blood clot. In each case we determine the optimum pressure required to extract the blood clot from the artery using the GPTAD, which is attached at its proximal end to a suction pump. Result: We then compare the results of our mathematical modelling to measurements made in the laboratory using plastic tube models of arteries of comparable diameter. We use abattoir porcine blood clots that are extracted using the GPTAD. The suction pressures required for clot extraction in the plastic tube models compare favourably with those predicted by the mathematical modelling. Discussion & Conclusion: We conclude that mathematical modelling is a useful technique for predicting the performance of the GPTAD and may potentially be used in optimising the design of the device.
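A rough sketch of the lumped-parameter idea: the minimum suction pressure is the pressure needed to overcome the clot's adhesion to the wall plus the resistive (Poiseuille) loss along the catheter. The compliance and inertance terms of the full model are omitted here, and all numbers are illustrative assumptions rather than the GPTAD's measured parameters.

```python
import math

def minimum_suction_pressure(d_catheter, l_catheter, clot_length,
                             adhesion_per_area, mu=3.5e-3, flow=1e-6):
    """Rough lumped-parameter estimate (Pa) of the suction pressure needed
    to dislodge and draw a clot into the device: wall adhesion converted to
    an equivalent pressure plus the Poiseuille loss along the catheter.
    Compliance and inertance effects are neglected; values are illustrative."""
    area = math.pi * (d_catheter / 2) ** 2            # catheter cross-section
    wall_area = math.pi * d_catheter * clot_length    # clot/wall contact area
    dp_adhesion = adhesion_per_area * wall_area / area
    resistance = 128 * mu * l_catheter / (math.pi * d_catheter**4)
    dp_viscous = resistance * flow
    return dp_adhesion + dp_viscous

# 2 mm catheter, 1 m long, 10 mm clot, 500 Pa adhesion per unit wall area
print(minimum_suction_pressure(2e-3, 1.0, 10e-3, 500.0))
```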
Abstract:
The need to simulate spectrum-compatible earthquake time histories has existed since earthquake engineering began to be applied to complicated structures. Beyond the safety of the main structure, the behaviour of equipment (piping, racks, etc.) can only be assessed on the basis of the time history of the floor on which it is mounted. This paper presents several methods for calculating simulated spectrum-compatible earthquakes, as well as a comparison between them. As a result of this comparison, the use of the phase content of real earthquakes, as proposed by Ohsaki, appears to be an effective alternative to the classical methods. With this method, it is possible to establish an approach without the arbitrary modulation commonly used in other methods. The different procedures are described, as is the influence of the different parameters that appear in the analysis. Several numerical examples are also presented, and the effectiveness of Ohsaki's method is confirmed.
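The core step of the phase-content approach can be sketched as follows: take the Fourier phases of a real recorded accelerogram, impose a prescribed Fourier amplitude spectrum, and transform back to the time domain. The iterative correction that brings the response spectrum onto the target is omitted, so this Python fragment is only an illustrative sketch, not Ohsaki's full procedure.

```python
import numpy as np

def phase_compatible_history(recorded_accel, target_amplitude):
    """Combine the phase content of a real record with a prescribed Fourier
    amplitude spectrum and return the corresponding time history.
    The iterative response-spectrum matching step is not included."""
    spectrum = np.fft.rfft(recorded_accel)
    phases = np.angle(spectrum)                  # phase content of the record
    new_spectrum = target_amplitude * np.exp(1j * phases)
    return np.fft.irfft(new_spectrum, n=len(recorded_accel))

# toy example: white-noise 'record' and a flat target amplitude spectrum
rng = np.random.default_rng(0)
record = rng.normal(size=2048)
target = np.ones(len(np.fft.rfftfreq(2048)))
synthetic = phase_compatible_history(record, target)
print(synthetic.shape, synthetic.std())
```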
Abstract:
The primary hypothesis stated by this paper is that the use of social choice theory in Ambient Intelligence systems can significantly improve user satisfaction when accessing shared resources. A research methodology based on agent-based social simulations is employed to support this hypothesis and to evaluate these benefits. The result is a six-fold contribution, summarized as follows. Firstly, several considerable differences between this application case and the most prominent social choice application, political elections, have been found and described. Secondly, given these differences, a number of metrics to evaluate different voting systems in this scope have been proposed and formalized. Thirdly, given the presented application and the proposed metrics, the performance of a number of well-known electoral systems is compared. Fourthly, as a result of the performance study, a novel voting algorithm capable of obtaining the best balance between the reviewed metrics is introduced. Fifthly, to improve social welfare in the experiments, the voting methods are combined with cluster analysis techniques. Finally, the article is complemented by a free and open-source tool, VoteSim, which not only ensures the reproducibility of the experimental results presented, but also allows the interested reader to adapt the presented case study to different environments.
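As a concrete example of the kind of voting rule such a study compares, here is a plain Borda count over the agents' preference rankings for a shared resource. It is one of the classical rules only; it is not the paper's novel algorithm or the VoteSim tool, and the option names are invented for illustration.

```python
from collections import defaultdict

def borda_winner(preference_profiles):
    """Borda count over agents' best-to-worst rankings of the shared-resource
    options; returns the winning option and the full score table."""
    scores = defaultdict(int)
    for ranking in preference_profiles:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position   # n-1 points for first place
    return max(scores, key=scores.get), dict(scores)

# three agents ranking what to show on a shared screen
profiles = [["news", "sports", "movie"],
            ["movie", "news", "sports"],
            ["movie", "sports", "news"]]
print(borda_winner(profiles))
```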
Design and Simulation of Deep Nanometer SRAM Cells under Energy, Mismatch, and Radiation Constraints
Abstract:
Reliability is becoming the main concern in integrated circuits as technology scales below 22 nm. Small imperfections in device manufacturing now result in significant random differences in the electrical characteristics of the devices, which must be dealt with during design. The new processes and materials required to fabricate such extremely small devices are giving rise to new effects that ultimately result in increased static power consumption or higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half of the chip area of today's SoCs and microprocessors, but process variations affect them critically, since the failure of a single cell makes the whole memory fail. This thesis addresses the different challenges that SRAM design faces in the smallest technologies. In a common scenario of increasing variability, issues such as energy consumption, design aware of low-level technology effects, and radiation hardening are considered. First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability that appear with the introduction of new devices and shortened dimensions, accurate modeling of this variability is crucial. We propose to extend the injectors method, which models variability at circuit level by abstracting its physical sources, with two new injectors that model the sub-threshold slope and drain-induced barrier lowering (DIBL), both of growing importance in FinFET technology. The two new injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate, and circuit level. The mean square error when simulating stability and performance metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the estimation of the failure probability is improved by several orders of magnitude. Low-power design is a major constraint given the fast-growing market of battery-powered mobile devices. It is equally relevant because of the high power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that uses a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line voltage, the proposed design achieves an energy reduction of up to 20% compared with a conventional cell. A new metric, the hold trip point, is introduced to deal with new failure mechanisms in cells using a negative bit-line voltage, together with an alternative method to estimate read speed that requires fewer simulations. As device sizes continue to shrink, new mechanisms are introduced to ease the fabrication process or to reach the performance targets of each successive technology node. One example is the compressive or tensile strain applied to the fins in FinFET technologies, which alters the mobility of the transistors built on those fins. The effects of these mechanisms are strongly layout-dependent: the position of a transistor affects its neighbors, and different types of transistors are affected in different ways. We propose a complementary SRAM cell that uses pMOS pass-gate transistors, thereby shortening the fins of the nMOS transistors and lengthening those of the pMOS transistors, which extend into neighboring cells and up to the limits of the cell array. Once shallow trench isolation (STI) and SiGe stressors are considered, the proposed design improves both types of transistors, boosting the performance of the complementary SRAM cell by more than 10% for the same failure probability and static power consumption, with no area overhead. Finally, radiation has been a long-standing concern in space electronics, but the small currents and voltages of today's devices are making them vulnerable to radiation-induced transient noise even at ground level. Although technologies such as SOI or FinFET reduce the amount of energy collected by the circuit when a particle strikes, the significant process variations of the smallest nodes will affect their radiation hardness. We demonstrate that radiation-induced errors can increase by up to 40% in the 7 nm node when process variations are considered, compared with the nominal case. This increase is larger than the improvement achieved by specifically radiation-hardened memory cell designs, suggesting that reducing variability would bring a greater benefit.
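The failure-probability estimation mentioned above can be illustrated with a toy Monte Carlo loop in which each transistor of the cell receives a random threshold-voltage shift and the cell "fails" when a surrogate stability margin drops below zero. The linear surrogate and all coefficients are invented stand-ins for the SPICE-level figures of merit used in the thesis.

```python
import numpy as np

def failure_probability(n_mc=200_000, sigma_vth=0.03, seed=7):
    """Toy Monte Carlo yield estimate for a 6-transistor SRAM cell:
    random threshold-voltage shifts for the six devices, failure when a
    surrogate stability margin falls below zero.  The surrogate margin and
    its coefficients are purely illustrative."""
    rng = np.random.default_rng(seed)
    dvth = rng.normal(0.0, sigma_vth, size=(n_mc, 6))
    # surrogate static-noise-margin: nominal margin minus weighted mismatch
    # of the two cross-coupled inverter pairs
    margin = (0.12
              - 1.2 * np.abs(dvth[:, 0] - dvth[:, 1])
              - 0.6 * np.abs(dvth[:, 2] - dvth[:, 3]))
    return np.mean(margin < 0.0)

print(failure_probability())                 # nominal variability
print(failure_probability(sigma_vth=0.045))  # 50 % more variability
```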