17 results for orders of worth

at Universidad Politécnica de Madrid


Relevance:

90.00%

Publisher:

Abstract:

An efficient approach for the simulation of ion scattering from solids is proposed. For every encountered atom, we take multiple samples of its thermal displacement, chosen among those that result in scattering with a high probability of finally reaching the detector. As a result, the detector is illuminated by intense “showers,” in which each detection event is weighted by the actual probability of the corresponding atom displacement. The computational cost of such a simulation is orders of magnitude lower than in the direct approach, and a comprehensive analysis of multiple and plural scattering effects becomes possible. We use this method for two purposes. First, the accuracy of the approximate approaches, developed mainly for ion-beam structural analysis, is verified. Second, the ability to reproduce a wide class of experimental conditions is used to analyze some basic features of ion-solid collisions: the role of double violent collisions in low-energy ion scattering; the origin of the “surface peak” in scattering from amorphous samples; the low-energy tail in the energy spectra of scattered medium-energy ions due to plural scattering; and the degradation of blocking patterns in two-dimensional angular distributions with increasing depth of scattering. As an example of simulation for ions of MeV energies, we verify the time reversibility of channeling and blocking for 1-MeV protons in a W crystal. The possibilities of analysis that our approach offers may be very useful for various applications, in particular for structural analysis with atomic resolution.
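The weighted-shower idea described above is, at its core, importance sampling: displacements are drawn from a biased distribution that favors detection, and each detected event is weighted by the ratio of the true to the biased probability density. A minimal one-dimensional sketch of that weighting (the Gaussian displacement model, the detection threshold and the shifted proposal are illustrative assumptions, not the paper's scattering geometry):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: an atom's thermal displacement x ~ N(0, sigma) leads to
# "detection" only for rare large displacements (x > threshold).
sigma, threshold = 0.1, 0.45
n = 100_000

# Direct approach: almost no sample ever reaches the detector.
x = rng.normal(0.0, sigma, n)
p_direct = np.mean(x > threshold)

# Importance sampling: draw displacements from a proposal shifted toward
# the detecting region, and weight each event by p(x)/q(x).
mu_q = threshold
xq = rng.normal(mu_q, sigma, n)
log_w = (-(xq**2) + (xq - mu_q) ** 2) / (2 * sigma**2)  # log of p(x)/q(x)
w = np.exp(log_w)
p_is = np.mean((xq > threshold) * w)

print(p_direct, p_is)  # the IS estimate resolves the rare-event probability
```

With the same sample budget, the direct estimate is dominated by shot noise (often zero detected events), while the weighted estimate recovers the ~10⁻⁶ detection probability accurately, which is the same economy the "shower" method exploits.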

Relevance:

90.00%

Publisher:

Abstract:

SiGe nanowires with Ge atomic fractions of up to 15% were grown and ex-situ n-type doped by diffusion from a solid source in contact with the sample. Dielectrophoresis was used to position single nanowires between pairs of electrodes in order to carry out electrical measurements. The measured resistance of the as-grown nanowires is very high, but it decreases by more than three orders of magnitude upon doping, indicating that the doping procedure used has been effective.

Relevance:

90.00%

Publisher:

Abstract:

El hormigón es uno de los materiales de construcción más empleados en la actualidad debido a sus buenas prestaciones mecánicas, moldeabilidad y economía de obtención, entre otras ventajas. Es bien sabido que tiene una buena resistencia a compresión y una baja resistencia a tracción, por lo que se arma con barras de acero para formar el hormigón armado, material que se ha convertido por méritos propios en la solución constructiva más importante de nuestra época. A pesar de ser un material profusamente utilizado, hay aspectos del comportamiento del hormigón que todavía no son completamente conocidos, como es el caso de su respuesta ante los efectos de una explosión. Este es un campo de especial relevancia, debido a que los eventos, tanto intencionados como accidentales, en los que una estructura se ve sometida a una explosión son, por desgracia, relativamente frecuentes. La solicitación de una estructura ante una explosión se produce por el impacto sobre la misma de la onda de presión generada en la detonación. La aplicación de esta carga sobre la estructura es muy rápida y de muy corta duración. Este tipo de acciones se denominan cargas impulsivas, y pueden ser hasta cuatro órdenes de magnitud más rápidas que las cargas dinámicas impuestas por un terremoto. En consecuencia, no es de extrañar que sus efectos sobre las estructuras y sus materiales sean muy distintos que las que producen las cargas habitualmente consideradas en ingeniería. En la presente tesis doctoral se profundiza en el conocimiento del comportamiento material del hormigón sometido a explosiones. Para ello, es crucial contar con resultados experimentales de estructuras de hormigón sometidas a explosiones. Este tipo de resultados es difícil de encontrar en la literatura científica, ya que estos ensayos han sido tradicionalmente llevados a cabo en el ámbito militar y los resultados obtenidos no son de dominio público. 
Por otra parte, en las campañas experimentales con explosiones llevadas a cabo por instituciones civiles el elevado coste de acceso a explosivos y a campos de prueba adecuados no permite la realización de ensayos con un elevado número de muestras. Por este motivo, la dispersión experimental no es habitualmente controlada. Sin embargo, en elementos de hormigón armado sometidos a explosiones, la dispersión experimental es muy acusada, en primer lugar, por la propia heterogeneidad del hormigón, y en segundo, por la dificultad inherente a la realización de ensayos con explosiones, por motivos tales como dificultades en las condiciones de contorno, variabilidad del explosivo, o incluso cambios en las condiciones atmosféricas. Para paliar estos inconvenientes, en esta tesis doctoral se ha diseñado un novedoso dispositivo que permite ensayar hasta cuatro losas de hormigón bajo la misma detonación, lo que además de proporcionar un número de muestras estadísticamente representativo, supone un importante ahorro de costes. Con este dispositivo se han ensayado 28 losas de hormigón, tanto armadas como en masa, de dos dosificaciones distintas. Pero además de contar con datos experimentales, también es importante disponer de herramientas de cálculo para el análisis y diseño de estructuras sometidas a explosiones. Aunque existen diversos métodos analíticos, hoy por hoy las técnicas de simulación numérica suponen la alternativa más avanzada y versátil para el cálculo de elementos estructurales sometidos a cargas impulsivas. Sin embargo, para obtener resultados fiables es crucial contar con modelos constitutivos de material que tengan en cuenta los parámetros que gobiernan el comportamiento para el caso de carga en estudio. 
En este sentido, cabe destacar que la mayoría de los modelos constitutivos desarrollados para el hormigón a altas velocidades de deformación proceden del ámbito balístico, donde dominan las grandes tensiones de compresión en el entorno local de la zona afectada por el impacto. En el caso de los elementos de hormigón sometidos a explosiones, las tensiones de compresión son mucho más moderadas, siendo las tensiones de tracción generalmente las causantes de la rotura del material. En esta tesis doctoral se analiza la validez de algunos de los modelos disponibles, confirmando que los parámetros que gobiernan el fallo de las losas de hormigón armado ante explosiones son la resistencia a tracción y su ablandamiento tras rotura. En base a los resultados anteriores se ha desarrollado un modelo constitutivo para el hormigón ante altas velocidades de deformación, que sólo tiene en cuenta la rotura por tracción. Este modelo parte del de fisura cohesiva embebida con discontinuidad fuerte, desarrollado por Planas y Sancho, que ha demostrado su capacidad en la predicción de la rotura a tracción de elementos de hormigón en masa. El modelo ha sido modificado para su implementación en el programa comercial de integración explícita LS-DYNA, utilizando elementos finitos hexaédricos e incorporando la dependencia de la velocidad de deformación para permitir su utilización en el ámbito dinámico. El modelo es estrictamente local y no requiere de remallado ni conocer previamente la trayectoria de la fisura. Este modelo constitutivo ha sido utilizado para simular dos campañas experimentales, probando la hipótesis de que el fallo de elementos de hormigón ante explosiones está gobernado por el comportamiento a tracción, siendo de especial relevancia el ablandamiento del hormigón.

Concrete is nowadays one of the most widely used building materials because of its good mechanical properties, moldability and production economy, among other advantages.
As is well known, it has high compressive strength but low tensile strength, and for this reason it is reinforced with steel bars to form reinforced concrete, a material that has become the most important constructive solution of our time. Despite being such a widely used material, there are aspects of concrete behavior that are not yet fully understood, as is the case with its response to the effects of an explosion. This is a topic of particular relevance because the events, both intentional and accidental, in which a structure is subjected to an explosion are, unfortunately, relatively common. The loading of a structure by an explosion is produced by the impact of the pressure shock wave generated in the detonation. The application of this load on the structure is very fast and of very short duration. Such actions are called impulsive loads, and can be up to four orders of magnitude faster than the dynamic loads imposed by an earthquake. Consequently, it is not surprising that their effects on structures and materials are very different from those produced by the loads usually considered in engineering. This thesis broadens the knowledge of the material behavior of concrete subjected to explosions. To that end, it is crucial to have experimental results of concrete structures subjected to explosions. Such results are difficult to find in the scientific literature, as these tests have traditionally been carried out by the armies of different countries and the results obtained are classified. Moreover, in experimental campaigns with explosives conducted by civil institutions, the high cost of accessing explosives and proper test fields does not allow for the testing of a large number of samples. For this reason, the experimental scatter is usually not controlled. However, in reinforced concrete elements subjected to explosions the experimental scatter is very pronounced.
This is due, first, to the heterogeneity of concrete itself and, second, to the difficulty inherent in testing with explosives, for reasons such as difficulties in the boundary conditions, variability of the explosive, or even changes in atmospheric conditions. To overcome these drawbacks, in this thesis we have designed a novel device that allows for testing up to four concrete slabs under the same detonation, which, apart from providing a statistically representative number of samples, represents a significant cost saving. A total of 28 slabs, both reinforced and plain concrete and of two different concrete mixes, were tested using this device. Besides experimental data, it is also important to have computational tools for the analysis and design of structures subjected to explosions. Although several analytical methods exist, numerical simulation techniques nowadays represent the most advanced and versatile alternative for the assessment of structural elements subjected to impulsive loading. However, to obtain reliable results it is crucial to have material constitutive models that account for the parameters that govern the behavior for the load case under study. In this regard, it is noteworthy that most constitutive models developed for concrete at high strain rates arise from the ballistic field, which is dominated by large compressive stresses in the local environment of the area affected by the impact. In the case of concrete elements subjected to an explosion, the compressive stresses are much more moderate, and tensile stresses are generally what cause material failure. This thesis discusses the validity of some of the available models, confirming that the parameters governing the failure of reinforced concrete slabs subjected to blast are the tensile strength and the softening behaviour after failure.
Based on these results, we have developed a constitutive model for concrete at high strain rates that only takes tensile failure into account. This model is based on the embedded cohesive crack model with strong discontinuity developed by Planas and Sancho, which has proved its ability to predict the tensile fracture of plain concrete elements. The model has been modified for implementation in the commercial explicit-integration program LS-DYNA, using hexahedral finite elements and incorporating strain-rate dependence to allow for its use in the dynamic domain. The model is strictly local and requires neither remeshing nor prior knowledge of the crack path. This constitutive model has been used to simulate two experimental campaigns, confirming the hypothesis that the failure of concrete elements subjected to explosions is governed by their tensile response, with the softening behavior of concrete being of particular relevance.

Relevance:

90.00%

Publisher:

Abstract:

Complex networks have been extensively used in the last decade to characterize and analyze complex systems, and they have recently been proposed as a novel instrument for the analysis of spectra extracted from biological samples. Yet, the large number of measurements composing a spectrum, and the consequent high computational cost, make a direct network analysis unfeasible. We here present a comparative analysis of three customary feature selection algorithms, including the binning of spectral data and the use of information-theoretic metrics. These algorithms are compared by assessing the score obtained in a classification task in which healthy subjects and people suffering from different types of cancer are to be discriminated. Results indicate that a feature selection strategy based on mutual information outperforms the more classical data binning, while allowing a reduction of the dimensionality of the data set by two orders of magnitude.
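A mutual-information feature selection of this kind ranks each spectral channel by how much information it carries about the class label and keeps only the top-ranked ones. The following self-contained sketch (synthetic data, a simple histogram-based MI estimator; not the paper's pipeline or data) illustrates the idea:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate I(X;Y) in nats between a continuous feature x and labels y."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])
    classes = {c: i for i, c in enumerate(np.unique(y))}
    joint = np.zeros((bins, len(classes)))
    for xi, yi in zip(xd, y):
        joint[xi, classes[yi]] += 1
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
n = 400
y = rng.integers(0, 2, n)            # two classes (e.g. healthy vs. cancer)
X = rng.normal(size=(n, 50))         # 50 spectral "channels"
X[:, 3] += 1.5 * y                   # only channel 3 carries class information

scores = np.array([mutual_information(X[:, j], y) for j in range(X.shape[1])])
top = np.argsort(scores)[::-1][:5]   # keep the top-5 channels
print(top[0])                        # the informative channel ranks first
```

Reducing 50 channels to 5 in this toy case mirrors the two-orders-of-magnitude dimensionality reduction reported for real spectra.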

Relevance:

90.00%

Publisher:

Abstract:

The determination of the plasma potential Vpl of unmagnetized plasmas using the floating potential of emissive Langmuir probes operated in the strong-emission regime is investigated. The experiments show that, in most cases, the electron thermionic emission is orders of magnitude larger than the plasma thermal electron current. The temperature-dependent floating potentials of negatively biased (Vp < Vpl) emissive probes are in agreement with the predictions of a simple phenomenological model that considers, in addition to the plasma electrons, an additional electron group that contributes to the probe current. The latter would be constituted by a fraction of the repelled electron thermionic current, which might return to the probe with a different energy spectrum. Its origin would be a plasma potential well formed in the sheath around the probe, acting as a virtual cathode, or collisions and electron thermalization processes. These results suggest that, for probe bias voltages close to the plasma potential (Vp ≈ Vpl), two electron populations coexist: the electrons from the plasma, with temperature Te, and a large group of returned thermionic electrons. These results question the theoretical possibility of measuring the electron temperature using emissive probes biased to potentials Vp ≲ Vpl.
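The qualitative behavior behind the floating-point method can be sketched with a much simpler current balance than the paper's model: the collected plasma electron current (Boltzmann-reduced below Vpl) equals the ion current plus the escaping thermionic current, and space-charge effects clamp the floating potential near Vpl at strong emission. All currents, temperatures and the hard clamp below are illustrative assumptions, not the authors' phenomenology:

```python
import math

def floating_potential(I_e0, I_i, I_em, Te, Vpl):
    """Floating potential from a simple current balance:
    I_e0 * exp((Vf - Vpl)/Te) = I_i + I_em,
    clamped at Vpl as a crude stand-in for the virtual-cathode limit."""
    Vf = Vpl + Te * math.log((I_i + I_em) / I_e0)
    return min(Vf, Vpl)

# Cold probe (no emission): floats well below the plasma potential.
V_cold = floating_potential(I_e0=1.0, I_i=0.01, I_em=0.0, Te=2.0, Vpl=10.0)
# Strongly emitting probe: the floating potential rises up toward Vpl.
V_hot = floating_potential(I_e0=1.0, I_i=0.01, I_em=50.0, Te=2.0, Vpl=10.0)
print(V_cold, V_hot)
```

The jump of the floating potential from far below Vpl up to (approximately) Vpl with increasing emission is what makes the strongly emitting probe a plasma-potential diagnostic in the first place.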

Relevance:

90.00%

Publisher:

Abstract:

An analytical expression is derived for the electron thermionic current from heated metals using a non-equilibrium, modified Kappa energy distribution for the electrons. This isotropic distribution characterizes the long high-energy tails in the electron energy spectrum for low values of the index κ and also accounts for the Fermi energy of the metal electrons. The limit of large κ recovers the classical equilibrium Fermi-Dirac distribution. The predicted electron thermionic current for low κ increases by between four and five orders of magnitude with respect to the predictions of the equilibrium Richardson-Dushman current. The observed departures from this classical expression, which is also recovered for large κ, would correspond to moderate values of this index. The strong increases predicted for the thermionic emission currents suggest that, under appropriate conditions, materials with non-equilibrium electron populations would become more efficient electron emitters at low temperatures.
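The equilibrium baseline against which the Kappa-distribution result is compared is the classical Richardson-Dushman law, J = A·T²·exp(−W/kT). A short numerical illustration of that baseline (tungsten's ≈4.5 eV work function used as an example; the paper's modified-Kappa expression itself is not reproduced here):

```python
import math

K_B = 8.617333262e-5   # Boltzmann constant, eV/K
A_RD = 1.20173e6       # Richardson constant, A m^-2 K^-2 (free-electron value)

def richardson_dushman(T, work_function_eV):
    """Equilibrium thermionic current density J = A*T^2*exp(-W/kT), in A/m^2."""
    return A_RD * T**2 * math.exp(-work_function_eV / (K_B * T))

# Tungsten (W ~ 4.5 eV): the emitted current grows enormously with temperature.
J_1500 = richardson_dushman(1500.0, 4.5)
J_2500 = richardson_dushman(2500.0, 4.5)
print(J_1500, J_2500, J_2500 / J_1500)
```

The exponential Boltzmann factor makes the equilibrium current negligible at low temperature, which is why a four-to-five order-of-magnitude enhancement from a long-tailed (low-κ) electron distribution is significant for low-temperature emitters.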

Relevance:

90.00%

Publisher:

Abstract:

The current space environment, consisting of man-made debris and micrometeoroids, poses a risk to safe operations in space, and the situation is continuously deteriorating due to in-orbit debris collisions and new satellite launches. Bare electrodynamic tethers can provide an efficient mechanism for the rapid deorbiting of satellites from low Earth orbit at end of life. Because of its particular geometry (length very much larger than cross-sectional dimensions), a tether may have a relatively high risk of being severed by a single impact of small debris. The rates of fatal impact of orbital debris on round and tape tethers of equal length and mass, evaluated with an analytical approximation to the debris flux modeled by NASA’s ORDEM2000, show a much higher survival probability for tapes. A comparative numerical analysis using the debris flux models ORDEM2000 and ESA’s MASTER2005 validates the analytical result and shows that, for a given time in orbit, a tape has a probability of survival about one and a half orders of magnitude higher than a round tether of equal mass and length. Because deorbiting from a given altitude is much faster for the tape due to its larger perimeter, its probability of survival in a practical sense is quite high.
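Survival against debris cuts is commonly modeled as a Poisson process: if fatal impacts arrive at a rate N_c per year, the probability of surviving a mission of t years is exp(−N_c·t). The sketch below uses that standard model with purely illustrative rates (not values from ORDEM2000 or MASTER2005) to show how a lower fatal-impact rate and a shorter deorbit time compound in the tape's favor:

```python
import math

def survival_probability(fatal_rate_per_year, years):
    """Poisson cut model: probability that no fatal debris impact occurs."""
    return math.exp(-fatal_rate_per_year * years)

# Illustrative numbers only: the round tether has a much larger fatal-impact
# rate than an equal-mass tape, and also deorbits more slowly.
p_tape = survival_probability(fatal_rate_per_year=0.03, years=1.0)
p_round = survival_probability(fatal_rate_per_year=0.9, years=2.0)
print(p_tape, p_round)
```

Because the exponent is the product of rate and exposure time, the tape benefits twice: once from the smaller fatal-impact rate and again from its faster deorbit.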

Relevance:

90.00%

Publisher:

Abstract:

Cracking of reinforced concrete can occur in certain environments due to rebar corrosion. The oxide layer growing around the bars introduces a pressure which may be enough to lead to the fracture of the concrete. To study this effect, the results of accelerated corrosion tests and finite element simulations are combined in this work. In previous works, a numerical model for the expansive layer, called the expansive joint element, was programmed by the authors to reproduce the effect of the oxide on the concrete. In that model, the expansion of the oxide layer in stress-free conditions is simulated as a uniform expansion perpendicular to the steel surface. The cracking of concrete is simulated by means of finite elements with an embedded adaptable cohesive crack that follows the standard cohesive model. In the present work, further accelerated tests with imposed constant current have been carried out on the same type of specimens tested in previous works (with an embedded steel tube), while measuring, among other things, the main-crack mouth opening. The tests have then been numerically simulated using the expansive joint element, with the tube as the corroding electrode (rather than a bar). From the comparison of numerical and experimental results, both for the crack mouth opening and the crack pattern, new insight is gained into the behavior of the oxide layer. In particular, a quantitative assessment of the oxide expansion relation is deduced from the experiments, and a narrower interval for the shear stiffness of the oxide layer is obtained, which could not be achieved using bars as the corroding element, because in that case the numerical results were insensitive to the shear stiffness of the oxide layer over many orders of magnitude.

Relevance:

90.00%

Publisher:

Abstract:

In this study, we present a structural and optoelectronic characterization of high-dose Ti-implanted Si that was subsequently pulsed-laser melted (Ti-supersaturated Si). Time-of-flight secondary ion mass spectrometry analysis reveals that the theoretical Mott limit has been surpassed after the laser process, and transmission electron microscopy images show a good lattice reconstruction. Optical characterization shows strong sub-band-gap absorption related to the high Ti concentration. Photoconductivity measurements show that Ti-supersaturated Si presents a spectral response orders of magnitude higher than that of unimplanted Si at energies below the band gap. We conclude that the observed below-band-gap photoconductivity cannot be attributed to structural defects produced by the fabrication processes, and suggest that both the absorption coefficient of the new material and the lifetime of photoexcited carriers have been enhanced due to the presence of a high Ti concentration. This remarkable result proves that Ti-supersaturated Si is a promising material for both infrared detectors and high-efficiency photovoltaic devices.

Relevance:

90.00%

Publisher:

Abstract:

The contraction of the actomyosin cytoskeleton, which is produced by the sliding of myosin II along actin filaments, drives important cellular activities such as cytokinesis and cell migration. To explain the contraction velocities observed in such physiological processes, we have studied the contraction of intact cytoskeletons of Dictyostelium discoideum cells after removing the plasma membrane using Triton X-100. The technique developed in this work allows for the quantitative measurement of contraction rates of individual cytoskeletons. The relationship of the contraction rates with forces was analyzed using three different myosins with different in vitro sliding velocities. The cytoskeletons containing these myosins were always contractile and the contraction rate was correlated with the sliding velocity of the myosins. However, the values of the contraction rate were two to three orders of magnitude slower than expected from the in vitro sliding velocities of the myosins, presumably due to internal and external resistive forces. The contraction process also depended on actin cross-linking proteins. The lack of α-actinin increased the contraction rate 2-fold and reduced the capacity of the cytoskeleton to retain internal materials, while the lack of filamin resulted in the ATP-dependent disruption of the cytoskeleton. Interestingly, the myosin-dependent contraction rate of intact contractile rings is also reportedly much slower than the in vitro sliding velocity of myosin, and is similar to the contraction rates of cytoskeletons (different by only 2–3 fold), suggesting that the contraction of intact cells and cytoskeletons is limited by common mechanisms.

Relevance:

90.00%

Publisher:

Abstract:

Colombia is one of the largest per capita mercury polluters in the world as a consequence of its artisanal gold mining activities. The severity of this problem in terms of potential health effects was evaluated by means of a probabilistic risk assessment carried out in the twelve departments (or provinces) of Colombia with the largest gold production. The two exposure pathways included in the risk assessment were inhalation of elemental Hg vapors and ingestion of fish contaminated with methylmercury. Exposure parameters for the adult population (especially rates of fish consumption) were obtained from nation-wide surveys, and concentrations of Hg in air and of methylmercury in fish were gathered from previous scientific studies. Fish consumption varied between departments and ranged from 0 to 0.3 kg d⁻¹. Average concentrations of total mercury in fish (70 data points) ranged from 0.026 to 3.3 µg g⁻¹. A total of 550 individual measurements of Hg in workshop air (ranging from <DL to 1 mg m⁻³) and 261 measurements of Hg in outdoor air (ranging from <DL to 0.652 mg m⁻³) were used to generate the probability distributions used as concentration terms in the calculation of risk. All but two of the distributions of Hazard Quotients (HQ) associated with the ingestion of Hg-contaminated fish for the twelve regions evaluated presented median values higher than the threshold value of 1, and the 95th percentiles ranged from 4 to 90. In the case of exposure to Hg vapors, minimum values of HQ for the general population exceeded 1 in all the towns included in this study, and the HQs for miner-smelters burning the amalgam are two orders of magnitude higher, reaching values of 200 for the 95th percentile.
Even acknowledging the conservative assumptions included in the risk assessment and the uncertainties associated with it, its results clearly reveal the exorbitant levels of risk endured not only by miner-smelters but also by the general population of artisanal gold mining communities in Colombia.
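A probabilistic assessment of this kind propagates parameter distributions through the standard Hazard Quotient for ingestion, HQ = (C × IR / BW) / RfD, by Monte Carlo sampling. The sketch below uses that textbook formula with illustrative distributions (not the study's fitted ones; the 0.1 µg/kg/day methylmercury reference dose is the commonly cited EPA value):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative input distributions (not the study's fitted parameters):
C = rng.lognormal(mean=np.log(0.5), sigma=0.8, size=n)   # MeHg in fish, ug/g
IR = rng.uniform(0.0, 0.3, size=n) * 1000                # fish intake, g/day
BW = rng.normal(70.0, 10.0, size=n).clip(40, 120)        # body weight, kg
RfD = 0.1                                                # MeHg ref. dose, ug/kg/day

dose = C * IR / BW                                       # daily dose, ug/kg/day
HQ = dose / RfD                                          # Hazard Quotient

print(np.median(HQ), np.percentile(HQ, 95))
```

Reporting the median and the 95th percentile of the resulting HQ distribution, rather than a single deterministic value, is what allows statements such as "median values higher than 1" in the abstract above.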

Relevance:

90.00%

Publisher:

Abstract:

Fast ignition of inertial fusion targets driven by quasi-monoenergetic ion beams is investigated by means of numerical simulations. Light and intermediate ions such as lithium, carbon, aluminum and vanadium have been considered. Simulations show that the minimum ignition energies of an ideal configuration of compressed deuterium-tritium are almost independent of the ion atomic number. However, they are obtained for increasing ion energies, which scale approximately as Z², where Z is the ion atomic number. Assuming that the ion beam can be focused into 10 µm spots, a new irradiation scheme is proposed to reduce the ignition energies. The combination of intermediate-Z ions, such as 5.5 GeV vanadium, and the new irradiation scheme allows a reduction of the number of ions required for ignition by roughly three orders of magnitude when compared with the standard proton fast-ignition scheme.

Relevance:

90.00%

Publisher:

Abstract:

Optical filters are crucial elements in optical communications. The influence of cascaded filters on the optical signal can seriously affect communication quality. In this paper we study and simulate the optical signal impairment caused by different kinds of filters, including Butterworth, Bessel, Fiber Bragg Grating (FBG) and Fabry-Perot (FP) filters. Optical signal impairment is analyzed from the point of view of the Eye Opening Penalty (EOP) and the optical spectrum. The simulation results show that when the center frequency of all filters is aligned with the laser’s frequency, the Butterworth filter has the smallest influence on the signal while the FP has the largest. For a -1 dB EOP, the number of cascaded Butterworth optical filters with a bandwidth of 50 GHz is 18 in 40 Gbps NRZ-DQPSK systems and 12 in 100 Gbps PM-NRZ-DQPSK systems. These values are reduced to 9 and 6, respectively, for Fabry-Perot optical filters. In the presence of frequency misalignment, the impairment caused by the filters is more serious. Our research shows that with a frequency deviation of 5 GHz, only 12 and 9 Butterworth optical filters can be cascaded in 40 Gbps NRZ-DQPSK and 100 Gbps PM-NRZ-DQPSK systems, respectively. We also study the signal impairment caused by different orders of the Butterworth filter model. Our study shows that although a higher order has a smaller clipping effect on the transmission spectrum, it introduces a more serious phase ripple that severely affects the signal. Simulation results show that the 2nd-order Butterworth filter has the best performance.
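The amplitude part of the cascading penalty can be seen directly from the Butterworth magnitude response, |H(f)|² = 1/(1 + (f/f₃dB)^(2n)): cascading N identical filters raises this to the N-th power and narrows the overall passband, clipping the signal spectrum. A minimal numerical sketch (18 cascaded 2nd-order 50 GHz filters, as in the abstract's -1 dB EOP case; phase ripple, which the paper finds dominant at high orders, is not modeled here):

```python
import numpy as np

def butterworth_power(f, f3dB, order):
    """Magnitude-squared response of a Butterworth filter vs. frequency offset."""
    return 1.0 / (1.0 + (f / f3dB) ** (2 * order))

f = np.linspace(0, 60e9, 60001)            # offset from center frequency, Hz
single = butterworth_power(f, 25e9, 2)     # one 50 GHz (+/-25 GHz) 2nd-order filter
cascade = single ** 18                     # 18 cascaded identical filters

def half_power_freq(f, H):
    """Frequency offset at which the response falls to one half."""
    return f[np.argmin(np.abs(H - 0.5))]

bw_single = 2 * half_power_freq(f, single)
bw_cascade = 2 * half_power_freq(f, cascade)
print(bw_single / 1e9, bw_cascade / 1e9)   # cascading narrows the passband
```

The effective 3 dB bandwidth shrinks from 50 GHz to roughly 22 GHz after 18 stages, which is why the tolerable number of cascaded filters drops further when the signal is wider (100 Gbps) or the filter centers are misaligned.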

Relevance:

90.00%

Publisher:

Abstract:

In this article, the network configurations for the fulfillment and distribution of online orders of two British retailers are analyzed and compared. For this purpose, a conceptual framework is proposed that consists of the following key aspects: network configuration, transportation management and location of demand. The analysis shows that the ideal degree of centralization is not obvious to determine in each case. Finally, the future development of an analytic tool to help choose the most appropriate model is suggested.

Relevance:

90.00%

Publisher:

Abstract:

La fiabilidad está pasando a ser el principal problema de los circuitos integrados según la tecnología desciende por debajo de los 22nm. Pequeñas imperfecciones en la fabricación de los dispositivos dan lugar ahora a importantes diferencias aleatorias en sus características eléctricas, que han de ser tenidas en cuenta durante la fase de diseño. Los nuevos procesos y materiales requeridos para la fabricación de dispositivos de dimensiones tan reducidas están dando lugar a diferentes efectos que resultan finalmente en un incremento del consumo estático, o una mayor vulnerabilidad frente a radiación. Las memorias SRAM son ya la parte más vulnerable de un sistema electrónico, no solo por representar más de la mitad del área de los SoCs y microprocesadores actuales, sino también porque las variaciones de proceso les afectan de forma crítica, donde el fallo de una única célula afecta a la memoria entera. Esta tesis aborda los diferentes retos que presenta el diseño de memorias SRAM en las tecnologías más pequeñas. En un escenario de aumento de la variabilidad, se consideran problemas como el consumo de energía, el diseño teniendo en cuenta efectos de la tecnología a bajo nivel o el endurecimiento frente a radiación. En primer lugar, dado el aumento de la variabilidad de los dispositivos pertenecientes a los nodos tecnológicos más pequeños, así como a la aparición de nuevas fuentes de variabilidad por la inclusión de nuevos dispositivos y la reducción de sus dimensiones, la precisión del modelado de dicha variabilidad es crucial. Se propone en la tesis extender el método de inyectores, que modela la variabilidad a nivel de circuito, abstrayendo sus causas físicas, añadiendo dos nuevas fuentes para modelar la pendiente sub-umbral y el DIBL, de creciente importancia en la tecnología FinFET. 
Los dos nuevos inyectores propuestos incrementan la exactitud de figuras de mérito a diferentes niveles de abstracción del diseño electrónico: a nivel de transistor, de puerta y de circuito. El error cuadrático medio al simular métricas de estabilidad y prestaciones de células SRAM se reduce un mínimo de 1,5 veces y hasta un máximo de 7,5 a la vez que la estimación de la probabilidad de fallo se mejora en varios ordenes de magnitud. El diseño para bajo consumo es una de las principales aplicaciones actuales dada la creciente importancia de los dispositivos móviles dependientes de baterías. Es igualmente necesario debido a las importantes densidades de potencia en los sistemas actuales, con el fin de reducir su disipación térmica y sus consecuencias en cuanto al envejecimiento. El método tradicional de reducir la tensión de alimentación para reducir el consumo es problemático en el caso de las memorias SRAM dado el creciente impacto de la variabilidad a bajas tensiones. Se propone el diseño de una célula que usa valores negativos en la bit-line para reducir los fallos de escritura según se reduce la tensión de alimentación principal. A pesar de usar una segunda fuente de alimentación para la tensión negativa en la bit-line, el diseño propuesto consigue reducir el consumo hasta en un 20 % comparado con una célula convencional. Una nueva métrica, el hold trip point se ha propuesto para prevenir nuevos tipos de fallo debidos al uso de tensiones negativas, así como un método alternativo para estimar la velocidad de lectura, reduciendo el número de simulaciones necesarias. Según continúa la reducción del tamaño de los dispositivos electrónicos, se incluyen nuevos mecanismos que permiten facilitar el proceso de fabricación, o alcanzar las prestaciones requeridas para cada nueva generación tecnológica. 
Se puede citar como ejemplo el estrés compresivo o extensivo aplicado a los fins en tecnologías FinFET, que altera la movilidad de los transistores fabricados a partir de dichos fins. Los efectos de estos mecanismos dependen mucho del layout, la posición de unos transistores afecta a los transistores colindantes y pudiendo ser el efecto diferente en diferentes tipos de transistores. Se propone el uso de una célula SRAM complementaria que utiliza dispositivos pMOS en los transistores de paso, así reduciendo la longitud de los fins de los transistores nMOS y alargando los de los pMOS, extendiéndolos a las células vecinas y hasta los límites de la matriz de células. Considerando los efectos del STI y estresores de SiGe, el diseño propuesto mejora los dos tipos de transistores, mejorando las prestaciones de la célula SRAM complementaria en más de un 10% para una misma probabilidad de fallo y un mismo consumo estático, sin que se requiera aumentar el área. Finalmente, la radiación ha sido un problema recurrente en la electrónica para aplicaciones espaciales, pero la reducción de las corrientes y tensiones de los dispositivos actuales los está volviendo vulnerables al ruido generado por radiación, incluso a nivel de suelo. Pese a que tecnologías como SOI o FinFET reducen la cantidad de energía colectada por el circuito durante el impacto de una partícula, las importantes variaciones de proceso en los nodos más pequeños va a afectar su inmunidad frente a la radiación. Se demuestra que los errores inducidos por radiación pueden aumentar hasta en un 40 % en el nodo de 7nm cuando se consideran las variaciones de proceso, comparado con el caso nominal. Este incremento es de una magnitud mayor que la mejora obtenida mediante el diseño de células de memoria específicamente endurecidas frente a radiación, sugiriendo que la reducción de la variabilidad representaría una mayor mejora. 
ABSTRACT Reliability is becoming the main concern in integrated circuits as technology scales beyond 22 nm. Small imperfections in device manufacturing now result in significant random differences between devices at the electrical level, which must be dealt with during design. New processes and materials, required to fabricate the extremely short devices, are giving rise to new effects that ultimately result in increased static power consumption or higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half of the chip area of today's SoCs and microprocessors, but they are also critical once the different variation sources are considered, since a failure in a single cell makes the whole memory fail. This thesis addresses the challenges that SRAM design faces in the smallest technologies. In a common scenario of increasing variability, issues such as energy consumption, technology-aware design and radiation hardening are considered. First, given the increasing magnitude of device variability in the smallest nodes, as well as the new variability sources introduced by new devices and shortened lengths, accurate modeling of variability is crucial. We propose to extend the injectors method, which models variability at circuit level while abstracting its physical sources, to better capture the sub-threshold slope and drain-induced barrier lowering effects that are gaining importance in FinFET technology. The two newly proposed injectors bring increased accuracy in figures of merit at different abstraction levels of electronic design: transistor, gate and circuit level. The mean square error when estimating performance and stability metrics of SRAM cells is reduced by at least 1.5x and up to 7.5x, while the yield estimation is improved by orders of magnitude. Low-power design is a major constraint given the fast-growing market of battery-powered mobile devices.
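The idea behind injector-based yield estimation can be illustrated with a toy Monte Carlo sketch. The margin proxy, the 0.20 V baseline and the failure threshold below are all hypothetical placeholders, not the thesis model or a SPICE simulation:

```python
import random

def sample_cell_failure_prob(n_samples=20_000, sigma_vth=0.03,
                             margin_threshold=0.08, seed=42):
    """Toy Monte Carlo yield estimate: a Gaussian 'injector' offset is
    drawn for each of the six transistor threshold voltages of a 6T
    cell, and the cell counts as failing when a simplified noise-margin
    proxy drops below margin_threshold. Illustrative only."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        # one independent Vth offset per transistor of the 6T cell
        offsets = [rng.gauss(0.0, sigma_vth) for _ in range(6)]
        # crude proxy: the margin shrinks with worst-case mismatch
        margin = 0.20 - (max(offsets) - min(offsets))
        if margin < margin_threshold:
            failures += 1
    return failures / n_samples
```

Doubling `sigma_vth` sharply raises the estimated failure probability, which is exactly the sensitivity the improved injectors aim to quantify accurately.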
It is also relevant because of the increased power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower the energy consumption is challenging in the case of SRAMs given the increased impact of process variations at low supply voltages. We propose a cell design that makes use of a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line, the design achieves an energy reduction of up to 20% compared to a conventional cell. A new metric, the hold trip point, has been introduced to deal with the new failure sources affecting cells that use a negative bit-line voltage, together with an alternative method to estimate cell speed that requires fewer simulations. With the continuous reduction of device sizes, new mechanisms need to be included to ease the fabrication process and to meet the performance targets of the successive nodes. As an example, consider the compressive or tensile strains introduced in FinFET technology, which alter the mobility of the transistors made out of the strained fins. The effects of these mechanisms are highly layout-dependent, with transistors being affected by their neighbors, and different transistor types being affected in different ways. We propose to use complementary SRAM cells with pMOS pass-gates in order to reduce the fin length of the nMOS devices and achieve long uncut fins for the pMOS devices when the cell is placed in its corresponding array. Once shallow trench isolation and SiGe stressors are considered, the proposed design improves both kinds of transistor, boosting the performance of complementary SRAM cells by more than 10% for the same failure probability and static power consumption, with no area overhead.
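Why a negative bit-line helps at low supply voltages can be sketched with a toy write-margin model. The pass-gate overdrive condition and the 0.45 * vdd pull-up-strength proxy are hypothetical simplifications chosen only to show the mechanism:

```python
import random

def write_failure_prob(vdd, vbl=0.0, n_samples=20_000,
                       vth_mean=0.35, sigma_vth=0.04, seed=1):
    """Toy estimate of write failures: a write is assumed to fail when
    the pass-gate overdrive (vdd - vbl - vth) does not exceed a
    pull-up-strength proxy (0.45 * vdd). A negative vbl models the
    negative bit-line assist. All thresholds are hypothetical."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        vth = rng.gauss(vth_mean, sigma_vth)  # per-sample Vth variation
        overdrive = vdd - vbl - vth           # vbl < 0 raises the overdrive
        if overdrive <= 0.45 * vdd:
            failures += 1
    return failures / n_samples
```

In this simplified picture, lowering the bit-line below ground restores the pass-gate overdrive that the reduced supply voltage took away, cutting the write failure probability without raising vdd.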
While radiation has been a traditional concern in space electronics, the small currents and voltages used in the latest nodes are making them more vulnerable to radiation-induced transient noise, even at ground level. Even if SOI or FinFET technologies reduce the amount of energy transferred from a striking particle to the circuit, the significant process variations that the smallest nodes will present will affect their radiation hardening capabilities. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% at the 7 nm node compared to the nominal case. This increase is larger than the improvement achieved by radiation-hardened cells, suggesting that a reduction of process variations would bring a greater improvement.
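The mechanism behind such an increase, process variation widening the lower tail of the critical charge, can be sketched with a toy soft-error model. The exponential collected-charge distribution, the nominal Qcrit and the variation magnitudes below are hypothetical, chosen only to illustrate the effect:

```python
import random

def upset_rate(qcrit_nominal=1.5, sigma_frac=0.0,
               n_samples=50_000, seed=7):
    """Toy soft-error model: a particle strike upsets the cell when the
    collected charge (exponential, mean 1.0 in arbitrary units) exceeds
    the cell's critical charge Qcrit. sigma_frac adds Gaussian process
    variation to Qcrit; its lower tail dominates the extra errors.
    All numbers are hypothetical."""
    rng = random.Random(seed)
    upsets = 0
    for _ in range(n_samples):
        qcrit = qcrit_nominal * (1.0 + rng.gauss(0.0, sigma_frac))
        q_collected = rng.expovariate(1.0)  # mean collected charge = 1.0
        if q_collected > qcrit:
            upsets += 1
    return upsets / n_samples
```

Because the upset probability decays roughly exponentially in Qcrit, cells whose critical charge falls below nominal contribute more extra errors than the above-nominal cells save, so the average error rate rises with the spread even though the mean Qcrit is unchanged.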